Caching in with Resolvers

Norman Walsh

XML Standards Architect
Sun Microsystems, Inc.

One Network Drive, MS UBUR02-201
Burlington, MA 01803-0902

$Id: resolvers.xml,v 1.1 2003/09/30 18:36:04 ndw Exp $

10 Dec 2003

Revision History
Revision 0.5, 24 Sep 2003, ndw
First draft.


This paper discusses entity resolvers, caches, and other strategies for dealing with access to sporadically available resources. Our principal focus is on XML Catalogs and local proxy caches. We'll also consider in passing the ongoing debate over names and addresses, most often arising in the context of URNs vs. URLs.

Table of Contents

What's the Problem?
Absolute URIs on the Network
Absolute URIs on the Local File System
Relative URIs
Solution: Brute Force
Solution: Catalog-Based Resolution
XML Catalogs
Pros and Cons
Solution: Local Caching Proxy
Pros and Cons
Other Mechanisms
RFC 3401: Dynamic Delegation Discovery System
Names and Addresses: URNs and URLs

From a standards point of view, it's often convenient to imagine a world where networks are universally available and have no significant latency. In such a world, resources can be distributed across the web without regard for the practical matters of resource access. Namespace documents, schemas, stylesheets, and other ancillary files can reside where it is convenient for their owners to place them. In such a world, applications are always able to access them directly.

Unfortunately, we don't live in that world. The reality is that networks go down, firewalls and security measures interfere with our ability to access the web, and our machines are sometimes physically disconnected from the network.

In the real world, it's convenient, if not absolutely necessary, to be able to store resources locally and access them transparently instead of “hitting the web” for them each time.

This paper explores three strategies for dealing with this problem: brute force, catalog-based resolution, and local caching proxy. After we've discussed how local resolution can be achieved, we'll consider briefly the more general problem of what it means to be “resolvable” at all.

What's the Problem?

XML documents often refer, implicitly or explicitly, to other documents:

  • Document type declarations point to external subsets (DTDs), entity sets, parsed and unparsed external entities.

  • Schema location hints refer to W3C XML Schemas.

  • Stylesheet processing instructions refer to XSL and CSS stylesheets.

  • RELAX NG Schemas, W3C XML Schemas, XSL Stylesheets, and CSS Stylesheets include or import other modules.

  • Compound documents are constructed with XInclude.

  • Documents refer to external images and ancillary files.

These references are almost universally accomplished with URIs. XML external identifiers, composed of a public identifier and a system identifier, are an exception. External identifiers have advantages over bare URIs, but those advantages have been lost in more recent specifications, which rely simply on URIs. For the moment, we'll ignore these distinctions and examine the general problem as one of resource availability.

If we're going to be productive in collaborative environments or on machines that are not always connected to the network, we're going to have to deal with the problems of limited availability.

How are these resources identified? There are several possibilities: absolute URIs on the network, absolute URIs on the local file system, and relative URIs. None of these is entirely satisfactory.

Absolute URIs on the Network

These offer the most unambiguous identification possible:

As names, either literally or effectively, these are fine identifiers. But if the only resolution mechanism possible is literal retrieval from the identified location, they aren't very useful if they have no location (for example, URNs) or when the network is down or your machine is disconnected (for example, HTTP or FTP URIs).

Absolute URIs on the Local File System

These identifiers are only useful (in general) on the machine where they were created:


Collaborative authoring with identifiers like this is often tedious. Unless each author has the same setup, every time documents change hands the identifiers have to be updated.

Relative URIs

Relative identifiers, like absolute URIs on the local file system, only work in a particular context. However, that context is easier to transport between locations, so they are often perfect for closely related documents:


Making identifiers relative isn't a guarantee of portability, however. The URI “../xml/dtd/docbookx.dtd” might be useful on your system, but it isn't useful to me.

Solution: Brute Force

The brute force “solution” isn't really very practical. To use this method, you simply edit every single document that you use so that it refers to a local copy of the document the original author intended. Assuming, of course, that you have write access to all the documents.

In a collaborative environment, these edits have to be made every time documents change hands. For stylesheets and other documents that are maintained externally, they need to be repeated every time a new distribution is installed.

We have to be able to do better than this.

Solution: Catalog-Based Resolution

Catalog-based resolution uses an explicit mapping from global identifiers to local storage. Catalogs have been widely used since at least 1994 when OASIS (then SGML Open) began standardizing a plain text catalog format.

More recently, the Entity Resolution Technical Committee at OASIS has developed an XML catalog format. We’ll concentrate our discussion on [XML Catalogs]. An XML Catalog maps an external identifier or URI to some other URI, generally one that resolves to the local file system.

In practice, catalogs are used by a resolver, a layer in the architecture that sits between the application and the network. When an application, for example, a validator or a stylesheet processor, needs to access a document, it asks the resolver for it. The resolver considers how to satisfy the request, by consulting catalogs in this case, and returns the resource if it can.

Resolvers are available as a standard component ([Apache Commons]) from Apache for Java-based processors. They are also implemented by [libxml2], a popular component of the Gnome architecture, and a number of other open source and commercial applications.
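
To make the division of labor concrete, here is a minimal sketch of a resolver in Python, using the standard library's SAX EntityResolver interface as a stand-in for the richer resolvers mentioned above. The public identifier comes from the paper's DocBook example; the local file path is an illustrative assumption.

```python
import xml.sax
from xml.sax.handler import EntityResolver

# Map well-known public identifiers to local copies.
# The path below is an assumption for illustration.
LOCAL_COPIES = {
    "-//OASIS//DTD DocBook XML V4.2//EN":
        "/share/doctypes/docbook42/xml/docbookx.dtd",
}

class CatalogLikeResolver(EntityResolver):
    def resolveEntity(self, publicId, systemId):
        # Return the local copy when the public identifier is known;
        # otherwise fall back to the original system identifier.
        return LOCAL_COPIES.get(publicId, systemId)
```

An application would install it with `parser = xml.sax.make_parser()` followed by `parser.setEntityResolver(CatalogLikeResolver())`; the parser then consults the resolver before touching the network.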

Consider the catalog in Example 1, “An XML Catalog”.

Example 1. An XML Catalog

<?xml version='1.0'?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">

  <public publicId="-//OASIS//DTD DocBook XML V4.2//EN"
          uri="/share/doctypes/docbook42/xml/docbookx.dtd"/>

  <system systemId=""
          uri="/share/doctypes/docbook42/xml/docbookx.dtd"/>

  <uri name=""
       uri="schema/relaxng/docbook.rng"/>

  <uri name=""
       uri=""/>

  <uri name=""
       uri=""/>

  <uri name=""
       uri=""/>

  <uri name=""
       uri=""/>

  <!-- If you don't find it here ... -->
  <nextCatalog catalog="/share/doctypes/catalog.xml"/>

</catalog>


Armed with this catalog, a validator attempting to process this document type declaration:

<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
               "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">

would not have to access the network. The resolver would match the external identifier against the public or system entries in the catalog and return /share/doctypes/docbook42/xml/docbookx.dtd.

An attempt to retrieve the RELAX NG Schema for DocBook V4.2 would return the absolute URI associated with schema/relaxng/docbook.rng. Relative URIs in an XML Catalog are resolved relative to the location of the catalog.

Similarly, attempts to retrieve stylesheets, the bibliography, or the “draft” watermark image would return local resources. Attempts to retrieve any other resources would be processed by examining the “next catalog” (and possibly subsequent next catalogs as well). If no mapping is found, the original resource is returned.
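The lookup and base-URI behavior described above can be sketched in a few lines of Python. This is not a conforming catalog processor, just an illustration of the public-identifier lookup and of resolving a relative mapping against the catalog's own location; the file paths follow the paper's example.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

NS = "{urn:oasis:names:tc:entity:xmlns:xml:catalog}"

CATALOG = """<?xml version='1.0'?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <public publicId="-//OASIS//DTD DocBook XML V4.2//EN"
          uri="/share/doctypes/docbook42/xml/docbookx.dtd"/>
</catalog>"""

def resolve_public(catalog_xml, public_id, catalog_location):
    """Look up a public identifier in a catalog. Relative mappings
    are resolved against the catalog's own location."""
    root = ET.fromstring(catalog_xml)
    for entry in root.findall(NS + "public"):
        if entry.get("publicId") == public_id:
            return urljoin(catalog_location, entry.get("uri"))
    return None  # a real resolver would consult nextCatalog entries here
```

Calling `resolve_public(CATALOG, "-//OASIS//DTD DocBook XML V4.2//EN", "file:///home/ndw/catalog.xml")` yields a file: URI on the local system rather than a network access.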

XML Catalogs

In this section, we offer a brief tour of the functionality available in XML Catalogs.

Direct Mapping

Using the public, system, and uri elements, an XML Catalog can provide a direct mapping from external identifiers or URIs to other resources.

Several mapping examples can be found in the example catalog.


Rewriting

Sometimes it's useful to mirror whole sections of a public repository. For example, we might mirror the OASIS DocBook repository locally to /share/doctypes/ so that we have access to all of the XML DTD versions of DocBook.

In such a case, creating a direct mapping for every entry would be tedious. Instead, we can take advantage of the rewriteSystem and rewriteURI catalog entries.

Consider the following entry:

<rewriteSystem systemIdStartString=""
               rewritePrefix="file:///share/doctypes/"/>

Using this entry, the resolver will “rewrite” any system identifier that begins with the systemIdStartString. A request for such a system identifier will be fulfilled from beneath file:///share/doctypes/ instead.

Similarly, rewriteURI will be used to rewrite URIs that begin with the specified prefix.
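
The rewriting rule is simple prefix substitution, sketched below in Python. The OASIS URL prefix is an assumption for illustration; note that when several start strings match, the XML Catalogs specification prefers the longest one, which the sketch implements.

```python
# (start-string, rewrite-prefix) pairs; the URLs are illustrative.
REWRITES = [
    ("http://www.oasis-open.org/docbook/", "file:///share/doctypes/"),
]

def rewrite_system(system_id):
    """Apply rewriteSystem-style prefix substitution, preferring
    the longest matching start string."""
    best = None
    for start, prefix in REWRITES:
        if system_id.startswith(start):
            if best is None or len(start) > len(best[0]):
                best = (start, prefix)
    if best is None:
        return system_id  # no match: leave the identifier alone
    start, prefix = best
    return prefix + system_id[len(start):]
```
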


Delegation

Rewriting is appropriate when a set of resources is mirrored locally, but sometimes a collection of resources is entirely managed by another authority. If that authority uses an XML Catalog, you can delegate responsibility for those resources to that catalog.

For example, if OASIS provided a top-level catalog for resolving all public identifiers that began “-//OASIS//”, then the following entry would delegate responsibility to that catalog:

<delegatePublic publicIdStartString="-//OASIS//"
                catalog=""/>

Used this way, delegation would access the network. It would be appropriate to delegate to a mirrored catalog instead, if all of the resources (and the catalog) are mirrored.
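
Delegation selection can be sketched the same way: of all delegatePublic entries whose start string matches the public identifier, the one with the longest start string wins, and resolution continues in that catalog. The catalog paths below are illustrative assumptions.

```python
# publicIdStartString -> delegate catalog; locations are illustrative.
DELEGATES = {
    "-//OASIS//": "/share/catalogs/oasis-catalog.xml",
    "-//OASIS//DTD DocBook": "/share/catalogs/docbook-catalog.xml",
}

def delegate_for(public_id):
    """Return the delegate catalog with the longest matching
    publicIdStartString, or None if no entry matches."""
    matches = [p for p in DELEGATES if public_id.startswith(p)]
    if not matches:
        return None
    return DELEGATES[max(matches, key=len)]
```

A resolver would then open the returned catalog and repeat the lookup there.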


Next Catalogs

XML Catalogs can be chained together using nextCatalog. This allows a central catalog to rely on other catalogs that are maintained independently.

This can be seen in the example catalog.

Pros and Cons

XML Catalogs provide a flexible, convenient mechanism for users to manage resources. They can be:

  • easily configured without privileged access to the machine.

  • changed on a per-application basis, if necessary.

  • managed automatically by install processes (such as Debian’s apt-get and other modern installation tools).

  • configured manually, without ever requiring network access to the resources that they manage.

  • extended by resolvers. For example, the XML Commons resolver includes an extension that implements “suffix rewriting,” which is sometimes quite handy.

On the other hand, XML Catalogs:

  • must be explicitly maintained, either directly by the user or by processes the user runs. At present, there are no mechanisms for XML Catalogs to automatically cache new resources accessed by the user.

  • only function for applications that explicitly support XML Catalogs (or are built on top of libraries that explicitly support them). Your web browser isn’t likely to refer to an XML Catalog for access to a stylesheet, for example. At least not this year.

Solution: Local Caching Proxy

Local caching uses a system-level cache to maintain local copies of accessed resources. Proxy caches are a common feature of corporate web architectures where they support not only a reduction in bandwidth, but also transmission of information through the corporate firewall.

Proxy caches can also be used by individuals. A proxy cache sits between the system on which it is running and the rest of the network. When an application accesses a document through the proxy, the proxy examines the documents that it has cached and returns a local copy if it has one. If the document does not exist in the cache, the proxy retrieves the resource from the network and stores a local copy for future use. (In practice, proxy cache behavior is a little more sophisticated, considering the time stamp and expiration date of local resources and the current network connection state of the machine.)
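
The core of that behavior fits in a few lines. The sketch below is a toy in-memory version, under the stated simplification that real proxies use an on-disk store and honor timestamps, expiry, and offline state; the fetch function is supplied by the caller.

```python
import hashlib

class TinyProxyCache:
    """Return a cached copy when one exists; otherwise fetch
    from the network (via the supplied function) and cache it."""

    def __init__(self, fetch):
        self.fetch = fetch   # callable that retrieves a URL's content
        self.store = {}      # in-memory stand-in for an on-disk cache

    def get(self, url):
        key = hashlib.sha1(url.encode()).hexdigest()
        if key not in self.store:
            self.store[key] = self.fetch(url)  # cache miss: go to the network
        return self.store[key]                 # cache hit: no network access
```

After the first request for a resource, subsequent requests are satisfied locally even if the network is unavailable.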

There are a great many options available when configuring a proxy cache. It may, for example, offer facilities for compression, security, indexing, document modification, content filtering and other services. We’re not going to attempt to explore any of those issues. Instead, we’ll limit our discussion to just the basics as they relate to local resolution.

One popular personal proxy cache is [WWWOffle], the World Wide Web Offline Explorer. Our concrete examples are drawn from WWWOffle, but the principles apply to any proxy cache.

Once established, a proxy cache will store anything that it doesn’t consider “local”. Local, in this sense, includes at least the machine that it’s running on. In some environments, it may be useful to identify other machines as local as well. For example, in a home office environment, all the machines in the office might be considered local since they are accessible even if connections to the external network are down.

In order to take advantage of the cache, applications have to be configured to use it. Exactly how this configuration is achieved varies by application and platform. Some operating environments provide system level configuration of proxies, others use a combination of configuration files and command line arguments. Corporate firewalls have made proxies a necessity in many environments, so almost all modern applications provide some mechanism for establishing a proxy. (And note that your local proxy will likely be able to forward requests through your corporate proxy if necessary, so having a corporate proxy need not prevent you from also running a local one.)
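
As one concrete example of per-application configuration, Python's urllib can be pointed at a local proxy programmatically. The localhost:8080 address is an assumption; use whatever address and port your proxy actually listens on.

```python
import urllib.request

# Route this application's HTTP requests through a local proxy.
# The proxy address is an assumption for illustration.
proxy = urllib.request.ProxyHandler({"http": "http://localhost:8080/"})
opener = urllib.request.build_opener(proxy)
# urllib.request.install_opener(opener)  # optionally make it process-wide
```

Most applications offer an equivalent knob, whether through environment variables such as http_proxy, a configuration file, or a preferences dialog.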

The size of cache maintained, and the length of time that unaccessed documents remain in it, is a function of your configuration. It’s likely that you’ll want to configure the cache so that schemas, stylesheets, and other resources are kept longer than web pages. Here’s a simple configuration:

 <http://www.w3.org/*> age = 2y
 <http://www.oasis-open.org/*> age = 2y
 <ftp://*> age = 7
 age = 14

This configuration simply keeps resources from W3C and OASIS around for two years; it discards resources accessed through FTP in seven days and everything else in fourteen days. More sophisticated rules could be constructed, keeping documents that end in .dtd or .xsl forever, for example.
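
The rule matching amounts to first-match wins over glob-style URL patterns, with a default when nothing matches. A sketch, where the W3C and OASIS patterns are assumptions taken from the prose and ages are expressed in days:

```python
import fnmatch

# (URL pattern, retention in days); first matching rule wins.
AGE_RULES = [
    ("http://www.w3.org/*", 2 * 365),
    ("http://www.oasis-open.org/*", 2 * 365),
    ("ftp://*", 7),
]
DEFAULT_AGE = 14

def max_age_days(url):
    """Return how long a cached copy of this URL is kept."""
    for pattern, age in AGE_RULES:
        if fnmatch.fnmatch(url, pattern):
            return age
    return DEFAULT_AGE
```
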

Pros and Cons

Proxy caches provide a simple mechanism for automatically managing resources. They:

  • operate transparently, requiring no explicit setup by the user.

  • are applicable to almost every application that accesses the network.

On the other hand, proxy caches:

  • are substantially more complex to configure and may require privileged access to the machine.

  • apply globally and cannot be configured on a per-application basis.

  • only cache resources that have been accessed at least once. You can’t, for example, install a package that you received in email and expect its resources to resolve without retrieving them from the network at least once.

  • may discard resources that have not been accessed for some period of time.

  • are not easily extensible.

Other Mechanisms

XML Catalogs and proxy caches are not the only mechanisms that have been devised to support access to distributed resources, though they are arguably the most widely deployed today. Two other proposals that may see wide deployment in the future are [RDDL] and [RFC 3401].


RDDL: Resource Directory Description Language

RDDL, the Resource Directory Description Language, maps URIs, particularly XML Namespace names, to a collection of resources. RDDL associates a nature and purpose with each mapping, so it can direct applications to the appropriate type of resource.

For example, one might consult a RDDL document to find the RELAX NG Grammar (nature) for validation (purpose), or the XSLT Stylesheet for transformation.

RDDL processing is a natural adjunct to resolution; the two processes work entirely cooperatively:

  1. The application requests the RDDL document. This request is processed by the resolver, perhaps returning a local copy.

  2. The application consults the RDDL document to find a specific kind of resource.

  3. Once the resource is identified, it may be requested, once again passing through the resolver.
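
The nature/purpose selection in step 2 can be sketched as a simple lookup. The natures below are the RELAX NG and XSLT namespace URIs; the purpose strings and resource names are illustrative assumptions, not taken from any particular RDDL document.

```python
# (nature, purpose, resource URI) triples; resource names are illustrative.
RDDL_ENTRIES = [
    ("http://relaxng.org/ns/structure/1.0", "validation", "docbook.rng"),
    ("http://www.w3.org/1999/XSL/Transform", "transformation", "docbook.xsl"),
]

def find_resource(nature, purpose):
    """Return the resource URI matching a nature and purpose, or None."""
    for n, p, uri in RDDL_ENTRIES:
        if n == nature and p == purpose:
            return uri
    return None
```

The URI returned would then itself be handed to the resolver, as step 3 describes.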

RFC 3401: Dynamic Delegation Discovery System

The Dynamic Delegation Discovery System (DDDS) is designed to support the lazy binding of strings to data. RFC 3401 describes the resolution process:

The DDDS functions by mapping some unique string to data stored within a DDDS Database by iteratively applying string transformation rules until a terminal condition is reached.

Designed to operate as an extension of DNS, it includes a specific component, [RFC 3404], for resolving URIs. Although not yet widely deployed, DDDS may someday offer resolution services not just for http: and other typically retrievable URIs, but also for URNs and, in fact, any arbitrary URI.

How applicable this will be to offline resolution is an open question. It’s likely to depend on network services that may not be readily available on disconnected machines.

Names and Addresses: URNs and URLs

Any discussion of URI resolution invariably touches on the long-standing issue of names and addresses. The debate about what constitutes a name and what constitutes an address is at least partly philosophical and seems unlikely to be resolved any time soon.

Some people feel that there is no distinction to be made ([Cool URIs]), that any URI is a valid name. Others feel that the distinction is significant ([Vicious Circle]).

Regardless of how you feel about it, it’s pretty clear that some people want to treat as names things that others call addresses, and that people want to perform resolution not just on addresses but also on names.

Luckily, the existing and developing mechanisms will support both of these cases.


[Apache Commons] Apache Software Foundation. xml-commons.

[Cool URIs] Berners-Lee, Tim. Cool URIs don’t change.

[WWWOffle] Bishop, Andrew. World Wide Web Offline Explorer.

[RDDL] Borden, Jonathan and Bray, Tim. XML Resource Directory Description Language (RDDL).

[RFC 3401] Mealling, Michael. Dynamic Delegation Discovery System (DDDS) Part One: The Comprehensive DDDS. IETF RFC 3401.

[RFC 3404] Mealling, Michael. Dynamic Delegation Discovery System (DDDS) Part Four: The Uniform Resource Identifiers (URI) Resolution Application. IETF RFC 3404.

[libxml2] Veillard, Daniel. The XML C Parser and toolkit of Gnome.

[XML Catalogs] Walsh, Norman, ed. XML Catalogs.

[Vicious Circle] Walsh, Norman. Vicious Circle.