Aliasing on the World Wide Web:
Prevalence and Performance Implications
Electrical Engineering & Computer Science
University of Michigan
Ann Arbor MI 48109 USA
Compaq Western Research Lab
250 University Avenue
Palo Alto CA 94301 USA
Copyright is held by the author/owner(s).
WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA.
Aliasing occurs in Web transactions when requests containing different URLs elicit replies containing identical data payloads. Conventional caches associate stored data with URLs and can therefore suffer redundant payload transfers due to aliasing and other causes. Existing research literature, however, says little about the prevalence of aliasing in user-initiated transactions, or about redundant payload transfers in conventional Web cache hierarchies.
This paper quantifies the extent of aliasing and the performance impact of URL-indexed cache management using a large client trace from WebTV Networks. Fewer than 5% of reply payloads are aliased (referenced via multiple URLs) but over 54% of successful transactions involve aliased payloads. Aliased payloads account for under 3.1% of the trace's ``working set size'' (sum of payload sizes) but over 36% of bytes transferred. For the WebTV workload, roughly 10% of payload transfers to browser caches and 23% of payload transfers to a shared proxy are redundant, assuming infinite-capacity conventional caches. Our analysis of a large proxy trace from Compaq Corporation yields similar results.
URL-indexed caching does not entirely explain the large number of redundant proxy-to-browser payload transfers previously reported in the WebTV system. We consider other possible causes of redundant transfers (e.g., reply metadata and browser cache management policies) and discuss a simple hop-by-hop protocol extension that completely eliminates all redundant transfers, regardless of cause.
NOTE: The authors recommend that readers use the PDF version of this paper rather than the HTML version. We regard the PDF version as definitive and provide a copy at http://ai.eecs.umich.edu/~tpkelly/papers/
Categories and Subject Descriptors
C.2.2 [Computer-Communication Networks]: Network Protocols--Applications; C.4 [Performance of Systems]: Measurement Techniques; C.2.4 [Computer-Communication Networks]: Distributed Systems--Client/server

General Terms
Management, Measurement, Performance, Design

Keywords
World Wide Web, WWW, Hypertext Transfer Protocol, HTTP, aliasing, redundant transfers, duplicate suppression, Duplicate Transfer Detection, DTD, resource modification, caching, cache hierarchies, performance analysis, Zipf's law
1 Introduction

Aliasing occurs in Web transactions when different request URLs yield replies containing identical data payloads. Existing browsers and proxies perform cache lookups using URLs, and aliasing can cause redundant payload transfers when the reply payload that satisfies the current request has been previously received but is cached under a different URL. Awareness of this problem is slowly growing in the commercial world: a major cache appliance vendor now encourages site designers to make Web pages cache-friendly by avoiding aliasing [13, page 9]. Within well-administered sites, aliasing might decrease when such advice is heeded. Other trends, however, are moving the Web in the opposite direction and are beyond the control of individual sites. For example, commercial Web design tools typically include many small images, and a particular image bundled with such a tool is available through a different URL at each site that uses it.
Given the proliferation of technologies that create aliases and the potential for aliasing to cause redundant transfers, it is surprising that relatively little is known about the scale and consequences of this phenomenon. Only a few previous studies have considered the prevalence of aliasing in Web transactions, the performance penalty of conventional URL-indexed cache management in large multi-level cache hierarchies, or ways to eliminate redundant payload transfers [36,50]. Few of the Web workload traces that researchers have collected can illuminate the relationship between request URLs and reply payloads, because they do not describe payloads in sufficient detail.
In this paper we quantify aliasing and the impact of URL-indexed cache management on browser and proxy cache miss rates by examining large anonymized client and proxy traces collected, respectively, at WebTV Networks in September 2000 and at Compaq Corporation in early 1999. The traces include message digests of reply payloads, allowing us to detect aliasing. We investigate whether conventional URL-indexed caching is primarily responsible for the high rate of redundant proxy-to-browser payload transfers previously reported in the WebTV system, consider other causes of redundant payload transfers, and discuss a simple way to eliminate all redundant transfers, regardless of cause.
In this paper, we use several terms that require precise definition:
Payload: The data carried in the body of an HTTP response. We consider only full-body responses (status code 200); partial-body (status code 206) responses are extremely rare, and would complicate the discussion.
Payload hit: A cache reference that returns, from cache storage, exactly the payload the origin server would return at the time of access, whether or not messages are exchanged with the origin server. A revalidation request followed by a ``Not Modified'' response is a payload hit, because it averts the payload transfer.
Payload miss: A reference for which the correct payload is not obtained from cache but rather from elsewhere (e.g., the origin server or from a remote intermediate cache).
Aliased payload: In the context of a given trace of Web transactions, a payload is aliased if it is accessed via more than one URL in the entire trace.
In our trace-driven simulations we do not consider caches that violate semantic transparency in the sense of RFC 2616, i.e., we consider caches that either suffer payload misses or return exactly the same payload that the origin server would return at the moment of access.
Certain terms related to aliasing, such as ``duplication,'' lack clear, widely-accepted definitions in the literature, and we will avoid them when discussing our own work.
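The distinction between URL-indexed lookup and the payloads actually transferred can be made concrete with a minimal simulation. The sketch below (hypothetical trace and function names; not any production cache) replays (URL, payload) transactions against an infinite-capacity URL-indexed cache and counts transfers that are redundant because the identical payload is already cached under a different URL:

```python
import hashlib

def run_trace(trace):
    """Replay (url, payload) transactions against an infinite URL-indexed
    cache; count payload transfers that were redundant, i.e. the same
    payload had already been received -- but cached under another URL."""
    cache = {}            # url -> payload digest (conventional URL-indexed cache)
    seen_digests = set()  # digests of all payloads received so far
    redundant = 0
    for url, payload in trace:
        digest = hashlib.sha1(payload).hexdigest()
        if cache.get(url) == digest:
            continue                  # payload hit: transfer averted
        if digest in seen_digests:
            redundant += 1            # aliasing caused a redundant transfer
        cache[url] = digest
        seen_digests.add(digest)
    return redundant

# Two sites serve the identical image under different URLs (aliasing).
logo = b"GIF89a...same bytes..."
trace = [("http://a.example/logo.gif", logo),
         ("http://a.example/logo.gif", logo),   # hit: same URL, same payload
         ("http://b.example/logo.gif", logo)]   # miss under URL indexing
print(run_trace(trace))  # -> 1
```

The third transaction is a payload the client already holds, yet a URL-indexed cache must fetch it again; a digest-aware cache would not.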
2 Related Work
While few studies have directly addressed our central topic, many have investigated aspects of the HTTP namespace and their impact on cache performance. In this section we review literature on the relationship between URLs and reply payloads.
2.1 Resource Modification
``Resource modification'' is the complement of aliasing: requests containing identical URLs yield different reply bodies. Because it has direct implications for cache consistency and object ``cachability,'' resource modification has been extensively studied. In Section 4 we compare the prevalence of aliasing and resource modification and report that more transactions are affected by the former.
Douglis et al. report that rates of resource modification in a trace from a corporate environment are high enough to substantially reduce the hit rates of conventional URL-indexed caches. More recently, Brewington & Cybenko consider the burden that modification rates place on search engines. After fitting a combination of exponential and Weibull models to their data, they report that roughly 10% of inter-modification intervals are 10 days or less and roughly 72% are 100 days or less. Brewington's doctoral thesis considers the problem of monitoring changing information resources in greater theoretical and empirical depth. This research is based on polling URLs obtained from users who have requested notification when specified resources change, and might therefore reflect a sample of resources with atypical rates of change. Padmanabhan & Qiu analyze the dynamics of content creation, modification, and deletion at the MSNBC Web site. They report a median inter-modification interval of approximately 10,000 seconds and note that most alterations to files are relatively minor.
Resources expected to change frequently are often called ``dynamic,'' although this poorly-defined term blurs the distinction between the process by which a response is generated and the question of whether it is ``cachable.'' In practice, cache implementors and researchers employ heuristics to identify uncachable responses by looking either for signs of dynamic generation (such as ``cgi'' in a URL) or for metadata, such as cookies, implying that a resource gives a different response to every request. Wolman et al. report that Squid deems 40% of replies uncachable in a large trace collected at the University of Washington in May 1999; Zhang reports that customized and dynamic content together render roughly 7.1% of the objects in his trace uncachable.
Work by Wills & Mikhailov, however, casts doubt on the assumption that it is pointless to cache seemingly ``dynamic'' or ``customized'' content. They report that even if a previous access to a URL had returned a ``Set-Cookie'' header, in most cases the absence of a request cookie, or the presence of a different cookie, does not affect the reply payload returned for a subsequent access. Repeated accesses to query resources at E-commerce sites sometimes return identical payloads. Iyengar & Challenger exploited the cachability of dynamic replies at a large, busy Web server and report impressive performance gains. Smith et al. report dynamic reply cache hit rates of 13.6% and 38.6% for two workloads.
Wolman et al. incorporate observed resource popularity and modification rates into an analytic model of hierarchical caching. Their model illustrates the impact of resource modification rates on cache hit rates and suggests that cooperative caching schemes yield diminishing returns as client populations increase.
2.2 Mirroring

``Mirroring'' typically refers to a special case of aliasing in which replicas of pages or entire sites are deliberately made available through different URLs. Shivakumar & Garcia-Molina investigate mirroring in a large crawler data set. They report far more aliasing than appears in the WebTV client trace: 36% of reply bodies are accessible through more than one URL. Bharat et al. survey techniques for identifying mirrors on the Internet. Bharat & Broder investigate mirroring in a large crawler data set and report that roughly 10% of popular hosts are mirrored to some extent.
Broder et al. consider approximate mirroring, or ``syntactic similarity.'' Although they introduce sophisticated measures of document similarity, they report that most ``clusters'' of similar documents in a large crawler data set contain only identical documents. In other words, simple aliasing is the dominant form of similarity in their workload.
2.3 Duplicate Suppression
Douglis et al. report that, of the full-body responses recorded at a corporate firewall that resulted in a new instance of a particular resource, 18% were identical to at least one other instance of a different resource.
Several ``duplicate suppression'' proposals address performance problems caused by duplication. The HTTP Distribution and Replication Protocol (DRP) employs payload digests to avoid unnecessary data transmission in deliberate replication over HTTP. A DRP client obtains ``index files'' containing digests indicating the current state of resources; the client can then request precisely those resources for which its copies are obsolete.
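The DRP idea of comparing digests against an index file can be sketched as follows (a simplified illustration with hypothetical data; real DRP index files are structured documents, not Python dictionaries):

```python
import hashlib

def digest(data):
    return hashlib.sha1(data).hexdigest()

def stale_resources(server_index, local_copies):
    """Given a DRP-style index (url -> current digest) and the client's
    local copies (url -> payload bytes), return the URLs whose local copy
    is missing or out of date -- the only resources worth fetching."""
    return [url for url, d in server_index.items()
            if url not in local_copies or digest(local_copies[url]) != d]

server_index = {"/a.css": digest(b"body{}"),
                "/b.js":  digest(b"alert(1)"),
                "/c.png": digest(b"new-bytes")}
local = {"/a.css": b"body{}",      # up to date: no transfer needed
         "/c.png": b"old-bytes"}   # modified on server; /b.js never fetched
print(sorted(stale_resources(server_index, local)))  # -> ['/b.js', '/c.png']
```

Only the index file itself and the two stale resources cross the network; the unchanged stylesheet is never re-transferred.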
Mogul reviewed a variety of end-to-end duplicate-suppression schemes involving ``hints'' supplied by origin servers to clients, and by clients to caches. These proposals do not entirely eliminate the problem of redundant payload transfers, and a trace-driven simulation demonstrates that one such scheme yields 5.4% and 6.2% improvements in hit rates and byte hit rates, respectively. Even these modest gains are upper bounds, because they assume the full participation of all origin servers [35,36].
Santos & Wetherall and Spring & Wetherall describe a general protocol-independent network-layer technique for eliminating redundant traffic by caching packet payloads and transmitting digests thereof to avoid redundant transfers. Muthitacharoen et al. designed a network file system for low-bandwidth environments that performs similar operations on chunks of files.
Inktomi's Traffic Server proxy cache product has included a technique called ``content fingerprinting,'' which uses payload digests to avoid storing multiple copies of identical payloads. Content fingerprinting suppresses duplicates in storage, but not on the network. Bahn et al. describe a similar scheme.
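The storage-side deduplication that content fingerprinting performs amounts to indexing stored bytes by digest while URLs map to digests. A minimal sketch (hypothetical class and method names, not Traffic Server's actual design):

```python
import hashlib

class FingerprintingCache:
    """Single-copy storage: payload bytes are stored once, keyed by digest;
    each URL maps to a digest.  Aliased payloads share one stored copy."""
    def __init__(self):
        self.by_url = {}     # url -> digest
        self.by_digest = {}  # digest -> payload bytes (stored exactly once)

    def insert(self, url, payload):
        d = hashlib.sha1(payload).hexdigest()
        self.by_url[url] = d
        self.by_digest.setdefault(d, payload)  # no-op if already stored

    def lookup(self, url):
        d = self.by_url.get(url)
        return self.by_digest.get(d) if d else None

    def stored_bytes(self):
        return sum(len(p) for p in self.by_digest.values())

c = FingerprintingCache()
c.insert("http://a.example/x.gif", b"same-image")
c.insert("http://b.example/y.gif", b"same-image")  # alias: no extra storage
print(c.stored_bytes())  # -> 10, not 20
```

Note that although both URLs are cached, the shared payload occupies storage only once; the second insertion still arrived over the network, which is why this technique suppresses duplicates in storage but not in transfer.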
2.4 Data-Collection Methodology
In order to evaluate the performance impact of aliasing on large-scale browser-proxy cache hierarchies, we require a detailed record of (request, reply) transactions from a large client population. (Existing synthetic workload generators such as SURGE, WebPolygraph, and SPECweb were not designed to mimic aliasing ``in the wild,'' and are therefore inappropriate for our purposes.) The logs from a server might reflect large numbers of users, but they do not record all of those users' requests, and so are of little use in evaluating client or proxy caching.
In order to record transactions involving large numbers of both users and servers, researchers typically employ packet sniffers [21,22] or proxy logs. However, the use of caching proxies can complicate either approach. If a sniffer is located between a caching proxy and the Internet, it will fail to record requests served from the proxy cache. The logs of a caching proxy will not suffer from this problem, but such logs do not necessarily reflect the payloads that origin servers would provide: proxies might serve stale content unless they revalidate payloads with the origin server upon every cache hit. Moreover, a trace collected at a proxy normally fails to capture any user requests that hit in the client (browser) caches. Other problems with conventional proxy logs include inadequate detail, low-resolution timestamps, and poor clock synchronization in multiple-host proxy arrays [12,15,17,28].
In principle, one could avoid such problems by collecting traces using an instrumented client; this could capture every user reference. Instrumented browsers have been used to collect traces from small user populations [14,16]. It is difficult to instrument popular browsers today because source code is unavailable, but a client proxy such as Medusa can collect much of the same data. The main problem with client-end data collection is the difficulty of deployment across a large client sample.
In collecting the anonymized traces we analyze, WebTV employed a non-caching, cache-busting proxy. A cache-busting proxy marks as uncachable all of the replies it sends to clients, effectively disabling browser caches. And because the proxy itself maintains no cache, it never suffers from the stale-response problem. Since the source code for the proxy was available, WebTV could instrument it to collect data not ordinarily logged (e.g., payload digests, which have on rare occasions been logged by modified proxies in the past [35,36]). For more discussion of WebTV's trace collection methodology, see Reference.
A cache-busting proxy allowed WebTV to collect a very large trace from a large user population at reasonable cost. The WebTV trace is comparable in size to the largest traces used in HTTP namespace investigations. Unlike these proxy, sniffer, server, and crawler traces, however, the WebTV client trace can support trace-driven simulation of browser-proxy cache hierarchies.
2.5 Harmful Practices
The HTTP/1.1 specification is long and complex. Not all servers are fully compliant, and the compliance of products does not always improve over time. Non-compliance can clearly cause redundant payload transfers and other kinds of waste. However, redundant transfers can also occur when mechanisms introduced into HTTP/1.1 to improve cache correctness are used in strange but compliant ways. For instance, identical payloads served by a single site are sometimes accompanied by different entity tags, causing new-style ``If-None-Match'' revalidation attempts to fail where old-fashioned ``If-Modified-Since'' requests might succeed. In this case the server complies with the specification, but a more efficient implementation is possible.
Furthermore, several common practices that do not violate the protocol complicate the HTTP namespace in harmful ways. Mikhailov & Wills report, for instance, that content providers sometimes embed session identifiers in dynamically-written URLs rather than in cookies. Ad rotation often creates many minor variants of the HTML for a Web page, inflating resource modification rates. Padmanabhan & Qiu document other types of minor changes that occur frequently at the MSNBC site.
Existing literature touches on a number of issues surrounding aliasing on the Web, but the prevalence of this phenomenon across user-initiated transactions and the impact of URL-indexed cache organization on miss rates in multi-level cache hierarchies are poorly understood. Most proxy traces employed in empirical Web caching research shed no light on aliasing because they do not record data payloads or digests thereof. Data sets collected by Web crawlers often include payload digests but cannot support trace-driven simulations of cache hierarchies; they illuminate aliasing across available resources rather than accessed resources. The WebTV trace described in Section 3 is well suited to our investigation because it reflects all client requests and corresponding server replies in a large, cacheless production environment.
3 Traces

We analyze aliasing in an anonymized Web client trace collected at WebTV Networks in September 2000, summarized in Table 1. The trace spans sixteen days and reflects over 347 million references to over 40 million resources by over 37,000 clients. It was collected using a specially instrumented ``cache-busting proxy,'' as described in Section 2.4. This allowed the proxy to record requests that would otherwise be served silently from browser caches.
In this paper we consider only successful (status code 200) transactions in the WebTV trace. We furthermore exclude transactions involving seventeen payloads for which accurate sizes are not available; these account for slightly over 100,000 transactions. Our reduced trace is summarized in the right-hand column of Table 1. Due to differences in trace-reduction procedures used in the two investigations, the summary statistics in Table 1 differ from those in Table 3 of Reference . As in the earlier paper we associate with each payload a single size that includes protocol overhead (HTTP headers). In the present study we add to each payload's Content-Length a median header size of 247 bytes, whereas the earlier paper used the maximum size-related field observed in any transaction involving a payload.
WebTV clients are inexpensive devices that enable Web surfing on conventional television sets. Most are used in homes, and all rely on telephone modems operating at roughly 33.6 Kbps. For our purposes the most important features of the WebTV trace are:
- it reflects activity in a cacheless system, i.e., both browser and proxy caches were disabled during data collection;
- the clients never used HTTP features such as Ranges or Delta encoding, so every reply body is self-contained; and
- it contains anonymized message digests of every reply body.
The last feature is crucial to the present investigation, for it illuminates the relationship between a URL and the data payload returned as a result of a specific access to that URL at a given instant. The first feature ensures that the payloads recorded in every transaction are those returned directly from the origin server; there is no chance that the WebTV proxy served (and logged) a stale payload from its cache. Another nice feature of the WebTV trace is that it contains no ``robots,'' which can be identified only with complex and unreliable heuristics.
While data collected via polling or crawling would shed light on aliasing across available resources, we must focus on a request stream in order to evaluate the cache performance impact of aliasing. Furthermore a client trace is essential in order to evaluate the impact of aliasing on browser/proxy cache hierarchies. The WebTV client trace is particularly attractive because it reflects workload in a well-integrated production environment controlled by a single organization; performance enhancements suggested by workload analysis are far easier to implement in such environments than in the overall Web.
The WebTV system is a somewhat peculiar environment, and it is reasonable to suspect that thin-client surfing might differ systematically from browsing with conventional rich clients. We therefore repeated our performance evaluations using a large proxy trace recorded on the Compaq corporate network. This trace, described in detail in Reference, is summarized in Table 2. Like the WebTV trace, the Compaq trace contains payload digests, but because browser caches were enabled it reflects only browser cache misses; the Compaq trace therefore cannot be used to evaluate browser cache performance. As with the WebTV data, we use a reduced Compaq trace containing only status-200 transactions; a small number of erroneous transactions are also excluded.
3.1 Trace Characteristics
The WebTV trace is large not merely in terms of number of clients and transactions but also in terms of its ``working set.'' The sum of distinct payload sizes in the WebTV trace is roughly 600 GB. At the time the trace was collected, thirty-one production-grade cache products competed in a ``Cache-Off'' benchmark exercise. The mean capacity of these caches was 83.5 GB and the median was 42 GB. The WebTV workload could fill the largest entrant's cache (315 GB) nearly twice. However, the trace's working set is not impossibly large by the standards of September 2000; the bank of modified WebTV proxies that collected the trace had a total capacity of roughly 600 GB. Similarly, the sum of distinct payload sizes requested by typical clients in the trace is moderately large for a set-top device, but not excessively so. The median client receives under 20 MB of distinct payloads, and over 26% of the client devices that generated the WebTV trace had larger browser caches. In Section 5 we consider browser and proxy caches sufficiently large that they suffer no capacity misses.
The popularity distributions of URLs and reply bodies affect cache performance, and these are shown in Figure 1. As in most Web workloads studied to date, the popularity distribution of URLs is Zipf-like, as is that of distinct data payloads. Table 3 reports Zipf parameters for the popularity distributions of both URLs and reply bodies, obtained by fitting to the WebTV data linear least-squares models of the form

  log y_i = log C - alpha * log i

where y_i is the number of references to the i-th most popular item and alpha is the Zipf parameter, using the algorithms described in References [27,44]. The Zipf parameters in the WebTV client trace are remarkably close to unity. Cunha et al. report similar findings on the Boston University client trace. By contrast, the Zipf parameter is often higher at servers and lower at proxies. For an interesting analysis of why the Zipf parameter varies at different levels in a cache hierarchy, see Padmanabhan & Qiu. Breslau et al. discuss the implications of Zipf-like popularity distributions for caching.
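A log-log least-squares Zipf fit of this kind can be sketched in a few lines of pure Python. This is an illustration on synthetic data, not the exact algorithm of References [27,44]:

```python
import math
from collections import Counter

def zipf_alpha(references):
    """Estimate the Zipf parameter alpha by a linear least-squares fit of
    log(reference count) against log(popularity rank)."""
    counts = sorted(Counter(references).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # alpha is the negated slope of the log-log fit

# Synthetic stream: item i is requested about 1000/i times (alpha = 1).
stream = [i for i in range(1, 51) for _ in range(round(1000 / i))]
print(round(zipf_alpha(stream), 2))  # close to 1.0
```

A parameter near unity, as observed in the WebTV trace, means reference counts fall off almost exactly as the inverse of popularity rank.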
We furthermore compute the Zipf parameter alpha of the payload popularity distribution for each client in the WebTV trace. Figure 2 shows the distributions of alpha across three subsets of the client population: 1) all clients that request more than thirty distinct payloads (this excludes only a handful of clients), 2) clients whose number of distinct referenced payloads is between the 25th and 75th percentiles, and 3) clients whose model fit is particularly good. The figure displays separately the distributions of alpha for the first two groups. Three remarkable features are apparent in Figure 2. First, for most clients a Zipf model describes the popularity of accessed payloads reasonably well. Second, whereas alpha for the overall trace (a shared proxy serving cacheless clients) is roughly 0.938, it is lower in over 87% of individual client reference streams; at present we have no theoretical explanation for why alpha should be higher at a shared proxy than at clients. Finally, alpha is noticeably higher in clients for which the Zipf model fit is close.
Figure 3 shows the cumulative concentration of references across URLs and payloads sorted in descending order of popularity. The top one percent of payloads account for over two thirds of all transactions and the top ten percent account for nearly 85%; for URLs the figures are respectively 62.2% and 82.4%. Concentration of references in the WebTV client trace is much stronger than in proxy traces (e.g., Figure 5a of Arlitt et al.). Browser caches filter reference concentration, as well as reference locality, from the original client reference streams, so that both locality and concentration are markedly lower in the reference stream that reaches proxies. See Figure 10 of Reference for server vs. proxy reference concentration.
3.2 Trace Representativeness
Any study that generalizes from one or two traces must consider whether they are representative of Web use in general. We compared our data sets with traces used in prior literature in terms of Zipf parameters (Section 3.1), response sizes, and MIME type distributions.
Table 4 shows mean and median response body sizes from several relatively large, recent trace sets. Our WebTV and Compaq traces roughly span the range of means and medians, except for those of the HP cable modem trace. The WebTV sizes are similar to (but slightly smaller than) the Berkeley sizes, consistent with the use of slow final hops in both environments. The Compaq sizes are not inconsistent with those from the AT&T and Washington broadband environments. We do not know why the cable modem sizes are so large.
Table 5 shows the fraction of transactions, bytes transferred, payloads, and working set associated with popular MIME types; all types that account for 1% or more in any category are shown. In terms of transactions and bytes transferred, the WebTV trace is roughly similar to other workloads reported in the literature, e.g., the AT&T trace described in Table 1 of Douglis et al. JPEG files are more prominent in WebTV's client trace, probably because client caches handled many JPEG accesses in the AT&T trace. In terms of distinct payloads, HTML is far more prevalent in the WebTV trace (54% vs. 24%). The practice of decomposing logical pages into multiple HTML frames, more common in September 2000 than in November 1996, might partly explain the difference.
Wolman et al. collected a large Web trace at the University of Washington using a packet sniffer in May 1999. Figure 1 of Reference reports the distribution of MIME types in this trace. Image files account for more transactions and more bytes transferred in the WebTV trace, probably due to client caching on the University of Washington campus.
On the basis of the available quantitative evidence, therefore, the WebTV and Compaq traces used in this paper appear consistent with other recent traces.
4 Prevalence of Aliasing
For the purposes of this section, a transaction record is a pair (U, P) where U is a request URL and P is a reply data payload. We say that a reply payload P is aliased if there exist two or more records (U, P), (U',P) containing the same reply payload P but different URLs U and U'. Similarly, we say that a URL U is modified if there exist two or more transactions containing U as the URL and different reply payloads P and P'. The degree of a payload is the number of distinct URLs that appear with it in transaction records, and the degree of a URL is the number of distinct reply payloads that appear with it in the trace. Aliased payloads and modified URLs each have degree two or greater.
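The definitions above translate directly into a pass over the trace's (U, P) records. The following sketch (hypothetical trace representation; payloads would in practice be digests) computes which payloads are aliased and which URLs are modified:

```python
from collections import defaultdict

def degrees(trace):
    """From (url, payload) records, compute the set of aliased payloads
    (degree >= 2: served under two or more distinct URLs) and the set of
    modified URLs (degree >= 2: returning two or more distinct payloads)."""
    urls_per_payload = defaultdict(set)
    payloads_per_url = defaultdict(set)
    for url, payload in trace:
        urls_per_payload[payload].add(url)
        payloads_per_url[url].add(payload)
    aliased  = {p for p, us in urls_per_payload.items() if len(us) >= 2}
    modified = {u for u, ps in payloads_per_url.items() if len(ps) >= 2}
    return aliased, modified

trace = [("u1", "p1"), ("u2", "p1"),   # p1 is aliased (payload degree 2)
         ("u3", "p2"), ("u3", "p3")]   # u3 is modified (URL degree 2)
aliased, modified = degrees(trace)
print(aliased, modified)  # -> {'p1'} {'u3'}
```

In the real analysis, payloads are represented by their anonymized message digests, so this computation requires only the trace, not the payload bytes themselves.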
Table 6 shows that aliased payloads account for over 54% of transactions and 36% of bytes transferred in the WebTV trace, suggesting that conventional URL-indexed caching might lead to many redundant transfers and much redundant network traffic. We address these issues in Section 5. The table also shows that under 5% of payloads are aliased, and aliased payloads constitute only 3% of the working set.
Note that because the WebTV trace was made with all caching disabled, many of the aliased-payload transactions in Table 6 would not have been seen in a more typical environment; they would have been avoided by traditional client caches. However, caching would also reduce the total number of transactions, so the fraction of aliased payloads in a cache-filtered reference stream depends (in part) on whether aliased payloads experience more or less locality than others, as well as on the specific cache configuration. We do not investigate this issue.
The distributions of the degrees of payloads and URLs in the WebTV trace are shown on the left in Figure 4. Fewer than 5% of payloads are aliased, but one is accessed via 348,491 different URLs. Similarly only 5.7% of URLs are modified, but one yields 491,322 distinct payloads. This analysis downplays the prevalence of aliasing and modification because it does not consider the number of times that different (URL, payload) pairs occur in the trace. The plot in the center shows the distributions of payload and URL degrees weighted by reference count. Whereas only 10% of transactions involve modified URLs, over 54% involve aliased payloads; reply-payload aliasing affects more transactions than resource modification. The plot on the right shows the distribution of bytes transferred by the degree of the payload involved; 36% of the bytes transferred in the WebTV workload involve aliased payloads.
In over 41 million successful transactions (12.72%) a payload is accessed through a different URL than in the previous access to the same payload. By contrast, under 14.3 million transactions (4.37%) involve a different payload than the previous transaction with the same URL. Here again the prevalence of aliasing exceeds that of resource modification. Note that this does not imply that aliasing causes more cache misses than resource modification; in fact, the reverse might be true. Our simulations do not address this question.
Following Douglis et al.  we compute for each multiply-referenced URL its ``change ratio,'' the fraction of its accesses that return a different data payload than its previous access. We furthermore compute for each multiply-referenced payload an analogous metric, the ``alias ratio,'' defined as the fraction of its accesses made through a different URL than its previous access. The distributions of change ratios and alias ratios across multiply-referenced URLs and payloads, respectively, are shown in Figure 5. The figure shows that 15.3% of multiply-referenced payloads are aliased and 12.4% of multiply-referenced URLs are modified. However the figure also shows that alias ratios are generally lower than change ratios. For example, only 2% of multiply-referenced payloads have alias ratios above 0.5 whereas 4.7% of multiply-referenced URLs have change ratios over 0.5.
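The two ratios can be computed per URL and per payload in one pass over the trace. The sketch below is one plausible reading of the definitions (using accesses after the first as the denominator; hypothetical trace data):

```python
from collections import defaultdict

def alias_and_change_ratios(trace):
    """Alias ratio of a payload: fraction of its accesses (after the first)
    made through a different URL than its previous access.  Change ratio of
    a URL: fraction of its accesses (after the first) returning a different
    payload than the previous access to that URL."""
    by_payload, by_url = defaultdict(list), defaultdict(list)
    for url, payload in trace:
        by_payload[payload].append(url)
        by_url[url].append(payload)
    def ratio(seq):  # fraction of consecutive accesses that differ
        return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
    alias  = {p: ratio(s) for p, s in by_payload.items() if len(s) > 1}
    change = {u: ratio(s) for u, s in by_url.items() if len(s) > 1}
    return alias, change

trace = [("u1", "p1"), ("u2", "p1"), ("u1", "p1"), ("u1", "p2")]
alias, change = alias_and_change_ratios(trace)
print(alias["p1"], change["u1"])  # -> 1.0 0.5
```

Here payload p1 is fetched through a different URL on both of its repeat accesses (alias ratio 1.0), while URL u1 returns a changed payload on one of its two repeat accesses (change ratio 0.5).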
4.1 Aliasing and Response Attributes
Techniques meant to eliminate redundant transfers usually impose some costs. If we could impose those costs only on those subsets of responses that are most likely to benefit from an alias elimination technique, we could (in principle) reduce overall costs without similarly reducing overall benefits.
Table 7 shows the prevalence of aliasing among popular MIME types in the WebTV trace. The table uses the same sort order as Table 5. Roughly 4.3% of payloads are served (at different times) with more than one MIME type; in such cases we define the payload's type to be the most common type.
Aliasing is most common among MIDI payloads: 35% of MIDI payloads are accessed via two or more different URLs, and over 80% of MIDI transactions involve aliased payloads. However, Table 5 shows that MIDI accounts for under 2% of all traffic and under 1% of all transactions.
GIF files account for over two thirds of transactions and over one third of bytes transferred in the WebTV trace (Table 5), and roughly two thirds of GIF transactions involve aliased payloads (Table 7). Taken together, these facts imply that nearly half of all transactions involve aliased GIF payloads (2/3 × 2/3 ≈ 44%). By contrast, aliasing is far less prevalent among HTML and JPEG payloads, which together account for roughly 29% of transactions and 48% of bytes transferred; fewer than 7.5% of transactions involve aliased HTML or JPEG payloads. Our findings are consistent with the hypothesis that Web authoring tools account for much of the aliasing in Web transactions; unfortunately our traces are anonymized in such a way as to prevent more detailed investigation of the issue.
Techniques that attempt to eliminate redundant payload transfers might best be applied to MIME types that are subject to frequent aliasing, such as images and audio. Frequently-used types that seldom suffer aliasing, such as HTML, should perhaps not be burdened with additional overheads.
We also examined the relationship between aliasing and payload size. Figure 6 shows several distributions involving the sizes of payloads in the WebTV trace. The top row of distributions shows that aliased payloads, and the transactions and bytes transferred due to them, tend to be smaller than their non-aliased counterparts. However, when we examine particular MIME types this generalization does not always hold. For example, aliasing is associated with slightly larger payload sizes in JPEG transactions and HTML traffic. Techniques that attempt to eliminate redundant payload transfers should add a minimal number of header bytes, since the bias towards aliasing of small payloads implies that potential benefits can easily be squandered.
4.2 Causes of Aliasing
Aliasing can arise in several different ways, e.g., deliberate mirroring, aliasing within a single site, and identical content available at different sites. We can further decompose the last cause into cases where the abs_path component of the URL is the same, or different. Table 8 provides examples of the possibilities.
Knowing the cause of aliasing can help us decide where to focus efforts at remediation. A site can replace an ad-hoc mirroring strategy with a CDN, which does not introduce aliasing into the HTTP namespace. Site administrators can avoid type 2 aliasing by following the advice of a CacheFlow white paper on cache-friendly site design. Unfortunately the widespread use of Web authoring tools can cause type 3 aliasing, and this is beyond the control of individual sites. Furthermore, aliasing occurs even within such tools: Dreamweaver, for instance, contains 632 unique image files under 642 different filenames.
The raw WebTV trace is anonymized in such a way that aliasing of types 1 and 3 cannot be distinguished. Furthermore the reduced trace that we use in our empirical work omits anonymized abs_path fields, preventing us from distinguishing between types 3 and 4. We can, however, identify cases where different URLs contain identical versus different host components.
Payloads of degree 2, i.e., payloads accessed via exactly two different URLs, fall into exactly one of the categories in Table 8. Degree-2 payloads account for 70% of aliased payloads, 31% of transactions involving aliased payloads, and 34.6% of aliased payload bytes transferred. 80.56% of degree-2 payloads are accessed via URLs with different host components; the remainder are cases of within-site aliasing.
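As an illustration, degree-2 payloads can be classified by comparing the host components of their two URLs. The sketch below is a hypothetical helper, not our actual trace-reduction code; it assumes full URLs are available, whereas our reduced trace exposes only anonymized host fields:

```python
from urllib.parse import urlsplit

def classify_degree2(url_pairs):
    """url_pairs: iterable of (url1, url2), the two URLs of each
    degree-2 payload.  Returns (cross_host, within_host) counts:
    pairs with different host components vs. within-site aliasing."""
    cross_host = within_host = 0
    for u1, u2 in url_pairs:
        if urlsplit(u1).hostname == urlsplit(u2).hostname:
            within_host += 1      # same host: within-site aliasing
        else:
            cross_host += 1       # different hosts: e.g., mirroring
    return cross_host, within_host
```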
5 Performance Implications
In any reference sequence, the first access to a given payload cannot be served from cache; we refer to these as ``new-payload'' misses. In this section we compare the payload miss rates of conventional URL-indexed caches with new-payload miss rates. The use of URLs to organize and locate payloads in conventional caches entirely accounts for the difference between the two, and as we explain below, new-payload miss rates represent an achievable bound on the performance of a sufficiently large cache.
As explained in Section 1.1 we use the terms ``payload hit'' and ``payload miss'' to mean respectively ``required payload is obtained from cache'' and ``payload must be fetched from elsewhere.'' In this section we assume infinite-capacity caches that require no removal policy and suffer no capacity misses. We furthermore assume that caches are semantically transparent, i.e., that a cache hit supplies exactly the same payload as the origin server would. This can easily be achieved in practice by revalidating every request, but for brevity we do not discuss freshness checks and revalidations in this section.
5.1 Abstract Cache Models
A conventional ``URL-indexed'' cache stores the most recently received data payload in association with each requested URL. In addition to new-payload misses, an infinite-capacity URL-indexed cache suffers a payload miss, and resulting redundant transfer, if it has previously received the payload that satisfies the current request, but this payload is not currently cached in association with the current request URL.
Just as it is possible to design a cache that wastes no storage on multiple copies of payloads [3,32], it is possible to design a cache that suffers only new-payload misses (assuming infinite capacity). One can construct such a ``frugal'' cache by requiring that the cache compute a message digest of every stored payload, that it maintain an index mapping digest values to stored payloads, and that every payload-bearing response message it receives be preceded by a digest of that message's payload, computed by the message sender (i.e., proxy or origin server). This allows the cache to avoid the payload transfer if it already stores a copy of the payload. The strategy is very simple: ``1) cache forever every payload you receive, and 2) before receiving a payload, verify via digest lookup that you don't already have it.'' This approach ensures that the current request is served from cache if the required payload has ever been received before, and therefore only new-payload misses occur; redundant transfers cannot.
Section 7 sketches a realizable protocol design that can avoid redundant payload transfers in this manner. For the present we simply note that new-payload miss rates represent an achievable bound on the performance of infinite-capacity caches. Figure 7 describes, in pseudocode, how our two cache models process requests. A ``frugal'' cache conceptually maintains two caches of received payloads: a ``u_cache'' indexed by URLs and a ``d_cache'' indexed by payload digest. As noted above, we omit consistency checks from our pseudocode for brevity and clarity; such checks are necessary in order to ensure semantic transparency.
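The two models of Figure 7 can be rendered as minimal Python classes (our own sketch; consistency checks are omitted, as in the pseudocode, and a payload is represented by its digest):

```python
class URLIndexedCache:
    """Infinite-capacity conventional cache: at most one payload per URL
    (the u_cache of Figure 7)."""
    def __init__(self):
        self.u_cache = {}                    # URL -> payload digest

    def request(self, url, digest):
        """digest identifies the payload the origin currently serves for url.
        Returns True if the payload must be transferred (a payload miss)."""
        miss = self.u_cache.get(url) != digest
        self.u_cache[url] = digest           # keep only the most recent payload
        return miss


class FrugalCache:
    """Never re-fetches a payload it has ever received: adds a second
    index keyed by payload digest (the d_cache of Figure 7)."""
    def __init__(self):
        self.u_cache = {}                    # URL -> payload digest
        self.d_cache = set()                 # digests of every payload received

    def request(self, url, digest):
        """Returns True only on a new-payload miss."""
        miss = digest not in self.d_cache
        self.d_cache.add(digest)             # cache forever every payload
        self.u_cache[url] = digest
        return miss
```

Replaying the same trace through both models isolates the redundant transfers: any miss the URL-indexed cache suffers but the frugal cache does not is attributable purely to URL-indexed organization.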
Our idealized cache models differ only in their susceptibility to payload misses due to the way they associate stored payloads with URLs. In other words, URL-indexed caching accounts for the difference between the payload miss rate of an infinite-capacity conventional cache and the new-payload miss rate inherent in the reference sequence. By comparing the two we can quantify precisely what fraction of URL-indexed cache payload transfers are redundant.
Aliased payloads can cause redundant transfers for URL-indexed caches, but are not the only cause. For example, a single resource (URL) whose value alternates between two payloads can also cause redundant transfers. Multiple causes may occur together, so a redundant transfer could fall into several categories. The abstract models in Figure 7, and our simulations in Section 5.2, do not isolate the contribution of aliasing to redundant transfers.
5.2 Simulation Results
We computed new-payload and URL-indexed payload miss rates and byte-weighted payload miss rates for 1) the aggregate browser cache population of over 37,000 clients, 2) a shared proxy cache serving cacheless clients, 3) a proxy serving infinite URL-indexed browser caches, and 4) a proxy serving infinite ``frugal'' browser caches. Table 9 shows our results, including the percentage of redundant payload retrievals made by an infinite URL-indexed cache. The table separately reports cold and warm proxy simulation results; we used the first nine days of transactions in the sixteen-day WebTV trace to warm the simulated proxy for the latter. We do not report warm client results because at no time are all client caches equally warm: At any given point in the overall trace, some clients have issued many requests while others have issued few or none.
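The ``percentage of redundant payload retrievals'' in Table 9 is the gap between the two abstract models: URL-indexed misses minus new-payload misses, as a fraction of URL-indexed misses. A self-contained Python sketch of that computation (the function name and trace format are our own):

```python
def redundant_fraction(trace):
    """trace: (URL, payload-digest) pairs in request order.  Returns the
    fraction of an infinite URL-indexed cache's payload transfers that
    are redundant, i.e., avoidable by a ``frugal'' cache."""
    u_cache, seen = {}, set()
    url_misses = new_payload_misses = 0
    for url, digest in trace:
        if u_cache.get(url) != digest:       # conventional cache must fetch
            url_misses += 1
        if digest not in seen:               # first receipt ever: unavoidable
            new_payload_misses += 1
        u_cache[url] = digest
        seen.add(digest)
    return (url_misses - new_payload_misses) / url_misses
```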
Our results show that conventional URL-indexed caching entails large numbers of redundant transfers at both levels of the cache hierarchy: nearly 10% of payload transfers to clients and over 20% of payload transfers to a shared proxy are redundant. Even if redundant proxy-to-browser payload transfers are eliminated by ``frugal'' browser caches, nearly 12% of payload transfers to a URL-indexed proxy would be redundant. Under 4% of the network traffic between the proxy and infinite-capacity conventional clients is redundant, but over 13% of the traffic reaching a URL-indexed proxy serving URL-indexed or cacheless clients is redundant.
The last two rows of Table 9 show simulated payload miss rates and byte-weighted payload miss rates for an infinite-capacity shared proxy serving the Compaq workload. For the warm-proxy results we warm the simulated cache with the first 50 million transactions and tabulate payload miss rates based only on subsequent transactions. The Compaq results are roughly similar to the WebTV results: Over 17% of a URL-indexed cache's payload retrievals are redundant, as is roughly 12% of origin-to-proxy traffic.
We have not yet extended our simulator to model finite caches of various sizes. We expect, however, that the finite-cache results would lie between our infinite-cache and cacheless results.
6 Explaining Redundant Transfers
The original motive for our investigation of aliasing was to explain the high rates of redundant proxy-to-browser payload transfers previously reported in the WebTV system. Actual client payload miss rates are far higher than predicted by a simulated client-cache model that included only new-payload misses and capacity payload misses.
Redundant transfers can result from at least three causes: 1) faulty metadata supplied by origin servers or intermediaries, 2) inappropriate browser cache management, and 3) URL-indexed cache organization. It now appears that URL-indexed caching accounts for a substantial fraction of redundant payload transfers to WebTV clients, but not all of them. A thorough investigation of the remaining possibilities is the subject of our ongoing research; we offer a few tentative observations below.
Inappropriate metadata appears frequently in reply headers, and sometimes takes surprising forms. For instance, in the WebTV trace, different replies from the same server containing the same payload sometimes contain different entity tags. This curious phenomenon can cause ``If-None-Match'' revalidation attempts to fail needlessly, resulting in redundant payload transfers. Other researchers have explained this problem, which arises when large server farms fail to harmonize entity tags across server replicas.
Mogul investigated erroneous HTTP timestamps in a large trace and reported that 38% of responses contained impossible Date header values, and 0.3% had impossible Last-Modified values. Some timestamp errors might cause transparency failures; others might cause needless revalidations. Wills & Mikhailov report a different kind of timestamp error: the Last-Modified reply header of a resource sometimes changes even when the reply body does not.
Anecdotal evidence suggests that Web design tools do not encourage content creators to associate reasonable expiration dates with pages. This is unfortunate because many commercial sites might incorporate business rules into Expires headers, e.g., ``resources are only modified during business hours''; however, such practices seem to be rare. Finally, origin servers are not the only source of faulty metadata: for example, the popular Squid proxy does not update cached object headers after revalidations.
The WebTV browser cache might go too far in its efforts to avoid serving stale content to users. It truncates expiration dates to a maximum of 24 hours, and it evicts expired items rather than re-validating them with conditional GET requests [11,51]. The Mozilla browser cache, by contrast, is designed to comply with the letter and spirit of HTTP/1.1 [24,39]. WebTV's strategy prevents transparency failures when expiration dates are overly optimistic and might simplify implementation in memory-constrained client devices, but it might also inflate client miss rates. In future work we intend to quantify the relative contributions of faulty metadata and browser caching policies to redundant transfers.
7 Avoiding Redundant Transfers
In Section 5.1, we described an abstract model for a cache that suffers only new-payload misses. Here we sketch how this could be realized in a practical protocol design, as an extension to HTTP. Our design averts all redundant payload transfers, including but not limited to those caused by aliasing. We call our design ``Duplicate Transfer Detection'' (DTD). DTD can be applied both to client and proxy caches.
First, consider the behavior of a traditional HTTP cache. Such a cache is URL-indexed: if the cache finds that it does not currently hold an entry for a requested URL U, this is a cache miss. On a miss, the cache issues or forwards a request for the URL towards the origin server, which would normally send a response containing payload P. If the cache holds an expired entry for U, it may send a ``conditional'' request, and if the server's view of the resource has not changed, it may return a ``Not Modified'' response without a payload. Real HTTP caches differ from the abstract URL-indexed model defined in Section 5.1 because they implement HTTP's cache-consistency mechanisms, and so may suffer redundant transfers resulting from inappropriate metadata (see Section 6) as well as from aliasing.
Now consider an idealized, infinite cache that retains in storage every payload it has ever received, even those that a traditional HTTP cache would not treat as valid cache entries. A finite, URL-indexed cache differs from this idealization because it implements both an update policy (it only stores the most recent payload received for any given URL), and a replacement policy (it only stores a finite set of entries, selected for maximum expected value).
The concept behind Duplicate Transfer Detection is quite simple: if our idealized cache can determine, before receiving the payload, whether it has ever previously received P, then we can avoid transferring that payload. Such a cache would experience only new-payload misses and would never suffer redundant payload transfers. A finite-cache realization of DTD would, of course, also suffer capacity misses.
How does the cache know whether it has received a payload P before the server sends the entire response? DTD follows the model of the abstract ``frugal'' cache described in Section 5.1. The cache maintains one set of cache entries but two lookup tables: one indexed by URL, and one indexed by the digest of each stored payload. If a DTD cache finds no fresh entry under the requested URL U, it forwards a (possibly conditional) request to the origin server. If the server replies with a payload, it first sends the digest D of the payload, and the cache checks for a stored entry with a matching digest value. Upon a digest match, the cache can signal the server not to send the payload (although the server must still send the HTTP message headers, which might be different). Thus, while DTD does not avoid transferring the request and response message headers, it can avoid any redundant payload transfer. We say that a ``DTD hit'' occurs when DTD prevents a payload transfer that would have occurred in a conventional URL-indexed cache.
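A minimal simulation of one DTD transaction under the ``proceed'' model (a sketch, not a protocol implementation: the digest-first exchange is modeled as a local function call, and the hypothetical `cache` is a plain dict holding the two lookup tables):

```python
import hashlib

def dtd_exchange(cache, url, server_payload):
    """Simulate one DTD transaction.  Returns (payload, payload bytes
    actually transferred over the network)."""
    digest = hashlib.sha1(server_payload).hexdigest()    # sent ahead of the body
    if digest in cache['by_digest']:
        payload, sent = cache['by_digest'][digest], 0    # DTD hit: skip the body
    else:
        payload = server_payload                         # client says ``proceed''
        sent = len(server_payload)
        cache['by_digest'][digest] = payload             # cache forever
    cache['by_url'][url] = digest                        # URL index updated either way
    return payload, sent
```

Note that even on a DTD hit the response headers still travel to the client; only the payload bytes are saved.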
An idealized, infinite DTD cache stores all payloads that it has received. In particular, it does not delete a payload P from storage simply because it has received a different payload P' for the same URL U. A realistic, finite DTD cache will eventually delete payloads from its storage, based on some replacement policy. A DTD cache might benefit from retaining old cache entries that other cache replacement and update algorithms would discard, speculating that such an entry will yield a future DTD hit.
7.1 Practical Issues for DTD
Practical implementation of DTD requires the solution of several problems. We mention a few of the problems here, but defer their solutions, and discussion of how to define DTD as a simple, compatible extension to HTTP, to a future paper.
When a client detects a DTD hit, how does it avoid the payload transfer? In one approach, the server sends the response headers (including the digest) but defers sending the payload until the client sends an explicit ``proceed'' request. In an alternative approach, the server sends the payload immediately, but stops if the client sends an ``abort'' message. (Note that HTTP would have to be extended to support either of these mechanisms.) The ``proceed'' model imposes an extra round-trip time (RTT) on every new-payload or capacity miss, but never sends any aliased payload bytes. (A more intricate form of the ``proceed'' model could amortize this delay over several misses.) The ``abort'' model does not impose additional delays, but the abort message may fail to reach the server in time to do any good. Choosing the right tradeoff will require experimental work.
The choice of digest algorithm also requires some tradeoffs. The algorithm must resist accidental or malicious collisions, but it must also not be expensive to compute, nor should the digest representation consume too many header bytes. A cryptographic hash algorithm such as SHA-1 might have the right properties.
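For concreteness, a SHA-1 digest is 20 bytes, or 28 ASCII characters if carried base64-encoded in a header (the encoding choice is our assumption, not part of the design):

```python
import base64, hashlib

payload = b"x" * 8192                        # a hypothetical 8 KB payload
digest = hashlib.sha1(payload).digest()      # 160-bit SHA-1 digest: 20 bytes
header_value = base64.b64encode(digest)      # 28 bytes as an ASCII header token
overhead = len(header_value) / len(payload)  # fraction of extra header bytes
```

For an 8 KB payload this overhead is well under 1%, but Section 4.1 showed that aliased payloads tend to be small, so the relative cost of digest headers can matter.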
7.2 Similar Proposals
DTD is an application-level analogue of the router-level approach proposed by Santos & Wetherall and Spring & Wetherall. The two approaches are in some sense complementary, because each can avoid some redundant data transfers eliminated by the other. For example, the router-based approach can eliminate transfers of common prefixes of slightly different payloads, while DTD does not suffer from the re-packetization potentially caused by pipelining in HTTP. The approaches also differ in adoption dynamics: the router-level technique is easier to deploy for an organization that controls network infrastructure, while the application-level technique may be preferable for a single organization that controls two levels of a cache hierarchy (e.g., the client and proxy caches of AOL or WebTV).
8 Conclusions
Our analysis of a large, detailed, recent Web client trace reveals that aliasing occurs frequently in Web transactions. Conventional URL-indexed caches are susceptible to misses caused by aliasing, and in our simulations such caches generate many redundant payload transfers: roughly 10% of payload transfers to infinite-capacity browser caches and over 20% of payload transfers to an infinite shared proxy serving either cacheless or infinite clients are redundant.
Aliasing is not the only cause of redundant payload transfers, so we describe a simple and completely general way to eliminate all redundant transfers, regardless of cause: Duplicate Transfer Detection. In future work we intend to quantify precisely the relative contributions of aliasing, faulty metadata, and inefficient cache management to the problem of redundant payload transfers, and describe in greater detail the latency and bandwidth savings achievable with DTD.
Acknowledgments
We thank WebTV Networks for allowing us to study their trace, and especially Stuart Ozer, Arnold de Leon, and Jay Logue (and others) for gathering the trace, and David Surovell and Jake Brutlag for describing how WebTV client caches work. Compaq's Western Research Lab gave us access to their trace and generous equipment support, and Geoff Voelker of UC San Diego lent us additional system time. Glenn Cooper (Compaq WRL) and Jeff Kopmanis (U. Michigan AI Lab) provided computer support services.
Mikhail Mikhailov explained several puzzling phenomena we observed in our data, and with Brian Davison and Dave Pennock directed us to relevant literature. Mikhailov furthermore performed for us several tabulations of his own data sets to investigate reply header sizes and cookie sizes. Martin Arlitt of HP Labs answered several statistical questions. Mark Manasse helped us apply the Birthday Paradox. Peter Honeyman guided Kelly's research at the University of Michigan.
Finally, we thank John Dilley, Mikhail Mikhailov, Jake Brutlag, Flavia Peligrinelli Ribeiro, and the anonymous reviewers for carefully reading drafts and offering thoughtful, detailed comments.
References
- V. Almeida, D. Menascé, R. Riedi, F. Peligrinelli,
R. Fonseca, and W. Meira Jr.
Analyzing Web robots and their impact on caching.
In Proc. 6th Web Caching Workshop, June 2001.
- M. Arlitt, R. Friedrich, and T. Jin.
Workload characterization of a Web proxy in a cable modem environment.
Technical Report HPL-1999-48, HP Labs, 1999.
- H. Bahn, H. Lee, S. H. Noh, S. L. Min, and K. Koh.
Replica-aware caching for Web proxies.
Computer Communications, 25(3):183-188, Feb. 2002.
- P. Barford and M. Crovella.
Generating representative Web workloads for network and server performance evaluation.
In Proc. ACM SIGMETRICS, pages 151-160, July 1998.
- K. Bharat and A. Broder.
Mirror, mirror on the Web: A study of host pairs with replicated content.
In Proc. 8th WWW Conf., May 1999.
- K. Bharat, A. Broder, J. Dean, and M. R. Henzinger.
A comparison of techniques to find mirrored hosts on the WWW.
In Proc. Workshop on Organizing Web Space at 4th ACM Conference on Digital Libraries, Aug. 1999.
- L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker.
Web caching and Zipf-like distributions: Evidence and implications.
In Proc. IEEE INFOCOM, Mar. 1999.
- B. E. Brewington.
Observation of changing information sources.
PhD thesis, Dartmouth, June 2000.
- B. E. Brewington and G. Cybenko.
How dynamic is the web?
In Proc. 9th WWW Conf., May 2000.
- A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig.
Syntactic clustering of the Web.
In Proc. 6th WWW Conf., Apr. 1997.
- J. Brutlag.
- R. Cáceres, B. Krishnamurthy, and J. Rexford.
HTTP 1.0 logs considered harmful.
Position Paper, W3C Web Characterization Group Workshop, Nov. 1998.
- CacheFlow Corporation.
White paper: Creating a cache-friendly Web site, Apr. 2001.
- L. D. Catledge and J. E. Pitkow.
Characterizing browsing strategies in the World-Wide Web.
Computer Networks and ISDN Systems, 27(6):1065-1073, Apr. 1995.
- I. Cooper and J. Dilley.
RFC 3143: Known HTTP proxy/caching problems, June 2001.
- C. R. Cunha, A. Bestavros, and M. E. Crovella.
Characteristics of WWW client-based traces.
Technical Report BU-CS-95-010, Boston U. CS Dept., July 1995.
- B. D. Davison.
Web traffic logs: An imperfect resource for evaluation.
In Proc. 9th Annual Conf. of the Internet Society, June 1999.
- J. Dilley.
- J. Dilley and M. Arlitt.
Improving proxy cache performance--analyzing three cache replacement policies.
Technical Report HPL-1999-142, HP Labs, Oct. 1999.
- F. Douglis, A. Feldmann, B. Krishnamurthy, and J. Mogul.
Rate of change and other metrics: A live study of the World Wide Web.
In Proc. 1st USITS, pages 147-158, Dec. 1997.
- A. Feldmann.
Continuous online extraction of HTTP traces from packet traces.
In Proc. W3C Web Characterization Group Workshop, 1999.
- A. Feldmann, R. Cáceres, F. Douglis, G. Glass,
and M. Rabinovich.
Performance of Web proxy caching in heterogeneous bandwidth environments.
In Proc. IEEE INFOCOM, Mar. 1999.
- R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter,
P. Leach, and T. Berners-Lee.
RFC 2616: Hypertext transfer protocol--HTTP/1.1, June 1999.
- D. Fisher.
- S. Gribble and E. Brewer.
System design issues for Internet middleware services: Deductions from a large client trace.
In Proc. 1st USITS, pages 207-218, Dec. 1997.
- A. Iyengar and J. Challenger.
Improving Web server performance by caching dynamic data.
In Proc. 1st USITS, pages 49-60, Dec. 1997.
- R. Jain.
The Art of Computer Systems Performance Analysis.
John Wiley & Sons, 1991.
- T. Kelly.
Thin-client Web access patterns: Measurements from a cache-busting proxy.
Computer Communications, 25(4):357-366, Mar. 2002.
- M. Koletsou and G. M. Voelker.
The Medusa proxy: A tool for exploring user-perceived Web performance.
In Proc. 6th Web Caching Workshop, June 2001.
- B. Krishnamurthy and M. Arlitt.
PRO-COW: Protocol compliance on the Web--a longitudinal study.
In Proc. 3rd USITS, pages 109-122, Mar. 2001.
- Macromedia.
Dreamweaver, Nov. 2001.
- P. Mattis, J. Plevyak, M. Haines, A. Beguelin, B. Totty,
and D. Gourley.
U.S. Patent #6,292,880: ``Alias-free content-indexed object cache'', Sept. 2001.
- M. Mikhailov and C. E. Wills.
Change and relationship-driven content caching, distribution and assembly.
Technical Report WPI-CS-TR-01-03, Worcester Polytechnic Institute, Mar. 2001.
- J. C. Mogul.
Errors in timestamp-based HTTP header values.
Technical Report 99/3, Compaq Western Research Laboratory, Dec. 1999.
- J. C. Mogul.
A trace-based analysis of duplicate suppression in HTTP.
Technical Report 99/2, Compaq Western Research Laboratory, Nov. 1999.
- J. C. Mogul.
Squeezing more bits out of HTTP caches.
IEEE Network, 14(3):6-14, May/June 2000.
- J. C. Mogul, F. Douglis, A. Feldmann, and B. Krishnamurthy.
Potential benefits of delta encoding and data compression for HTTP (corrected version).
Technical Report 97/4a, Digital Western Research Lab, Dec. 1997.
- J. C. Mogul, B. Krishnamurthy, F. Douglis, A. Feldmann,
Y. Y. Goland, A. van Hoff, and D. M. Hellerstein.
RFC 3229: Delta encoding in HTTP, Jan. 2002.
- FAQ of the Caching Mechanism in [Mozilla] 331 Release, Apr. 2000.
- A. Muthitacharoen, B. Chen, and D. Mazieres.
A low-bandwidth network file system.
In Proc. 18th SOSP, pages 174-187, Oct. 2001.
- National Institute of Standards and Technology.
Secure hash standard.
FIPS Pub. 180-1, U.S. Dept. of Commerce, Apr. 1995.
- H. Nordstrom.
Squid cache revalidation and metadata updates.
Posting to squid-dev mailing list, Oct. 2001.
- V. N. Padmanabhan and L. Qiu.
The content and access dynamics of a busy Web server: Findings and implications.
In Proc. ACM SIGCOMM, pages 111-123, Aug. 2000.
- W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery.
Numerical Recipes in C.
Cambridge University Press, second edition, 1992.
- A. Rousskov and D. Wessels.
The third cache-off: The official report.
Technical report, The Measurement Factory, Inc., Oct. 2000.
- J. Santos and D. Wetherall.
Increasing effective link bandwidth by suppressing replicated data.
In Proc. USENIX Annual Technical Conf., June 1998.
- N. Shivakumar and H. Garcia-Molina.
Finding near-replicas of documents on the Web.
In Proc. Workshop on Web Databases, Mar. 1998.
- B. Smith, A. Acharya, T. Yang, and H. Zhu.
Exploiting result equivalence in caching dynamic content.
In Proc. 2nd USITS, pages 209-220, Oct. 1999.
- N. T. Spring and D. Wetherall.
A protocol-independent technique for eliminating redundant network traffic.
In Proc. ACM SIGCOMM, pages 87-95, Aug. 2000.
- D. Surovell.
- A. van Hoff, J. Giannandrea, M. Hapner, S. Carter,
and M. Medin.
The HTTP distribution and replication protocol.
Technical Report NOTE-DRP, World Wide Web Consortium, Aug. 1997.
- Web Polygraph.
- C. E. Wills and M. Mikhailov.
Examining the cacheability of user-requested Web resources.
In Proc. 4th Web Caching Workshop, Apr. 1999.
- C. E. Wills and M. Mikhailov.
Towards a better understanding of Web resources and server responses for improved caching.
In Proc. 8th WWW Conf., May 1999.
- C. E. Wills and M. Mikhailov.
Studying the impact of more complete server information on Web caching.
In Proc. 5th Web Caching Workshop, May 2000.
- A. Wolman, G. Voelker, N. Sharma, N. Cardwell, M. Brown,
T. Landray, D. Pinnel, A. Karlin, and H. Levy.
Organization-based analysis of Web-object sharing and caching.
In Proc. 2nd USITS, Oct. 1999.
- A. Wolman, G. M. Voelker, N. Sharma, N. Cardwell,
A. Karlin, and H. M. Levy.
On the scale and performance of cooperative Web proxy caching.
Operating Systems Review, 34(5):16-31, Dec. 1999.
Originally in SOSP '99.
- X. Zhang.
Cachability of Web objects.
Technical Report 2000-019, Boston U. CS Dept, Aug. 2000.
- The number of client hosts in Table 2 is an underestimate, due to lost backup tapes.
- Our simulations ignore ``no-store'' Cache-Control directives, which forbid payloads from being cached (``no-cache'' merely requires that the payload be re-validated on every access). Only 0.14% of WebTV replies carry no-store; had our simulated caches heeded these directives the impact on payload miss rates would have been negligible.
Jeffrey Mogul 2002-03-20