Aggregate Cache Management
In IMANET, caching data items in the local cache helps reduce latency and increase accessibility. If an MT is located along the path over which a request packet travels to an AP, and has the requested data item in its cache, then it can serve the request without forwarding it to the AP. In the absence of caching, all requests must be forwarded to the appropriate APs. Since the local caches of the MTs virtually form an aggregate cache, the decision whether to cache a data item depends not only on the MT itself, but also on the neighboring MTs. Therefore, we propose a cache admission control and a cache replacement algorithm.
Cache Admission Control: When an MT receives the requested data item, cache admission control is triggered to decide whether it can cache this item. In this paper, the cache admission control allows an MT to cache a data item based on its distance to the APs or MTs that already have the requested data item. If the MT is located within Δ hops of them, it does not cache the data item; otherwise, it caches the data item. The cached data items can then be used by closely located MTs. Therefore, copies of the same data item are cached at least Δ hops apart. Here, Δ is a system parameter.
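The admission check can be sketched as follows. The function name and parameters are illustrative, and treating a copy exactly Δ hops away as "far enough" is an assumption, since the text only states that copies end up at least Δ hops apart:

```python
def admit_to_cache(hops_to_nearest_copy: int, delta: int) -> bool:
    """Cache admission control sketch: cache a received data item only
    if every known copy (at an AP or another MT) is at least `delta`
    hops away, so identical copies stay spread across the aggregate
    cache.  The >= boundary is an assumption."""
    return hops_to_nearest_copy >= delta

# Example: with delta = 3, a copy 2 hops away blocks caching,
# while a copy 4 hops away does not.
```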
The primary idea is to cache as many distinct data items as possible in order to increase accessibility, while avoiding excessive duplication. Although aggressively caching popular data items at nearby MTs helps reduce latency, in this work we give more weight to data accessibility than to access latency. The rationale is that reducing access latency is meaningless when a set of MTs is isolated from the other MTs or the AP and cannot access any data items of interest. Rather than waiting until the network topology changes, it is better for the MTs to have a high probability of finding the requested data items. Since a larger Δ value spreads more distinct data items over the entire aggregate cache through admission control, more data items become accessible and thus the overall data accessibility is increased.
Cache Replacement Policy: A cache replacement policy is required when an MT wants to cache a data item but the cache is full, so a victim data item must be selected for replacement. Two factors are considered in selecting a victim. The first factor is the distance (δ), measured as the number of hops from the AP or MT that has the requested data item. Since δ is closely related to the latency, evicting a data item with a higher δ would make re-fetching it expensive. Therefore, the data item with the least δ value is selected as the victim.
The second factor is the access frequency of the data items. Due to the mobility of the MTs, the network topology may change frequently, and as the topology varies, the recorded δ values become obsolete. Therefore, we use a parameter (τ) that captures the elapsed time since δ was last updated. The value is obtained by τ = t_update / t_cur, where t_cur and t_update are the current time and the time δ was last updated for the data item, respectively. If τ is close to 1, δ has recently been updated; if it is close to 0, the update gap is long. Thus, τ is used as an indicator of the reliability of δ when selecting a victim. In this paper, we suggest a Time and Distance Sensitive (TDS) replacement policy based on these two factors.
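The victim selection can be sketched as below, assuming the freshness measure τ = t_update / t_cur described above. Weighting δ by τ to discount distances that are likely obsolete is one plausible way to combine the two factors; the exact TDS formula is not given in this section, so the product δ · τ is an assumption:

```python
def tau(t_update: float, t_cur: float) -> float:
    """Freshness of a recorded distance: close to 1 if delta was
    updated recently, close to 0 if the record is old."""
    return t_update / t_cur

def select_victim(cache: dict, t_cur: float):
    """cache maps item_id -> (delta_hops, t_update).  Evict the item
    with the smallest freshness-weighted distance delta * tau, i.e.
    the item that is cheapest to re-fetch once stale distance
    information is discounted.  The product is an assumed
    combination of the two TDS factors, not the paper's formula."""
    return min(cache, key=lambda k: cache[k][0] * tau(cache[k][1], t_cur))
```

For example, an item 1 hop away with a recently updated δ is evicted before an item 5 hops away whose δ is stale, since the nearby item is cheap to re-fetch.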
Sunho Lim