|Ph.D. Student||Rashkovits Rami|
|Subject||Preference-Based Content Management in Wide Area Networks|
|Department||Department of Industrial Engineering and Management||Supervisor||Professor Avigdor Gal|
|Full Thesis text|
This thesis introduces and establishes a theoretical foundation for modeling preference-based content management. Preference-based content management involves a caching mechanism that serves clients accessing content while letting them express preferences regarding the time they are willing to wait and the level of obsolescence they are willing to tolerate. The main goal of this thesis is to provide cache managers with a cache policy that lets them decide whether a cached copy is sufficiently fresh for a user's needs, or whether fresher content must be downloaded. Contemporary caches decide whether to deliver cached content or forward clients' requests to the origin servers based on an arbitrary time-to-live value set by servers, or heuristically estimated by the caches. As a result, some users wait a long time for fresh content fetched from the origin server although they would settle for obsolescent content, while other users receive the local copy, which is considered valid, although they would be willing to wait longer for fresher content.

In this work a new model for caching is introduced, in which a cache manager considers three dimensions of user preferences, namely popularity, freshness, and latency, and is capable of balancing the relative importance of each dimension. A novel approach is proposed, based on the observation that users can specify their tolerance of content obsolescence using a simple method, and servers can supply content update patterns. The cache uses a cost model to determine which of three alternatives is most promising: delivery of the local copy, delivery of a copy from a cooperating cache, or delivery of a fresh copy from the origin server. A preference-based replacement policy is also introduced to assist caches with storage decisions, such as which objects to keep under capacity constraints.
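The three-way delivery decision described above can be sketched as a small cost model. This is an illustrative reading, not the thesis's actual formulation: the names (`Preferences`, `choose_source`), the linear weighting of latency against freshness, and the assumption of Poisson-distributed content updates are all my own simplifications.

```python
# Illustrative sketch of a preference-based delivery decision.
# All names and formulas are hypothetical, not taken from the thesis.
from dataclasses import dataclass
import math

@dataclass
class Preferences:
    max_latency: float       # seconds the user is willing to wait
    max_staleness: float     # seconds of obsolescence the user tolerates
    w_latency: float = 0.5   # relative importance of latency
    w_freshness: float = 0.5 # relative importance of freshness

def staleness_cost(age: float, update_rate: float, prefs: Preferences) -> float:
    """Obsolescence penalty: probability the object changed since it was
    cached (assuming Poisson updates at `update_rate` per second), scaled
    by how far the copy's age sits relative to the user's tolerance."""
    p_stale = 1.0 - math.exp(-update_rate * age)
    return prefs.w_freshness * p_stale * (age / max(prefs.max_staleness, 1e-9))

def latency_cost(latency: float, prefs: Preferences) -> float:
    return prefs.w_latency * (latency / max(prefs.max_latency, 1e-9))

def choose_source(age_local, age_coop, lat_local, lat_coop, lat_origin,
                  update_rate, prefs):
    """Pick the cheapest alternative: local copy, cooperating-cache copy,
    or a fresh fetch from the origin server (no staleness cost)."""
    costs = {
        "local":  latency_cost(lat_local, prefs)
                  + staleness_cost(age_local, update_rate, prefs),
        "coop":   latency_cost(lat_coop, prefs)
                  + staleness_cost(age_coop, update_rate, prefs),
        "origin": latency_cost(lat_origin, prefs),
    }
    return min(costs, key=costs.get)
```

With these (made-up) numbers, an impatient user tolerating hour-old content gets the local copy, while a freshness-sensitive user who is willing to wait is sent to the origin server, which matches the trade-off the abstract describes.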
The suggested replacement policy considers user preferences and gives a higher score to objects that are more likely to be served from the local proxy. A preference-based pre-fetching mechanism is also introduced, assisting cache managers with proactive content management and aiming to further improve cache performance by fetching content ahead of time to reduce latency. The proposed mechanism uses user preferences and the content's update pattern to schedule the download so that the content is present in the cache when the next access takes place and is still considered fresh enough for the user's needs.
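The retention scoring and the pre-fetch timing described above might be sketched as follows. Both functions are hypothetical readings of the abstract, not the thesis's actual policies: the product-form score and the "start as late as possible" scheduling rule are my own illustrative choices.

```python
# Illustrative sketches of a preference-based replacement score and a
# pre-fetch scheduler; names and formulas are hypothetical.
from typing import Optional

def replacement_score(popularity: float, p_fresh_at_next_access: float,
                      fetch_latency: float) -> float:
    """Score an object for retention: objects that are popular, likely to
    still satisfy user freshness preferences at the next access, and
    costly to re-fetch score higher; the lowest-scoring object is evicted
    first under capacity pressure."""
    return popularity * p_fresh_at_next_access * fetch_latency

def schedule_prefetch(now: float, next_access_eta: float, fetch_time: float,
                      current_copy_age: float,
                      tolerated_staleness: float) -> Optional[float]:
    """Decide when to start a proactive download.

    Returns an absolute start time, or None if the cached copy will still
    satisfy the user's staleness tolerance at the predicted access time."""
    age_at_access = current_copy_age + (next_access_eta - now)
    if age_at_access <= tolerated_staleness:
        return None  # cached copy is good enough; no download needed
    # Start as late as possible so the new copy is as fresh as possible at
    # access time, while still completing before the predicted access.
    return max(now, next_access_eta - fetch_time)
```

The `None` return models the abstract's point that a pre-fetch is only worthwhile when the local copy would otherwise be too obsolescent for the user at the predicted access time.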