The following sections describe aspects of the caching framework.
The cache system is built by combining one or more layers into a single, logical structure. Each layer sits atop another, creating a hierarchy that is used when satisfying requests. Cache requests are processed in the order specified by the hierarchy. In a typical setup, the fastest caches (such as the HttpContext cache) sit higher than slower caches (such as a distributed cache like AppFabric).

When an object is found in a cache, that object is placed in all caches that lead back to the caller. If, for example, an object is found in the 2nd-level cache, it is copied to the 1st-level cache before being returned to the caller. This pushes data from farthest to closest and, more often than not, improves performance due to locality principles.
All cache layers support one or more of the available CacheScope values: Context, Process, and Distributed. When a request comes into the system, a CacheScope will be specified along with the operation to perform. Each cache layer is checked to see if it supports the requested scope. If it does, that layer will attempt to satisfy the request.
For example, the built-in type HttpContextCache works in [ Context ] scope, while AspNetCache uses [ Process ] scope. If an item is placed in [ Context ] scope, only the HttpContextCache will honor the request; the AspNetCache will ignore it, as it only supports [ Process ] scope. In contrast, if the item specifies both scopes [ Context, Process ], then both caches will be used.
Depending on the operation, the request is checked against some or all of the layers defined in the hierarchy. In the example above, the operation was a Put (store an object in cache) and therefore went to all caches. A Get, however, flows differently: each cache's supported scope is still checked, but once one cache finds the object, processing stops and no further caches are searched. There is no reason to check lower caches once the requested object has been found (and, typically, caches get slower the lower you go in the hierarchy).
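The layered flow described above can be sketched in a few lines. This is a minimal illustration, not the Evolution API: the names CacheLayer, LayeredCache, and the three CacheScope values mirror the concepts in this document but are otherwise invented for the example.

```python
from enum import Flag, auto

class CacheScope(Flag):
    """Illustrative stand-in for the platform's CacheScope values."""
    Context = auto()
    Process = auto()
    Distributed = auto()

class CacheLayer:
    """One layer in the hierarchy, supporting one or more scopes."""
    def __init__(self, name, supported_scope):
        self.name = name
        self.supported_scope = supported_scope
        self._store = {}

    def supports(self, scope):
        # A layer participates if it supports any requested scope.
        return bool(self.supported_scope & scope)

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        self._store[key] = value

class LayeredCache:
    """Layers ordered fastest (closest to the caller) to slowest."""
    def __init__(self, layers):
        self.layers = layers

    def put(self, key, value, scope):
        # A Put goes to every layer that supports the requested scope.
        for layer in self.layers:
            if layer.supports(scope):
                layer.put(key, value)

    def get(self, key, scope):
        for i, layer in enumerate(self.layers):
            if not layer.supports(scope):
                continue
            value = layer.get(key)
            if value is not None:
                # Found: copy the object back into the faster layers that
                # lead to the caller, then stop searching slower layers.
                for upper in self.layers[:i]:
                    if upper.supports(scope):
                        upper.put(key, value)
                return value
        return None
```

A Get that is satisfied by the distributed layer backfills the process-level layer on its way back to the caller, matching the promotion behavior described above.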
Every item in the cache has an associated key that identifies it. Keys are not case-sensitive and are always stored in their lowercase form.

In addition to a key, you can further identify an item by placing one or more tags on it. Like keys, tags are not case-sensitive and are stored in their lowercase form.
Tags are not searchable, but they are useful for grouping data so that you can expire all items in the same group together. While tags are powerful in this regard, they should never be used as a substitute for a good key-generation algorithm. Tag operations are very slow compared to single-key lookups and removals, so be very selective about how and when you use them.
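The key and tag behavior above can be sketched as follows. This is an illustrative model only (the class and method names are invented, not the Evolution API); it shows keys and tags being lowercased on the way in, and why expiring by tag is a group-wide sweep rather than a single-key removal.

```python
class TaggedCache:
    """Sketch of case-insensitive keys/tags and tag-based expiration."""
    def __init__(self):
        self._store = {}   # key -> value
        self._tags = {}    # tag -> set of keys carrying that tag

    def put(self, key, value, tags=()):
        key = key.lower()  # keys are always stored lowercased
        self._store[key] = value
        for tag in tags:
            self._tags.setdefault(tag.lower(), set()).add(key)

    def get(self, key):
        return self._store.get(key.lower())

    def expire_by_tag(self, tag):
        # Removes every item carrying the tag: a full group sweep,
        # which is why tag operations cost more than key removals.
        for key in self._tags.pop(tag.lower(), set()):
            self._store.pop(key, None)
```

Expiring the tag "users" here removes every item tagged with it, regardless of the casing used when the items were stored.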
Included in the Evolution cache system is a mechanism that allows runtime adjustments to cache operations. This lets users override default values used in the Evolution platform (or added by a third-party component) when they have a site-specific need to do so. Overrides should be used sparingly, but they offer a level of control that can yield a large benefit.
Currently, there are two types of overrides: object overrides and regular expression overrides. An object override intercepts requests at the System.Type level, while a regular expression override intercepts requests by matching a pattern against the key or set of tags. Regardless of type, each override can modify either the timeout or the allowed scope for the given request.
Overriding the timeout simply means that whatever value was specified when the request was made is replaced with the overriding value. Overriding the allowed scope means that only the values specified in the override are allowed as scope targets. It is important to note that this does not add scopes to a request; it only filters out scopes that were specified. For example, if the requested scope was [ Context, Process ] and the override set the allowed scope to [ Process ], then the values would be AND'ed together and the result would be [ Process ].
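The scope-filtering rule is exactly a bitwise AND, which a couple of lines make concrete. The CacheScope flag type and the function name here are illustrative, not taken from the platform.

```python
from enum import Flag, auto

class CacheScope(Flag):
    """Illustrative stand-in for the platform's CacheScope values."""
    Context = auto()
    Process = auto()
    Distributed = auto()

def apply_scope_override(requested, allowed):
    """Filter the requested scope down to what the override allows.
    An override never adds scopes; it can only remove them."""
    return requested & allowed

# [ Context, Process ] filtered by an override allowing [ Process ]:
result = apply_scope_override(CacheScope.Context | CacheScope.Process,
                              CacheScope.Process)
```

If the requested scope and the override share no values, the result is empty and no cache layer will handle the request.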
When items are placed into the cache, it is not necessary to give a timeout. In these cases, the currently configured default timeout value is used. A single static timeout value rarely yields the best results, as every system is different and benefits from different settings.
In addition to a very simple scaling mechanism, controlling the default timeout allows for finer control over the cache system in web farm scenarios where staleness is a concern. Generally, if more than one server is running a site, the default timeout value should be low to keep staleness at a minimum. The lower the value, the less likely you are to see staleness, but at the cost of decreased cache performance.
Many objects in the Evolution platform use the default timeout value. Depending on your setup and tolerance for cache inaccuracies, you can tweak this setting and find the value that best suits your site.
To complete the picture and give a final level of control, you can introduce a cache factor. A cache factor can be set for any cache type and acts as a universal multiplier on timeouts.
Cache factors become important when a cache is set up using a distributed cache (e.g., AppFabric). If the web site is running in a web farm, you will be running the AspNetCache provider to cache items locally (on each server) in addition to the distributed provider. Because you are running in a farm, you can't cache items locally for too long or you risk cache staleness and inconsistencies. To compensate, you'll want a low default timeout. With a distributed cache (one shared among all web site instances), however, you can cache objects longer, because entries are updated, removed, or added by each web node when that instance recognizes a change. Old or stale values are therefore removed from the cache when they change.
In this case, you can boost the cache factor value for the distributed provider. Typically this factor is based on the value you set for the default timeout. This allows you to cache local items for a shorter period and distributed objects for a longer one, while maintaining a good estimate of how long objects may live in the cache.
For example, assume multiple web servers are running a site and using a distributed cache. One setup could be each web server setting the default timeout to 5 seconds and the distributed cache factor to 24. With this setup, objects without an explicit timeout (i.e., using the default) would have a 5-second timeout when placed outside of a distributed cache (such as the ASP.NET cache), and a 120-second (24 x 5 = 120 seconds = 2 minutes) timeout when placed inside a distributed cache.
It should be noted that all timeouts are subject to the factor, not just default timeouts.
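The arithmetic from the example above is a single multiplication; this small sketch (with invented names, not a platform API) just makes the calculation explicit.

```python
def effective_timeout(timeout_seconds, cache_factor=1.0):
    """All timeouts, default or explicit, are multiplied by the
    provider's cache factor before being applied."""
    return timeout_seconds * cache_factor

default_timeout = 5      # seconds, configured on each web server
distributed_factor = 24  # cache factor on the distributed provider

local = effective_timeout(default_timeout)          # local cache: 5 seconds
distributed = effective_timeout(default_timeout,
                                distributed_factor) # distributed: 120 seconds
```

An item stored with an explicit 10-second timeout in the distributed provider would likewise live for 240 seconds, since the factor applies to every timeout.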
Default timeouts and cache factors are set inside of the caching configuration XML settings. For more information on how to set these and other values, see the Caching configuration documentation.