
Aricie - Distributed Caching Provider

Aricie - Distributed Caching Provider is a DotNetNuke module and DNN caching provider that leverages distributed caching technologies (Redis, AppFabric, Memcached, etc.).

It is mainly useful in a web farm scenario, providing:
  • Synchronization between application servers, where edits performed on one server must invalidate the other servers' local caches.
  • Distribution of cached objects, relieving database and file system usage and enabling cloud scaling.
This is implemented through dedicated caching strategies, associated with individual keys or key patterns, which define many parameters on top of those supported by the native DNN provider API.

Client Settings

Start by configuring one or several cluster hosts and enable the Distributed Caching Engine.
A default Redis cluster is enabled with a single local node. A default AppFabric cluster is also provided, disabled by default. If you want to use an AppFabric cluster, disable the Redis cluster and enable the AppFabric one instead.
In either case, navigate to the selected cluster in the Client tab and update the hosts with your available nodes, along with the other parameters, according to your provisioned cluster configuration.

With AppFabric, you need to:
  • Configure security: in the PowerShell admin console, before you start your cluster, either grant the appropriate permissions with "Grant-CacheAllowedClientAccount" or disable security with "Set-CacheClusterSecurity -SecurityMode None -ProtectionLevel None". Configure your cluster settings accordingly within this form.
  • Enable notifications: in the PowerShell admin console, before you start your cluster, enter "Set-CacheConfig cacheName -NotificationsEnabled True", where the default cache name is "default".
  • Set a cache size limit (otherwise AppFabric will use all the memory available to it), sized according to whether you want to use the cluster only as a means of synchronization or to distribute a small or large number of cached objects: in the PowerShell admin console, before you start your cluster, enter on each available node "Set-CacheHostConfig {Machine name} {port (22233)} -CacheSize {cache allocation in MB}". Take into account that the overhead is roughly 1:1, so you need to double the estimated value; you can refine that limit later according to the observed usage statistics.
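The three steps above can be sketched as a single PowerShell session. This is only a hedged outline of the commands cited above, not a turnkey script: the account, host name, port and cache size are placeholders to adapt to your own cluster.

```powershell
# Run in the Caching Administration PowerShell console, before starting the cluster.
Import-Module DistributedCacheAdministration
Use-CacheCluster

# 1. Security: either grant each web server's machine account...
Grant-CacheAllowedClientAccount "DOMAIN\WebServer01$"
# ...or disable security entirely (test environments only):
# Set-CacheClusterSecurity -SecurityMode None -ProtectionLevel None

# 2. Enable notifications on the default cache:
Set-CacheConfig default -NotificationsEnabled True

# 3. Cap the memory on each node; remember the ~1:1 overhead,
#    so request roughly double your estimated payload size:
Set-CacheHostConfig CACHEHOST01 22233 -CacheSize 512

Start-CacheCluster
```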

In the Redis configuration file associated with your Windows service or standalone instance:
  • Define memory usage limits with "maxheap" and "maxmemory" parameters (you can refine those limits later on according to the observed usage statistics)
  • If you wish to use keyspace notifications, update the "notify-keyspace-events" parameter (optional: default synchronization uses a key-less dedicated notification channel instead)
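As a sketch, the corresponding entries in your Redis configuration file might look like the following (the sizes are placeholders to refine later against observed usage, and "maxheap" only applies to the Windows builds of Redis):

```conf
# Cap Redis memory usage; refine against observed statistics.
maxmemory 512mb

# Windows builds only: memory-mapped heap size, usually set above maxmemory.
maxheap 1024mb

# Optional: keyspace notifications (g = generic commands, x = expired,
# e = evicted, K = keyspace channel, E = keyevent channel). Not needed
# when the default key-less notification channel is used instead.
notify-keyspace-events "gxeKE"
```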

Cluster Settings

At the top of the form is a master switch to start the provider (enable and save), and the Cluster tab contains all the settings controlling how the engine behaves.

By default, only synchronization is enabled, and it is recommended that you start by validating that feature before moving on to object distribution. The engine works by associating caching strategies with identified keys, and a number of strategies are responsible for controlling system features, synchronization being one of them.

Because AppFabric and Redis have different architectures, a conservative default synchronization strategy is defined that works in both environments. It relies on distributing dedicated keys associated with the local cache objects, and on removal notifications to perform local synchronization. Redis, however, also offers a channel subscription model that does not require any actual keys, and that is the mode selected by default in the Redis cluster client settings. Accordingly, you can update the synchronization strategy in the System Strategies tab and set the distribution mode to Not Distributed, which will yield better performance and a lower memory footprint.
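To illustrate the idea behind the key-less channel mode, here is a minimal, purely hypothetical Python sketch (none of these class names come from the provider): each node keeps a local cache and subscribes to a shared invalidation channel, so removing a key on one node evicts it everywhere without any synchronization key ever being stored.

```python
class InvalidationChannel:
    """Stand-in for a Redis pub/sub channel: no keys are stored,
    messages are simply fanned out to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, key):
        for callback in self.subscribers:
            callback(key)


class NodeCache:
    """One application server's local cache."""
    def __init__(self, channel):
        self.store = {}
        self.channel = channel
        channel.subscribe(self._on_invalidation)

    def _on_invalidation(self, key):
        self.store.pop(key, None)  # evict locally; ignore if absent

    def set(self, key, value):
        self.store[key] = value

    def remove(self, key):
        self.store.pop(key, None)
        self.channel.publish(key)  # tell the other nodes


channel = InvalidationChannel()
node_a, node_b = NodeCache(channel), NodeCache(channel)
node_a.set("Portal-0-Settings", 1)
node_b.set("Portal-0-Settings", 1)
node_a.remove("Portal-0-Settings")          # edit performed on server A...
print("Portal-0-Settings" in node_b.store)  # → False: B's copy was evicted
```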

Object distribution can be enabled with the corresponding check box and is controlled by means of custom strategies defined in the corresponding tab, applied either to groups of keys (by providing a regular expression to be matched, or an enumerated list; local keys can be visualized in the Local Cache tab) or to individual keys. A default strategy is used for keys otherwise unknown to the system.
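As an illustration of how strategies might be matched against keys, here is a hypothetical Python sketch (the patterns and strategy names are invented, not the provider's): regular expressions are tried in order, and unknown keys fall back to a non-distributing default.

```python
import re

# Hypothetical strategy table: regex patterns (most specific first)
# mapped to strategy names; these names are illustrative only.
STRATEGIES = [
    (re.compile(r"^DNN_Tab"), "distribute-binary"),
    (re.compile(r"^Portal-\d+-"), "distribute-json"),
    (re.compile(r"^CSS"), "local-only"),
]
DEFAULT_STRATEGY = "local-only"  # unknown keys are not distributed

def strategy_for(key):
    """Return the first strategy whose pattern matches the cache key."""
    for pattern, name in STRATEGIES:
        if pattern.search(key):
            return name
    return DEFAULT_STRATEGY

print(strategy_for("DNN_Tab_Settings42"))  # → distribute-binary
print(strategy_for("Portal-12-Alias"))     # → distribute-json
print(strategy_for("SomeUnknownKey"))      # → local-only
```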

The need for distinct strategies comes from the fact that the variety of objects cached locally cannot be processed identically. Some of them simply cannot be serialized and distributed, while others require a particular type of serialization, or simply perform better under a particular serialization mode. This is why the default strategy is set to not distribute unknown objects. A number of sample strategies are provided to get you started, but for now it is up to you to create and test your own strategies for optimal performance. Future versions are expected to provide a larger set of default custom strategies.
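The following hypothetical Python sketch illustrates why a conservative default matters: an object is probed against several serialization modes, and refused for distribution when none works. The mode names are invented; the provider's actual serializers differ.

```python
import json
import pickle
import threading

def pick_serializer(obj):
    """Return a (mode, payload) pair for the first serialization mode
    that works, or None when the object cannot be distributed safely.
    Illustrative only; mode names do not come from the provider."""
    try:
        return "json", json.dumps(obj)      # cheap and portable
    except (TypeError, ValueError):
        pass
    try:
        return "binary", pickle.dumps(obj)  # broader type coverage
    except Exception:
        return None                         # keep the object local-only

print(pick_serializer({"tab": 42})[0])      # → json
print(pick_serializer({1, 2, 3})[0])        # → binary (sets are not JSON)
print(pick_serializer(threading.Lock()))    # → None: cannot be distributed
```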

Another important set of features can be found in the Engine Settings tab: the engine is made of a collection of asynchronous queues responsible for the various steps involved. This keeps the impact on the main application thread minimal, while distribution operations are performed in the background with a lower priority. Each queue is processed at its own pace by one or several threads dynamically provisioned in a smart thread pool engine. The default configuration is quite conservative, but for aggressive optimization you may activate Windows performance counters to monitor the evolution of each queue's status and figure out the ideal timings and thread counts.
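As a rough illustration of the queueing idea, here is a Python sketch (not the provider's code; a daemon thread stands in for the smart thread pool's low-priority workers): the request thread only enqueues, and the slow distribution work is drained in the background.

```python
import queue
import threading

work = queue.Queue()
distributed = []  # stand-in for "pushed to the cluster"

def worker():
    """Background worker: drains the queue so distribution
    never blocks page rendering."""
    while True:
        item = work.get()
        if item is None:          # sentinel: shut down
            break
        distributed.append(item)  # the slow network call would go here
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

# The request thread returns immediately after enqueueing:
for key in ["Tab-1", "Tab-2", "Portal-0"]:
    work.put(key)

work.join()        # demo only: wait for the background drain to finish
print(distributed) # → ['Tab-1', 'Tab-2', 'Portal-0']
```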

Logs, Statistics and Analysis

To help you craft optimal strategies, a dedicated subsystem can monitor the engine live by generating DNN event logs, computing statistics, and producing an analysis of suggested strategies. The collected statistics determine the fastest distribution mode, provided several modes were tested with enough logs collected, and plot a usage sequence graph, so you can visualize whether common sequences of cached object usage would justify grouping them into single bundles, optionally compressed, for additional performance gains.

The optimization process involves:
  • Collecting timing measurements in the DNN event log (a listener must be enabled for the native DEBUG log type, with no cap, since the system requires a significant amount of logs to be useful)
  • Computing statistics from the event logs
  • Performing an analysis on the statistics
  • Updating the configuration according to the analysis recommendations (not finalized; manual updates are needed for now)
Configuration, statistics and analysis are loaded from and saved to XML files.

Finally, an "Auto-analysis" automation is provided that automatically cycles through the various combinations of distribution modes, emptying the local and distributed caches on a regular basis and generating statistics and analyses, sparing you from performing all those tedious operations manually.

Ideally, you would only need to leave that system on for a couple of hours to get sorted, but it may prove more tedious: some DNN core objects may accept being distributed under a particular serialization mode, yet be reconstructed corrupted on the next round, resulting in a crash. This will need further investigation in order to add more default strategies. Turning off your caching cluster at any point should restore the site, and if automation is enabled, it should be a matter of waiting for the next mode switch before a recovery is observed. In any case, it is strongly advised against performing those kinds of tests in a production environment for now.

The DNN labels provide explanations of the numerous parameters controlling the engine and those operations.

Last edited Nov 21, 2015 at 6:27 PM by Aricie, version 8