I've got an instance of SilverStripe running on two servers behind an AWS load balancer. To share session information between them I'm running an Elasticache Redis server, and I'm setting the PHP session store like so:
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379');
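(The 127.0.0.1 above is from local testing; on the actual instances the save_path points at the shared Elasticache endpoint so both servers hit the same store. The hostname below is a placeholder, not my real endpoint:)

```php
<?php
// Placeholder endpoint: substitute your Elasticache primary endpoint.
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://my-cluster.abc123.0001.use1.cache.amazonaws.com:6379');
```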
After signing into the admin section of the CMS I can jump between servers and stay logged in. However, when switching between sections in the CMS, the main panel (loaded via an AJAX call) doesn't render. From what I can tell, whichever server receives the second request doesn't realise the CMS admin is already loaded: its response headers tell the browser to load a new version of the JS dependencies, which confuses the admin so it doesn't load.
Reading the docs, SilverStripe uses Zend_Cache for some extra information. I figured that if I loaded the admin interface and then deleted the cache directory, it would replicate the problem. It doesn't.
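(For reference, this is roughly how I wiped the Zend_Cache file store; the path is specific to my setup and the default silverstripe-cache location, so adjust to yours:)

```php
<?php
// Recursively delete everything under the site's Zend_Cache file store.
// Path is an assumption for my setup; yours may be in the system temp dir.
$dir = '/var/www/mysite/silverstripe-cache';
$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::CHILD_FIRST // delete children before parents
);
foreach ($it as $file) {
    $file->isDir() ? rmdir($file->getPathname()) : unlink($file->getPathname());
}
```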
I then tried to use this module to change the storage engine that Zend_Cache uses. I added the following to my mysite/_config.php:
SS_Cache::add_backend(
    'primary_redis',
    'Redis',
    array(
        'servers' => array(
            'host' => 'localhost',
            'port' => 6379,
            'persistent' => true,
            'weight' => 1,
            'timeout' => 5,
            'retry_interval' => 15,
            'status' => true,
            'failure_callback' => null
        )
    )
);
SS_Cache::pick_backend('primary_redis', 'any', 10);
This is storing some CMS information in Redis, e.g. under the key CMSMain_SiteTreeHints9b258b19199db9f9ed8264009b6c351b, but it still doesn't fix the problem of switching between servers in the load-balanced environment.
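(This is how I've been checking what actually lands in Redis; it assumes the phpredis extension, whose session handler prefixes keys with PHPREDIS_SESSION by default:)

```php
<?php
// Inspect the keys each subsystem writes (phpredis extension assumed).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Session data written by the redis session save handler:
print_r($redis->keys('PHPREDIS_SESSION*'));

// Zend_Cache entries written via the SS_Cache Redis backend:
print_r($redis->keys('*CMSMain_SiteTreeHints*'));
```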
Where else could SilverStripe be storing cache data? Have I implemented the module correctly?