Memcached manages its memory for storing your items using a concept called slab allocation. With this model, memcached creates slabs that are each associated with a certain byte size range. For example, it might have slabs configured something like this:
slab class 1: chunk size 96
slab class 2: chunk size 120
slab class 3: chunk size 152
slab class 4: chunk size 192
slab class 5: chunk size 240
...
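These sizes aren't arbitrary: memcached derives each class's chunk size from the previous one using a growth factor (1.25 by default, tunable with the `-f` flag). A rough sketch of that calculation, assuming sizes are rounded up to 8-byte alignment (a simplification of what memcached actually does):

```python
def chunk_sizes(base=96, factor=1.25, align=8, max_size=1024 * 1024):
    """Approximate memcached's slab class sizes: grow each size by
    `factor`, rounding up to the next multiple of `align`."""
    sizes = []
    size = base
    while size < max_size:
        sizes.append(size)
        size = int(size * factor)
        if size % align:
            size += align - (size % align)  # round up to alignment boundary
    return sizes

print(chunk_sizes()[:5])  # → [96, 120, 152, 192, 240]
```

Running this reproduces the example classes above: 96, 120, 152, 192, 240.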
If I set an item that is less than 96 bytes, it will end up associated with slab class 1 (technically, its item size in bytes plus the overhead bytes of the C struct memcached uses for the item). If I set an item that is greater than 96 bytes but less than or equal to 120 bytes, it'll end up associated with slab class 2. Slabs by themselves don't store the data that we set; that is done through pages and chunks. When an item is associated with a particular slab class, memcached will request a page of memory for that slab. A page is a fixed 1MB of memory. This page is then divided into chunks of the slab class's chunk size. For slab class 1, that means the 1MB page is divided into 96-byte chunks.
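To make those two lookups concrete, here's a small sketch, using the example class sizes from above: which slab class an item of a given size lands in, and how many chunks a 1MB page yields for that class:

```python
PAGE_SIZE = 1024 * 1024                  # pages are a fixed 1MB
CHUNK_SIZES = [96, 120, 152, 192, 240]   # example slab classes from above

def slab_class_for(item_bytes):
    """Return the first slab class whose chunk size fits the item
    (item size here already includes memcached's per-item overhead)."""
    for class_id, chunk in enumerate(CHUNK_SIZES, start=1):
        if item_bytes <= chunk:
            return class_id
    raise ValueError("item too large for these example classes")

def chunks_per_page(class_id):
    """A 1MB page holds floor(1MB / chunk size) chunks; any remainder is unused."""
    return PAGE_SIZE // CHUNK_SIZES[class_id - 1]

print(slab_class_for(90))    # → 1
print(slab_class_for(100))   # → 2
print(chunks_per_page(1))    # → 10922
```

So a single page assigned to slab class 1 gives memcached room for 10,922 items of up to 96 bytes each.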
To make this a little more clear, let's imagine we are running memcached with only 4MB of memory allotted to it (memcached -m 4). Also, let's pretend we only have 2 slab classes available, with chunk sizes of 256KB and 512KB. This is what it would look like visually with a few items (represented in green) being stored.
Here we have 3 pages across our two slab classes. Remember, a page is 1MB, so we are using 3 of the 4MB allotted to memcached. One of the first things you might notice is that an item might not fully fill up a chunk; that is wasted memory. If we keep adding items to slab #1, eventually memcached will allocate another 1MB page for it so there is more room for items in that class.
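To put a rough number on that waste, consider the smaller chunk sizes from the first example: a 100-byte item (including overhead) lands in the 120-byte class, stranding 20 bytes in every chunk it occupies. Across a full page, that adds up:

```python
chunk_size = 120   # example slab class 2 chunk size from earlier
item_size = 100    # bytes actually stored (item data + memcached's item overhead)

wasted_per_chunk = chunk_size - item_size
chunks_in_page = (1024 * 1024) // chunk_size        # chunks in one 1MB page
wasted_per_full_page = wasted_per_chunk * chunks_in_page

print(wasted_per_chunk)       # → 20
print(wasted_per_full_page)   # → 174760
```

Roughly 170KB of each 1MB page is lost to this kind of internal fragmentation in the worst case for that class.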
Once a page is assigned to a [slab] class, it is never moved¹. This has some really important implications for an application that leverages memcached. Let's imagine that a change ships which alters our caching pattern. All of a sudden we need to start storing more items in slab #2 and fewer in slab #1. Eventually a bunch of the items in slab #1 will expire, resulting in a couple of pages' worth of free memory. This can show up in metrics as free memory. However, because a page is never moved after it's assigned to a class, we don't actually have any more memory available for slab #2. If we need to start inserting more items into that slab, memcached won't be able to allocate new pages for it. Instead, it'll look for the oldest item in that class and evict it.
This eviction churn can result in an item that was set to expire in 24 hours disappearing from the cache in a couple of seconds. This will continue indefinitely until your access patterns change or you restart memcached. The effect of this can be increased latency across your application, as expensive operations, like network calls, need to be run more often due to cache misses. As of memcached 1.4.25 there is a feature called the slab automover that can be enabled. It will move free pages into a global pool and reassign them to a new slab class.
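A toy model makes the page-pinning problem easy to see. This is a deliberately simplified sketch, not memcached's actual code; all names and numbers are illustrative, and the automover is assumed to be off:

```python
# Toy model: pages, once assigned to a slab class, never move.
class ToyCache:
    def __init__(self, total_pages, chunks_per_page):
        self.free_pages = total_pages          # unassigned 1MB pages
        self.chunks_per_page = chunks_per_page
        self.slabs = {}                        # class_id -> {"capacity", "items"}

    def set(self, class_id, key):
        slab = self.slabs.setdefault(class_id, {"capacity": 0, "items": []})
        if len(slab["items"]) >= slab["capacity"]:
            if self.free_pages > 0:
                self.free_pages -= 1           # assign a fresh page to this class
                slab["capacity"] += self.chunks_per_page
            elif slab["items"]:
                slab["items"].pop(0)           # evict the oldest item in this class
            else:
                return "out of memory"         # no free pages, nothing to evict
        slab["items"].append(key)
        return "stored"

    def expire_all(self, class_id):
        # Items expire, but the pages they lived in stay pinned to the class.
        self.slabs[class_id]["items"].clear()

cache = ToyCache(total_pages=2, chunks_per_page=2)
for key in ("a", "b", "c"):
    cache.set(1, key)                          # class 1 ends up owning both pages
cache.expire_all(1)                            # class 1 is now completely empty...
print(cache.set(2, "x"))                       # → out of memory
```

Even though every chunk in class 1 is empty after the items expire, its pages never return to the free pool, so class 2 can't grow; in real memcached the analogous outcome is eviction churn within the starved class rather than an error.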