In this concluding part of the article, I would like to discuss the various Cache management methodologies. But before we delve into that, today's Architects and Software Designers first need to make sure they are using the right Database/Storage technology. A distributed, scalable NoSQL service like Cosmos DB, with its concept of Request Units (RUs), might not even need a caching layer: the scale problem is solved by partitioning, distributing, and scaling the database itself.
But if your design calls for a classic RDBMS style database, then a Caching layer and caching techniques need to be thought through.
In this section we will cover Cache Updates techniques and Caching Infrastructure considerations.
Fetch on miss
Most basic Cache systems start out empty. When the application needs data, it first tries to read it from the Cache. Since the cache is empty, the read misses ("Not found"), which triggers a fetch from the Database; the result is then written into the Cache so that all subsequent reads can be served from it.
The advantage of this method is that you don’t have to populate the entire cache with data that may or may not be used; only the data that is actually requested is loaded into the cache. You save on space and hence infrastructure cost. If you manage the cache TTL (Time to Live) properly, this method lets you run a very efficient Cache on minimal infrastructure by keeping only the most frequently used data in the cache and nothing else.
The disadvantage of this method is that a cold start is slow: response times suffer until the working set has been loaded into the cache.
This method is ideal for scenarios where only some parts of the data are used frequently, the cost of an occasional cache miss is acceptable to users, and keeping the cache small is a priority.
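The fetch-on-miss flow above can be sketched as follows. This is a minimal illustration, not a production implementation: the in-memory dict stands in for a real cache service (e.g. Redis), and `fetch_user_from_db` is a hypothetical placeholder for an expensive database query.

```python
cache = {}  # stand-in for a distributed cache such as Redis

def fetch_user_from_db(user_id):
    # Hypothetical placeholder for an expensive database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:                      # cache miss ("Not found")
        value = fetch_user_from_db(user_id)
        cache[key] = value                 # populate so subsequent reads hit
    return value
```

The first call for a given user pays the database round trip; every later call is served from the cache.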
Pre-loading the Cache
Here you pre-load all cache-able data at once, and then update the cache as and when data changes using one of the Cache Update methods (described later). This is generally an anti-pattern: loading everything into the Cache can undo the performance benefits because of the added cache management overhead. While it is inefficient initially, the system should eventually evict data that is not being used and converge to an optimally sized cache.
Cache Eviction Policy
If you are using Redis, you can set a TTL (Time to Live) on keys with the EXPIRE command (or the EX option on SET) to manage the Cache optimally. A good Cache Eviction policy (in Redis, the maxmemory-policy setting, e.g. allkeys-lru) can help you manage the size and availability of your Caching system.
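To make TTL-based eviction concrete, here is a minimal sketch of a cache whose entries expire after a configurable number of seconds, with expired entries lazily removed on access. The class and the injectable `clock` parameter are illustrative inventions; with Redis you would simply use `SET key value EX <seconds>` or `EXPIRE` and let the server do this for you.

```python
import time

class TTLCache:
    """Toy cache where each entry carries an expiry timestamp."""

    def __init__(self, clock=time.monotonic):
        self._store = {}      # key -> (value, expires_at)
        self._clock = clock   # injectable for testing

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[key]   # lazy eviction: expired entry is a miss
            return None
        return value
```

Passing a fake clock makes the expiry behaviour easy to verify without sleeping.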
Cache Update Methods
These are the standard patterns for updating your cache. Each pattern has its merits and demerits.
Write Through: When data changes, it is written to the Cache and the Database at the same time. The advantage is consistency between the Database and the Cache. The disadvantage is that every write pays the cost of updating the cache, whether the data will be read again or not.
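A write-through path can be sketched like this: one function updates both stores together, so reads always see consistent data. The `db` and `cache` dicts are stand-ins for a real database and cache.

```python
db = {}     # stand-in for the database
cache = {}  # stand-in for the cache

def write_through(key, value):
    db[key] = value      # persist (ideally in the same transaction)
    cache[key] = value   # keep the cache consistent with the database

def read(key):
    # Cache and database agree, so either source is valid.
    return cache.get(key, db.get(key))
```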
Write Around: Data is written to the Database first, to ensure it is persisted, and is only fetched into the Cache when it is accessed. The write logic can invalidate (expire) the cached entry when it writes to the database, so that the next read results in a cache miss and fetches the fresh data.
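In a write-around sketch, writes go only to the database and invalidate any cached copy; the next read misses and reloads fresh data. Again, the dicts are illustrative stand-ins for real stores.

```python
db = {}     # stand-in for the database
cache = {}  # stand-in for the cache

def write_around(key, value):
    db[key] = value
    cache.pop(key, None)   # invalidate so a stale copy is never served

def read(key):
    value = cache.get(key)
    if value is None:          # miss: fetch from the database and populate
        value = db.get(key)
        if value is not None:
            cache[key] = value
    return value
```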
Write Back: Data is written to the Cache first and asynchronously updated to the Database. The risk of data loss is high: if the cache fails before the pending writes are flushed, they are gone. So this method should be used only when some data loss is acceptable but data access needs to be very fast, or when the Cache layer is replicated so that the loss of one cache server does not prevent the database update.
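A write-back path can be sketched with a "dirty" queue of keys that have been written to the cache but not yet persisted. In practice the flush would run on a background thread or timer; here it is a plain function, and the dicts are stand-ins for real stores. If the cache were lost before `flush` ran, the queued writes would be lost too, which is exactly the data-loss risk described above.

```python
from collections import OrderedDict

db = {}                  # stand-in for the database
cache = {}               # stand-in for the cache
dirty = OrderedDict()    # keys written to cache but not yet persisted

def write_back(key, value):
    cache[key] = value
    dirty[key] = None    # mark for a later asynchronous flush

def flush():
    # In a real system this runs asynchronously, in the background.
    while dirty:
        key, _ = dirty.popitem(last=False)  # oldest write first
        db[key] = cache[key]
```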
Hopefully, this three-part article has covered the areas of caching that most people are concerned with. One of my purposes in writing these articles was to make my own job easier: I no longer have to point customers in different directions when caching is being discussed.