This write-up is an in-depth guide on distributed caching. It covers the frequently asked questions around it: What is a distributed cache? What is the need for it? How is it architecturally different from traditional caching? What are the use cases of distributed caches? And what are the popular distributed caches used in the industry?
So, without further ado, let’s get on with it.
Before jumping right into distributed caching, it’ll be a good idea to get a quick insight into caching in general. Why do we use caching in our web applications, after all?
1. What is Caching? & Why Do We Use It in Our Web Applications?
Caching means saving frequently accessed data in-memory, that is, in RAM instead of on the hard drive. Accessing data from RAM is always faster than accessing it from the hard drive.
Caching serves the below-stated purposes in web applications.
First, it reduces application latency by notches, simply because all the frequently accessed data is stored in RAM & the application doesn’t have to talk to the hard drive when the user requests that data. This makes application response times faster.
Second, it intercepts user data requests before they go to the database. This averts the database bottleneck issue. The database is hit with a comparatively smaller number of requests, eventually making the application as a whole more performant.
Third, caching often comes in really handy in bringing the application running costs down.
Caching can be leveraged at every layer of the web application architecture, be it the database, CDN, DNS, etc.
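To make the first two points concrete, here is a minimal sketch of a cache sitting in front of a database. The function names & the simulated disk latency are purely illustrative; a real setup would use Redis or Memcache instead of a plain dictionary.

```python
import time

# Hypothetical in-memory cache sitting in front of a slow datastore.
CACHE = {}

def slow_db_read(key):
    """Stand-in for a hard-drive-backed database read."""
    time.sleep(0.01)  # simulate disk latency
    return f"value-for-{key}"

def get(key):
    # Serve from RAM when possible; fall back to the database on a miss.
    if key in CACHE:
        return CACHE[key]
    value = slow_db_read(key)
    CACHE[key] = value  # populate the cache for subsequent requests
    return value

get("user:42")         # cache miss: hits the database
print(get("user:42"))  # cache hit: served straight from RAM
```

Every hit served from `CACHE` is one fewer request reaching the database, which is exactly how the bottleneck is averted.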
1.1 How Does Caching Help In Bringing the Application Compute Costs Down?
In-memory storage is cheap compared to hard-drive persistence. Database read/write operations are expensive. You know what I am talking about if you’ve ever used a DBaaS (Database as a Service) offering.
There are times when there are just too many writes in the application, for instance, in massive multiplayer online games.
Data which doesn’t have high priority can simply be written to the cache, & periodic batch operations can be run to persist it to the database. This helps cut down the database write costs by a significant amount.
Well, this is what I did when I deployed the game I wrote on Google Cloud. I used the Google Cloud Datastore NoSQL service, & the approach helped a lot with the costs.
Now that we are clear on what caching is, let’s move on to distributed caching.
2. What is Distributed Caching? What is the Need For It?
A distributed cache is a cache which has its data spread across several nodes in a cluster, & also across several clusters in several data centres located around the globe.
Distributed caching is primarily used in the industry today for one reason: it has the potential to scale on demand & it is highly available.
Scalability, high availability & fault tolerance are crucial to the large-scale services running online today. Businesses cannot afford to have their services go offline. Think about health services, stock markets, the military. They have no scope for going down. They are distributed across multiple nodes with a pretty solid amount of redundancy.
Not just distributed caches, distributed systems in general are the preferred choice for cloud computing, solely due to their ability to scale & stay available.
Google Cloud uses Memcache for caching data on its public cloud platform. Redis is used by internet giants for caching, as a NoSQL datastore & for several other use cases.
3. How is Distributed Caching Different From the Traditional/Local Caching?
Distributed systems are designed to scale inherently; they are built in a way that their power can be augmented on the fly, be it compute, storage or anything else. The same goes for the distributed cache.
A traditional cache is hosted on a few instances & has a limit to its capacity. It’s hard to scale it on the fly, & it’s not as available & fault-tolerant as a distributed cache.
I’ve already brought up above how cloud computing uses distributed systems to scale & stay available. Well, the cloud is just a marketing term; technically, it’s a massively distributed system of services such as compute, storage, etc. running under the covers.
Do read Why Use Cloud? How Is Cloud Computing Different from Traditional Computing? to get a good insight into the underlying architecture.
4. What Are the Use Cases Of Distributed Caches?
Distributed caches have several use cases stated below:
Database Caching
The cache layer in front of a database saves frequently accessed data in-memory to cut down latency & unnecessary load on the database. The DB bottleneck is largely eliminated when a cache is implemented.
Storing User Sessions
User sessions are stored in the cache to avoid losing the user state in case any of the instances go down.
If any of the instances goes down, a new instance spins up, reads the user state from the cache & continues the session without having the user notice anything amiss.
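The failover behaviour described above can be sketched in a few lines. This is a toy model: the dictionary stands in for an external session cache like Redis, & the class and method names are my own invention for illustration.

```python
# Sketch of externalized user sessions: any app instance can resume a
# session from the shared cache if another instance dies. A plain dict
# stands in here for a real distributed cache.
SESSION_CACHE = {}

class AppInstance:
    def __init__(self, name):
        self.name = name

    def handle_login(self, user_id, state):
        SESSION_CACHE[user_id] = state  # session lives outside the instance

    def handle_request(self, user_id):
        # A freshly spun-up instance reads the state back transparently.
        state = SESSION_CACHE.get(user_id)
        return f"{self.name} serving {user_id} at step {state['step']}"

instance_a = AppInstance("instance-a")
instance_a.handle_login("u1", {"step": 3})

# instance-a goes down; a new instance picks the session up seamlessly.
instance_b = AppInstance("instance-b")
print(instance_b.handle_request("u1"))  # instance-b serving u1 at step 3
```

Because the session state never lived inside `instance_a`, its failure is invisible to the user.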
Cross-Module Communication & Shared Storage
In-memory distributed caching is also used for message communication between the different microservices running in conjunction with each other.
It saves the shared data which is commonly accessed by all the services. It acts as a backbone for micro-service communication. Distributed caching in specific use cases is often used as a NoSQL datastore.
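A minimal sketch of this cache-backed communication pattern: one service publishes a message to a named channel in the shared store & another consumes it. Channel & field names are illustrative; a production setup would typically use Redis lists or pub/sub.

```python
from collections import defaultdict

# Toy message channel backed by a shared in-memory store: producers
# append, consumers read in FIFO order.
CHANNELS = defaultdict(list)

def publish(channel, message):
    CHANNELS[channel].append(message)

def consume(channel):
    # Pop the oldest message, or return None when the channel is empty.
    return CHANNELS[channel].pop(0) if CHANNELS[channel] else None

publish("orders", {"order_id": 7, "status": "paid"})
msg = consume("orders")  # e.g. a billing service picks the event up
print(msg)
```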
In-memory Data Stream Processing & Analytics
As opposed to the traditional way of storing data in batches & then running analytics on it, in-memory data stream processing involves processing data & running analytics on it as it streams in, in real-time.
This is helpful in many situations, such as anomaly detection, fraud monitoring, real-time online gaming stats, real-time recommendations, payment processing, etc.
5. How Does A Distributed Cache Work? – Design & Architecture
Designing a distributed cache demands a separate, thorough write-up. For now, I’ll give you a brief overview of how a distributed cache works behind the scenes.
Under the covers, a distributed cache is a distributed hash table which has the responsibility of mapping object values to keys spread across multiple nodes.
The distributed hash table allows a distributed cache to scale on the fly; it manages the addition, deletion & failure of nodes continually, as long as the cache service is online. Distributed hash tables were originally used in peer-to-peer systems.
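One common way such a hash table handles node addition & removal is consistent hashing: keys map to a position on a ring & are owned by the nearest node clockwise, so topology changes only remap a fraction of the keys. The following is a simplified sketch of that idea (real implementations add virtual nodes & replication).

```python
import bisect
import hashlib

# Toy consistent-hash ring: each key is owned by the first node whose
# hash follows the key's hash on the ring.
class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        hashes = [point for point, _ in self._ring]
        # Wrap around to the first node when we fall off the end.
        idx = bisect.bisect(hashes, h) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["cache-node-1", "cache-node-2", "cache-node-3"])
print(ring.node_for("user:42"))  # deterministic owner for this key
```

The same key always lands on the same node, so any client with the node list can locate data without a central directory.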
Speaking of the design, caches typically evict data based on the LRU (Least Recently Used) policy. Ideally, a cache uses a linked list to manage the data pointers; a queue is implemented using a linked list. The pointer to frequently accessed data is continually moved to the front of the list/queue.
Data which is not so frequently used is moved to the end of the list & eventually evicted.
A linked list is an apt data structure when there are too many insertions, deletions & updates in the list. It provides better performance for these operations than an array-based list.
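The LRU mechanism described above can be sketched with Python’s `OrderedDict`, which is backed by a doubly linked list, so moving an entry to the front & evicting from the back are both O(1). This is a single-node sketch of the eviction policy, not a distributed cache.

```python
from collections import OrderedDict

# LRU eviction sketch: most recently used entries live at the end of the
# OrderedDict, the least recently used entry sits at the front.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now the most recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
```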
Regarding concurrency, there are several concepts involved, such as eventual consistency, strong consistency, distributed locks & commit logs. That would be too much for this article; I’ll cover everything in the write-up where I design a distributed cache from the bare bones.
Also, a distributed cache often works with distributed system co-ordinators such as ZooKeeper, which facilitates communication & helps maintain a consistent state amongst the several running cache nodes.
6. What Are the Different Types Of Distributed Caching Strategies?
There are different kinds of caching strategies which serve specific use cases: Cache Aside, Read-Through, Write-Through & Write-Back.
Let’s find out what they are & why we need different strategies to cache.
Cache Aside
This is the most common caching strategy. In this approach, the cache works along with the database, trying to reduce the hits on it as much as possible. The data is lazy-loaded into the cache.
When the user sends a request for particular data, the system first looks for it in the cache. If present, it’s simply returned. If not, the data is fetched from the database, the cache is updated & the data is returned to the user.
This kind of strategy works best with read-heavy workloads: the kind of data which is not frequently updated, for instance, user profile data in a portal, such as the user’s name, account number, etc.
In this strategy, the data is written directly to the database. This means the cache & the database could become inconsistent.
To avoid this, data in the cache has a TTL (Time To Live). After that stipulated period, the data is invalidated from the cache.
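Putting the pieces of Cache Aside together, here is a minimal sketch with a TTL. The loader function & key names are illustrative; `db_read` stands in for the real database call.

```python
import time

# Cache-aside with a TTL: the application owns the lookup logic, lazily
# loads on a miss & lets entries expire to bound staleness.
CACHE = {}   # key -> (value, expiry_timestamp)
TTL = 60.0   # seconds; tune per workload

def db_read(key):
    return f"profile-of-{key}"  # stand-in for the real database read

def get(key):
    entry = CACHE.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                      # fresh hit
    value = db_read(key)                     # miss or expired: lazy load
    CACHE[key] = (value, time.time() + TTL)  # cache with an expiry
    return value

print(get("user:42"))  # first call misses; later calls hit until TTL lapses
```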
Read-Through Cache
This strategy is pretty similar to Cache Aside, with subtle differences: in Cache Aside, the system has to fetch the information from the database if it is not found in the cache, whereas in the Read-Through strategy the cache always stays consistent with the database. The cache library takes the onus of maintaining consistency with the backend.
The information in this strategy, too, is lazy-loaded into the cache, only when the user requests it.
So, the first time a piece of information is requested, it results in a cache miss, & the backend has to update the cache while returning the response to the user.
Developers can, though, pre-load the cache with the information which is expected to be requested most by the users.
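The key difference from Cache Aside is that the application only ever talks to the cache object, which itself knows how to load missing keys. A sketch, with an illustrative loader standing in for the cache library’s backend access:

```python
# Read-through sketch: the cache owns backend access; callers never touch
# the database directly.
class ReadThroughCache:
    def __init__(self, loader):
        self._loader = loader  # the cache library's hook into the backend
        self._data = {}

    def get(self, key):
        if key not in self._data:
            self._data[key] = self._loader(key)  # transparent lazy load
        return self._data[key]

    def preload(self, keys):
        # Optional warm-up for keys expected to be hot.
        for key in keys:
            self._data[key] = self._loader(key)

cache = ReadThroughCache(loader=lambda k: f"row-{k}")
cache.preload(["home-page"])
print(cache.get("home-page"))  # served from the pre-loaded cache
```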
Write-Through Cache
In this strategy, every piece of information written to the database goes through the cache: before the data is written to the DB, the cache is updated with it.
This maintains high consistency between the cache & the database, though it adds a little latency during write operations, as the data has to be updated in the cache additionally. This works well for write-heavy workloads like massive multiplayer online games.
This strategy is used with other caching strategies to achieve optimized performance.
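A minimal sketch of the write path, with two dictionaries standing in for the cache & the database:

```python
# Write-through sketch: every write updates the cache first, then the
# database synchronously, so reads never see stale cached data.
CACHE = {}
DATABASE = {}

def write_through(key, value):
    CACHE[key] = value     # cache updated first
    DATABASE[key] = value  # then persisted synchronously

def read(key):
    # The cache is always consistent, so prefer it; fall back to the DB.
    return CACHE.get(key, DATABASE.get(key))

write_through("player:1:score", 1500)
print(read("player:1:score"))  # 1500, consistent in both stores
```

The extra latency is the synchronous second write; the payoff is that both stores always agree.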
Write-Back Cache
I’ve already talked about this approach in the introduction of this write-up. It helps optimize costs significantly.
In the Write-Back caching strategy, the data is written directly to the cache instead of the database, & the cache, after some delay as per the business logic, writes the data to the database.
If the application has quite a heavy number of writes, developers can reduce the frequency of database writes to cut down the load & the associated costs.
A risk in this approach is that if the cache fails before the DB is updated, the data might get lost. Again, this strategy is used with other caching strategies to make the most out of them.
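The batching that cuts the write costs can be sketched as below. The flush trigger (timer, dirty-set size) is business logic; here it’s just called explicitly, & the dictionaries stand in for the cache & database.

```python
# Write-back sketch: writes land in the cache only; a periodic flush
# persists them to the database in one batch. Data sitting in the cache
# before a flush is lost if the cache node fails -- the stated risk.
CACHE = {}
DATABASE = {}
DIRTY = set()  # keys written since the last flush

def write_back(key, value):
    CACHE[key] = value
    DIRTY.add(key)  # no database write yet

def flush():
    # Run on a timer or when DIRTY grows large, per business logic.
    for key in DIRTY:
        DATABASE[key] = CACHE[key]
    DIRTY.clear()

write_back("score:u1", 10)
write_back("score:u1", 25)   # two cache writes...
flush()
print(DATABASE["score:u1"])  # ...but only one database write: 25
```

Note how two application writes collapsed into a single database write; at scale, that collapsing is where the cost savings come from.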
7. What Are the Popular Distributed Caches Used in the Industry?
The popular distributed caches used in the industry are Ehcache, Memcached, Redis, Riak & Hazelcast.
Memcache is used by Google Cloud in its Platform as a Service. It is a high-performance distributed key-value store primarily used to alleviate database load.
It’s like a large hash table distributed across several machines, enabling data access in O(1), i.e. constant time.
Redis is an open-source, in-memory distributed system which, besides key-value pairs, supports other data structures too, such as distributed lists, queues, strings, sets & sorted sets. Besides caching, Redis is also often used as a NoSQL data store.
8. More On the Blog
Well, folks! This is pretty much it on distributed caching. I hope you’ve got a fair idea about it by now. If you liked the write-up, do share it with your folks.
You can follow 8bitmen on social media. Also, consider subscribing to the browser notifications to stay notified on the new content published on the blog.
I’ll see you in the next write-up.
- What is Lift & Shift Migration to the Cloud? – An In-Depth Insight
- Distributed Cache 101 – The Only Guide You’ll Ever Need
- Facebook Real-time Chat Architecture Scaling With Over Multi-Billion Messages Daily
- Twitter’s Migration to Google Cloud – An Architectural Insight
- What Is an Instance In Cloud Computing? – A Thorough Guide