- Client-side cache: The client maintains its own cache. For this example, let's say the client keeps a simple hashtable of key-value pairs, where the key is the employee ID and the value is the employee data. Whenever the client needs data for an employee, it first checks its internal cache; if the employee ID is found there, no call to the server is needed, saving a round trip. Results are lightning fast in this case, as the client serves them internally. As with any cache, we need to take care of certain aspects, such as cache expiration and limiting the number of records that can be cached. The record limit is critical in client-side caching because we are dependent on the user's machine. If the caller of the employee service is another service, we can keep a greater number of records in the cache, depending on who is calling and the business requirements. Similarly, we need to make sure cache records expire. Again, this depends on our business needs, such as how often we expect our employee records to be updated. A small sketch of such a cache appears after this list.
- Server-side cache: Here, caching is done at the service level rather than at the caller level. The microservice maintains a cache of its own. Many libraries provide caching off the shelf, such as JCache or Memcached. You can also use a third-party cache, such as Redis, or build a simple caching mechanism within the code, as per your application's needs. The core idea is that we should not redo all the work when refetching the same data. For example, when we ask for an employee record by employee ID, we might fetch data from one or more databases and perform several calculations. The first time, we do all of this work and then store the record in the cache. The next time the same employee ID is requested, we simply send back the cached record. Again, we need to consider aspects such as expiration and cache size, as we discussed for the client-side cache. The second sketch after this list shows this pattern.
- Proxy caching: Proxy caching is another technique that is gaining popularity. The idea is not to hit the main application server directly. The request first goes to a proxy server, and if the required data or artifact is available there, we can avoid a call to the main server altogether. The proxy server is usually close to the client, often in the same geographical area, so it is faster to access. Moreover, it helps reduce the load on the main server. The third sketch after this list shows how a service can mark its responses as cacheable by such a proxy.
- Distributed caching: As the name suggests, distributed caching is a mechanism for maintaining a cache in more than one place. There are multiple ways to implement it. In its simplest form, we keep a copy of the cache in multiple places, which helps divide the load among multiple sources. This approach is useful when we expect heavy load but the amount of data to be cached is not too large. The other case is when we have lots of data to cache and cannot fit it into a single cache; we then divide the data across multiple caching servers. The division can be based on application requirements. In some cases, the cache can be distributed based on geography; for example, users in India are served from a different cache than users in the US. Another simple piece of logic for cache distribution is based on the data itself; for example, we spread employee records across machines by the first letter of the employee's first name: A to F on machine 1, G to M on machine 2, and N to Z on machine 3. The scheme can be devised based on application requirements, but the core idea is to cache data on multiple distributed machines for easy access. The last sketch after this list shows such key-based routing.
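Here is a minimal sketch of the client-side cache from the first point, assuming a Java client. The names (ClientCache, fetchFromServer) and the size and TTL values are illustrative, not from any particular library; a real client might use Caffeine or Guava instead of hand-rolling this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded, expiring client-side cache keyed by employee ID.
public class ClientCache {
    private static final int MAX_ENTRIES = 1000;         // size limit (assumed value)
    private static final long TTL_MILLIS = 10 * 60_000;  // expire entries after 10 minutes

    private record CacheEntry(Object value, long storedAt) {}

    // LinkedHashMap in access order evicts the least recently used
    // entry once the cache grows beyond MAX_ENTRIES.
    private final Map<String, CacheEntry> cache =
            new LinkedHashMap<String, CacheEntry>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, CacheEntry> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

    public Object get(String employeeId) {
        CacheEntry e = cache.get(employeeId);
        if (e == null || System.currentTimeMillis() - e.storedAt() > TTL_MILLIS) {
            cache.remove(employeeId);                      // missing or expired
            Object fresh = fetchFromServer(employeeId);    // the round trip we try to avoid
            cache.put(employeeId, new CacheEntry(fresh, System.currentTimeMillis()));
            return fresh;
        }
        return e.value(); // served locally, no server call
    }

    private Object fetchFromServer(String employeeId) {
        // Placeholder for the real remote call (HTTP, gRPC, etc.).
        return "employee-data-for-" + employeeId;
    }
}
```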
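The server-side case boils down to "compute once, serve from cache afterwards". A minimal sketch in plain Java follows; EmployeeService and buildRecord are assumed names, and a production version would add the expiration and size limits discussed above (or simply use JCache or Redis):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The microservice caches computed records keyed by employee ID.
public class EmployeeService {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String getEmployee(String employeeId) {
        // computeIfAbsent runs the expensive build only on a cache miss;
        // repeat requests for the same ID are answered from the map.
        return cache.computeIfAbsent(employeeId, this::buildRecord);
    }

    private String buildRecord(String employeeId) {
        // Stand-in for the real work: database queries plus calculations.
        return "record-for-" + employeeId;
    }
}
```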
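For proxy caching, the service itself mostly just needs to declare which responses a proxy may store. The sketch below uses the JDK's built-in HttpServer to serve a made-up employee endpoint with a Cache-Control header; the path, port, and payload are illustrative assumptions:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class EmployeeEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/employees/42", exchange -> {
            byte[] body = "{\"id\":42,\"name\":\"Jane Doe\"}".getBytes();
            // Cache-Control tells any proxy between the client and this
            // server that the response may be stored and reused for up
            // to 300 seconds without another call to the main server.
            exchange.getResponseHeaders().set("Cache-Control", "public, max-age=300");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

With this header in place, a reverse proxy or CDN node sitting in front of the service can answer repeat requests itself until the entry expires.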
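Finally, a minimal sketch of the first-letter distribution scheme described in the last point. The node addresses and routing rule are illustrative assumptions; real systems often use consistent hashing instead, but the idea of routing each key to a fixed cache node is the same:

```java
import java.util.List;

// Routes each employee record to one of three cache nodes
// based on the first letter of the employee's first name.
public class CacheRouter {
    private static final List<String> NODES =
            List.of("cache-1:6379", "cache-2:6379", "cache-3:6379"); // assumed addresses

    // A-F -> node 0, G-M -> node 1, N-Z -> node 2.
    static String nodeFor(String firstName) {
        char c = Character.toUpperCase(firstName.charAt(0));
        if (c >= 'A' && c <= 'F') return NODES.get(0);
        if (c >= 'G' && c <= 'M') return NODES.get(1);
        return NODES.get(2);
    }

    public static void main(String[] args) {
        System.out.println(nodeFor("Alice")); // cache-1:6379
        System.out.println(nodeFor("Maya"));  // cache-2:6379
        System.out.println(nodeFor("Raj"));   // cache-3:6379
    }
}
```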