A cache (as defined by Wikipedia) is a component that transparently stores data so that future requests for that data can be served faster. I will presume that you understand caching as a component, along with the common architectural patterns around it, so I will not go into depth on caching itself. This article will cover some of the fundamentals of caching (where relevant) and then take a deep dive into a point of view on caching architecture for a Content Management Platform, in the context of an Adobe AEM implementation.
The principles of high performance and high availability don't change, but for the sake of discussion let's assume we have a website that must meet the following needs:
- 1 billion hits over a weekend (a hit is any call to a resource, including static resources like CSS, JS, and images)
- 700 million hits in a day
- 7.2 million page views in a day
- 2.2 million page views in an hour
- 80K hits in a second
- 40K page views in a minute
- 612 page views in a second
- 24×7 site availability
- 99.99% uptime
- Content availability to consumers in under 5 minutes from the time editors publish content
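A quick back-of-the-envelope check (sketched here in Python; the numbers are the targets listed above) shows why caching is the crux of this problem: the stated peaks sit roughly an order of magnitude above the day-long averages, which is exactly the bursty profile campaign traffic produces.

```python
# Sanity check on the stated load targets (all figures from the list above).
# Comparing stated peaks against day-long averages shows how bursty the traffic is.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

hits_per_day = 700_000_000
page_views_per_day = 7_200_000
page_views_per_hour = 2_200_000  # busiest hour

peak_hits_per_sec = 80_000   # stated peak
peak_views_per_sec = 612     # stated peak

avg_hits_per_sec = hits_per_day / SECONDS_PER_DAY
avg_views_per_sec = page_views_per_day / SECONDS_PER_DAY

print(f"average hits/sec:  {avg_hits_per_sec:,.0f}")    # ~8,102
print(f"average views/sec: {avg_views_per_sec:,.0f}")   # ~83
print(f"peak/avg hits:  {peak_hits_per_sec / avg_hits_per_sec:.1f}x")    # ~9.9x
print(f"peak/avg views: {peak_views_per_sec / avg_views_per_sec:.1f}x")  # ~7.3x

# The busiest hour alone carries roughly 31% of the entire day's page views:
print(f"busiest-hour share: {page_views_per_hour / page_views_per_day:.0%}")
```

In other words, provisioning origin servers for the average load would leave us an order of magnitude short at peak; serving the peak from cache is what makes the numbers tractable.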
While these numbers look steep, the use case is not an uncommon one. In a world where everyone is moving to devices and digital, brands run campaigns, and when those campaigns run we have to support loads this steep. These loads don't last long, but when they come they come fast and thick, and we have to be ready for them.
For the record, this is not some random theory I am writing: I have had the opportunity to work on a project (which I can't name) where we supported similar numbers.
The use case I picked here is a Digital Media Platform where a large portion of the content is static, but the principles I discuss will apply to any other platform or application.