From  http://simongui.github.io/2016/12/02/improving-cache-consistency.html

A typical web application introduces an in-memory cache like Memcache or Redis to reduce load on the primary database for reads requesting hot data. The most primitive design looks something like Figure 1.

+--------------------------------+        +------------+        +----------------+
|            database            <--------+ web server +-------->     cache      |
| mssql, mysql, oracle, postgres |        +------------+        | memcache/redis |
+--------------------------------+                              +----------------+

Figure 1

Unfortunately this design is really common despite the many issues it introduces. I’ve seen organizations with large-scale applications still using this design, maintaining a bunch of hacks to overcome these issues, which increases the system’s operational complexity and sometimes surfaces as inconsistent data to end users.

Issue 1. Pool of connections to the cache services per web server instance

In a large application, sometimes thousands of web server instances (especially in slower languages like Ruby) host the web application. Each one has to maintain connections to the infrastructure the web application code communicates with directly. This can include primary databases like MSSQL, MySQL, Oracle, or Postgres and cache services like Memcache or Redis. Each web server instance would, for example, have a pool of connections for each database or cache service instance it communicates with.

                +------------------------------------------------------------------------+
                |               database (mssql, mysql, oracle, postgres)                |
                +----^--^-----------^--^-----------^--^-----------^--^-----------^--^----+
                     |  |           |  |           |  |           |  |           |  |
N connections        |  |           |  |           |  |           |  |           |  |
                     |  |           |  |           |  |           |  |           |  |
                +------------+ +------------+ +------------+ +------------+ +------------+
                | web server | | web server | | web server | | web server | | web server |
                +------------+ +------------+ +------------+ +------------+ +------------+
                     |  |           |  |           |  |           |  |           |  |
N connections        |  |           |  |           |  |           |  |           |  |
                     |  |           |  |           |  |           |  |           |  |
                +----v--v-----------v--v-----------v--v-----------v--v-----------v--v----+
                |                        cache (memcache, redis)                         |
                +------------------------------------------------------------------------+

Figure 2

This strains resources on the web server and, more importantly, on the database or cache service, as shown in Figure 2. This is why I included a 16,384 connection benchmark in my benchmarks of Redis server libraries for Go to see how they scaled. It’s not uncommon to see 10,000 or 20,000 connections to a Memcache or Redis server in a large system designed like this.
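
To make the math concrete, here’s a back-of-the-envelope sketch in Go. The instance count and pool size are hypothetical, but they show how quickly per-instance pools add up on the cache side.

    package main

    import "fmt"

    func main() {
        // Hypothetical deployment numbers; adjust to your own fleet.
        webServers := 2000    // web server instances hosting the application
        poolPerInstance := 10 // cache connections each instance keeps open

        // Every instance holds its own pool open, so the cache server has to
        // accept the product of the two, even when most connections sit idle.
        total := webServers * poolPerInstance
        fmt.Printf("connections each cache server must accept: %d\n", total)
    }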

Issue 2. Many web app requests have to execute cache set operations

Similar to how an HTTP request may issue multiple SQL INSERT or UPDATE statements, multiple SET operations may be issued against the cache service. Even though these can be done asynchronously, they still consume resources on the web server, and it would be great if the web servers only had to be concerned with updating the primary database.
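
Here’s a minimal sketch of that extra work on the web server. The cacheSet helper is a hypothetical stand-in for a real Memcache or Redis client call; even though the SETs run in goroutines, they still cost the instance connections, CPU and memory.

    package main

    import (
        "fmt"
        "sync"
    )

    // cacheSet is a hypothetical wrapper around the cache client.
    func cacheSet(key, value string) error {
        fmt.Printf("SET %s = %q\n", key, value)
        return nil
    }

    // handleProfileUpdate updates the primary database (omitted here) and then
    // issues several cache SETs asynchronously for the same request.
    func handleProfileUpdate(userID, name, email string) {
        updates := map[string]string{
            "user:" + userID + ":name":  name,
            "user:" + userID + ":email": email,
        }

        var wg sync.WaitGroup
        for k, v := range updates {
            wg.Add(1)
            go func(k, v string) {
                defer wg.Done()
                if err := cacheSet(k, v); err != nil {
                    // A failed SET silently leaves the cache stale (see Issue 3).
                    fmt.Printf("cache set %s failed: %v\n", k, err)
                }
            }(k, v)
        }
        wg.Wait()
    }

    func main() {
        handleProfileUpdate("42", "Alice", "alice@example.com")
    }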

Issue 3. No fault tolerance. Data loss if cache set operations fail

The typical sequence of operations for the design in Figure 2 would be as follows (a Go sketch of the sequence appears after the list).

  • Update the primary database (MSSQL, MySQL, Oracle, Postgres, etc).
  • If the transaction fails, return an HTTP error.
  • If the transaction succeeds send SET operations to the cache server(s) (memcache, redis, etc).
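
Here’s a minimal sketch of that sequence, assuming MySQL behind database/sql with the go-sql-driver/mysql driver and a hypothetical cacheSet helper wrapping the cache client. If the SET still fails after a few retries, the handler can only log and give up, leaving the cache inconsistent until the key expires.

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"
        "time"

        _ "github.com/go-sql-driver/mysql" // assumption: MySQL is the primary database
    )

    var db *sql.DB

    // cacheSet is a hypothetical wrapper around the Memcache/Redis client.
    func cacheSet(key, value string) error {
        return fmt.Errorf("cache unavailable") // simulate a failing cache service
    }

    func updateEmail(w http.ResponseWriter, r *http.Request) {
        userID, email := r.FormValue("id"), r.FormValue("email")

        // 1. Update the primary database.
        if _, err := db.Exec("UPDATE users SET email = ? WHERE id = ?", email, userID); err != nil {
            // 2. If the transaction fails, return an HTTP error.
            http.Error(w, "database update failed", http.StatusInternalServerError)
            return
        }

        // 3. If it succeeds, send the SET operation to the cache, retrying a few times.
        var err error
        for attempt := 1; attempt <= 3; attempt++ {
            if err = cacheSet("user:"+userID+":email", email); err == nil {
                break
            }
            time.Sleep(time.Duration(attempt) * 100 * time.Millisecond)
        }
        if err != nil {
            // Nothing more the web app can do: the cache is now inconsistent with the
            // database until the key is invalidated by a TTL or some other process.
            log.Printf("cache set failed after retries: %v", err)
        }
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        var err error
        db, err = sql.Open("mysql", "app:password@tcp(localhost:3306)/app") // hypothetical DSN
        if err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/update-email", updateEmail)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }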

Any SET operation could fail even after retrying, which leaves the cache service(s) inconsistent with the primary database and could result in users seeing incorrect information. Even worse, depending on how the application is designed, you could experience partial failures, which results in users seeing partially correct and partially incorrect information after a change and a cache hit.

Some cache service protocols support sending multiple SET operations in one command, but some do not. Not all web applications are smart enough to group SET operations that happen in different areas of the code into a single command either. If this is the case you could have partial failures where some of the SET operations succeed and some fail.
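
Where the cache does support it, grouping the writes avoids partial failure within a single request. A minimal sketch, assuming the github.com/redis/go-redis/v9 client and Redis’s MSET command, which sets all of the listed keys in one atomic command:

    package main

    import (
        "context"
        "log"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // hypothetical address

        // One round trip and one atomic command instead of several independent SETs.
        err := rdb.MSet(ctx,
            "user:42:name", "Alice",                 // illustrative keys and values
            "user:42:email", "alice@example.com",
        ).Err()
        if err != nil {
            log.Printf("cache multi-set failed, cache may now be stale: %v", err)
        }
    }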

Outside of retrying, there’s not much the web application can do to eventually correct the missing cache SET operations. It has to retry and give up at some point. The cache will keep serving cache hits that are inconsistent with the primary database until the cache key(s) are invalidated via a TTL or some other process.

Messaging middleware

Sometimes this gets solved by messaging middleware like Kafka: the web applications push SET operations into Kafka, and consumers pull changes from Kafka and execute the SET operations on the cache service(s). This greatly improves cache consistency and allows the caches to survive failures and catch up after short or long outages.
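
A minimal sketch of such a consumer, assuming the github.com/segmentio/kafka-go reader and the github.com/redis/go-redis/v9 client, with a hypothetical cache-updates topic whose message key is the cache key and whose value is the cached payload:

    package main

    import (
        "context"
        "log"

        "github.com/redis/go-redis/v9"
        "github.com/segmentio/kafka-go"
    )

    func main() {
        ctx := context.Background()
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        r := kafka.NewReader(kafka.ReaderConfig{
            Brokers: []string{"localhost:9092"},
            GroupID: "cache-writers", // consumer group so several consumers can share partitions
            Topic:   "cache-updates", // hypothetical topic carrying key/value cache updates
        })
        defer r.Close()

        for {
            msg, err := r.ReadMessage(ctx) // blocks until a message arrives
            if err != nil {
                log.Fatalf("reading from kafka: %v", err)
            }
            // Apply the update to the cache. A production consumer would add
            // retry or dead-letter handling here.
            if err := rdb.Set(ctx, string(msg.Key), msg.Value, 0).Err(); err != nil {
                log.Printf("cache set for %q failed: %v", msg.Key, err)
            }
        }
    }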

This introduces latency in the system: changes may not be visible to users right away. Some web applications solve this with sticky sessions and in-memory caching in the web application to hide that the data is inconsistent. Stale results are still possible if the web server fails and requests route to a different web server instance. This adds complexity to the request routing tier of the system.

                +------------------------------------------------------------------------+
                |               database (mssql, mysql, oracle, postgres)                |
                +----^--^-----------^--^-----------^--^-----------^--^-----------^--^----+
                     |  |           |  |           |  |           |  |           |  |
N connections        |  |           |  |           |  |           |  |           |  |
                     |  |           |  |           |  |           |  |           |  |
                +----+--+----+ +----+--+----+ +----+--+----+ +----+--+----+ +----+--+----+
                | web server | | web server | | web server | | web server | | web server |
                +----+--+----+ +----+--+----+ +----+--+----+ +----+--+----+ +----+--+----+
                     |  |           |  |           |  |           |  |           |  |
N connections        |  |           |  |           |  |           |  |           |  |
                     |  |           |  |           |  |           |  |           |  |
                +----v--v-----------v--v-----------v--v-----------v--v-----------v--v----+
                |                    message queue (kafka, rabbitmq)                     |
                +----------------------------------^--^----------------------------------+
                                                   |  |
N connections                                      |  |
                                                   |  |
                                            +------+--+------+
                                            | kafka consumer |
                                            +------+--+------+
                                                   |  |
N connections                                      |  |
                                                   |  |
                +----------------------------------v--v----------------------------------+
                |                        cache (memcache, redis)                         |
                +------------------------------------------------------------------------+

Figure 3

As shown in Figure 3, this greatly reduces the connection load on the cache service but introduces a lot of operational complexity, such as the following.

  • Deploy and operate a high throughput messaging system like Kafka with multiple brokers to survive broker failures.
  • Deploy and operate multiple consumer processes that consume messages in Kafka and execute SET operations to the cache service(s) to survive consumer failures.

Issue 4. No sequential consistency with the primary database

Leslie Lamport describes sequential consistency as follows.

The result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.

Figure 3 greatly improves fault tolerance and reduces the chances of losing an update; however, it does not address ordering. Issue 3 describes the possibility of complete and partial failures and explains how a user could see partially up-to-date and partially stale results. Diving deeper, earlier operations could fail while later operations succeed, so the order of visible changes could be out-of-order. Some applications may be more sensitive to this kind of inconsistency, and some may require a strict partial order. Even if order isn’t critical, providing sequential consistency is a better experience for users and less confusing.

Solution: MySQL binlog replication

Figure 3 shows the benefits of a shared message queue; however, deploying one with fault tolerance is not trivial, and operating one smoothly isn’t trivial either. If you use a database with replication, there’s already a queue in your system, and you may not need to deploy yet another queue and a new piece of infrastructure like Kafka to solve some of these problems.

+----------+---+---+---+---+---+   binlog replication   +--------------------------+
|  MySQL   | 1 | 2 | 3 | 4 | 5 <------------------------+ MySQL replication client |
+----------+---+---+---+---+---+                        +--------------------------+
          MySQL binlog
        binlog positions

Figure 4

MySQL has a binlog replication protocol which is used for primary/secondary replication. It is essentially a replicated queue that records all transactions in order, as shown in Figure 4.
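
Anything that speaks this protocol can consume that queue directly. Here’s a minimal sketch, assuming the community github.com/go-mysql-org/go-mysql library and hypothetical connection details, that streams binlog events the same way a replica would:

    package main

    import (
        "context"
        "log"
        "os"

        "github.com/go-mysql-org/go-mysql/mysql"
        "github.com/go-mysql-org/go-mysql/replication"
    )

    func main() {
        cfg := replication.BinlogSyncerConfig{
            ServerID: 1001, // must be unique among replicas connected to this primary
            Flavor:   "mysql",
            Host:     "127.0.0.1",
            Port:     3306,
            User:     "repl",   // hypothetical replication user
            Password: "secret",
        }
        syncer := replication.NewBinlogSyncer(cfg)

        // Start streaming from a known binlog file and position (hypothetical values).
        streamer, err := syncer.StartSync(mysql.Position{Name: "mysql-bin.000001", Pos: 4})
        if err != nil {
            log.Fatal(err)
        }

        for {
            ev, err := streamer.GetEvent(context.Background())
            if err != nil {
                log.Fatal(err)
            }
            // Events arrive in commit order; this is where SET operations
            // against the cache service(s) would be issued.
            ev.Dump(os.Stdout)
        }
    }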

This isn’t a popular solution, but I say, why not? It works very well. You can write an application that speaks the MySQL binlog replication protocol, consumes the binlog entries, and executes SET operations against the cache service(s). There are two ways you could consume the binlog data.

  • Interpret the raw SQL syntax and issue SET operations.
  • The web application embeds cache keys as a comment in the SQL.

Both of these options are good because you can even get the transaction scope of each transaction from the binlog statements if you need to, and if the target system supports atomic multi-set operations. I prefer the second option because it’s easier to parse and the application already has this information in most cases.
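
Here’s a minimal sketch of the second option, assuming a hypothetical convention where the web application prepends a comment such as /* cache-keys: user:42:name,user:42:email */ to each statement. The replication client only has to find that comment in the replicated SQL and issue SET operations (or invalidations) for the listed keys:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // Matches the hypothetical /* cache-keys: ... */ comment convention.
    var cacheKeyComment = regexp.MustCompile(`/\*\s*cache-keys:\s*([^*]+?)\s*\*/`)

    // cacheKeysFromSQL returns the cache keys embedded in a statement's comment.
    func cacheKeysFromSQL(sql string) []string {
        m := cacheKeyComment.FindStringSubmatch(sql)
        if m == nil {
            return nil
        }
        var keys []string
        for _, k := range strings.Split(m[1], ",") {
            if k = strings.TrimSpace(k); k != "" {
                keys = append(keys, k)
            }
        }
        return keys
    }

    func main() {
        stmt := `/* cache-keys: user:42:name,user:42:email */ UPDATE users SET name = 'Alice' WHERE id = 42`
        for _, key := range cacheKeysFromSQL(stmt) {
            // Here the replication client would read fresh values and SET them in the cache.
            fmt.Println("would update cache key:", key)
        }
    }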

                +------------+ +------------+ +------------+ +------------+ +------------+
                | web server | | web server | | web server | | web server | | web server |
                +------------+ +------------+ +------------+ +------------+ +------------+
                     |  |           |  |           |  |           |  |           |  |
N connections        |  |           |  |           |  |           |  |           |  |
                     |  |           |  |           |  |           |  |           |  |
                +----v--v-----------v--v-----------v--v-----------v--v-----------v--v----+
                |               database (mssql, mysql, oracle, postgres)                |
                +------------------------------------^-----------------------------------+
                                                     |
1 connection                                         |
                                                     |
                                       +---------------------------+
                                       | binlog replication client |
                                       +---------------------------+
                                                   |  |
N connections                                      |  |
                                                   |  |
                +----------------------------------v--v----------------------------------+
                |                        cache (memcache, redis)                         |
                +------------------------------------------------------------------------+

Figure 5

Figure 5 shows the overall architecture with the binlog replication in place.

Benefits

  • Drastically reduces connection load on the cache service(s). Web servers only connect to the database.
  • Sequential consistency because we are reading the database’s commit log into the cache service(s).
  • Possible to connect to any MySQL replica in the replication chain since they are all sequentially consistent.

I love Kafka and have nothing against it; I use it myself. Reducing infrastructure simplified the architecture and reduced operational complexity. By replicating the MySQL commit log to the cache service(s) we have increased consistency as well as gained a strict partial order between the database and the cache service(s).

I’m currently working on a project in Go that provides this proposed functionality that I’ll announce at a later date. Contact me if you want to know more about it.
