Kafka Replication: The case for MirrorMaker 2.0
Apache Kafka has become an essential component of enterprise data pipelines and is used for tracking clickstream event data, collecting logs, gathering metrics, and serving as the enterprise data bus in microservices-based architectures. Kafka is essentially a highly available and highly scalable distributed log of all the messages flowing through an enterprise data pipeline. Kafka supports internal replication to provide data availability within a cluster. However, enterprises require data availability and durability guarantees that span entire cluster and site failures.
The solution, thus far, in the Apache Kafka community has been MirrorMaker, an external utility that replicates data between two Kafka clusters within or across data centers. MirrorMaker is essentially a Kafka high-level consumer and producer pair that efficiently moves data from the source cluster to the destination cluster, and not much else. The initial use case MirrorMaker was designed for was moving data from clusters to an aggregate cluster within a data center, or to another data center, to feed batch or streaming analytics pipelines. Enterprises have a much broader set of use cases and requirements on replication guarantees.
Multiple vendors and Internet service companies have built their own proprietary solutions for cross-cluster replication (Brooklin MirrorMaker from LinkedIn, Mirus from Salesforce, uReplicator from Uber, and Confluent Replicator from Confluent), which points to the need for the Apache Kafka community to have an enterprise-ready cross-cluster replication solution as well.
Typical MirrorMaker Use Cases
There are many use cases in which data in one Kafka cluster needs to be replicated to another cluster. Some of the common ones are:
Aggregation for Analytics
A common use case is to aggregate data from multiple streaming pipelines, possibly across multiple data centers, to run batch analytics jobs that provide a holistic view across the enterprise, for example a completeness check that all customer requests have been processed.
Data Deployment after Analytics
This is the opposite of the aggregation use case in which the data generated by the analytics application in one cluster (say the aggregate cluster) is broadcast to multiple clusters possibly across data centers for end user consumption.
Isolation
Sometimes access to data in a production environment is restricted for performance or security reasons and data is replicated between different environments to isolate access. In many deployments the ingestion cluster is isolated from the consumption clusters.
Disaster Recovery
One of the most common enterprise use cases for cross-cluster replication is guaranteeing business continuity in the presence of cluster or data center-wide outages. This requires the applications, and the producers and consumers of the Kafka cluster, to fail over to the replica cluster.
Geo Proximity
In geographically distributed access patterns where low latency is required, replication is used to move data closer to the access location.
Cloud Migration
As more enterprises have both an on-prem and a cloud presence, Kafka replication can be used to migrate data to the public or private cloud and back.
Legal and Compliance
Much like the isolation use case, policy-driven replication is used to limit what data is accessible in a cluster in order to meet legal and compliance requirements.
Limitations of MirrorMaker v1
MirrorMaker is widely deployed in production but has serious limitations for enterprises looking for a flexible, high-performing, and resilient mirroring pipeline. Here are some of the concerns:
Static Whitelists and Blacklists
To control which topics get replicated between the source and destination cluster, MirrorMaker uses whitelists and blacklists with regular expressions or explicit topic listings. But these are configured statically. Usually, when new topics are created that match the whitelist, the new topic gets created at the destination and replication happens automatically. However, when the whitelist itself has to be updated, the MirrorMaker instances have to be bounced. Restarting MirrorMaker each time the list changes creates backlogs in the replication pipeline, causing operational pain points.
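For context, a minimal MirrorMaker v1 launch looks roughly like the following (the property files and topic regex here are hypothetical); note that the whitelist is fixed for the lifetime of the process:

```bash
# MirrorMaker v1: the whitelist is fixed at launch time.
# consumer.properties points at the source cluster,
# producer.properties at the destination cluster.
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --num.streams 4 \
  --whitelist "orders.*"   # changing this regex requires a restart
```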
No Syncing of Topic Properties
Using MMv1, a new or existing topic at the source cluster is automatically created at the destination cluster, either directly by the Kafka broker, if auto.create.topics.enable is set, or by MirrorMaker enhancements that use the Kafka admin client API directly. The problem is the configuration of the topic at the destination: MMv1 does not guarantee that the topic properties from the source will be maintained, as it relies on the cluster defaults at the destination. Say topic A has a partition count of 10 on the source cluster while the destination cluster default is 8; topic A will get created on the destination with 8 partitions. If an application relies on message ordering within a partition being carried over after replication, all hell breaks loose. Similarly, the replication factor could be different on the destination cluster, changing the availability guarantees of the replicated data. Even if the initial topic configuration is duplicated by an admin, any dynamic changes to the topic properties will not be automatically reflected. These differences become an operational nightmare very quickly.
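As a rough sketch of what an operator otherwise has to script by hand, the following uses the Kafka AdminClient to read a topic's partition count from the source cluster and create a matching topic at the destination. The cluster addresses, topic name, and replication factor are placeholders, not part of MMv1:

```java
import java.util.*;
import org.apache.kafka.clients.admin.*;

public class CopyTopicConfig {
    public static void main(String[] args) throws Exception {
        String topic = "orders";  // hypothetical topic name
        AdminClient src = AdminClient.create(
            Map.of("bootstrap.servers", "source-cluster:9092"));
        AdminClient dst = AdminClient.create(
            Map.of("bootstrap.servers", "dest-cluster:9092"));

        // Read the partition count from the source cluster.
        TopicDescription desc = src.describeTopics(List.of(topic))
            .all().get().get(topic);
        int partitions = desc.partitions().size();

        // Create the topic on the destination with the same partition
        // count (replication factor chosen here only for illustration;
        // dynamic config changes would still need separate syncing).
        NewTopic copy = new NewTopic(topic, partitions, (short) 3);
        dst.createTopics(List.of(copy)).all().get();

        src.close();
        dst.close();
    }
}
```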
Manual Topic Naming to avoid Cycles
By default, MirrorMaker creates a topic on the destination cluster with the same name as on the source cluster. This works fine when replication is a simple unidirectional pipeline between a source and a destination cluster. A bidirectional active-active setup, where all topics in cluster A are replicated to cluster B and vice versa, would create an infinite loop, which MirrorMaker cannot prevent without explicit naming conventions to break the cycle. Typically the cluster name is added to each topic name as a prefix, along with a blacklist that skips topics carrying the same prefix as the destination cluster. In large enterprises with multiple clusters in multiple data centers, it is easy to create a loop in the pipeline if the naming conventions are not set up correctly.
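A minimal sketch of this manual convention (the cluster aliases and helper names are hypothetical, and this is close to the remote-topic naming that MM2 later standardizes): a topic mirrored from cluster A is renamed A.orders at the destination, and anything already carrying the local cluster's prefix is never replicated back:

```java
// Hypothetical helpers illustrating the manual naming convention.
public class CyclePrevention {
    static String remoteName(String sourceCluster, String topic) {
        return sourceCluster + "." + topic;   // e.g. "A.orders"
    }

    static boolean shouldReplicate(String localCluster, String topic) {
        // Skip topics that originated here, breaking A->B->A loops.
        return !topic.startsWith(localCluster + ".");
    }

    public static void main(String[] args) {
        System.out.println(remoteName("A", "orders"));        // A.orders
        System.out.println(shouldReplicate("A", "A.orders")); // false
        System.out.println(shouldReplicate("A", "B.orders")); // true
    }
}
```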
Scalability and Throughput Limitations due to Rebalances
Internally, MirrorMaker uses the high-level consumer to fetch data from the source cluster, where partitions are assigned to the consumers within a consumer group via a group coordinator (or, earlier, via ZooKeeper). Each time there is a change in topics, say when a new topic is created, an old topic is deleted, or a partition count is changed, or when MirrorMaker itself is bounced for a software upgrade, a consumer rebalance is triggered. Rebalances stall the mirroring process, create a backlog in the pipeline, and increase the end-to-end latency observed by downstream applications. Such constant hiccups violate any latency-driven SLA that a service dependent on the mirrored pipeline needs to offer.
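To see where these stalls surface, here is a minimal consumer sketch (the cluster address, group id, and topic pattern are placeholders) with a wildcard subscription and a rebalance listener; every topic or partition change fires onPartitionsRevoked, and mirroring is paused until the new assignment arrives:

```java
import java.time.Duration;
import java.util.*;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class RebalanceWatcher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "source-cluster:9092");
        props.put("group.id", "mirror-group");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // Wildcard subscription: any matching topic change in the
        // cluster triggers a group-wide rebalance.
        consumer.subscribe(Pattern.compile("orders.*"),
            new ConsumerRebalanceListener() {
                public void onPartitionsRevoked(Collection<TopicPartition> p) {
                    // Mirroring is stalled from this point...
                    System.out.println("Rebalance started, paused: " + p);
                }
                public void onPartitionsAssigned(Collection<TopicPartition> p) {
                    // ...until the new assignment arrives.
                    System.out.println("Rebalance finished, resumed: " + p);
                }
            });
        while (true) {
            ConsumerRecords<byte[], byte[]> records =
                consumer.poll(Duration.ofMillis(500));
            // forward records to the destination producer here
        }
    }
}
```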
Lack of Monitoring and Operational Support
MirrorMaker provides minimal monitoring and management functions to configure, launch, and monitor the state of the pipeline, and it has no ability to trigger alerts when there is a problem. Most enterprises require more than just basic scripts to start and stop a replication pipeline.
No Disaster Recovery Support
A common enterprise requirement is to maintain service availability in the event of a catastrophic failure, such as the loss of an entire cluster or an entire data center. Ideally, in such an event, the consumers and producers reading from and writing to a cluster should seamlessly fail over to the destination cluster and fail back when the source cluster comes back up. MirrorMaker doesn’t support this seamless switch due to a fundamental limitation in offset management. The offsets of a topic in the source cluster and the offsets of the replica topic can be completely different, depending on the point in the topic’s lifetime at which replication began. Thus the committed offsets in the consumer offsets topic track a completely different location at the source than at the destination. If the consumers switch to the destination cluster, they cannot simply use the value of the last committed offset at the source to continue. One approach to dealing with this offset mismatch is to rely on timestamps (assuming time is relatively in sync across clusters). But timestamps get messy too, and we will discuss that at length in the next blog in the series, “A Look inside MirrorMaker 2”.
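A rough sketch of the timestamp-based workaround follows (the cluster address, group id, topic, and timestamp are placeholders): look up the timestamp of the last record the application processed at the source, then ask the destination cluster for the earliest offset at or after that timestamp, accepting that some records may be replayed:

```java
import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class TimestampFailover {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "dest-cluster:9092");
        props.put("group.id", "orders-app");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Timestamp of the last message processed at the source;
        // in practice the application must track this itself.
        long lastProcessedTs = 1_700_000_000_000L;  // placeholder

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("orders", 0);
        consumer.assign(List.of(tp));

        // Find the first destination offset at or after that timestamp.
        Map<TopicPartition, OffsetAndTimestamp> found =
            consumer.offsetsForTimes(Map.of(tp, lastProcessedTs));
        OffsetAndTimestamp oat = found.get(tp);
        if (oat != null) {
            consumer.seek(tp, oat.offset());  // may replay some records
        }
        ConsumerRecords<byte[], byte[]> records =
            consumer.poll(Duration.ofSeconds(1));
        System.out.println("Resumed with " + records.count() + " records");
    }
}
```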
Lack of Exactly Once Guarantees
MirrorMaker is not set up to utilize Kafka’s support for exactly-once processing semantics and follows the default at-least-once semantics Kafka provides. Thus duplicate messages can show up in the replicated topic, especially after failures, since the produce to the replicated topic at the destination cluster and the update to the __consumer_offsets topic at the source cluster are not executed together in one transaction, as exactly-once replication would require. Mostly the problem is left to the downstream application, which has to handle duplicates correctly.
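Within a single cluster, Kafka’s transactional API shows what “produce and commit offsets atomically” looks like; MirrorMaker v1 does not do this, and across two clusters one transaction is not even possible, since the destination produce and the source offset commit live on different clusters. A minimal single-cluster sketch (topic names, group id, and offset are placeholders):

```java
import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;

public class AtomicProduceAndCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "cluster:9092");
        props.put("transactional.id", "mirror-tx-1");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
        producer.initTransactions();

        producer.beginTransaction();
        producer.send(new ProducerRecord<>("orders-replica", "msg".getBytes()));
        // Commit the consumer's position in the same transaction, so the
        // produce and the offset update succeed or fail together.
        producer.sendOffsetsToTransaction(
            Map.of(new TopicPartition("orders", 0), new OffsetAndMetadata(42L)),
            new ConsumerGroupMetadata("mirror-group"));
        producer.commitTransaction();
        producer.close();
    }
}
```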
Too many MirrorMaker Clusters
Traditionally, a MirrorMaker cluster is paired with the destination cluster, so there is a mirroring cluster for each destination cluster, following a remote-consume and local-produce pattern. For example, for 2 data centers with 8 clusters each and 8 bidirectional replication pairs, there are 16 MirrorMaker clusters: each bidirectional pair is two unidirectional pipelines, and each pipeline needs its own MirrorMaker cluster at its destination. For large data centers this can significantly increase the operational cost. Ideally, there should be one MirrorMaker cluster per destination data center; in the above example there would be 2 MirrorMaker clusters, one in each data center.
What is coming in MirrorMaker 2
MirrorMaker 2 was designed to address the limitations of MirrorMaker 1 listed above. MM2 is based on the Kafka Connect framework and can change configurations dynamically, keep topic properties in sync across clusters, and improve performance significantly by reducing rebalances to a minimum. Moreover, handling active-active clusters and disaster recovery are use cases that MM2 supports out of the box. MM2 (KIP-382) has been accepted as part of Apache Kafka. If you’re interested in learning more, take a look at Ryanne Dolan’s talk at Kafka Summit, and stand by for the next blog in this series, “A Look inside MirrorMaker 2”.
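As a taste of the Connect-based model, here is a minimal MM2 configuration sketch; the cluster aliases and addresses are placeholders, and the property names follow KIP-382, so the released version may differ in detail:

```properties
# mm2.properties: one MM2 deployment can drive multiple replication flows.
clusters = A, B
A.bootstrap.servers = a-cluster:9092
B.bootstrap.servers = b-cluster:9092

# Enable bidirectional, active-active replication; remote topics are
# automatically prefixed (e.g. "A.orders" on cluster B), breaking cycles.
A->B.enabled = true
A->B.topics = .*
B->A.enabled = true
B->A.topics = .*
```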