In this article, we discuss the need to segregate the read and write data models and to use event sourcing to capture detailed data changes. These two aspects are critical for data analysis in the big data world. We will compare several candidate solutions and conclude that a change data capture (CDC) strategy is a natural match for the CQRS pattern.

Context and Problem

To support business decision-making, we demand fresh and accurate data that’s available where and when we need it, often in real time.

However:

  • when business analysts run analyses directly against the production databases, those databases become (or will become) overloaded;
  • process details (the transaction stream) that are valuable for analysis may already have been overwritten;
  • OLTP data models may not be friendly to analytical purposes.

We aim to come up with an efficient solution that captures the detailed transaction stream and ingests the data into Hadoop for analysis.

CQRS and Event Sourcing Pattern

CQRS-based systems use separate read and write data models, each tailored to relevant tasks and often located in physically separate stores.

Event-sourcing: Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. 

Decoupling: one team of developers can focus on the complex domain model that is part of the write model, while another team focuses on the read model and the user interfaces.
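
As a minimal sketch of the event-sourcing idea (the AccountEvent type, method names, and amounts are ours, purely for illustration, not a specific framework's API):

    import java.util.ArrayList;
    import java.util.List;

    // Event-sourcing sketch: actions are appended to the log, never updated in
    // place, and current state is derived by replaying the full series.
    public class EventSourcedAccount {

        // One recorded action on the account; fields are illustrative.
        record AccountEvent(String type, long amountCents) {}

        private final List<AccountEvent> eventLog = new ArrayList<>(); // append-only store

        public void deposit(long amountCents)  { eventLog.add(new AccountEvent("DEPOSIT", amountCents)); }
        public void withdraw(long amountCents) { eventLog.add(new AccountEvent("WITHDRAW", amountCents)); }

        // Current state is a fold over the event log, so no history is lost.
        public long balanceCents() {
            long balance = 0;
            for (AccountEvent e : eventLog) {
                balance += "DEPOSIT".equals(e.type()) ? e.amountCents() : -e.amountCents();
            }
            return balance;
        }
    }

Because every action is kept, a consumer can reconstruct the state at any point in time instead of seeing only the latest value; this is exactly the process detail that plain OLTP tables overwrite.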

Ingest Solutions - dual writes

Dual Write

  • adds complexity to the business system
  • is less fault tolerant when the backend message queue is blocked or under maintenance
  • suffers from race conditions and consistency problems

Business log

  • raises data-sensitivity concerns
  • adds complexity to the business system

Ingest Solutions - database operations

Snapshot

  • data in the database is constantly changing, so the snapshot is already out-of-date by the time it’s loaded
  • even if you take a snapshot once a day, you still have one-day-old data in the downstream system
  • on a large database those snapshots and bulk loads can become very expensive

Data offload

  • adds operational complexity
  • is unable to meet low-latency requirements
  • cannot handle delete operations

Ingest Solutions - capture data changes

Process only the “diff” of changes:

  • write all your data to a single primary database;
  • extract two things from that database: a consistent snapshot, and a real-time stream of changes.

Benefits:

  • decouples from the business system
  • achieves latency of less than a second
  • the stream preserves the ordering of writes, reducing race conditions
  • the pull strategy is robust to data corruption (the log can be replayed)
  • supports as many different data consumers as required (a minimal consumer sketch follows this list)
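
As an illustration of the last point, a minimal Kafka consumer of the change stream might look like the sketch below (the bootstrap server and topic name are assumptions; Debezium names topics <server>.<schema>.<table>):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Sketch of one downstream consumer of the CDC stream; any number of
    // consumer groups can read the same stream independently.
    public class ChangeStreamConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // assumed address
            props.put("group.id", "cdc-analytics");            // each group gets its own copy
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("pgserver1.public.orders")); // assumed Debezium topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        // One record per row-level change, in write order within a partition.
                        System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
                    }
                }
            }
        }
    }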

Ingest Solutions - wrap-up

Considering data applications as part of the overall picture of the business application, we will focus on the ‘capture changes to data’ component.

Open Source for Postgres to Kafka

Sqoop
can only take full snapshots of a database and cannot capture an ongoing stream of changes. Also, the transactional consistency of its snapshots is not well supported (Apache License 2.0).
pg_kafka
is a Kafka producer client in a Postgres function, so we could potentially produce to Kafka from a trigger. (MIT license)
bottledwater-pg
is a change data capture (CDC) tool built specifically for streaming from PostgreSQL into Kafka (Apache License 2.0, from Confluent Inc.)
debezium-pg
is a change data capture platform for a variety of databases, including PostgreSQL (Apache License 2.0, from Red Hat)

Comparatively, Debezium for Postgres is the better choice.

Debezium for Postgres Architecture

debezium/postgres-decoderbufs

  • manually build the logical decoding output plugin
  • change the PostgreSQL configuration to preload the plugin library, then restart the PostgreSQL service (illustrative settings follow this list)
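
The postgresql.conf settings involved look roughly like this (values are illustrative; the essential points are preloading the plugin and enabling logical decoding):

    # postgresql.conf - illustrative settings for logical decoding with decoderbufs
    shared_preload_libraries = 'decoderbufs'  # preload the compiled output plugin
    wal_level = logical                       # required for logical decoding
    max_wal_senders = 4                       # at least one WAL sender for the connector
    max_replication_slots = 4                 # at least one slot for the connector

A restart is required because shared_preload_libraries is only read at server start.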

debezium/debezium

  • compile and package the dependent JAR files

Kafka Connect

  • deploy a distributed Kafka Connect service
  • start a Debezium connector in Kafka Connect (a registration example follows this list)
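
Starting the connector amounts to POSTing a JSON configuration to the Kafka Connect REST API; a sketch, in which the host names, credentials, and logical server name are placeholders:

    # register a Debezium PostgreSQL connector (Kafka Connect listens on 8083 by default)
    curl -s -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
      "name": "pg-cdc-connector",
      "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "decoderbufs",
        "database.hostname": "localhost",
        "database.port": "5432",
        "database.user": "replicator",
        "database.password": "secret",
        "database.dbname": "orders_db",
        "database.server.name": "pgserver1"
      }
    }'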

HBase connector

  • development work: implement an HBase sink connector for PG CDC events (a minimal sink-task sketch follows this list)
  • start the HBase connector in Kafka Connect
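
A minimal sketch of such a sink task, built on the standard Kafka Connect SinkTask API and the HBase client (the class name, table name, and row-key scheme are our assumptions):

    import java.util.Collection;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    // Sketch of a Kafka Connect sink task that writes PG CDC events into HBase.
    public class HBaseCdcSinkTask extends SinkTask {
        private Connection connection;
        private Table table;

        @Override
        public void start(Map<String, String> props) {
            try {
                Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
                connection = ConnectionFactory.createConnection(conf);
                table = connection.getTable(TableName.valueOf(props.getOrDefault("hbase.table", "cdc_events")));
            } catch (Exception e) {
                throw new RuntimeException("failed to connect to HBase", e);
            }
        }

        @Override
        public void put(Collection<SinkRecord> records) {
            try {
                for (SinkRecord record : records) {
                    // topic/partition/offset makes a unique, replay-safe row key (illustrative choice)
                    String rowKey = record.topic() + "-" + record.kafkaPartition() + "-" + record.kafkaOffset();
                    Put put = new Put(Bytes.toBytes(rowKey));
                    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"),
                                  Bytes.toBytes(String.valueOf(record.value())));
                    table.put(put);
                }
            } catch (Exception e) {
                throw new RuntimeException("failed to write to HBase", e);
            }
        }

        @Override
        public void stop() {
            try { table.close(); connection.close(); } catch (Exception ignored) {}
        }

        @Override
        public String version() { return "0.1-sketch"; }
    }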

Spark Streaming

  • development work: implement data-processing functions atop Spark Streaming (a minimal job sketch follows)
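
A minimal sketch of such a job, using the spark-streaming-kafka-0-10 integration (the bootstrap server, topic name, and batch interval are assumptions):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    // Sketch: count CDC events per micro-batch; real processing functions go
    // where the foreachRDD callback is.
    public class CdcStreamJob {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("cdc-stream").setMaster("local[2]");
            JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "localhost:9092"); // assumed address
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "cdc-spark");

            JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                    ssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(
                        Arrays.asList("pgserver1.public.orders"), kafkaParams)); // assumed topic

            // Trivial processing function: count change events in each micro-batch.
            stream.foreachRDD(rdd -> System.out.println("events in batch: " + rdd.count()));

            ssc.start();
            ssc.awaitTermination();
        }
    }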

Considerations

Reliability
For example:

  • detect data-source exceptions or source relocation, and automatically or manually restart data capture tasks or redirect the data source;
  • monitor data quality and latency.

Scalability

  • monitor data-source load pressure, and automatically or manually scale out data capture tasks.

Maintainability

  • a GUI for system monitoring, data quality checks, latency statistics, etc.;
  • a GUI for configuring data capture task scale-out.

Other CDC solutions

Databus (LinkedIn): no native support for PostgreSQL
Wormhole (Facebook): not open source
Sherpa (Yahoo!): not open source
BottledWater (Confluent): PostgreSQL only (no longer maintained)
Maxwell: MySQL only
Debezium (Red Hat): a good fit
Mongoriver: MongoDB only
GoldenGate (Oracle): for Oracle and MySQL; free but not open source
Canal & Otter (Alibaba): replication for the MySQL world
