Confluent is a company founded by the team that built Apache Kafka. It builds a platform around Kafka that enables companies to easily access data as real-time streams.

Confluent offers three different ways to get started with Kafka.

  1. Confluent Open Source
  2. Confluent Enterprise
  3. Confluent Cloud

While this series of Kafka tutorials focuses mostly on Confluent Open Source, you may explore the other two options depending on your requirements and interest.

Compared with standard Apache Kafka, Confluent Open Source provides the following additional capabilities and tools:

  • Additional Clients – supports C, C++, Python, .NET, and several other non-Java clients
  • REST Proxy – provides universal access to Kafka from any network-connected device via HTTP (see the curl example after this list)
  • Schema Registry – central registry for the format of Kafka data, guaranteeing all data is always consumable
  • Pre-Built Connectors – HDFS, JDBC, Elasticsearch, Amazon S3, and other connectors fully certified and supported by Confluent
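For example, once the REST Proxy is up (it is started as part of the local stack below and listens on port 8082 by default), topics can be listed with a single HTTP call. A minimal sketch, assuming the default port:

curl http://localhost:8082/topics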

Confluent installation:

1. Unzip Confluent under /u01/confluent-5.3.1
2. Install the Confluent CLI: curl -L https://cnfl.io/cli | sh -s -- -b /u01/confluent-5.3.1/bin
3. Set JAVA_HOME to /etc/alternatives/jre_1.8.0 (Oracle Linux 7); see the environment sketch after the startup output
4. Start the services
[oracle@instance-20191202-1420 ~]$ $CONFLUENT_HOME/bin/confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html

Using CONFLUENT_CURRENT: /tmp/confluent.Vn0uJJY4
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]
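For steps 2 and 3, a minimal shell setup is sketched below. The paths come from this install; treat them as assumptions that may differ on your host:

# assumed locations from the steps above
export JAVA_HOME=/etc/alternatives/jre_1.8.0
export CONFLUENT_HOME=/u01/confluent-5.3.1
export PATH=$JAVA_HOME/bin:$CONFLUENT_HOME/bin:$PATH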

5. OGG for Confluent Kafka Connect
Confluent version: 5.3.1
OGG4BD version: 19.1.0.0.2 OGGCORE_OGGADP.19.1.0.0.2_PLATFORMS_190916.0039

Error message:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userExitDataSource' defined in class path resource [oracle/goldengate/datasource/DataSource-context.xml]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [oracle.goldengate.datasource.GGDataSource]: Factory method 'getDataSource' threw exception; nested exception is org.apache.kafka.common.config.ConfigException: Missing required configuration "converter.type" which has no default value.
Workaround:
0. Add the following three lines to kafkaconnect.properties:
converter.type=key
converter.type=value
converter.type=header
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=184647734022308&parent=EXTERNAL_SEARCH&sourceId=PROBLEM&id=2455697.1&_afrWindowMode=0&_adf.ctrl-state=ifht4s4f7_4
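After adding these properties, restart the Replicat so they take effect. A hedged sequence from GGSCI (the Replicat is named KC in the test section below):

GGSCI> stop replicat kc
GGSCI> start replicat kc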

What we have right now:
[oracle@instance-20191202-1420 dirprm]$ cat kafkaconnect.properties
bootstrap.servers=localhost:9092
acks=1

#JSON Converter Settings
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

#Avro Converter Settings
#key.converter=io.confluent.connect.avro.AvroConverter
#value.converter=io.confluent.connect.avro.AvroConverter
#key.converter.schema.registry.url=http://localhost:8081
#value.converter.schema.registry.url=http://localhost:8081

converter.type=key
converter.type=value
converter.type=header
#Adjust for performance
buffer.memory=33554432
batch.size=16384
linger.ms=0
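If you switch to the commented-out Avro converters instead of JSON, the Schema Registry started earlier must be reachable at the schema.registry.url above. A quick hedged check against its default port:

curl http://localhost:8081/subjects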

1. Enable the listener (need to confirm whether this is strictly required):
<Confluent_home>/etc/kafka/server.properties
Update: listeners=PLAINTEXT://localhost:9092
Restart the Kafka broker
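To confirm the broker is reachable on the configured listener, a minimal sketch using the CLI bundled with this install:

/u01/confluent-5.3.1/bin/kafka-topics --list --bootstrap-server localhost:9092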

2. Review the kc.props
[oracle@instance-20191202-1420 dirprm]$ cat kc.props

gg.handlerlist=kafkaconnect

#The handler properties
gg.handler.kafkaconnect.type=kafkaconnect
gg.handler.kafkaconnect.kafkaProducerConfigFile=kafkaconnect.properties
gg.handler.kafkaconnect.mode=op
#The following selects the topic name based on the fully qualified table name
gg.handler.kafkaconnect.topicMappingTemplate=ogg_topic
#The following selects the message key using the concatenated primary keys
gg.handler.kafkaconnect.keyMappingTemplate=${primaryKeys}
gg.handler.kafkaconnect.metaHeaderTemplate=${alltokens}

#The formatter properties
gg.handler.kafkaconnect.messageFormatting=row
gg.handler.kafkaconnect.insertOpKey=I
gg.handler.kafkaconnect.updateOpKey=U
gg.handler.kafkaconnect.deleteOpKey=D
gg.handler.kafkaconnect.truncateOpKey=T
gg.handler.kafkaconnect.treatAllColumnsAsStrings=false
gg.handler.kafkaconnect.iso8601Format=false
gg.handler.kafkaconnect.pkUpdateHandling=abend
gg.handler.kafkaconnect.includeTableName=true
gg.handler.kafkaconnect.includeOpType=true
gg.handler.kafkaconnect.includeOpTimestamp=true
gg.handler.kafkaconnect.includeCurrentTimestamp=true
gg.handler.kafkaconnect.includePosition=true
gg.handler.kafkaconnect.includePrimaryKeys=false
gg.handler.kafkaconnect.includeTokens=false

goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

#Apache Kafka Classpath
#gg.classpath={Kafka install dir}/libs
#gg.classpath=/u01/confluent-5.3.1/share/java/schema-registry
#Confluent IO classpath
#gg.classpath={Confluent install dir}/share/java/kafka-serde-tools/*:{Confluent install dir}/share/java/kafka/*:{Confluent install dir}/share/java/confluent-common/*
gg.classpath=/u01/confluent-5.3.1/share/java/kafka-serde-tools/*:/u01/confluent-5.3.1/share/java/kafka/*:/u01/confluent-5.3.1/share/java/confluent-common/*
javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=.:ggjava/ggjava.jar:./dirprm
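Before starting the Replicat, it can help to confirm that the gg.classpath entries actually resolve to jars. A quick hedged check using the paths above:

ls /u01/confluent-5.3.1/share/java/kafka/kafka-clients-*.jar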

3. Test
GGSCI (instance-20191202-1420) 1> stats kc

Sending STATS request to REPLICAT KC ...

Start of Statistics at 2019-12-02 09:54:55.

Replicating from QASOURCE.TCUSTMER to QASOURCE.TCUSTMER:

*** Total statistics since 2019-12-02 09:44:32 ***
Total inserts 5.00
Total updates 1.00
Total deletes 0.00
Total upserts 0.00
Total discards 0.00
Total operations 6.00
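To verify the replicated operations actually reached Kafka, a hedged check with the console consumer (topic name from topicMappingTemplate in kc.props):

/u01/confluent-5.3.1/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic ogg_topic --from-beginning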
