Confluent is a company founded by the team that built Apache Kafka. It builds a platform around Kafka that lets companies easily access their data as real-time streams.

Confluent offers three different ways to get started with Kafka.

  1. Confluent Open Source
  2. Confluent Enterprise
  3. Confluent Cloud

This Kafka tutorial series focuses mostly on Confluent Open Source; you may explore the other two options depending on your requirements and interest.

Compared with standard Apache Kafka, Confluent Open Source adds the following capabilities and tools:

  • Additional Clients – supports C, C++, Python, .NET, and several other non-Java clients
  • REST Proxy – provides universal access to Kafka from any network-connected device via HTTP
  • Schema Registry – central registry for the format of Kafka data, guaranteeing all data is always consumable
  • Pre-Built Connectors – HDFS, JDBC, Elasticsearch, Amazon S3, and other connectors fully certified and supported by Confluent
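As an illustration of the REST Proxy, the sketch below builds the JSON payload for a produce request; the topic name is hypothetical, and 8082 is the REST Proxy's default port. The actual curl call is left commented out since it needs a running stack.

```shell
# Build the JSON payload the REST Proxy expects for a produce request
# (topic name "test-topic" is an assumption for illustration).
PAYLOAD='{"records":[{"value":{"name":"alice","city":"NYC"}}]}'
echo "$PAYLOAD"
# To actually send it against a running stack:
# curl -s -X POST \
#   -H "Content-Type: application/vnd.kafka.json.v2+json" \
#   --data "$PAYLOAD" \
#   http://localhost:8082/topics/test-topic
```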

Confluent installation:

1. Unzip Confluent
2. Install the Confluent CLI: curl -L https://cnfl.io/cli | sh -s -- -b /u01/confluent-5.3.1/bin
3. Set JAVA_HOME to /etc/alternatives/jre_1.8.0 (Oracle Linux 7)
4. Start the server
[oracle@instance-20191202-1420 ~]$ $CONFLUENT_HOME/bin/confluent local start
The local commands are intended for a single-node development environment
only, NOT for production usage. https://docs.confluent.io/current/cli/index.html

Using CONFLUENT_CURRENT: /tmp/confluent.Vn0uJJY4
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]
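The installation steps above can be sketched as a single shell session. The paths are the ones used in this environment (adjust to yours); the download and start commands are left commented since they require the unpacked distribution and network access.

```shell
# Environment setup for the Confluent installation described above.
export CONFLUENT_HOME=/u01/confluent-5.3.1
export JAVA_HOME=/etc/alternatives/jre_1.8.0   # Oracle Linux 7
export PATH="$CONFLUENT_HOME/bin:$PATH"
echo "CLI install target: $CONFLUENT_HOME/bin"
# curl -L https://cnfl.io/cli | sh -s -- -b "$CONFLUENT_HOME/bin"
# confluent local start   # single-node development only, not for production
```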

5. OGG for Confluent Kafka Connect
Confluent version:5.3.1
OGG4BD:Version 19.1.0.0.2 OGGCORE_OGGADP.19.1.0.0.2_PLATFORMS_190916.0039

Error msg:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userExitDataSource' defined in class path resource [oracle/goldengate/datasource/DataSource-context.xml]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [oracle.goldengate.datasource.GGDataSource]: Factory method 'getDataSource' threw exception; nested exception is org.apache.kafka.common.config.ConfigException: Missing required configuration "converter.type" which has no default value.
Workaround:
Add the following three lines to kafkaconnect.properties. (Since properties keys are unique, presumably only the last value is retained; it is the presence of the converter.type key that satisfies the required-configuration check.)
converter.type=key
converter.type=value
converter.type=header
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=184647734022308&parent=EXTERNAL_SEARCH&sourceId=PROBLEM&id=2455697.1&_afrWindowMode=0&_adf.ctrl-state=ifht4s4f7_4

What we have right now:
[oracle@instance-20191202-1420 dirprm]$ cat kafkaconnect.properties
bootstrap.servers=localhost:9092
acks=1

#JSON Converter Settings
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

#Avro Converter Settings
#key.converter=io.confluent.connect.avro.AvroConverter
#value.converter=io.confluent.connect.avro.AvroConverter
#key.converter.schema.registry.url=http://localhost:8081
#value.converter.schema.registry.url=http://localhost:8081

converter.type=key
converter.type=value
converter.type=header
#Adjust for performance
buffer.memory=33554432
batch.size=16384
linger.ms=0
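For reference, the practical difference between the JSON converter settings above: with schemas.enable=false the JsonConverter emits the bare payload, while with true it wraps each message in a schema/payload envelope. The records below are hand-written illustrations of the two shapes, not actual converter output:

```shell
# Shape with key/value.converter.schemas.enable=false (bare payload):
BARE='{"NAME":"alice","CITY":"NYC"}'
# Shape with schemas.enable=true (envelope; schema shown abbreviated):
WRAPPED='{"schema":{"type":"struct"},"payload":{"NAME":"alice","CITY":"NYC"}}'
echo "$BARE"
echo "$WRAPPED"
```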

1. Enable the listener (to be confirmed whether this step is strictly required):
<Confluent_home>/etc/kafka/server.properties
Update: listeners=PLAINTEXT://localhost:9092
Restart the Kafka server
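The listener change can be scripted. The sketch below performs the edit on a throwaway copy rather than the live server.properties; the commented-out default line mirrors the shape of the stock file, and the restart commands at the end assume the Confluent CLI from the installation step.

```shell
# Demonstrate the listener edit on a temporary copy of server.properties.
CONF=$(mktemp)
printf '#listeners=PLAINTEXT://:9092\n' > "$CONF"
# Uncomment/replace the listeners line, binding explicitly to localhost:
sed -i 's|^#\{0,1\}listeners=.*|listeners=PLAINTEXT://localhost:9092|' "$CONF"
cat "$CONF"
rm -f "$CONF"
# On the real file, follow with a restart:
# confluent local stop kafka && confluent local start kafka
```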

2. Review the kc.props
[oracle@instance-20191202-1420 dirprm]$ cat kc.props

gg.handlerlist=kafkaconnect

#The handler properties
gg.handler.kafkaconnect.type=kafkaconnect
gg.handler.kafkaconnect.kafkaProducerConfigFile=kafkaconnect.properties
gg.handler.kafkaconnect.mode=op
#The following selects the topic name based on the fully qualified table name
gg.handler.kafkaconnect.topicMappingTemplate=ogg_topic
#The following selects the message key using the concatenated primary keys
gg.handler.kafkaconnect.keyMappingTemplate=${primaryKeys}
gg.handler.kafkaconnect.MetaHeaderTemplate=${alltokens}

#The formatter properties
gg.handler.kafkaconnect.messageFormatting=row
gg.handler.kafkaconnect.insertOpKey=I
gg.handler.kafkaconnect.updateOpKey=U
gg.handler.kafkaconnect.deleteOpKey=D
gg.handler.kafkaconnect.truncateOpKey=T
gg.handler.kafkaconnect.treatAllColumnsAsStrings=false
gg.handler.kafkaconnect.iso8601Format=false
gg.handler.kafkaconnect.pkUpdateHandling=abend
gg.handler.kafkaconnect.includeTableName=true
gg.handler.kafkaconnect.includeOpType=true
gg.handler.kafkaconnect.includeOpTimestamp=true
gg.handler.kafkaconnect.includeCurrentTimestamp=true
gg.handler.kafkaconnect.includePosition=true
gg.handler.kafkaconnect.includePrimaryKeys=false
gg.handler.kafkaconnect.includeTokens=false

goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

#Apache Kafka Classpath
#gg.classpath={Kafka install dir}/libs
#gg.classpath=/u01/confluent-5.3.1/share/java/schema-registry
#Confluent IO classpath
#gg.classpath={Confluent install dir}/share/java/kafka-serde-tools/*:{Confluent install dir}/share/java/kafka/*:{Confluent install dir}/share/java/confluent-common/*
gg.classpath=/u01/confluent-5.3.1/share/java/kafka-serde-tools/*:/u01/confluent-5.3.1/share/java/kafka/*:/u01/confluent-5.3.1/share/java/confluent-common/*
javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=.:ggjava/ggjava.jar:./dirprm

3. Test
GGSCI (instance-20191202-1420) 1> stats kc

Sending STATS request to REPLICAT KC ...

Start of Statistics at 2019-12-02 09:54:55.

Replicating from QASOURCE.TCUSTMER to QASOURCE.TCUSTMER:

*** Total statistics since 2019-12-02 09:44:32 ***
Total inserts 5.00
Total updates 1.00
Total deletes 0.00
Total upserts 0.00
Total discards 0.00
Total operations 6.00
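To confirm that the six operations actually reached Kafka, the replicated rows can be read back from the target topic (ogg_topic comes from topicMappingTemplate in kc.props). The sketch below only prints the consumer command; run it against the live cluster:

```shell
# Print the consumer command for inspecting the replicated rows
# (--max-messages 6 matches the operation count in the stats above).
TOPIC=ogg_topic
BOOTSTRAP=localhost:9092
echo "kafka-console-consumer --bootstrap-server $BOOTSTRAP --topic $TOPIC --from-beginning --max-messages 6"
```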
