Kafka Connect Usage Guide
1. Overview
Kafka Connect is a scalable, reliable tool for streaming data between Kafka and other systems. In short, it uses connectors to move large collections of data into and out of Kafka simply and quickly: it can ingest an entire database, or collect messages from all of your application servers, into Kafka topics. Kafka Connect's features include:
1. A common framework for Kafka connectors: Kafka Connect standardizes the integration of Kafka with other data systems, simplifying connector development, deployment, and management.
2. Distributed and standalone modes: it scales up to a centrally managed service supporting an entire organization, and scales down to development, testing, and small production deployments.
3. REST interface: connectors are submitted to (and managed on) a Kafka Connect cluster through a REST API.
4. Automatic offset management: with only a little information from connectors, Connect manages offset commits on their behalf.
5. Distributed and scalable by default: Kafka Connect builds on the existing group management protocol, so additional workers can be added to scale out a Connect cluster.
6. Streaming/batch integration: leveraging Kafka's existing capabilities, Connect is an ideal solution for bridging streaming and batch data systems.
The Kafka version used for the tests here is 0.9.0.0.
2. Standalone Mode
The command format for standalone mode is as follows:
bin/connect-standalone.sh config/connect-standalone.properties Connector1.properties [Connector2.properties ...]
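For example, to run the file source and sink connectors configured below (assuming their properties files are kept under config/):

bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties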
My configuration of each of the files above is described below.
1. connect-standalone.sh is the script that launches standalone mode.
#!/bin/sh
# (Apache License 2.0 header omitted)

base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
  export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties"
fi

if [ -z "$KAFKA_JMX_OPTS" ]; then
  export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9901 "
fi

if [ -z "$KAFKA_HEAP_OPTS" ]; then
  export KAFKA_HEAP_OPTS="-Xmx1024M"
fi

exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.connect.cli.ConnectStandalone "$@"
The JVM heap size for Connect can be set here:

if [ -z "$KAFKA_HEAP_OPTS" ]; then
  export KAFKA_HEAP_OPTS="-Xmx1024M"
fi

JMX options can be set as well (note the guard should test KAFKA_JMX_OPTS):

if [ -z "$KAFKA_JMX_OPTS" ]; then
  export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9901 "
fi
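With remote JMX enabled on port 9901 as above, a JMX client can attach to the running Connect worker to inspect JVM and Kafka metrics. A minimal sketch, assuming the worker runs on 10.253.129.237:

jconsole 10.253.129.237:9901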
2. Configuration of connect-standalone.properties:

# (Apache License 2.0 header omitted)
# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=10.253.129.237:,10.253.129.238:,10.253.129.239:

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# The internal converter used for offsets and config data is configurable and must be specified, but most users will
# always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.file.filename=/datafs//connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=
Pay particular attention to the broker (bootstrap.servers) setting here; the remaining options are documented in the Kafka reference:
http://kafka.apache.org/090/documentation.html#connectconfigs
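Because the converters are set to JsonConverter with schemas.enable=true, every record written to Kafka is wrapped in a schema/payload envelope. A line of text picked up by the file source connector, for instance, lands on the topic looking roughly like this (the payload string is illustrative):

{"schema":{"type":"string","optional":false},"payload":"some line from the input file"}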
3. Configuration of connect-file-source.properties:
# (Apache License 2.0 header omitted)
name=test_source2
connector.class=FileStreamSource
tasks.max=
file=/datafs//json2/log.out
topic=TEST_MANAGER5
Note the file path and topic settings here.
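To watch the source connector in action, append a line to the monitored file and read it back from the topic with the console consumer (the ZooKeeper address is an assumption; adjust it and the elided paths to your deployment):

echo "hello connect" >> /datafs//json2/log.out
bin/kafka-console-consumer.sh --zookeeper 10.253.129.237:2181 --topic TEST_MANAGER5 --from-beginning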
4. Configuration of connect-file-sink.properties:
# (Apache License 2.0 header omitted)
name=test_sink1
connector.class=FileStreamSink
#connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=
file=/datafs//a.out
topics=TEST_MANAGER5
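The sink connector consumes TEST_MANAGER5 and appends each record to its output file, so with both connectors running, lines added to the source file should shortly appear in the sink file. A quick check (path as configured above):

tail -f /datafs//a.out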
3. Distributed Mode
Command format:
bin/connect-distributed.sh config/connect-distributed.properties
{"name":"test","config":{"topic":"TEST_MANAGER","connector.class":"FileStreamSource","tasks.max":"2","file":"/datafs/log1.out"}}
4. REST API
DELETE /connectors/{name}: delete a connector, halting all of its tasks and removing its configuration.
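For example, to remove the connector created above (host and port assumed as before):

curl -X DELETE http://10.253.129.237:8083/connectors/test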