1. Get Connect worker information
curl -s http://127.0.0.1:8083/ | jq

  $ curl -s http://127.0.0.1:8083/ | jq
  {
    "version": "2.1.0",
    "commit": "809be928f1ae004e",
    "kafka_cluster_id": "NGQRxNZMSY6Q53ktQABHsQ"
  }
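The worker's root endpoint is handy for scripted health checks. A minimal Python sketch that parses the sample payload shown above (rather than calling a live worker):

```python
import json

# Sample response of GET / from a Connect worker, copied from the output above.
raw = """
{
  "version": "2.1.0",
  "commit": "809be928f1ae004e",
  "kafka_cluster_id": "NGQRxNZMSY6Q53ktQABHsQ"
}
"""

info = json.loads(raw)
# The three fields identify the Connect build and the backing Kafka cluster.
print(info["version"], info["kafka_cluster_id"])
```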

2. List all connector plugins installed on the Connect worker
curl -s http://127.0.0.1:8083/connector-plugins | jq

  $ curl -s http://127.0.0.1:8083/connector-plugins | jq
  [
    {
      "class": "io.confluent.connect.hdfs.HdfsSinkConnector",
      "type": "sink",
      "version": "5.2.1"
    },
    {
      "class": "io.confluent.connect.hdfs.tools.SchemaSourceConnector",
      "type": "source",
      "version": "2.1.0"
    },
    {
      "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
      "type": "source",
      "version": "2.1.0"
    },
    {
      "class": "io.debezium.connector.mongodb.MongoDbConnector",
      "type": "source",
      "version": "0.9.4.Final"
    },
    {
      "class": "io.debezium.connector.mysql.MySqlConnector",
      "type": "source",
      "version": "0.9.4.Final"
    },
    {
      "class": "io.debezium.connector.oracle.OracleConnector",
      "type": "source",
      "version": "0.9.4.Final"
    },
    {
      "class": "io.debezium.connector.postgresql.PostgresConnector",
      "type": "source",
      "version": "0.9.4.Final"
    },
    {
      "class": "io.debezium.connector.sqlserver.SqlServerConnector",
      "type": "source",
      "version": "0.9.4.Final"
    },
    {
      "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
      "type": "sink",
      "version": "2.1.0"
    },
    {
      "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
      "type": "source",
      "version": "2.1.0"
    }
  ]
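A response like the one above is just a JSON array, so it is easy to filter, e.g. to see which sink plugins a worker has installed. A small Python sketch over a trimmed subset of the sample output (three of the ten entries, for brevity):

```python
import json

# A trimmed subset of the /connector-plugins response shown above.
plugins = json.loads("""
[
  {"class": "io.confluent.connect.hdfs.HdfsSinkConnector",           "type": "sink",   "version": "5.2.1"},
  {"class": "io.debezium.connector.mysql.MySqlConnector",            "type": "source", "version": "0.9.4.Final"},
  {"class": "org.apache.kafka.connect.file.FileStreamSinkConnector", "type": "sink",   "version": "2.1.0"}
]
""")

# Keep only sink plugins, sorted by class name.
sinks = sorted(p["class"] for p in plugins if p["type"] == "sink")
print(sinks)
```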

3. Get a connector's tasks and their configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/tasks | jq

  $ curl -s localhost:8083/connectors/inventory-connector/tasks | jq
  [
    {
      "id": {
        "connector": "inventory-connector",
        "task":
      },
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.user": "root",
        "database.server.id": "",
        "tasks.max": "",
        "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
        "database.history.kafka.topic": "dbhistory.inventory",
        "database.server.name": "127.0.0.1",
        "database.port": "",
        "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
        "database.hostname": "127.0.0.1",
        "database.password": "root",
        "name": "inventory-connector",
        "database.whitelist": "inventory"
      }
    }
  ]

4. Get connector status
curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq

  $ curl -s localhost:8083/connectors/inventory-connector/status | jq
  {
    "name": "inventory-connector",
    "connector": {
      "state": "RUNNING",
      "worker_id": "127.0.0.1:8083"
    },
    "tasks": [
      {
        "state": "RUNNING",
        "id": ,
        "worker_id": "127.0.0.1:8083"
      }
    ],
    "type": "source"
  }
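The status endpoint is the one to poll in monitoring scripts: a connector is only healthy when the connector itself and every task report RUNNING. A Python sketch over a payload like the one above (the task id is assumed to be 0 here for illustration):

```python
import json

# Sample /status payload; task "id" assumed 0 for this single-task connector.
status = json.loads("""
{
  "name": "inventory-connector",
  "connector": {"state": "RUNNING", "worker_id": "127.0.0.1:8083"},
  "tasks": [{"state": "RUNNING", "id": 0, "worker_id": "127.0.0.1:8083"}],
  "type": "source"
}
""")

def is_healthy(st):
    # Healthy = the connector and all of its tasks are in state RUNNING.
    return (st["connector"]["state"] == "RUNNING"
            and all(t["state"] == "RUNNING" for t in st["tasks"]))

print(is_healthy(status))
```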

5. Get connector configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/config | jq

  $ curl -s localhost:8083/connectors/inventory-connector/config | jq
  {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.user": "root",
    "database.server.id": "",
    "tasks.max": "",
    "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
    "database.history.kafka.topic": "dbhistory.inventory",
    "database.server.name": "127.0.0.1",
    "database.port": "",
    "database.hostname": "127.0.0.1",
    "database.password": "root",
    "name": "inventory-connector",
    "database.whitelist": "inventory"
  }

6. Pause a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/pause

7. Resume a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/resume

8. Delete a connector
curl -s -X DELETE http://127.0.0.1:8083/connectors/<connector-name>
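The three lifecycle calls differ only in HTTP method and path: pause and resume are empty-body PUTs on sub-resources, while delete is a DELETE on the connector itself. A minimal Python sketch that builds (but does not send) these requests, assuming the same local worker address used throughout this post:

```python
from urllib.request import Request

BASE = "http://127.0.0.1:8083"  # Connect worker address assumed from this post

def lifecycle_request(name: str, action: str) -> Request:
    # pause/resume: PUT with no body on a sub-resource of the connector.
    if action in ("pause", "resume"):
        return Request(f"{BASE}/connectors/{name}/{action}", method="PUT")
    # delete: DELETE on the connector path itself.
    if action == "delete":
        return Request(f"{BASE}/connectors/{name}", method="DELETE")
    raise ValueError(f"unknown action: {action}")

req = lifecycle_request("inventory-connector", "pause")
print(req.get_method(), req.full_url)
```

Sending the request would just be `urllib.request.urlopen(req)`; a paused connector then shows state PAUSED on its /status endpoint.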

9. Create a new connector (example: HdfsSinkConnector)
curl -s -X POST -H "Content-Type: application/json" --data '{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "flush.size": "10",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "hive.integration": true,
    "hive.database": "inventory",
    "hive.metastore.uris": "thrift://127.0.0.1:9083",
    "schema.compatibility": "BACKWARD"
  }
}' http://127.0.0.1:8083/connectors | jq
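POST /connectors expects a top-level "name" plus a "config" object; a malformed payload earns an HTTP 400. A Python sketch of a hypothetical sanity check run before sending (using a shortened version of the config above):

```python
import json

# A shortened version of the create payload from the curl command above.
payload = {
    "name": "hdfs-hive-sink",
    "config": {
        "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
        "tasks.max": "1",
        "topics": "127.0.0.1.inventory.customers",
        "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
        "flush.size": "10",
    },
}

def to_create_body(p: dict) -> str:
    # Fail fast on the fields the create endpoint always requires.
    assert "name" in p and "config" in p, "create payload needs name + config"
    assert "connector.class" in p["config"], "config needs connector.class"
    return json.dumps(p)  # the string passed to curl's --data

body = to_create_body(payload)
print(body[:30])
```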

  $ curl -s http://127.0.0.1:8083/connectors/hdfs-hive-sink | jq
  {
    "name": "hdfs-hive-sink",
    "config": {
      "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
      "format.class": "io.confluent.connect.hdfs.string.StringFormat",
      "flush.size": "",
      "tasks.max": "",
      "topics": "127.0.0.1.inventory.customers",
      "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
      "name": "hdfs-hive-sink"
    },
    "tasks": [
      {
        "connector": "hdfs-hive-sink",
        "task":
      }
    ],
    "type": "sink"
  }

10. Update connector configuration (example: FileStreamSourceConnector)
curl -s -X PUT -H "Content-Type: application/json" --data '{
  "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter.schemas.enable": "true",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "true",
  "file": "demo-file.txt",
  "topic": "demo-2-distributed",
  "tasks.max": "2",
  "name": "file-stream-demo-distributed"
}' http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq
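Note the shape difference between create and update: POST /connectors wraps the settings in a {"name": ..., "config": {...}} envelope, while PUT /connectors/&lt;name&gt;/config sends the bare config map (the connector name comes from the URL). A Python sketch of the two bodies for the connector above:

```python
import json

# Config map for the file-stream demo connector (trimmed from the command above).
config = {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "file": "demo-file.txt",
    "topic": "demo-2-distributed",
    "tasks.max": "2",
}

# POST /connectors wraps the config in an envelope with the connector name...
create_body = json.dumps({"name": "file-stream-demo-distributed", "config": config})
# ...while PUT /connectors/<name>/config takes the config map bare.
update_body = json.dumps(config)

print(sorted(json.loads(create_body)))
```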
