Environment Preparation

  1. ZooKeeper cluster environment

    Kafka is a distributed message queue that relies on ZooKeeper as its registry, so a standalone or clustered ZooKeeper environment is required.
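    Before going further, it is worth confirming that ZooKeeper is actually healthy on every node. A minimal sketch, assuming the ZooKeeper scripts are on the PATH and nc is installed (on ZooKeeper 3.5+ the four-letter commands must also be whitelisted via 4lw.commands.whitelist):

      # On each node: report whether this server is the leader or a follower
      zkServer.sh status
      # Optional: a healthy server answers "imok" to the "ruok" four-letter command
      echo ruok | nc k8s-n1 2181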

  2. Three servers:

  172.16.18.198 k8s-n1
  172.16.18.199 k8s-n2
  172.16.18.200 k8s-n3
  3. Download the Kafka installation package

Download it from http://kafka.apache.org/downloads. The latest Kafka release at the time of writing is 2.2.0; the package used here is kafka_2.11-2.2.0.tgz.
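For reference, a download sketch assuming the 2.2.0 release is still published on the Apache archive (mirror URLs may differ):

  # Fetch the Scala 2.11 build of Kafka 2.2.0 from the Apache archive
  wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.11-2.2.0.tgz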

Installing the Kafka Cluster

1. Upload the archive to all three servers and extract it to /opt/:

  tar -zxvf kafka_2.11-2.2.0.tgz -C /opt/
  ln -s /opt/kafka_2.11-2.2.0 /opt/kafka

2. Edit server.properties:

  ############################# Server Basics #############################
  # The id of the broker. This must be set to a unique integer for each broker.
  broker.id=0
  ############################# Socket Server Settings #############################
  # The address the socket server listens on. It will get the value returned from
  # java.net.InetAddress.getCanonicalHostName() if not configured.
  # FORMAT:
  #   listeners = listener_name://host_name:port
  # EXAMPLE:
  #   listeners = PLAINTEXT://your.host.name:9092
  listeners=PLAINTEXT://k8s-n1:9092
  # Hostname and port the broker will advertise to producers and consumers. If not set,
  # it uses the value for "listeners" if configured. Otherwise, it will use the value
  # returned from java.net.InetAddress.getCanonicalHostName().
  advertised.listeners=PLAINTEXT://k8s-n1:9092
  # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
  #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
  # The number of threads that the server uses for receiving requests from the network and sending responses to the network
  num.network.threads=3
  # The number of threads that the server uses for processing requests, which may include disk I/O
  num.io.threads=8
  # The send buffer (SO_SNDBUF) used by the socket server
  socket.send.buffer.bytes=102400
  # The receive buffer (SO_RCVBUF) used by the socket server
  socket.receive.buffer.bytes=102400
  # The maximum size of a request that the socket server will accept (protection against OOM)
  socket.request.max.bytes=104857600
  ############################# Log Basics #############################
  # A comma separated list of directories under which to store log files
  log.dirs=/var/applog/kafka/
  # The default number of log partitions per topic. More partitions allow greater
  # parallelism for consumption, but this will also result in more files across
  # the brokers.
  num.partitions=5
  # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
  # This value is recommended to be increased for installations with data dirs located in RAID array.
  num.recovery.threads.per.data.dir=1
  ############################# Internal Topic Settings #############################
  # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
  # For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
  offsets.topic.replication.factor=1
  transaction.state.log.replication.factor=1
  transaction.state.log.min.isr=1
  ############################# Log Flush Policy #############################
  # Messages are immediately written to the filesystem but by default we only fsync() to sync
  # the OS cache lazily. The following configurations control the flush of data to disk.
  # There are a few important trade-offs here:
  #    1. Durability: Unflushed data may be lost if you are not using replication.
  #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
  #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
  # The settings below allow one to configure the flush policy to flush data after a period of time or
  # every N messages (or both). This can be done globally and overridden on a per-topic basis.
  # The number of messages to accept before forcing a flush of data to disk
  log.flush.interval.messages=10000
  # The maximum amount of time a message can sit in a log before we force a flush
  log.flush.interval.ms=1000
  ############################# Log Retention Policy #############################
  # The following configurations control the disposal of log segments. The policy can
  # be set to delete segments after a period of time, or after a given size has accumulated.
  # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
  # from the end of the log.
  # The minimum age of a log file to be eligible for deletion due to age
  log.retention.hours=24
  # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
  # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
  #log.retention.bytes=1073741824
  # The maximum size of a log segment file. When this size is reached a new log segment will be created.
  log.segment.bytes=1073741824
  # The interval at which log segments are checked to see if they can be deleted according
  # to the retention policies
  log.retention.check.interval.ms=300000
  ############################# Zookeeper #############################
  # Zookeeper connection string (see zookeeper docs for details).
  # This is a comma separated host:port pairs, each corresponding to a zk
  # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
  # You can also append an optional chroot string to the urls to specify the
  # root directory for all kafka znodes.
  zookeeper.connect=k8s-n1:2181,k8s-n2:2181,k8s-n3:2181
  # Timeout in ms for connecting to zookeeper
  zookeeper.connection.timeout.ms=6000
  ############################# Group Coordinator Settings #############################
  # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
  # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
  # The default value for this is 3 seconds.
  # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
  # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
  group.initial.rebalance.delay.ms=0
  delete.topic.enable=true

Copy the file to k8s-n2 and k8s-n3, changing only broker.id, listeners, and advertised.listeners on each node (a propagation sketch follows after the listing below):

  [root@k8s-n2 config]# cat server.properties
  broker.id=1
  listeners=PLAINTEXT://k8s-n2:9092
  advertised.listeners=PLAINTEXT://k8s-n2:9092
  [root@k8s-n3 config]# cat server.properties
  broker.id=2
  listeners=PLAINTEXT://k8s-n3:9092
  advertised.listeners=PLAINTEXT://k8s-n3:9092
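Rather than editing each copy by hand, the file can be pushed from k8s-n1 and patched per node. A sketch, assuming passwordless SSH between the nodes and the /opt/kafka symlink created earlier:

  # From k8s-n1: copy the config to the other nodes, then fix broker.id and hostnames
  for i in 2 3; do
    scp /opt/kafka/config/server.properties k8s-n$i:/opt/kafka/config/
    ssh k8s-n$i "sed -i 's/^broker.id=0/broker.id=$((i-1))/; s/k8s-n1:9092/k8s-n$i:9092/g' /opt/kafka/config/server.properties"
  done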
3. Add environment variables to /etc/profile:
  export KAFKA_HOME=/opt/kafka_2.11-2.2.0
  export PATH=$PATH:$KAFKA_HOME/bin

Run source /etc/profile to reload the profile so the changes take effect.
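A quick sanity check that the Kafka scripts are now resolvable from any directory:

  # Should print a path under /opt/kafka_2.11-2.2.0/bin
  which kafka-server-start.sh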

4. Start Kafka (on each of the three nodes):
  kafka-server-start.sh $KAFKA_HOME/config/server.properties &
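To confirm all three brokers came up and registered, inspect ZooKeeper's /brokers/ids znode, which is where Kafka brokers register themselves. A sketch using the ZooKeeper CLI (zkCli.sh ships with ZooKeeper):

  # Expect [0, 1, 2] once all three brokers are up
  zkCli.sh -server k8s-n1:2181 ls /brokers/ids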

Testing the ZooKeeper + Kafka Cluster

1. Create a topic:

  kafka-topics.sh --create --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181 --replication-factor 3 --partitions 3 --topic test

2. Describe the topic:

  kafka-topics.sh --describe --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181 --topic test

3. List topics:

  kafka-topics.sh --list --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181
  test

4. Create a producer:

  kafka-console-producer.sh --broker-list k8s-n1:9092 --topic test
  hello

5. Create a consumer:

  kafka-console-consumer.sh --bootstrap-server k8s-n1:9092 --topic test --from-beginning
  hello
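The two console commands above are interactive; for a scripted round trip the same tools can be driven through a pipe. A sketch against the test topic created earlier (--max-messages makes the consumer exit on its own):

  # Produce one message non-interactively, then read one message back and exit
  echo "hello again" | kafka-console-producer.sh --broker-list k8s-n1:9092 --topic test
  kafka-console-consumer.sh --bootstrap-server k8s-n1:9092 --topic test --from-beginning --max-messages 1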

At this point, the Kafka cluster setup is complete.
