Environment: three virtual machines, Host0, Host1, and Host2

Host0:192.168.10.2

Host1:  192.168.10.3

Host2:  192.168.10.4

Configure ZooKeeper on all three virtual machines; for the details, see the earlier post "CentOS中配置CDH版本的ZooKeeper" (configuring the CDH build of ZooKeeper on CentOS).
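Before installing Kafka, it is worth confirming that the ZooKeeper ensemble is actually up on all three hosts. A minimal check, assuming netcat (nc) is installed and ZooKeeper listens on its default client port 2181:

[root@Host0 ~]# echo ruok | nc Host0 2181                  # a healthy server replies "imok"
[root@Host0 ~]# echo stat | nc Host0 2181 | grep Mode      # prints "Mode: leader" or "Mode: follower"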

Download Kafka: http://kafka.apache.org/downloads.html

The Kafka version used here is kafka_2.10-0.8.2.0.

On each Kafka node, extract the tarball and enter the Kafka directory:

[root@Host0 ~]# tar xfvz kafka_2.10-0.8.2.0.tgz 
[root@Host0 ~]# cd kafka_2.10-0.8.2.0
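If the tarball was downloaded only on Host0, it can be pushed to the other nodes before extracting there as well; a sketch, assuming root SSH access between the hosts:

[root@Host0 ~]# scp kafka_2.10-0.8.2.0.tgz root@Host1:~/
[root@Host0 ~]# scp kafka_2.10-0.8.2.0.tgz root@Host2:~/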

On each Kafka node, edit the config/server.properties file. The values shown below are the 0.8.2 defaults, with the cluster-specific entries adjusted as described in the notes that follow:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=192.168.10.2
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=192.168.10.2
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
# The number of threads handling network requests
num.network.threads=3
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=Host0:2181,Host1:2181,Host2:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
# Allow topics to be deleted via the admin tool
delete.topic.enable=true

Notes:

broker.id=0 — the broker's id; it must be unique on every Kafka node, e.g. 0, 1, 2 (see the per-node sketch below)
host.name=192.168.10.2 — the broker's hostname. If it is set, the broker binds only to this address; if it is not set, the broker binds to all interfaces and publishes one to ZooKeeper. Set it to the current node's own IP address on each node
advertised.host.name=192.168.10.2 — the hostname advertised to producers, consumers, and other brokers. Set it to the current node's own IP address on each node
log.dirs=/tmp/kafka-logs — where message data is stored (this is not the path for Kafka's own system logs). Keeping it under /tmp is not recommended, because /tmp is cleaned out periodically
zookeeper.connect=Host0:2181,Host1:2181,Host2:2181 — the ZooKeeper connection string; fill in the IPs and ports of the three zk nodes installed in the previous section
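To make the per-node differences concrete, this sketch shows the only lines in server.properties that must differ across the three hosts (everything else can be identical):

# Host0 (192.168.10.2)
broker.id=0
host.name=192.168.10.2
advertised.host.name=192.168.10.2

# Host1 (192.168.10.3)
broker.id=1
host.name=192.168.10.3
advertised.host.name=192.168.10.3

# Host2 (192.168.10.4)
broker.id=2
host.name=192.168.10.4
advertised.host.name=192.168.10.4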

Start Kafka on each node:

[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-server-start.sh config/server.properties 
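kafka-server-start.sh runs in the foreground and occupies the terminal. To keep the broker alive after logging out, one common approach is to background it with nohup (standard shell tools, not Kafka-specific) and confirm the JVM is up with jps:

[root@Host0 kafka_2.10-0.8.2.0]# nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &
[root@Host0 kafka_2.10-0.8.2.0]# jps | grep Kafka      # the broker process appears as "Kafka"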

Create a topic (here with replication factor 3, one copy per broker, and a single partition):

[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-topics.sh --create --zookeeper Host0:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic1
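Once created, the topic's partition assignment can be checked with --describe; the output lists the leader, replicas, and in-sync replica set (ISR) for each partition:

[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-topics.sh --describe --zookeeper Host0:2181 --topic my-replicated-topic1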

Run a test producer

Open a terminal on any node:

[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-console-producer.sh --broker-list Host0:9092 --topic my-replicated-topic1
[2016-09-05 21:51:57,134] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hello kafka!
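The "Property topic is not valid" warning is expected with the 0.8.x console producer, which passes its command-line options straight into the producer's property set; it is harmless, and the message is still sent.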

Run a test consumer

Open a terminal on any node:

[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-console-consumer.sh --zookeeper Host2:2181 --topic my-replicated-topic1
hello kafka!
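By default the console consumer only shows messages produced after it starts. To replay everything already stored in the topic, add --from-beginning:

[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-console-consumer.sh --zookeeper Host2:2181 --topic my-replicated-topic1 --from-beginning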


host.name and advertised.host.name hide a pitfall, described below; the following is reposted from another article.

The pitfall:

According to the official documentation, advertised.host.name and advertised.port define the host and port the cluster advertises to producers and consumers; if they are not set, the values of host.name and port are used instead. In practice, however, I found that if advertised.host.name is left unset, connecting to the cluster from a remote Java client results in a connection timeout with the exception: org.apache.kafka.common.errors.TimeoutException: Batch Expired

Debugging showed that the initial connection to the cluster succeeded, but the cluster metadata fetched afterwards was wrong:
the node hostnames in the metadata's Cluster information were strings like iZ25wuzqk91Z rather than the actual IP addresses 10.0.0.100 and 10.0.0.101. iZ25wuzqk91Z is in fact the remote machine's hostname, which means that without advertised.host.name configured, Kafka did not fall back to broadcasting the configured host.name as the documentation claims, but instead broadcast the machine's own hostname. The remote client had no hosts entry for that name, so naturally it could not connect. The fix is to set both host.name and advertised.host.name to absolute IP addresses.
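Concretely, the fix amounts to pinning both properties to the routable IP on every broker; for example, on Host0 in this cluster:

host.name=192.168.10.2
advertised.host.name=192.168.10.2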
