ZooKeeper and Kafka Cluster Installation

This cluster installation uses three machines (virtual machines, physical machines, etc.) as an example:

192.168.200.100 kafka01   (master node)

192.168.200.101 kafka02   (slave node)

192.168.200.102 kafka03   (slave node)

1. On kafka01 (the master node), configure the hosts file:

vim /etc/hosts   # open hosts and assign each IP an alias, much like declaring a variable in Java; afterwards only the alias is needed

192.168.200.100 kafka01

192.168.200.101 kafka02

192.168.200.102 kafka03

On kafka01, execute the following to distribute the file to the other hosts:

scp -r /etc/hosts root@kafka02:/etc/hosts

scp -r /etc/hosts root@kafka03:/etc/hosts
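A quick sanity check is to resolve each alias from kafka01 (a minimal sketch, assuming ping is available on the hosts):

ping -c 1 kafka02    # should resolve to 192.168.200.101
ping -c 1 kafka03    # should resolve to 192.168.200.102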

2. Configure passwordless SSH among the three hosts

ssh-keygen -t rsa    # run this on all three machines, then keep pressing Enter until it finishes

Log in to kafka01 and copy its key to every machine (including itself):

ssh-copy-id kafka01

ssh-copy-id kafka02

ssh-copy-id kafka03

Log in to kafka02 and copy its key to every machine (including itself):

ssh-copy-id kafka01

ssh-copy-id kafka02

ssh-copy-id kafka03

Log in to kafka03 and copy its key to every machine (including itself):

ssh-copy-id kafka01

ssh-copy-id kafka02

ssh-copy-id kafka03
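To confirm passwordless login works in every direction, a loop like the following can be run on each of the three machines (a sketch that assumes the aliases configured above):

for host in kafka01 kafka02 kafka03; do
    ssh $host hostname    # should print the remote hostname without asking for a password
done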

Note: the configuration below uses IP addresses throughout; in production, prefer the aliases, so that if an IP changes only the hosts file needs updating.

Install the JDK (either an rpm package or a tar.gz package):

If you use the jdk.rpm package, no environment variables need to be configured (it installs into /usr/bin):

rpm -ivh jdk.rpm    # afterwards, test with the java command

The jdk.tar.gz package requires configuring environment variables:

First extract the package:

tar -zxvf xxx.tar.gz

Then run vi /etc/profile and append the following:

export JAVA_HOME=/usr/local/java/jdk1.8.0_181
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Reload the environment: source /etc/profile
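Either way, the installation can be verified afterwards (the exact version string depends on the JDK build you installed):

java -version      # should report the installed JDK, e.g. 1.8.0_181
echo $JAVA_HOME    # for the tar.gz install, should print /usr/local/java/jdk1.8.0_181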

Install ZooKeeper:

Extract ZooKeeper to the target directory: tar -zxvf zookeeper-3.4.13.tar.gz -C /usr/local

Enter /usr/local and rename the ZooKeeper directory: mv zookeeper-3.4.13/ zookeeper

Under the ZooKeeper installation directory, create the data directory: mkdir -p zookeeper/data

Under the ZooKeeper installation directory, create the log directory: mkdir -p zookeeper/dataLog

Configure environment variables: vim /etc/profile

Append the following:

export ZK_HOME=/usr/local/zookeeper

export PATH=$PATH:$ZK_HOME/bin

Reload the environment: source /etc/profile

Per-node configuration:

(1) kafka01 (192.168.200.100)

Enter the config directory zookeeper/conf and copy the sample to create zoo.cfg:

cp -f zoo_sample.cfg zoo.cfg

Configure as follows:

dataDir=/usr/local/zookeeper/data  # the two directories created above

dataLogDir=/usr/local/zookeeper/dataLog

# on the local node, use 0.0.0.0

server.1=0.0.0.0:2888:3888  

server.2=192.168.200.101:2888:3888

server.3=192.168.200.102:2888:3888

Enter the data directory: cd /usr/local/zookeeper/data

Create the myid file (used for leader election): echo "1" > myid
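For reference, the full zoo.cfg on kafka01 might look like the following. The tickTime, initLimit, syncLimit, and clientPort values are the zoo_sample.cfg defaults and are assumptions here rather than part of the original steps:

tickTime=2000                        # base time unit in milliseconds
initLimit=10                         # ticks a follower may take to connect to the leader
syncLimit=5                          # ticks a follower may lag behind the leader
clientPort=2181                      # port clients connect to
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/dataLog
server.1=0.0.0.0:2888:3888           # this node
server.2=192.168.200.101:2888:3888
server.3=192.168.200.102:2888:3888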

(2) kafka02 (192.168.200.101)

Enter the config directory zookeeper/conf and copy the sample to create zoo.cfg:

cp -f zoo_sample.cfg zoo.cfg

Configure as follows:

dataDir=/usr/local/zookeeper/data

dataLogDir=/usr/local/zookeeper/dataLog

server.1=192.168.200.100:2888:3888

server.2=0.0.0.0:2888:3888

server.3=192.168.200.102:2888:3888

Enter the data directory: cd /usr/local/zookeeper/data

Create the myid file (each node's myid must be unique): echo "2" > myid

(3) kafka03 (192.168.200.102)

Enter the config directory zookeeper/conf and copy the sample to create zoo.cfg:

cp -f zoo_sample.cfg zoo.cfg

Configure as follows:

dataDir=/usr/local/zookeeper/data

dataLogDir=/usr/local/zookeeper/dataLog

server.1=192.168.200.100:2888:3888

server.2=192.168.200.101:2888:3888

server.3=0.0.0.0:2888:3888

Enter the data directory: cd /usr/local/zookeeper/data

Create the myid file: echo "3" > myid

Once the steps above are done on every node, the ZooKeeper configuration is complete. Start the cluster with the following command:

zkServer.sh start (run on each node). The cluster state can be checked with zkServer.sh status, and a node can be stopped with zkServer.sh stop.

Alternatively, run netstat -lnp | grep 2181 to check whether the process exists (ZooKeeper's client port is configured as 2181, so this command finds the process occupying port 2181).

Or check with the jps command (not available with some bundled OpenJDK installations).
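Because passwordless SSH is already in place, the whole ensemble can be started and checked from kafka01 with a loop like this (a sketch; it assumes /etc/profile exports ZK_HOME on every node, since non-interactive SSH sessions do not load it automatically):

for host in kafka01 kafka02 kafka03; do
    ssh $host "source /etc/profile && zkServer.sh start"
done

for host in kafka01 kafka02 kafka03; do
    ssh $host "source /etc/profile && zkServer.sh status"    # one node should report leader, the others follower
done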

Configure Kafka:

Download and extract the Kafka archive:

Edit kafka/config/server.properties (vi kafka/config/server.properties) as follows:
# Licensed to the Apache Software Foundation (ASF) under one or more

# contributor license agreements.  See the NOTICE file distributed with

# this work for additional information regarding copyright ownership.

# The ASF licenses this file to You under the Apache License, Version 2.0

# (the "License"); you may not use this file except in compliance with

# the License.  You may obtain a copy of the License at

#

#    http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.

broker.id=1   # each node's id must be unique

############################# Socket Server Settings #############################

# The port the socket server listens on

# the port the Kafka service listens on

port=9092  

# Hostname the broker will bind to. If not set, the server will bind to all interfaces

# this setting must use the node's own IP, not the alias; otherwise Java clients cannot reach Kafka (verified firsthand)

host.name=192.168.200.100 

# Hostname the broker will advertise to producers and consumers. If not set, it uses the

# value for "host.name" if configured.  Otherwise, it will use the value returned from

# java.net.InetAddress.getCanonicalHostName().

#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,

# it will publish the same port that the broker binds to.

#advertised.port=<port accessible by clients>

# The number of threads handling network requests

num.network.threads=3

# The number of threads doing disk I/O

num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server

socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server

socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)

socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files

# directory where Kafka stores its log (data) files

log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater

# parallelism for consumption, but this will also result in more files across

# the brokers.

# default number of partitions per topic; set according to your needs

num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.

# This value is recommended to be increased for installations with data dirs located in RAID array.

# number of threads per data directory used for log recovery at startup and flushing at shutdown

num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync

# the OS cache lazily. The following configurations control the flush of data to disk.

# There are a few important trade-offs here:

#    1. Durability: Unflushed data may be lost if you are not using replication.

#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.

#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.

# The settings below allow one to configure the flush policy to flush data after a period of time or

# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk

#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush

#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can

# be set to delete segments after a period of time, or after a given size has accumulated.

# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens

# from the end of the log.

# The minimum age of a log file to be eligible for deletion

# maximum time a log segment is retained; older segments are deleted

log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining

# segments don't drop below log.retention.bytes.

#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according

# to the retention policies

log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.

# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.

log.cleaner.enable=false

# note: "export HBASE_MANAGES_ZK=false" is an HBase (hbase-env.sh) setting, not a Kafka property, and has no effect in this file

offsets.storage=kafka

dual.commit.enabled=true

delete.topic.enable=true

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).

# This is a comma separated host:port pairs, each corresponding to a zk

# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".

# You can also append an optional chroot string to the urls to specify the

# root directory for all kafka znodes.

# the ZooKeeper nodes configured earlier; aliases also work here (verified firsthand)

zookeeper.connect=192.168.200.100:2181,192.168.200.101:2181,192.168.200.102:2181

# Timeout in ms for connecting to zookeeper

# timeout for Kafka's connection to ZooKeeper

zookeeper.connection.timeout.ms=6000
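Only two properties differ between the three brokers; the rest of server.properties is identical. An assumed mapping for this cluster:

# kafka01 (192.168.200.100)
broker.id=1
host.name=192.168.200.100

# kafka02 (192.168.200.101)
broker.id=2
host.name=192.168.200.101

# kafka03 (192.168.200.102)
broker.id=3
host.name=192.168.200.102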

All of the following commands are executed from the Kafka installation directory. To start the Kafka cluster:

Start Kafka in the background without producing console output:

&: after launching the command you can press Enter and continue running other commands; the command keeps executing in the background, but closing the session window terminates it. To keep it running after the session closes, prefix the command with nohup.

nohup bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &

If you want to see Kafka's startup logs, drop 1>/dev/null 2>&1, which redirects output to the null device (similar to the Windows recycle bin, except that the data cannot be recovered):

bin/kafka-server-start.sh config/server.properties &

Check the process status with jps (not available with some bundled OpenJDK installations), or use netstat -lnp | grep 9092.
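As with ZooKeeper, all three brokers can be started from a single machine over SSH (a sketch; it assumes Kafka is installed at the same path, here /usr/local/kafka, on every node; adjust the path to match your layout):

for host in kafka01 kafka02 kafka03; do
    ssh $host "cd /usr/local/kafka && nohup bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &"
done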

Create a topic. The hosts after --zookeeper are the ZooKeeper nodes configured earlier; 2181 is the default client port.

--replication-factor: the number of replicas for each of the topic's partitions (the default number of partitions is set in the configuration file)

--partitions: the number of partitions.

kafka-topics.sh --create --zookeeper kafka01:2181,kafka02:2181,kafka03:2181 --replication-factor 1 --partitions 1 --topic book

List all topics:

bin/kafka-topics.sh --zookeeper kafka01:2181,kafka02:2181,kafka03:2181 --list

Describe a specific topic:

bin/kafka-topics.sh --zookeeper kafka01:2181,kafka02:2181,kafka03:2181 --describe --topic book

Produce data to the topic from the console:

bin/kafka-console-producer.sh --broker-list kafka01:9092 --topic book

Consume the topic's data from the console:

--from-beginning: consume from the beginning of the log.

bin/kafka-console-consumer.sh --zookeeper kafka01:2181 --topic book --from-beginning
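A quick non-interactive end-to-end check is to pipe one message into the producer and read it back with the consumer (this assumes the topic book created above; press Ctrl+C to exit the consumer once the message appears):

echo "hello kafka" | bin/kafka-console-producer.sh --broker-list kafka01:9092 --topic book

bin/kafka-console-consumer.sh --zookeeper kafka01:2181 --topic book --from-beginning    # should print: hello kafka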

Stop Kafka:

bin/kafka-server-stop.sh

Run netstat -lnp | grep 9092 or jps

to verify it has shut down; if it has not, use kill -9 <pid> (the PID returned by the query above).
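The whole cluster can likewise be shut down from one machine (same installation-path assumption as the startup loop above):

for host in kafka01 kafka02 kafka03; do
    ssh $host "cd /usr/local/kafka && bin/kafka-server-stop.sh"
done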
