The environment is as follows:

CentOS-7-x86_64
zookeeper-3.4.11
kafka_2.12-1.1.0

I. ZooKeeper download and installation

1) Download ZooKeeper

[root@localhost opt]# cd /opt/
[root@localhost opt]# wget https://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz

2) Extract

[root@localhost opt]# tar zxvf zookeeper-3.4.11.tar.gz
[root@localhost opt]# ls
zookeeper-3.4.11 zookeeper-3.4.11.tar.gz

3) Configure

[root@localhost opt]# cd zookeeper-3.4.11
[root@localhost zookeeper-3.4.11]# ll
total 1596
drwxr-xr-x. 2 502 games 149 Nov 1 14:52 bin
-rw-r--r--. 1 502 games 87943 Nov 1 14:47 build.xml
drwxr-xr-x. 2 502 games 77 Nov 1 14:52 conf
drwxr-xr-x. 10 502 games 130 Nov 1 14:47 contrib
drwxr-xr-x. 2 502 games 4096 Nov 1 14:54 dist-maven
drwxr-xr-x. 6 502 games 4096 Nov 1 14:52 docs
-rw-r--r--. 1 502 games 1709 Nov 1 14:47 ivysettings.xml
-rw-r--r--. 1 502 games 8197 Nov 1 14:47 ivy.xml
drwxr-xr-x. 4 502 games 4096 Nov 1 14:52 lib
-rw-r--r--. 1 502 games 11938 Nov 1 14:47 LICENSE.txt
-rw-r--r--. 1 502 games 3132 Nov 1 14:47 NOTICE.txt
-rw-r--r--. 1 502 games 1585 Nov 1 14:47 README.md
-rw-r--r--. 1 502 games 1770 Nov 1 14:47 README_packaging.txt
drwxr-xr-x. 5 502 games 47 Nov 1 14:47 recipes
drwxr-xr-x. 8 502 games 211 Nov 1 14:52 src
-rw-r--r--. 1 502 games 1478279 Nov 1 14:49 zookeeper-3.4.11.jar
-rw-r--r--. 1 502 games 195 Nov 1 14:52 zookeeper-3.4.11.jar.asc
-rw-r--r--. 1 502 games 33 Nov 1 14:49 zookeeper-3.4.11.jar.md5
-rw-r--r--. 1 502 games 41 Nov 1 14:49 zookeeper-3.4.11.jar.sha1
[root@localhost zookeeper-3.4.11]# cp -rf conf/zoo_sample.cfg conf/zoo.cfg
[root@localhost zookeeper-3.4.11]# vi conf/zoo.cfg

Modify or add the following two settings in zoo.cfg:

dataDir=/opt/zookeeper-3.4.11/zkdata        # create this directory in advance
dataLogDir=/opt/zookeeper-3.4.11/zkdatalog  # create this directory in advance
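Together with the defaults inherited from zoo_sample.cfg, the finished file looks roughly like the sketch below. It is written to /tmp so it is safe to run anywhere; the real file is conf/zoo.cfg, and the tick/limit values are the zoo_sample.cfg defaults, not something this setup changes:

```shell
# Minimal standalone zoo.cfg: zoo_sample.cfg defaults plus the two directories above.
cat > /tmp/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/zookeeper-3.4.11/zkdata
dataLogDir=/opt/zookeeper-3.4.11/zkdatalog
EOF
grep -E '^data' /tmp/zoo.cfg   # show the two directory settings just written
```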

Create the ZooKeeper data and log directories:

[root@localhost zookeeper-3.4.11]# mkdir /opt/zookeeper-3.4.11/zkdata
[root@localhost zookeeper-3.4.11]# mkdir /opt/zookeeper-3.4.11/zkdatalog
[root@localhost zookeeper-3.4.11]# ll
total 1596
drwxr-xr-x. 2 502 games 149 Nov 1 14:52 bin
-rw-r--r--. 1 502 games 87943 Nov 1 14:47 build.xml
drwxr-xr-x. 2 502 games 92 Mar 31 11:12 conf
drwxr-xr-x. 10 502 games 130 Nov 1 14:47 contrib
drwxr-xr-x. 2 502 games 4096 Nov 1 14:54 dist-maven
drwxr-xr-x. 6 502 games 4096 Nov 1 14:52 docs
-rw-r--r--. 1 502 games 1709 Nov 1 14:47 ivysettings.xml
-rw-r--r--. 1 502 games 8197 Nov 1 14:47 ivy.xml
drwxr-xr-x. 4 502 games 4096 Nov 1 14:52 lib
-rw-r--r--. 1 502 games 11938 Nov 1 14:47 LICENSE.txt
-rw-r--r--. 1 502 games 3132 Nov 1 14:47 NOTICE.txt
-rw-r--r--. 1 502 games 1585 Nov 1 14:47 README.md
-rw-r--r--. 1 502 games 1770 Nov 1 14:47 README_packaging.txt
drwxr-xr-x. 5 502 games 47 Nov 1 14:47 recipes
drwxr-xr-x. 8 502 games 211 Nov 1 14:52 src
drwxr-xr-x. 2 root root 6 Mar 31 11:13 zkdata
drwxr-xr-x. 2 root root 6 Mar 31 11:13 zkdatalog
-rw-r--r--. 1 502 games 1478279 Nov 1 14:49 zookeeper-3.4.11.jar
-rw-r--r--. 1 502 games 195 Nov 1 14:52 zookeeper-3.4.11.jar.asc
-rw-r--r--. 1 502 games 33 Nov 1 14:49 zookeeper-3.4.11.jar.md5
-rw-r--r--. 1 502 games 41 Nov 1 14:49 zookeeper-3.4.11.jar.sha1

4) Configure environment variables

[root@localhost zookeeper-3.4.11]# vi /etc/profile

The settings are as follows (note that ZOOKEEPER_HOME must be exported before CLASSPATH and PATH reference it):

# config java class path
export JAVA_HOME=/usr/local/java/jdk1.8.0_161
export JRE_HOME=${JAVA_HOME}/jre
# config zookeeper install path
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.11
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:${ZOOKEEPER_HOME}/lib
export PATH=${JAVA_HOME}/bin:${ZOOKEEPER_HOME}/bin:$PATH

Run source /etc/profile afterwards so the variables take effect in the current shell.
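Since /etc/profile is evaluated top to bottom, ZOOKEEPER_HOME has to be defined before any line that expands it. A quick sanity check of the expansion, as a sketch against a temp copy (the real file is /etc/profile):

```shell
# Write the profile fragment to a temp file and source it in a subshell
# to confirm the variables expand as intended (paths taken from the setup above).
cat > /tmp/zk_profile.sh <<'EOF'
export JAVA_HOME=/usr/local/java/jdk1.8.0_161
export JRE_HOME=${JAVA_HOME}/jre
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.11
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:${ZOOKEEPER_HOME}/lib
export PATH=${JAVA_HOME}/bin:${ZOOKEEPER_HOME}/bin:$PATH
EOF
( . /tmp/zk_profile.sh && echo "$ZOOKEEPER_HOME" )   # prints /opt/zookeeper-3.4.11
```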

5) Start ZooKeeper

[root@localhost bin]# cd /opt/zookeeper-3.4.11/bin
[root@localhost bin]# ll
total 36
-rwxr-xr-x. 1 502 games 232 Nov 1 14:47 README.txt
-rwxr-xr-x. 1 502 games 1937 Nov 1 14:47 zkCleanup.sh
-rwxr-xr-x. 1 502 games 1056 Nov 1 14:47 zkCli.cmd
-rwxr-xr-x. 1 502 games 1534 Nov 1 14:47 zkCli.sh
-rwxr-xr-x. 1 502 games 1628 Nov 1 14:47 zkEnv.cmd
-rwxr-xr-x. 1 502 games 2696 Nov 1 14:47 zkEnv.sh
-rwxr-xr-x. 1 502 games 1089 Nov 1 14:47 zkServer.cmd
-rwxr-xr-x. 1 502 games 6773 Nov 1 14:47 zkServer.sh
[root@localhost bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.11/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

II. Kafka download and installation

1) Download Kafka:

[root@localhost bin]# cd /opt/
[root@localhost opt]# wget http://apache.fayea.com/kafka/1.1.0/kafka_2.12-1.1.0.tgz
--2018-03-31 11:21:52-- http://apache.fayea.com/kafka/1.1.0/kafka_2.12-1.1.0.tgz
Resolving apache.fayea.com (apache.fayea.com)... 202.115.175.188, 202.115.175.187
Connecting to apache.fayea.com (apache.fayea.com)|202.115.175.188|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 50326212 (48M) [application/x-gzip]
Saving to: ‘kafka_2.12-1.1.0.tgz’

100%[=============================================================================================================================>] 50,326,212  442KB/s  in 1m 44s

2018-03-31 11:23:36 (473 KB/s) - ‘kafka_2.12-1.1.0.tgz’ saved [50326212/50326212]

[root@localhost opt]# ll
total 84964
-rw-r--r--. 1 root root 50326212 Mar 28 08:05 kafka_2.12-1.1.0.tgz
drwxr-xr-x. 15 502 games 4096 Mar 31 11:20 zookeeper-3.4.11
-rw-r--r--. 1 root root 36668066 Nov 8 13:24 zookeeper-3.4.11.tar.gz

2) Extract:

tar -zxvf kafka_2.12-1.1.0.tgz

3) Configure:

Go into the root of the Kafka installation and edit config/server.properties:

[root@localhost opt]# cd /opt/kafka_2.12-1.1.0/config/
[root@localhost config]# ll
total 64
-rw-r--r--. 1 root root 906 Mar 23 18:51 connect-console-sink.properties
-rw-r--r--. 1 root root 909 Mar 23 18:51 connect-console-source.properties
-rw-r--r--. 1 root root 5807 Mar 23 18:51 connect-distributed.properties
-rw-r--r--. 1 root root 883 Mar 23 18:51 connect-file-sink.properties
-rw-r--r--. 1 root root 881 Mar 23 18:51 connect-file-source.properties
-rw-r--r--. 1 root root 1111 Mar 23 18:51 connect-log4j.properties
-rw-r--r--. 1 root root 2730 Mar 23 18:51 connect-standalone.properties
-rw-r--r--. 1 root root 1221 Mar 23 18:51 consumer.properties
-rw-r--r--. 1 root root 4727 Mar 23 18:51 log4j.properties
-rw-r--r--. 1 root root 1919 Mar 23 18:51 producer.properties
-rw-r--r--. 1 root root 6851 Mar 23 18:51 server.properties
-rw-r--r--. 1 root root 1032 Mar 23 18:51 tools-log4j.properties
-rw-r--r--. 1 root root 1023 Mar 23 18:51 zookeeper.properties
[root@localhost config]# mkdir /opt/kafka_2.12-1.1.0/kafka_log

Add or modify the following two settings:

log.dirs=/opt/kafka_2.12-1.1.0/kafka_log      # (created in advance)
listeners=PLAINTEXT://192.168.0.111:9092
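The two edits can also be applied with sed. The sketch below operates on a stub copy in /tmp so it can be run anywhere; on the broker host, point it at the real config/server.properties, and note that 192.168.0.111 is the example IP from above — it must be an address actually assigned to the broker host:

```shell
# Stub copy of the relevant server.properties lines (real file: config/server.properties).
mkdir -p /tmp/kafka-demo
cat > /tmp/kafka-demo/server.properties <<'EOF'
broker.id=0
#listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
EOF
# Point log.dirs at the pre-created directory and uncomment/set the listener address.
sed -i \
  -e 's|^log.dirs=.*|log.dirs=/opt/kafka_2.12-1.1.0/kafka_log|' \
  -e 's|^#listeners=PLAINTEXT://:9092|listeners=PLAINTEXT://192.168.0.111:9092|' \
  /tmp/kafka-demo/server.properties
grep -E '^(listeners|log\.dirs)=' /tmp/kafka-demo/server.properties   # show the two effective lines
```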

config/server.properties after the changes:

[root@localhost config]# more server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://192.178.0.111:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
#log.dirs=/tmp/kafka-logs
log.dirs=/opt/kafka_2.12-1.1.0/kafka_log

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

4) Start Kafka

[root@localhost kafka_2.12-1.1.0]# cd /opt/kafka_2.12-1.1.0/
[root@localhost kafka_2.12-1.1.0]# ll
total 48
drwxr-xr-x. 3 root root 4096 Mar 23 18:55 bin
drwxr-xr-x. 2 root root 4096 Mar 31 11:30 config
drwxr-xr-x. 2 root root 6 Mar 31 11:31 kafka_log
drwxr-xr-x. 2 root root 4096 Mar 31 11:26 libs
-rw-r--r--. 1 root root 28824 Mar 23 18:51 LICENSE
drwxr-xr-x. 2 root root 182 Mar 31 11:33 logs
-rw-r--r--. 1 root root 336 Mar 23 18:51 NOTICE
drwxr-xr-x. 2 root root 44 Mar 23 18:55 site-docs
[root@localhost kafka_2.12-1.1.0]# sh ./bin/kafka-server-start.sh ./config/server.properties &
[2018-03-31 11:35:47,198] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.KafkaException: Socket server failed to bind to 192.178.0.111:9092: Cannot assign requested address.
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:404)
at kafka.network.Acceptor.<init>(SocketServer.scala:308)
at kafka.network.SocketServer.$anonfun$createAcceptorAndProcessors$1(SocketServer.scala:126)
at kafka.network.SocketServer.$anonfun$createAcceptorAndProcessors$1$adapted(SocketServer.scala:122)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.createAcceptorAndProcessors(SocketServer.scala:122)
at kafka.network.SocketServer.startup(SocketServer.scala:84)
at kafka.server.KafkaServer.startup(KafkaServer.scala:247)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at kafka.network.Acceptor.openServerSocket(SocketServer.scala:400)
... 12 more

Note: the output above is a failed startup; in my case the cause was that port 9092 was not open on the server. (Strictly speaking, "Cannot assign requested address" means the IP in `listeners` is not assigned to any local network interface, so it is worth double-checking that address as well.)

Now check ports 2181 and 9092:

[root@localhost kafka_2.12-1.1.0]# netstat -tunlp|egrep "(2181|9092)"
tcp6 0 0 :::2181 :::* LISTEN 8896/java

CentOS 7 uses firewalld as its firewall by default; to use iptables instead, it has to be set up first.

1. Stop the firewall directly

systemctl stop firewalld.service    # stop firewalld
systemctl disable firewalld.service # keep firewalld from starting at boot

2. Set up the iptables service

[root@localhost kafka_2.12-1.1.0]# yum -y install iptables-services
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.163.com
* updates: mirrors.cn99.com
Resolving Dependencies
--> Running transaction check
---> Package iptables-services.x86_64 0:1.4.21-18.3.el7_4 will be installed
--> Processing Dependency: iptables = 1.4.21-18.3.el7_4 for package: iptables-services-1.4.21-18.3.el7_4.x86_64
--> Running transaction check
---> Package iptables.x86_64 0:1.4.21-18.0.1.el7.centos will be updated
---> Package iptables.x86_64 0:1.4.21-18.3.el7_4 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                 Arch      Version               Repository    Size
================================================================================
Installing:
 iptables-services       x86_64    1.4.21-18.3.el7_4     updates       51 k
Updating for dependencies:
 iptables                x86_64    1.4.21-18.3.el7_4     updates      428 k

Transaction Summary
================================================================================
Install  1 Package
Upgrade             ( 1 Dependent package)

Total download size: 479 k
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/2): iptables-services-1.4.21-18.3.el7_4.x86_64.rpm      |  51 kB  00:00:00
(2/2): iptables-1.4.21-18.3.el7_4.x86_64.rpm               | 428 kB  00:00:01
--------------------------------------------------------------------------------
Total                                           447 kB/s | 479 kB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : iptables-1.4.21-18.3.el7_4.x86_64                        1/3
  Installing : iptables-services-1.4.21-18.3.el7_4.x86_64               2/3
  Cleanup    : iptables-1.4.21-18.0.1.el7.centos.x86_64                 3/3
  Verifying  : iptables-1.4.21-18.3.el7_4.x86_64                        1/3
  Verifying  : iptables-services-1.4.21-18.3.el7_4.x86_64               2/3
  Verifying  : iptables-1.4.21-18.0.1.el7.centos.x86_64                 3/3

Installed:
  iptables-services.x86_64 0:1.4.21-18.3.el7_4

Dependency Updated:
  iptables.x86_64 0:1.4.21-18.3.el7_4

Complete!
[root@localhost kafka_2.12-1.1.0]#

Note: iptables is not installed by default, so it has to be installed first.

To change the firewall configuration, e.g. to open port 9092:

vi /etc/sysconfig/iptables

Add the rule: -A INPUT -m state --state NEW -m tcp -p tcp --dport 9092 -j ACCEPT

[root@localhost kafka_2.12-1.1.0]# vi /etc/sysconfig/iptables
[root@localhost kafka_2.12-1.1.0]# more /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9092 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
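The manual edit above can also be scripted idempotently: insert the 9092 rule just before the final REJECT line, only if it is not already present. A sketch against a trimmed copy of the rules file in /tmp (point RULES at /etc/sysconfig/iptables on the real host):

```shell
# Trimmed stand-in for /etc/sysconfig/iptables so the sketch is safe to run anywhere.
RULES=/tmp/iptables.rules
cat > "$RULES" <<'EOF'
*filter
:INPUT ACCEPT [0:0]
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF
RULE='-A INPUT -m state --state NEW -m tcp -p tcp --dport 9092 -j ACCEPT'
# Add the rule before the REJECT line unless it already exists (idempotent).
grep -qF -- "$RULE" "$RULES" || sed -i "/-A INPUT -j REJECT/i $RULE" "$RULES"
grep -c -- '--dport 9092' "$RULES"   # prints 1
```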

After saving and exiting:

systemctl restart iptables.service # restart iptables so the new rule takes effect
systemctl enable iptables.service  # start iptables at boot

(If you stay with firewalld instead, the equivalent is systemctl start firewalld.service and systemctl enable firewalld.service.)

Finally, reboot the system for the settings to take effect.

Restart ZooKeeper, restart Kafka, and then check whether the ports are listening:

[root@localhost opt]# netstat -tunlp|egrep "(2181|9092)"
tcp6 0 0 192.178.0.111:9092 :::* LISTEN 10299/java
tcp6 0 0 :::2181 :::* LISTEN 8896/java

5) Create a topic

-- Create a topic

/opt/kafka_2.12-1.1.0/bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --topic kafkatopic

At this point, new output appears in the window running the Kafka server (the one in which [root@localhost kafka_2.12-1.1.0]# sh ./bin/kafka-server-start.sh ./config/server.properties & was executed).

-- List all topics

/opt/kafka_2.12-1.1.0/bin/kafka-topics.sh --list --zookeeper localhost:2181

-- Describe a specific topic

/opt/kafka_2.12-1.1.0/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic logTopic100

6) Start a Kafka producer:

/opt/kafka_2.12-1.1.0/bin/kafka-console-producer.sh --broker-list 192.178.0.111:9092 --sync --topic kafkatopic

7) Open another terminal and start a consumer:

sh /opt/kafka_2.12-1.1.0/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkatopic --from-beginning
Alternatively:
sh /opt/kafka_2.12-1.1.0/bin/kafka-console-consumer.sh --bootstrap-server 192.178.0.111:9092 --topic kafkatopic --from-beginning

(--from-beginning consumes the topic from the beginning; without it, only messages sent to the topic after the consumer starts are consumed.)

8) Usage
Type a message (e.g. aaa) in the producer terminal and it is displayed in the consumer terminal.

The producer sends:
[root@localhost ~]# /opt/kafka_2.12-1.1.0/bin/kafka-console-producer.sh --broker-list 192.178.0.111:9092 --topic kafkatopic
>a
>b
>c
>d
>
The consumer receives:
[root@localhost ~]# /opt/kafka_2.12-1.1.0/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkatopic --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
a
b
c
d

