Building a Single-Node Kafka Cluster on Alibaba Cloud
Introduction
Setting up a single-node Kafka cluster on an Alibaba Cloud ECS server involves the following steps:
- Server environment
- Installing the JDK
- Installing ZooKeeper
- Installing Kafka
1. Server Environment
- CPU: 1 core
- Memory: 2048 MB (I/O optimized), 1 Mbps bandwidth
- OS: Ubuntu 14.04, 64-bit
The server performs quite well for the price (no, this is not an ad for Alibaba).
I pushed some random data into Kafka to see how the instance held up under load.
2. Installing the JDK
To run Java programs you need a JDK; I used JDK 1.7.
The basic steps are:
- Download the JDK tar.gz package from the official JDK site;
- Upload the tarball to /opt/JDK on the server;
- Extract the tarball;
- Edit /etc/profile and append the lines below (note: on macOS you need sudo su to work as root):
cd /etc
vim profile
Then add the following lines at the end:
export JAVA_HOME=/opt/JDK/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
The updated /etc/profile looks like this:
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
if [ "$PS1" ]; then
if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
# The file bash.bashrc already sets the default PS1.
# PS1='\h:\w\$ '
if [ -f /etc/bash.bashrc ]; then
. /etc/bash.bashrc
fi
else
if [ "`id -u`" -eq 0 ]; then
PS1='# '
else
PS1='$ '
fi
fi
fi
# The default umask is now handled by pam_umask.
# See pam_umask(8) and /etc/login.defs.
if [ -d /etc/profile.d ]; then
for i in /etc/profile.d/*.sh; do
if [ -r $i ]; then
. $i
fi
done
unset i
fi
export JAVA_HOME=/opt/JDK/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
- Press ESC, type :wq, and press Enter to save;
- Run java -version to check that the change took effect (if it has not, log out and back in, or run source /etc/profile).
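Putting the steps above together, a minimal shell sketch looks like this (assuming the JDK 1.7u79 Linux x64 tarball; your archive name and server address will differ):
# on the server: create the target directory first
mkdir -p /opt/JDK
# from your local machine: upload the tarball (placeholder server address)
scp jdk-7u79-linux-x64.tar.gz root@your-ecs-ip:/opt/JDK/
# back on the server: extract, then edit /etc/profile as shown above
cd /opt/JDK
tar -zxvf jdk-7u79-linux-x64.tar.gz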
3. Installing ZooKeeper
A Kafka cluster relies on ZooKeeper for leader election and for storing topic metadata, so before running Kafka you also need a ZooKeeper environment.
The setup steps are:
- Download the tar.gz package from the official site;
- Upload it to /opt/zookeeper on the ECS server;
- Run tar -zxvf *.tar.gz to extract it;
- Enter the conf directory under the extracted ZooKeeper directory;
- Rename zoo_sample.cfg to zoo.cfg (or copy it, so you keep a backup);
- Adjust zoo.cfg as needed (the defaults also work);
- Start ZooKeeper.
The commands for steps 3-7 are:
cd /opt/zookeeper
tar -zxvf zookeeper-3.4.6.tar.gz
cd zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
cd ..
# start zookeeper
./bin/zkServer.sh start
# stop zookeeper
./bin/zkServer.sh stop
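For reference, the relevant defaults inherited from zoo_sample.cfg in ZooKeeper 3.4.6 look roughly like this; on a real server, dataDir in particular is worth moving off /tmp:
# conf/zoo.cfg (defaults from zoo_sample.cfg, shown for reference)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181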
Afterwards, run ps -ef | grep zookeeper to check whether ZooKeeper started successfully.
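As an extra sanity check (a sketch, assuming ZooKeeper is listening on the default port 2181), you can connect with the bundled CLI:
./bin/zkCli.sh -server 127.0.0.1:2181
# inside the CLI, "ls /" should list the root znodes without errors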
4. Installing Kafka
Having survived the three steps above, we can finally build our single-node Kafka "cluster". (Yes, I am calling a single machine a cluster; come fight me QAQ.)
The Kafka steps are:
- Download the Kafka package; I used kafka_2.11-0.10.1.0.tgz, which you can find on the official site;
- Upload the package to /opt/kafka on the ECS server;
- Extract the package;
- Enter the config directory and edit server.properties.
The key changes are:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
port=9092
host.name=<your ECS private (internal) IP>
advertised.host.name=<your ECS public (external) mapped IP>
The full modified configuration file:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
port=9092
host.name=<your ECS private (internal) IP>
advertised.host.name=<your ECS public (external) mapped IP>
# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = security_protocol://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# The number of threads handling network requests
num.network.threads=3
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma seperated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
- Start Kafka:
nohup ./bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &
- Verify that Kafka started successfully.
Run jps and check that a process named Kafka is listed.
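To go one step further (a sketch for Kafka 0.10.x, assuming the broker listens on localhost:9092 and ZooKeeper on localhost:2181), create a test topic and push a message through it:
# create a one-partition test topic
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# terminal 1: type a line and press Enter to produce it
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# terminal 2: the line you typed should appear here
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning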
5. Pitfalls I Hit
- host.name, the port, and related options must be configured
Bug: ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 1 larger than available brokers: 0
This says it plainly: fewer than one broker is available, meaning the Kafka process never started or has died.
Fix: 1. run jps and check whether a Kafka process exists; 2. restart Kafka.
- Cannot bind to an address
Bug: Socket server failed to bind to xxx.xxx.xxx.xxx:9092: Cannot assign requested address.
When configuring Kafka on ECS, never set host.name to the public address (e.g. 139.225.155.153, a made-up example); the bind will fail, because inside Alibaba Cloud the instance only sees its private network and Kafka looks for the address there. Use 127.0.0.1 (resolved as the local machine) or the private IP, and put the public mapped address in advertised.host.name instead.
- Hostname misconfigured
Bug: java.net.UnknownHostException: <hostname>: <hostname>
Caused by: java.net.UnknownHostException: iZuf6gsbgu35znsy7ve3s6x: iZuf6gsbgu35znsy7ve3s6x
at java.net.InetAddress.getLocalHost(InetAddress.java:1475)
at kafka.network.RequestChannel$.<init>(RequestChannel.scala:40)
at kafka.network.RequestChannel$.<clinit>(RequestChannel.scala)
... 10 more
Configure the /etc/hostname and /etc/hosts files; for the detailed procedure see:
https://my.oschina.net/heguangdong/blog/13678
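In short, the hostname must resolve locally. A minimal sketch (using the hostname from the stack trace above; substitute your ECS private IP for the placeholder):
# /etc/hostname
iZuf6gsbgu35znsy7ve3s6x
# /etc/hosts -- append one line (172.16.0.10 is a placeholder private IP)
172.16.0.10  iZuf6gsbgu35znsy7ve3s6x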
- External clients cannot consume from Kafka
21:45:58,162 DEBUG Selector:365 - Connection with /168.221.153.152 disconnected
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
at java.lang.Thread.run(Thread.java:745)
21:45:58,162 DEBUG NetworkClient:463 - Node -1 disconnected.
This problem is caused by misconfiguration in Kafka's config/server.properties; for details see:
1. http://stackoverflow.com/questions/33541114/kafka-0-8-2-2-unable-to-publish-messages
2. http://www.cnblogs.com/snifferhu/p/5102629.html
3. http://www.tuicool.com/articles/n632MvV
4. kafka安装出现的几个问题 (a Chinese post: "Several problems encountered when installing Kafka")
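The usual resolution (a sketch for Kafka 0.10.x on ECS; 172.16.0.10 and 139.x.x.x are placeholder private and public addresses) is to bind on the private interface while advertising the public one to external clients:
# config/server.properties
host.name=172.16.0.10              # private IP the socket server binds to
advertised.host.name=139.x.x.x     # public IP handed out to producers/consumers
port=9092
Also make sure port 9092 is opened in the ECS security group, or external connections will still be refused.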
6. Other
For the full Kafka configuration details, building a multi-node Kafka cluster, common Kafka commands, writing a simple Kafka demo, and Kafka Streams examples, see the other parts of this Kafka series.