Kafka Distributed Cluster Setup
This article uses kafka_2.11-0.10.1.0, the latest release at the time of writing, to walk through setting up a distributed Kafka cluster. Server list:
172.31.10.1 172.31.10.2 172.31.10.3
1. Download the Kafka package
Go to the Kafka website at http://kafka.apache.org/:
- Click the "Download" button in the left-hand menu.
- Choose the appropriate build: 2.11 is the Scala version (Kafka is written in Scala) and 0.10.1.0 is the Kafka version.
- Pick a download link from the page that opens (a command-line download sketch follows this list).
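The package can also be fetched directly on the server; the archive.apache.org URL below is an assumption and should be verified against the download page:

```bash
# Download the Kafka 2.11-0.10.1.0 release tarball (example mirror URL; verify it first)
wget https://archive.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
```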
2. Download the ZooKeeper package
The overall Kafka architecture is shown in the figure below.

A Kafka cluster normally depends on ZooKeeper for its naming service. A single-node setup can simply use the ZooKeeper bundled with the Kafka package, but in production a dedicated ZooKeeper cluster is usually deployed so that the naming service stays highly available. If machines are in short supply, ZooKeeper can share servers with the Kafka brokers, since the naming service is not resource-hungry.
Go to the ZooKeeper mirror page at http://www.apache.org/dyn/closer.cgi/zookeeper/ and follow the download links. This article uses the stable release zookeeper-3.4.8.
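If you prefer to download on the server itself, a wget call like the one below works; again, the archive.apache.org URL is an assumption and should be checked against the mirror page:

```bash
# Fetch the ZooKeeper 3.4.8 release tarball (example mirror URL; verify it first)
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz
```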
3. Install the ZooKeeper cluster
Upload the package zookeeper-3.4.8.tar.gz to server 172.31.10.1.
- Extract it into /opt/zookeeper/zookeeper-3.4.8:

```bash
mkdir -p /opt/zookeeper
tar -zxvf zookeeper-3.4.8.tar.gz -C /opt/zookeeper
```
- Configure: switch to the conf directory and set dataDir and the server.x entries:
```bash
cd /opt/zookeeper/zookeeper-3.4.8/conf
mv zoo_sample.cfg zoo.cfg
```
The updated zoo.cfg looks like this:
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/logs/data/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=172.31.10.1:2888:3888
server.2=172.31.10.2:2888:3888
server.3=172.31.10.3:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
```
Here dataDir is ZooKeeper's data directory, and the server.x entries list the address and communication ports of each server in the ensemble.
- Copy the installation to the other two servers, and on each node create a myid file under dataDir containing the number from its server.x entry (an scp sketch follows the commands below). This article uses:
```bash
# Run on 172.31.10.1
cd /var/logs/data/zookeeper
echo "1" > /var/logs/data/zookeeper/myid
# Run on 172.31.10.2
cd /var/logs/data/zookeeper
echo "2" > /var/logs/data/zookeeper/myid
# Run on 172.31.10.3
cd /var/logs/data/zookeeper
echo "3" > /var/logs/data/zookeeper/myid
```
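A minimal sketch of the copy step, assuming root SSH access between the nodes and that the data directory still has to be created:

```bash
# Create the data directory referenced by dataDir (run on every node)
mkdir -p /var/logs/data/zookeeper

# Copy the ZooKeeper installation from 172.31.10.1 to the other two nodes
scp -r /opt/zookeeper root@172.31.10.2:/opt/
scp -r /opt/zookeeper root@172.31.10.3:/opt/
```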
- Start the ZooKeeper cluster and verify it:
```bash
# Start ZooKeeper on every server
cd /opt/zookeeper/zookeeper-3.4.8/bin
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start
# Check the role of the ZooKeeper node on each server
cd /opt/zookeeper/zookeeper-3.4.8/bin
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh status
```
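On a healthy three-node ensemble, the status command reports one leader and two followers. The exact wording varies slightly between 3.4.x releases, but the output looks roughly like this:

```bash
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh status
# ZooKeeper JMX enabled by default
# Using config: /opt/zookeeper/zookeeper-3.4.8/bin/../conf/zoo.cfg
# Mode: follower    (one of the three nodes reports "Mode: leader")
```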
4. Install the Kafka cluster
- Extract the package into /opt/kafka/kafka_2.11-0.10.1.0:

```bash
mkdir -p /opt/kafka
tar -zxvf kafka_2.11-0.10.1.0.tgz -C /opt/kafka
cd /opt/kafka/kafka_2.11-0.10.1.0
```
- Edit config/server.properties; the main settings to change are:
```
broker.id=1
host.name=172.31.10.1
log.dirs=/var/logs/data/kafka
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka
```
Note that broker.id must differ on every server; it has to be unique across the whole cluster.
The updated server.properties looks like this:
```
############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=172.31.10.1

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/var/logs/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
```
- Sync the installation to the other servers and change broker.id (and host.name) on each; see the sketch below.
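A minimal sketch of that sync, assuming root SSH access; the broker.id and host.name assignments below (broker 2 on 172.31.10.2, broker 3 on 172.31.10.3) follow this article's numbering:

```bash
# Copy the Kafka installation from 172.31.10.1 to the other two brokers
scp -r /opt/kafka root@172.31.10.2:/opt/
scp -r /opt/kafka root@172.31.10.3:/opt/

# Run on 172.31.10.2: give the broker a unique id and its own bind address
sed -i 's/^broker.id=1/broker.id=2/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties
sed -i 's/^host.name=.*/host.name=172.31.10.2/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties

# Run on 172.31.10.3
sed -i 's/^broker.id=1/broker.id=3/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties
sed -i 's/^host.name=.*/host.name=172.31.10.3/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties
```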
- Start and verify Kafka:
```bash
cd /opt/kafka/kafka_2.11-0.10.1.0/bin
nohup /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties &
```
Create a topic. If the topic is created successfully, the cluster installation is complete; you can also run jps to check that the Kafka process exists.
```bash
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --create --zookeeper 172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka --replication-factor 3 --partitions 1 --topic test
```
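As an extra sanity check beyond the original steps, the console producer and consumer shipped with Kafka 0.10.1.0 can round-trip a test message; the broker list assumes port 9092 as configured above:

```bash
# Send one test message to the "test" topic
echo "hello kafka" | /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-console-producer.sh \
  --broker-list 172.31.10.1:9092,172.31.10.2:9092,172.31.10.3:9092 --topic test

# Read it back from the beginning of the topic (Ctrl+C to stop)
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh \
  --zookeeper 172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka \
  --topic test --from-beginning
```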
At this point the distributed Kafka cluster is up and running; follow-up articles will go deeper into other aspects of Kafka.