Kafka Distributed Cluster Setup
This article walks through building a distributed Kafka cluster using the latest release, kafka_2.11-0.10.1.0. Server list:
172.31.10.1 172.31.10.2 172.31.10.3
1. Download the Kafka package
Go to the Kafka website at http://kafka.apache.org/:
- Click the "Download" button in the left-hand menu
- Pick the matching release; in kafka_2.11-0.10.1.0, 2.11 is the Scala version (Kafka is written in Scala) and 0.10.1.0 is the Kafka version itself
- Choose a download link on the page that opens (or fetch the release from the command line, as shown below)
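If you prefer the command line, the release can also be pulled straight from the Apache archive; the URL below is the archive path for this exact release (mirror layouts can change, so treat it as an example):
wget http://archive.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz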
2. Download the ZooKeeper package
The overall Kafka architecture is shown below. [Figure: Kafka architecture diagram, not reproduced here]
A Kafka cluster depends on ZooKeeper for its naming and coordination service. A single-node setup can use the ZooKeeper bundled with the Kafka package, but production environments usually run a separate ZooKeeper ensemble to keep that service highly available. If machines are scarce, ZooKeeper can share servers with the Kafka brokers, since the naming service is not resource-hungry.
Go to the ZooKeeper download page at http://www.apache.org/dyn/closer.cgi/zookeeper/ and follow the download links. This article uses the stable release zookeeper-3.4.8.
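As with Kafka, the archive path below is one known location for this release and is offered as an example rather than the only option:
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz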
3. Install the ZooKeeper ensemble
Upload zookeeper-3.4.8.tar.gz to server 172.31.10.1, then:
- Extract it into /opt/zookeeper/zookeeper-3.4.8:
tar -zxvf zookeeper-3.4.8.tar.gz
- Configure: switch to the conf directory, rename the sample config, and set dataDir and the server.x entries:
cd /opt/zookeeper/zookeeper-3.4.8/conf
mv zoo_sample.cfg zoo.cfg
The updated zoo.cfg looks like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/logs/data/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=172.31.10.1:2888:3888
server.2=172.31.10.2:2888:3888
server.3=172.31.10.3:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Here dataDir is ZooKeeper's data directory, and each server.x line gives one ensemble member's address plus its quorum port (2888) and leader-election port (3888).
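The data directory must exist before ZooKeeper starts, so create it on every server (path per the config above):
mkdir -p /var/logs/data/zookeeper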
- Copy the installation to the other two servers (an scp sketch follows the commands below), then create a myid file in the dataDir on each machine whose content is the number from its server.x entry. For this article:
# Run on 172.31.10.1
cd /var/logs/data/zookeeper
echo "1" > /var/logs/data/zookeeper/myid
# Run on 172.31.10.2
cd /var/logs/data/zookeeper
echo "2" > /var/logs/data/zookeeper/myid
# Run on 172.31.10.3
cd /var/logs/data/zookeeper
echo "3" > /var/logs/data/zookeeper/myid
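A minimal sketch of the remote copy, assuming SSH access as root and the same directory layout on every host (adjust user and paths to your environment):
# Run on 172.31.10.1; assumes /opt/zookeeper already exists on the targets
scp -r /opt/zookeeper/zookeeper-3.4.8 root@172.31.10.2:/opt/zookeeper/
scp -r /opt/zookeeper/zookeeper-3.4.8 root@172.31.10.3:/opt/zookeeper/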
- Start and verify the ZooKeeper ensemble:
# Start ZooKeeper on every server
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start
# Check each node's role
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh status
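With a healthy three-node ensemble, the status command reports one leader and two followers. You can also probe a node with ZooKeeper's four-letter "stat" command (requires netcat; the output shape shown is illustrative):
echo stat | nc 172.31.10.1 2181
# Look for a line such as "Mode: leader" or "Mode: follower"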
4. Install the Kafka cluster
- Extract the package into /opt/kafka/kafka_2.11-0.10.1.0:
tar -zxvf kafka_2.11-0.10.1.0.tgz
cd /opt/kafka/kafka_2.11-0.10.1.0
- Edit config/server.properties; the key settings to change are:
broker.id=1
host.name=172.31.10.1
log.dirs=/var/logs/data/kafka
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka
Note that broker.id must differ on every server; it has to be unique across the whole cluster. (The /kafka suffix on zookeeper.connect is a chroot that keeps all Kafka znodes under a single ZooKeeper path.)
The updated server.properties looks like this:
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=172.31.10.1
# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# The number of threads handling network requests
num.network.threads=3
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/var/logs/data/kafka
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
- Sync the installation to the other servers, changing broker.id (and host.name) on each; a sketch follows.
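A sketch of this sync step, again assuming root SSH access and identical paths on every host; the sed edits are illustrative and can just as well be done by hand:
# Run on 172.31.10.1
scp -r /opt/kafka/kafka_2.11-0.10.1.0 root@172.31.10.2:/opt/kafka/
scp -r /opt/kafka/kafka_2.11-0.10.1.0 root@172.31.10.3:/opt/kafka/
# Then on 172.31.10.2 (analogously broker.id=3 and host.name=172.31.10.3 on the third node)
sed -i 's/^broker.id=1$/broker.id=2/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties
sed -i 's/^host.name=.*/host.name=172.31.10.2/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties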
- Start and verify Kafka:
cd /opt/kafka/kafka_2.11-0.10.1.0
nohup bin/kafka-server-start.sh config/server.properties &
Create a topic; if the topic is created successfully, the cluster is working. You can also run jps to check that the Kafka process is alive.
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --create --zookeeper 172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka --replication-factor 3 --partitions 1 --topic test
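Beyond creating a topic, a quick end-to-end check with the console producer and consumer confirms the brokers actually serve traffic (this assumes the default port 9092 and the topic created above):
# Terminal 1: consume messages from the test topic
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh --bootstrap-server 172.31.10.1:9092 --topic test --from-beginning
# Terminal 2: produce a few messages (type lines, Ctrl+C to exit)
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-console-producer.sh --broker-list 172.31.10.1:9092 --topic test
# Inspect partition leadership and replica placement
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --describe --zookeeper 172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka --topic test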
That completes the distributed Kafka cluster installation; later articles will dig into other Kafka topics in more depth.