Today I need to set up a Kafka cluster on some new machines. I've actually installed Kafka well over ten times already, but never once with production in mind, which is a bit embarrassing. So today I'm going to go through Kafka installation and configuration properly.

1. Choosing a Kafka version

As I write this, the latest Kafka release is 1.1.0. If the newest version were known to be stable I would just use it, but that isn't guaranteed, so I'll keep an eye on it for now. Kafka downloads: http://kafka.apache.org/downloads

2. Choosing a ZooKeeper version

I went through the ZooKeeper versions on the official site without reaching any firm conclusion. I was about to download 3.4.10 when I noticed the download mirror has a directory that holds the current stable release: http://mirror.bit.edu.cn/apache/zookeeper/stable/, so I simply went with that: version 3.4.12.

3. Preparing the server environment

The machines didn't have a JDK installed, so install one.

Disable the firewall for now.

Disable SELinux.
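On CentOS 7 that boils down to roughly the following (the OpenJDK package name and the systemd commands are my assumptions — adjust for your distro and init system):

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
systemctl stop firewalld && systemctl disable firewalld   # on CentOS 6: service iptables stop; chkconfig iptables off
setenforce 0                                              # turn SELinux off for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep it off after reboots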

4. Installing ZooKeeper

I put the ZooKeeper tarball in the /root directory. First cd into /opt, then:

Run: tar -zxvf /root/zookeeper-3.4.12.tar.gz to extract the archive.

Run: mv zookeeper-3.4.12 zookeeper to rename the directory, mainly for convenience.

Run: cd /opt/zookeeper/conf to go into ZooKeeper's configuration directory.

Run: mv zoo_sample.cfg zoo.cfg to rename the sample configuration file.

Run: vi zoo.cfg to configure ZooKeeper. There is actually very little to change — the defaults are good enough. If anything, raise the maximum number of client connections; I set it to 300. The full file is below. I'm configuring three ZooKeeper nodes here; if you have more, add your own machines and change the addresses accordingly:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data1/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=300
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.16.26.110:2888:3888
server.2=10.16.26.116:2888:3888
server.3=10.16.26.127:2888:3888

Run: mkdir /data1/zookeeper to create ZooKeeper's data directory first.

Run: cd /data1/zookeeper to go into that directory.

Run: vi myid to create a file named myid. This file holds the id of the local machine, which in our cluster will simply be 1, 2 or 3, matching the server.N entries in zoo.cfg.
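Instead of editing it with vi, you can also write the file in one step (the value must be different on every node; 1 here corresponds to server.1):

echo 1 > /data1/zookeeper/myid   # use 2 and 3 on the other two machines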

That completes the ZooKeeper configuration; repeat these steps on each of the machines.
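If you'd rather not redo the editing by hand, one option — my own shortcut, assuming root ssh access between the nodes — is to copy the prepared directory over and only change myid on each target:

scp -r /opt/zookeeper root@10.16.26.116:/opt/
scp -r /opt/zookeeper root@10.16.26.127:/opt/
ssh root@10.16.26.116 "mkdir -p /data1/zookeeper && echo 2 > /data1/zookeeper/myid"
ssh root@10.16.26.127 "mkdir -p /data1/zookeeper && echo 3 > /data1/zookeeper/myid"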

5. Kafka configuration

Recent Kafka releases work well out of the box and don't need much configuration. Below is server.properties — essentially the stock file, with the broker id, listeners, log directory, replication settings and the ZooKeeper connection string adjusted for this cluster:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://10.16.26.110:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data1/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=3
delete.topic.enable=true
default.replication.factor=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.16.26.110:2181,10.16.26.126:2181,10.16.26.127:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

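The file above is for the first broker. On the other two brokers only a couple of lines differ — a sketch of the per-broker values, assuming the remaining brokers sit on the other two hosts of the ZooKeeper ensemble (the post doesn't spell their addresses out, so treat these IPs as placeholders):

# broker 2 (e.g. 10.16.26.116)
broker.id=2
listeners=PLAINTEXT://10.16.26.116:9092

# broker 3 (e.g. 10.16.26.127)
broker.id=3
listeners=PLAINTEXT://10.16.26.127:9092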

6. Installing Kafka

Much the same as ZooKeeper: extract the archive, then fill in config/server.properties as described in the previous section and you're done.
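Roughly the same sequence as before (the tarball name assumes the Scala 2.11 build of Kafka 1.1.0 sitting in /root — adjust it to whatever you actually downloaded):

cd /opt
tar -zxvf /root/kafka_2.11-1.1.0.tgz
mv kafka_2.11-1.1.0 kafka
mkdir -p /data1/kafka-logs
vi /opt/kafka/config/server.properties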

7. Starting ZooKeeper and Kafka

To start ZooKeeper, go into ZooKeeper's bin directory and run: ./zkServer.sh start — then do the same on the other two machines.

To start Kafka, go into Kafka's bin directory and run: ./kafka-server-start.sh -daemon ../config/server.properties — again, on every broker.
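A quick way to check that everything came up (these verification commands are my addition, and the topic name "test" is just an example):

./zkServer.sh status   # run from ZooKeeper's bin directory; one node should report "leader", the others "follower"
jps                    # should list a QuorumPeerMain and a Kafka process
./kafka-topics.sh --create --zookeeper 10.16.26.110:2181 --replication-factor 3 --partitions 3 --topic test   # from Kafka's bin directory
./kafka-topics.sh --list --zookeeper 10.16.26.110:2181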
