Installing Kafka (Standalone) on RedHat 6.5
Versions:
RedHat 6.5, JDK 1.8, zookeeper-3.4.6, kafka_2.11-0.8.2.1
1. Software environment
An already-installed standalone ZooKeeper (see: Installing ZooKeeper (Standalone) on RedHat 6.5)
Software version: kafka_2.11-0.8.2.1.tgz
Official mirror download: https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.8.2.1/kafka_2.11-0.8.2.1.tgz
Baidu cloud download: http://pan.baidu.com/s/1qYdl3ys (password: d9ti)
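Before starting, it is worth confirming the prerequisites. A quick sanity check (assuming ZooKeeper was installed under /usr/local/zookeeper/zookeeper-3.4.6, as in the companion article):
- #confirm the JDK is installed (1.8 expected)
- java -version
- #confirm the standalone ZooKeeper can be managed from its bin directory
- /usr/local/zookeeper/zookeeper-3.4.6/bin/zkServer.sh status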
2. Create directories and upload the Kafka tarball
- #create the installation directory
- mkdir /usr/local/kafka
- #create the Kafka message directory, which will hold Kafka's message logs
- mkdir /usr/local/kafka/kafka-logs
Upload the downloaded kafka_2.11-0.8.2.1.tgz to /usr/local/kafka and extract it there:
tar -zxvf /usr/local/kafka/kafka_2.11-0.8.2.1.tgz -C /usr/local/kafka
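If the machine has Internet access, the tarball can also be fetched directly on the server instead of being uploaded. A sketch using the mirror link above (old releases are sometimes removed from mirrors; archive.apache.org/dist/kafka/0.8.2.1/ is the long-term archive if so):
- cd /usr/local/kafka
- wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.8.2.1/kafka_2.11-0.8.2.1.tgz
- tar -zxvf kafka_2.11-0.8.2.1.tgz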
3. Modify the configuration files
3.1 Modify config/server.properties
Change into the config directory:
cd /usr/local/kafka/kafka_2.11-0.8.2.1/config
ls
The file to focus on here is server.properties. Notice that the directory also contains a zookeeper.properties: Kafka ships with an embedded ZooKeeper that can be started from that file, but using a standalone ZooKeeper is recommended.
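For reference, the config directory of a stock kafka_2.11-0.8.2.1 distribution should contain roughly the following files (listed from memory of this release; exact contents may differ slightly):
- consumer.properties
- log4j.properties
- producer.properties
- server.properties
- test-log4j.properties
- tools-log4j.properties
- zookeeper.properties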
Explanation of the server.properties parameters:
- broker.id=0 #unique integer ID of this broker within the Kafka cluster, analogous to ZooKeeper's myid; for a single-broker setup the default of 0 is fine
- port=9092 #the port on which Kafka serves clients; defaults to 9092
- #host.name=localhost #the IP address the broker binds to
- num.network.threads=3 #number of threads the broker uses for handling network requests
- num.io.threads=8 #number of threads the broker uses for disk I/O
- log.dirs=/tmp/kafka-logs #directory where messages are stored; can be a comma-separated list of directories, in which case num.io.threads above should be at least the number of directories; when a new topic partition is created, it is placed in whichever listed directory currently holds the fewest partitions
- socket.send.buffer.bytes=102400 #socket send buffer size; data is buffered until it reaches a certain size before being sent, which improves performance
- socket.receive.buffer.bytes=102400 #socket receive buffer size; received data is buffered to a certain size before being written to disk
- socket.request.max.bytes=104857600 #maximum size of a single request sent to or fetched from Kafka; this value must not exceed the JVM heap size
- num.partitions=1 #default number of partitions per topic: one partition unless specified otherwise
- log.retention.hours=168 #default maximum retention time for messages: 168 hours, i.e. 7 days
- message.max.bytes=5242880 #maximum size of a single message: 5 MB
- default.replication.factor=2 #default number of replicas kept per partition, so that if one replica fails another can continue serving (note: a single-broker setup can only support a factor of 1)
- replica.fetch.max.bytes=5242880 #maximum number of bytes fetched per replica fetch request
- log.segment.bytes=1073741824 #Kafka appends messages to segment files on disk; when a segment exceeds this size, a new segment file is started
- log.retention.check.interval.ms=300000 #check every 300000 ms (5 minutes) whether any segments have exceeded the retention time configured above (log.retention.hours=168) and delete expired ones
- log.cleaner.enable=false #whether to enable log compaction; leave it disabled unless you use compacted topics
- zookeeper.connect=localhost:2181 #ZooKeeper connection string (host:port)
Those are the parameter explanations. The actual changes to make on the master machine are the following (a scripted alternative is sketched after the list):
- host.name=192.168.168.200 #the IP the broker binds to; uncomment this line
- log.dirs=/usr/local/kafka/kafka-logs
- zookeeper.connect=192.168.168.200:2181
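If you prefer to script these edits, here is a minimal sed sketch that applies the same three changes to a stock config (back up the file first, and substitute your own IP for 192.168.168.200):
- cd /usr/local/kafka/kafka_2.11-0.8.2.1
- cp config/server.properties config/server.properties.bak
- #uncomment host.name and bind it to this machine's IP
- sed -i 's/^#host.name=localhost/host.name=192.168.168.200/' config/server.properties
- #point log.dirs at the message directory created earlier
- sed -i 's|^log.dirs=/tmp/kafka-logs|log.dirs=/usr/local/kafka/kafka-logs|' config/server.properties
- #point zookeeper.connect at the standalone ZooKeeper
- sed -i 's/^zookeeper.connect=localhost:2181/zookeeper.connect=192.168.168.200:2181/' config/server.properties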
The complete server.properties after these changes:
- # Licensed to the Apache Software Foundation (ASF) under one or more
- # contributor license agreements. See the NOTICE file distributed with
- # this work for additional information regarding copyright ownership.
- # The ASF licenses this file to You under the Apache License, Version 2.0
- # (the "License"); you may not use this file except in compliance with
- # the License. You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- # see kafka.server.KafkaConfig for additional details and defaults
- ############################# Server Basics #############################
- # The id of the broker. This must be set to a unique integer for each broker.
- broker.id=0
- ############################# Socket Server Settings #############################
- # The port the socket server listens on
- port=9092
- # Hostname the broker will bind to. If not set, the server will bind to all interfaces
- host.name=192.168.168.200
- # Hostname the broker will advertise to producers and consumers. If not set, it uses the
- # value for "host.name" if configured. Otherwise, it will use the value returned from
- # java.net.InetAddress.getCanonicalHostName().
- #advertised.host.name=<hostname routable by clients>
- # The port to publish to ZooKeeper for clients to use. If this is not set,
- # it will publish the same port that the broker binds to.
- #advertised.port=<port accessible by clients>
- # The number of threads handling network requests
- num.network.threads=3
- # The number of threads doing disk I/O
- num.io.threads=8
- # The send buffer (SO_SNDBUF) used by the socket server
- socket.send.buffer.bytes=102400
- # The receive buffer (SO_RCVBUF) used by the socket server
- socket.receive.buffer.bytes=102400
- # The maximum size of a request that the socket server will accept (protection against OOM)
- socket.request.max.bytes=104857600
- ############################# Log Basics #############################
- # A comma separated list of directories under which to store log files
- log.dirs=/usr/local/kafka/kafka-logs
- # The default number of log partitions per topic. More partitions allow greater
- # parallelism for consumption, but this will also result in more files across
- # the brokers.
- num.partitions=1
- # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
- # This value is recommended to be increased for installations with data dirs located in RAID array.
- num.recovery.threads.per.data.dir=1
- ############################# Log Flush Policy #############################
- # Messages are immediately written to the filesystem but by default we only fsync() to sync
- # the OS cache lazily. The following configurations control the flush of data to disk.
- # There are a few important trade-offs here:
- # 1. Durability: Unflushed data may be lost if you are not using replication.
- # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
- # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
- # The settings below allow one to configure the flush policy to flush data after a period of time or
- # every N messages (or both). This can be done globally and overridden on a per-topic basis.
- # The number of messages to accept before forcing a flush of data to disk
- #log.flush.interval.messages=10000
- # The maximum amount of time a message can sit in a log before we force a flush
- #log.flush.interval.ms=1000
- ############################# Log Retention Policy #############################
- # The following configurations control the disposal of log segments. The policy can
- # be set to delete segments after a period of time, or after a given size has accumulated.
- # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
- # from the end of the log.
- # The minimum age of a log file to be eligible for deletion
- log.retention.hours=168
- # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
- # segments don't drop below log.retention.bytes.
- #log.retention.bytes=1073741824
- # The maximum size of a log segment file. When this size is reached a new log segment will be created.
- log.segment.bytes=1073741824
- # The interval at which log segments are checked to see if they can be deleted according
- # to the retention policies
- log.retention.check.interval.ms=300000
- # By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
- # If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
- log.cleaner.enable=false
- ############################# Zookeeper #############################
- # Zookeeper connection string (see zookeeper docs for details).
- # This is a comma separated host:port pairs, each corresponding to a zk
- # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
- # You can also append an optional chroot string to the urls to specify the
- # root directory for all kafka znodes.
- zookeeper.connect=192.168.168.200:2181
- # Timeout in ms for connecting to zookeeper
- zookeeper.connection.timeout.ms=6000
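To double-check that the three edits landed, grep for them from the installation directory:
- cd /usr/local/kafka/kafka_2.11-0.8.2.1
- grep -E '^(host\.name|log\.dirs|zookeeper\.connect)=' config/server.properties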
3.2 Configure /etc/profile
sudo gedit /etc/profile
Add the following:
- #set kafka environment
- export KAFKA_HOME=/usr/local/kafka/kafka_2.11-0.8.2.1
- export PATH=$KAFKA_HOME/bin:$PATH
source /etc/profile
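After sourcing the profile, verify that the Kafka scripts are on the PATH:
- echo $KAFKA_HOME
- which kafka-server-start.sh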
4. Start Kafka and test it
4.1 Start the ZooKeeper service
- [root@master]# /usr/local/zookeeper/zookeeper-3.4.6/bin/zkServer.sh start
4.2 Start the Kafka service
Start Kafka in the background:
- #change into the Kafka installation directory
- cd /usr/local/kafka/kafka_2.11-0.8.2.1
- #start Kafka
- bin/kafka-server-start.sh config/server.properties &
Verify with jps that both ZooKeeper (QuorumPeerMain) and Kafka are running:
- [root@master local]# jps
- 3584 Jps
- 3299 QuorumPeerMain
- 3519 Kafka
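Note that starting the broker with a trailing & ties it to the current terminal session, so it may be killed when the session ends. A common alternative (the log path here is an arbitrary choice) is nohup:
- nohup bin/kafka-server-start.sh config/server.properties > /usr/local/kafka/kafka-server.log 2>&1 &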
4.3 A topic example
4.3.1 Create a topic named test
kafka-topics.sh --create --zookeeper 192.168.168.200:2181 --replication-factor 1 --partitions 1 --topic test
- [root@master 桌面]# kafka-topics.sh --create --zookeeper 192.168.168.200:2181 --replication-factor 1 --partitions 1 --topic test
- Created topic "test".
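You can verify that the topic exists by listing or describing it:
- kafka-topics.sh --list --zookeeper 192.168.168.200:2181
- kafka-topics.sh --describe --zookeeper 192.168.168.200:2181 --topic test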
4.3.2 Start a console producer
kafka-console-producer.sh --broker-list 192.168.168.200:9092 --topic test
The console now reads from the keyboard; each line terminated with Enter is sent as one message.
- [root@master local]# kafka-console-producer.sh --broker-list 192.168.168.200:9092 --topic test
- [2017-07-11 20:54:43,465] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
- test
- 6666
- success
- nice
The warning "WARN Property topic is not valid (kafka.utils.VerifiableProperties)" printed at startup does not affect normal use and can be ignored.
4.3.3 Start a console consumer
kafka-console-consumer.sh --zookeeper 192.168.168.200:2181 --topic test --from-beginning
The console now waits for messages; each message sent from the producer appears on the consumer as soon as it is entered.
- [root@master local]# kafka-console-consumer.sh --zookeeper 192.168.168.200:2181 --topic test --from-beginning
- test
- 6666
- success
- nice
5. Stopping Kafka
kafka-server-stop.sh
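Since the broker depends on the standalone ZooKeeper, a full shutdown should stop Kafka first and ZooKeeper second:
- #stop the Kafka broker first
- kafka-server-stop.sh
- #then stop the standalone ZooKeeper
- /usr/local/zookeeper/zookeeper-3.4.6/bin/zkServer.sh stop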
Setup complete!
Adapted from: http://blog.csdn.net/sand_clock/article/details/67633433