Installing a Kafka Cluster
1. Extract the tar package
- tar -zxvf kafka_2.11-1.1.0.tgz
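The archive needs to be unpacked on every node of the cluster. A minimal sketch, assuming /app/kafka as the install root (an assumption; it matches the log.dirs path configured below):
- mkdir -p /app/kafka
- tar -zxvf kafka_2.11-1.1.0.tgz -C /app/kafka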
2. Enter the config directory
3. Configure the server.properties file. The key values to change on each broker are broker.id, listeners, advertised.listeners, log.dirs, and zookeeper.connect:
- # see kafka.server.KafkaConfig for additional details and defaults
- ############################# Server Basics #############################
- # The id of the broker. This must be set to a unique integer for each broker.
- # Note: broker.id must be unique within the cluster
- broker.id=2
- ############################# Socket Server Settings #############################
- # The address the socket server listens on. It will get the value returned from
- # java.net.InetAddress.getCanonicalHostName() if not configured.
- # FORMAT:
- # listeners = listener_name://host_name:port
- # EXAMPLE:
- # listeners = PLAINTEXT://your.host.name:9092
- # Uncomment this line
- listeners=PLAINTEXT://:9092
- # Hostname and port the broker will advertise to producers and consumers. If not set,
- # it uses the value for "listeners" if configured. Otherwise, it will use the value
- # returned from java.net.InetAddress.getCanonicalHostName()
- # Set this to the broker's own IP
- advertised.listeners=PLAINTEXT://192.168.5.102:9092
- # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
- #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
- # The number of threads that the server uses for receiving requests from the network and sending responses to the network
- num.network.threads=3
- # The number of threads that the server uses for processing requests, which may include disk I/O
- num.io.threads=8
- # The send buffer (SO_SNDBUF) used by the socket server
- socket.send.buffer.bytes=102400
- # The receive buffer (SO_RCVBUF) used by the socket server
- socket.receive.buffer.bytes=102400
- # The maximum size of a request that the socket server will accept (protection against OOM)
- socket.request.max.bytes=104857600
- ############################# Log Basics #############################
- # A comma separated list of directories under which to store log files
- # Set the data directory here
- log.dirs=/app/kafka/log
- # The default number of log partitions per topic. More partitions allow greater
- # parallelism for consumption, but this will also result in more files across
- # the brokers.
- num.partitions=1
- # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
- # This value is recommended to be increased for installations with data dirs located in RAID array.
- num.recovery.threads.per.data.dir=1
- ############################# Internal Topic Settings #############################
- # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
- # For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
- offsets.topic.replication.factor=1
- transaction.state.log.replication.factor=1
- transaction.state.log.min.isr=1
- ############################# Log Flush Policy #############################
- # Messages are immediately written to the filesystem but by default we only fsync() to sync
- # the OS cache lazily. The following configurations control the flush of data to disk.
- # There are a few important trade-offs here:
- # 1. Durability: Unflushed data may be lost if you are not using replication.
- # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
- # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
- # The settings below allow one to configure the flush policy to flush data after a period of time or
- # every N messages (or both). This can be done globally and overridden on a per-topic basis.
- # The number of messages to accept before forcing a flush of data to disk
- #log.flush.interval.messages=10000
- # The maximum amount of time a message can sit in a log before we force a flush
- #log.flush.interval.ms=1000
- ############################# Log Retention Policy #############################
- # The following configurations control the disposal of log segments. The policy can
- # be set to delete segments after a period of time, or after a given size has accumulated.
- # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
- # from the end of the log.
- # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
- # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
- #log.retention.bytes=1073741824
- # The maximum size of a log segment file. When this size is reached a new log segment will be created.
- log.segment.bytes=1073741824
- # The interval at which log segments are checked to see if they can be deleted according
- # to the retention policies
- log.retention.check.interval.ms=300000
- ############################# Zookeeper #############################
- # Zookeeper connection string (see zookeeper docs for details).
- # This is a comma separated host:port pairs, each corresponding to a zk
- # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
- # You can also append an optional chroot string to the urls to specify the
- # root directory for all kafka znodes.
- # Configure the ZooKeeper cluster addresses
- zookeeper.connect=192.168.5.101:2181,192.168.5.102:2181,192.168.5.103:2181
- # Timeout in ms for connecting to zookeeper
- zookeeper.connection.timeout.ms=6000
- ############################# Group Coordinator Settings #############################
- # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
- # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
- # The default value for this is 3 seconds.
- # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
- # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
- group.initial.rebalance.delay.ms=0
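The same file can then be copied to the other two nodes; only broker.id and advertised.listeners differ per broker. A hedged sketch, assuming brokers 192.168.5.101/102/103 carry broker.id 1/2/3 and that the config lives under the install root used above (both are assumptions):
- scp server.properties 192.168.5.101:/app/kafka/kafka_2.11-1.1.0/config/
- # on 192.168.5.101, adjust the two per-broker values:
- sed -i 's/^broker.id=.*/broker.id=1/' server.properties
- sed -i 's|^advertised.listeners=.*|advertised.listeners=PLAINTEXT://192.168.5.101:9092|' server.properties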
4. Enter the bin directory
Start Kafka in the background:
./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
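To confirm the broker came up, check for the Kafka process and watch the server log (../logs is Kafka's default log directory relative to bin):
jps
tail -f ../logs/server.log
A line like "[KafkaServer id=2] started (kafka.server.KafkaServer)" indicates a clean start.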
Start with a specified JMX port:
JMX_PORT=2898 ./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
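Once all three brokers are up, a quick smoke test is to create a replicated topic and send a message through it. A sketch assuming the cluster layout above (the topic name test is arbitrary; this Kafka version's topic tool still talks to ZooKeeper):
./kafka-topics.sh --create --zookeeper 192.168.5.101:2181 --replication-factor 3 --partitions 3 --topic test
./kafka-topics.sh --describe --zookeeper 192.168.5.101:2181 --topic test
./kafka-console-producer.sh --broker-list 192.168.5.102:9092 --topic test
./kafka-console-consumer.sh --bootstrap-server 192.168.5.102:9092 --topic test --from-beginning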