Kafka Series 1: Basic Installation
Part 1. Configuration File (download and extraction steps skipped here)
config/server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# Globally unique broker ID; must not be duplicated across brokers.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
# Number of threads handling network requests
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
# Number of threads handling disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
# Socket send buffer size
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
# Socket receive buffer size
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
# Maximum size of a request the broker will accept
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Where Kafka stores its message data (these are Kafka's data "logs", not the runtime logs)
log.dirs=/home/hadoop/logs/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Default number of partitions per topic
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
# Number of threads used to recover and flush the data under each data directory
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, e.g. 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
# Maximum time a segment file is retained; expired segments are deleted
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper ensemble addresses
zookeeper.connect=hadoop2:2181,hadoop3:2181,hadoop4:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
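Since every broker needs a distinct broker.id, the package can be distributed and the config patched in one small loop. A minimal sketch, assuming passwordless SSH between the nodes and Kafka unpacked under /home/hadoop/kafka (that install path is an assumption, not stated above):

# run from hadoop2, which keeps broker.id=0; /home/hadoop/kafka is a hypothetical path
id=1
for host in hadoop3 hadoop4; do
  scp -r /home/hadoop/kafka ${host}:/home/hadoop/
  # give each remote broker a unique id
  ssh ${host} "sed -i 's/^broker.id=.*/broker.id=${id}/' /home/hadoop/kafka/config/server.properties"
  id=$((id+1))
done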
Part 2. Starting the Cluster
Distribute the installation package to every node and set a unique broker.id in each node's config file; IDs must not be repeated (a scripted version of this step appears at the end of the previous section).
Start the ZooKeeper cluster (hadoop2, hadoop3, hadoop4).
Start Kafka on each node in turn:
bin/kafka-server-start.sh config/server.properties
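Run this way, the broker stays in the foreground. kafka-server-start.sh also accepts a -daemon flag, so the whole cluster can be brought up from one node; a sketch assuming passwordless SSH and the same hypothetical /home/hadoop/kafka path as above:

for host in hadoop2 hadoop3 hadoop4; do
  ssh ${host} "cd /home/hadoop/kafka && bin/kafka-server-start.sh -daemon config/server.properties"
done
# each broker should now have registered an ephemeral znode in ZooKeeper
bin/zookeeper-shell.sh hadoop2:2181 ls /brokers/ids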
Part 3. Common Commands
1. List all topics in the cluster
bin/kafka-topics.sh --list --zookeeper hadoop2:2181
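The output is one topic name per line; once the topic created in the next step exists and consumers have committed offsets, it would look something like:

__consumer_offsets
first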
2. Create a topic
bin/kafka-topics.sh --create --zookeeper hadoop2:2181 --replication-factor 1 --partitions 3 --topic first
--replication-factor 1: number of replicas
--partitions 3: number of partitions
--topic first: name of the topic
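Note that the replication factor cannot exceed the number of live brokers. On this three-node cluster a more production-like topic would use three replicas; the topic name second below is purely illustrative:

bin/kafka-topics.sh --create --zookeeper hadoop2:2181 --replication-factor 3 --partitions 3 --topic second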
3. Delete a topic
bin/kafka-topics.sh --delete --zookeeper hadoop2:2181 --topic first
This requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion (alternatively, restart the brokers afterwards to complete the deletion).
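One way to enable it is to append the setting to every broker's config and restart; a sketch reusing the SSH loop and the hypothetical install path from above:

for host in hadoop2 hadoop3 hadoop4; do
  ssh ${host} "echo 'delete.topic.enable=true' >> /home/hadoop/kafka/config/server.properties"
done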
4. Send messages from the shell
bin/kafka-console-producer.sh --broker-list hadoop2:9092 --topic first
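The producer shows a > prompt and sends each typed line as one message, e.g.:

> hello kafka
> this is message two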
5. Consume messages from the shell
bin/kafka-console-consumer.sh --zookeeper hadoop2:2181 --from-beginning --topic first
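The --zookeeper flag selects the old consumer. From Kafka 0.10 onward the console consumer can talk to the brokers directly, and recent releases have removed the ZooKeeper option entirely; the equivalent new-consumer command is:

bin/kafka-console-consumer.sh --bootstrap-server hadoop2:9092 --from-beginning --topic first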
6. Check consumer offsets
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper hadoop2:2181 --group testGroup
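ConsumerOffsetChecker is deprecated (and removed in later releases); on newer versions the same information comes from kafka-consumer-groups.sh:

bin/kafka-consumer-groups.sh --bootstrap-server hadoop2:9092 --describe --group testGroup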
7. Describe a topic
bin/kafka-topics.sh --topic first --describe --zookeeper hadoop2:2181
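The output lists the partition count, replication factor, and the per-partition leader/replica/ISR assignment, along these lines (the broker ids shown are illustrative):

Topic:first	PartitionCount:3	ReplicationFactor:1	Configs:
	Topic: first	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
	Topic: first	Partition: 1	Leader: 1	Replicas: 1	Isr: 1
	Topic: first	Partition: 2	Leader: 2	Replicas: 2	Isr: 2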