Hadoop Ecosystem - Kafka Configuration File Explained

                                      Author: 尹正杰 (Yin Zhengjie)

Copyright notice: This is an original work. Reproduction is not permitted; violations will be pursued legally.

一. Contents of the default Kafka configuration file ([yinzhengjie@s101 ~]$ more /soft/kafka/config/server.properties)

[yinzhengjie@s101 ~]$ more /soft/kafka/config/server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://s101:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/home/yinzhengjie/kafka/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=s102:2181,s103:2181,s104:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
[yinzhengjie@s101 ~]$

二. Commonly used Kafka parameters

1>.broker.id

  Answer: The unique identifier of a Kafka broker (similar to ZooKeeper's myid). If several Kafka nodes share the same broker.id, only one of them can serve normally.
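
  For instance, in a three-broker cluster each node's server.properties carries its own id; a minimal sketch (the ids and broker labels here are illustrative, not this deployment's actual values):

    # server.properties on the first broker
    broker.id=1
    # server.properties on the second broker
    broker.id=2
    # server.properties on the third broker
    broker.id=3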

2>.listeners=PLAINTEXT://s101:9092

  Answer: listeners sets the address and port the broker listens on. PLAINTEXT means the plaintext protocol (whatever data type you send, including pictures, videos and so on, is received as raw, unencrypted bytes; opening it directly may well look like gibberish), and s101:9092 is the hostname and port.
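
  When clients must reach the broker under a different name than the one it binds to (NAT, containers, public DNS), advertised.listeners can be set separately from listeners; a sketch with an assumed external hostname:

    # address the broker binds to
    listeners=PLAINTEXT://s101:9092
    # address handed out to producers and consumers (hypothetical public name)
    advertised.listeners=PLAINTEXT://kafka.example.com:9092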

3>.num.network.threads=3

  Answer: The default number of network threads is 3; they receive requests from the network and send responses back.

4>.num.io.threads=8

  Answer: The default number of I/O threads is 8; they process requests and may perform disk I/O.
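
  Both thread pools can be raised on busier brokers; a tuning sketch (the values are assumptions for illustration only; one oft-cited rule of thumb sizes num.io.threads to at least the number of data disks):

    num.network.threads=6
    num.io.threads=16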

5>.socket.send.buffer.bytes=102400

  Answer: The default socket send buffer (SO_SNDBUF) is 102400 bytes, i.e. 100 KB.

6>.socket.receive.buffer.bytes=102400

  Answer: The default socket receive buffer (SO_RCVBUF) is 102400 bytes, i.e. 100 KB.

7>.socket.request.max.bytes=104857600

  Answer: The maximum request size the server will accept is 104857600 bytes, i.e. 100 MB (a protection against OOM).
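
  These three socket settings usually travel together; a sketch that doubles the buffers for a higher-throughput network (the doubled values are assumptions, not defaults):

    socket.send.buffer.bytes=204800      # 200 KB SO_SNDBUF
    socket.receive.buffer.bytes=204800   # 200 KB SO_RCVBUF
    socket.request.max.bytes=104857600   # 100 MB = 100 * 1024 * 1024 bytes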

8>.log.dirs=/home/yinzhengjie/kafka/logs

  Answer: Specifies where Kafka stores its actual message data. Here I keep the data under "/home/yinzhengjie/kafka/logs". Despite the name, this directory holds the partition data files, not the broker's runtime logs.
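
  Once a topic holds data, each partition appears as its own directory under log.dirs; an illustrative listing (the topic name "test" and the zeroed offsets are hypothetical):

    [yinzhengjie@s101 ~]$ ls /home/yinzhengjie/kafka/logs/test-0
    00000000000000000000.index  00000000000000000000.log  00000000000000000000.timeindex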

9>.num.partitions=1

  Answer: The default number of partitions per topic is 1. It applies to topics created without an explicit partition count, such as auto-created topics.
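
  To set the partition count explicitly at creation time, the ZooKeeper-era CLI of this Kafka generation can be used; a sketch (the topic name is hypothetical; newer releases use --bootstrap-server instead of --zookeeper):

    [yinzhengjie@s101 ~]$ kafka-topics.sh --zookeeper s102:2181,s103:2181,s104:2181 --create --topic test --partitions 3 --replication-factor 1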

10>.num.recovery.threads.per.data.dir=1

  Answer: The default number of recovery threads per data directory is 1. They handle log recovery at startup and flushing at shutdown; increase this when log.dirs sits on a RAID array.

11>.log.retention.hours=168

  Answer: Data is retained for 168 hours by default, i.e. one week (7 days).
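
  Retention can also be expressed in minutes or milliseconds, and the finer-grained setting takes precedence (log.retention.ms over log.retention.minutes over log.retention.hours); a sketch shortening retention to 3 days:

    log.retention.hours=72
    # or the equivalent, higher-priority millisecond form:
    #log.retention.ms=259200000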

12>.log.segment.bytes=1073741824

  Answer: Caps each log segment file at 1073741824 bytes (1 GB); when a segment reaches this size, Kafka automatically rolls over to a new segment.
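
  Each segment file is named after the offset of its first message, so a roll shows up as a new zero-padded file next to the old one; an illustrative listing (the offsets are hypothetical):

    00000000000000000000.log    # first segment, offsets 0 through 368769
    00000000000000368770.log    # rolled segment, starts at offset 368770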

13>.log.retention.check.interval.ms=300000

  Answer: Every 300000 ms (300 seconds, i.e. 5 minutes) Kafka checks whether any log segments have expired; data older than the retention period (7 days by default) is deleted from the log.

14>.zookeeper.connect=s102:2181,s103:2181,s104:2181

  Answer: Specifies the ZooKeeper servers; multiple servers are separated with commas.
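
  As the file's own comment notes, an optional chroot path can be appended so that all Kafka znodes live under one ZooKeeper directory; a sketch (the /kafka chroot is an assumption, not this deployment's setting):

    zookeeper.connect=s102:2181,s103:2181,s104:2181/kafka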

15>.zookeeper.connection.timeout.ms=6000

  Answer: Sets the ZooKeeper connection timeout (6000 ms, i.e. 6 seconds, by default); if the broker still cannot connect when this time runs out, the node is treated as down.

