Kafka study notes: configuration explained

An annotated server.properties for a Kafka broker, taken from a small test cluster (hosts in the 192.168.14.x range), with usage notes interspersed.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=192.168.14.105

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=192.168.14.105

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
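
Note: clients connect to the advertised address, not the bind address, so it is worth verifying reachability from a client machine. A minimal check with the console producer that ships with Kafka (the topic name here is illustrative; with auto.create.topics.enable=true, set further down, it is created on first use):

    bin/kafka-console-producer.sh --broker-list 192.168.14.105:9092 --topic connectivity-test
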
# The number of threads handling network requests
num.network.threads=4

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=8
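
Note: num.partitions only applies to auto-created topics; a topic created explicitly gets whatever you ask for. A sketch (topic name and counts are illustrative; the replication factor cannot exceed the number of brokers):

    bin/kafka-topics.sh --create --zookeeper 192.168.14.100:2181 --topic page-views --partitions 16 --replication-factor 2
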
# The number of threads per data directory to be used for log recovery at startup
# and flushing at shutdown. This value is recommended to be increased for
# installations with data dirs located in a RAID array.
num.recovery.threads.per.data.dir=1
# Allow topics to be created automatically when they are first produced to or fetched
auto.create.topics.enable=true

# Add an entry to the offset index once this many bytes of messages have been appended
log.index.interval.bytes=4096

# The maximum size of a segment's offset index; a new segment is rolled when the index fills up
log.index.size.max.bytes=10485760

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=20000

# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=10000

# How frequently the log flusher checks whether any log needs to be flushed to disk
log.flush.scheduler.interval.ms=2000
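
Note: as the comments above say, the flush policy can be overridden per topic via the topic-level flush.messages and flush.ms settings. A sketch using the CLI of this Kafka generation (newer releases move this to kafka-configs.sh; topic name and values are illustrative):

    bin/kafka-topics.sh --alter --zookeeper 192.168.14.100:2181 --topic page-views --config flush.messages=10000 --config flush.ms=1000
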
############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824
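
Note: retention can likewise be overridden per topic, e.g. to keep a single high-volume topic for only one day (86400000 ms); same CLI caveat as above, and the topic name and value are illustrative:

    bin/kafka-topics.sh --alter --zookeeper 192.168.14.100:2181 --topic page-views --config retention.ms=86400000
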
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just
# delete segments after their retention expires. If log.cleaner.enable=true is set, the
# cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false
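
Note: to use compaction, set log.cleaner.enable=true here and mark individual topics with cleanup.policy=compact. A sketch for a hypothetical topic of keyed updates, where only the latest value per key needs to be kept:

    bin/kafka-topics.sh --create --zookeeper 192.168.14.100:2181 --topic user-profiles --partitions 8 --replication-factor 2 --config cleanup.policy=compact
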

############################# Partition Replicas #############################

# Number of fetcher threads used to replicate messages from partition leaders
num.replica.fetchers=4

# The number of bytes of messages to attempt to fetch for each partition
replica.fetch.max.bytes=1048576

# Maximum wait time for each fetch request issued by follower replicas
replica.fetch.wait.max.ms=500

# The frequency with which each replica saves its high watermark to disk
replica.high.watermark.checkpoint.interval.ms=5000

# The socket timeout for controller-to-broker channels
controller.socket.timeout.ms=30000

# The buffer size for controller-to-broker channels
controller.message.queue.size=10

# If a follower hasn't sent any fetch requests for this long, the leader removes it from the ISR
replica.lag.time.max.ms=10000

# If a replica falls more than this many messages behind the leader, it is removed from the ISR
replica.lag.max.messages=4000

# The socket timeout for replica fetch requests to partition leaders
replica.socket.timeout.ms=30000

# The socket receive buffer (SO_RCVBUF) for replica fetch requests
replica.socket.receive.buffer.bytes=65536
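
Note: the effect of the replica.lag.* settings shows up in the ISR column of --describe, which lists each partition's leader, replica set, and in-sync replicas (topic name illustrative):

    bin/kafka-topics.sh --describe --zookeeper 192.168.14.100:2181 --topic page-views
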

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma-separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.14.100:2181,192.168.14.105:2181,192.168.14.102:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

# How far, in ms, a ZK follower can be behind a ZK leader
zookeeper.sync.time.ms=2000

# ZooKeeper session timeout; the broker is considered dead if its session expires
zookeeper.session.timeout.ms=6000
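
Note: once the brokers are up, a quick sanity check is to list the broker ids registered in ZooKeeper. A sketch with the zookeeper-shell script bundled with Kafka (the command is piped in because older versions of the script only read from stdin):

    echo "ls /brokers/ids" | bin/zookeeper-shell.sh 192.168.14.100:2181
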