Kafka Configuration Files Explained
# Broker identity: must be unique within the cluster
broker.id=0
# Threads for network request handling and for request processing (disk I/O)
num.network.threads=9
num.io.threads=24
# Socket buffer sizes and the maximum accepted request size (100 MB)
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# Listener definition; the legacy port/host.name settings are superseded by listeners
listeners=PLAINTEXT://:9092
port=9092
host.name=
# Directories (comma-separated list allowed) where log segments are stored
log.dirs=/home/service/var/kafka
# Default partition count for auto-created topics
num.partitions=12
# Replication factor of the internal __consumer_offsets topic
offsets.topic.replication.factor=2
# Minimum in-sync replicas for the transaction state log
transaction.state.log.min.isr=1
# Keep log segments for 72 hours, checking for expired segments every 5 minutes
log.retention.hours=72
log.retention.check.interval.ms=300000
# ZooKeeper ensemble, rooted at the /security-kafka chroot
zookeeper.connect=10.12.176.3:2181,10.12.172.32:2181,10.12.174.14:2181/security-kafka
zookeeper.connection.timeout.ms=6000
# Time the coordinator waits for consumers to join a new group before the first rebalance
group.initial.rebalance.delay.ms=3000
# Enable the log cleaner (required for compacted topics) and allow topic deletion
log.cleaner.enable=true
delete.topic.enable=true
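With the file above saved as config/server.properties (the path is an assumption, matching the stock distribution layout), the broker can be started with the launcher shipped with Kafka:
sh bin/kafka-server-start.sh -daemon config/server.properties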
1. Contents of addrep_cpd-app-down.json, a reassignment plan giving each partition of cpd-app-down two replicas:
{"version":1, "partitions":[
{"topic":"cpd-app-down","partition":0,"replicas":[1,2]},
{"topic":"cpd-app-down","partition":1,"replicas":[2,3]},
{"topic":"cpd-app-down","partition":2,"replicas":[3,4]},
{"topic":"cpd-app-down","partition":3,"replicas":[4,5]},
{"topic":"cpd-app-down","partition":4,"replicas":[5,6]},
{"topic":"cpd-app-down","partition":5,"replicas":[6,0]},
{"topic":"cpd-app-down","partition":6,"replicas":[0,1]},
{"topic":"cpd-app-down","partition":7,"replicas":[1,2]},
{"topic":"cpd-app-down","partition":8,"replicas":[2,3]},
{"topic":"cpd-app-down","partition":9,"replicas":[3,4]},
{"topic":"cpd-app-down","partition":10,"replicas":[4,5]},
{"topic":"cpd-app-down","partition":11,"replicas":[5,6]},
{"topic":"cpd-app-down","partition":12,"replicas":[6,0]},
{"topic":"cpd-app-down","partition":13,"replicas":[0,1]}
] }
2. Apply the plan with kafka-reassign-partitions.sh:
sh kafka-reassign-partitions.sh --zookeeper 10.6.72.38:2181,10.6.72.8:2181 --reassignment-json-file ../config/addrep_cpd-app-down.json --execute
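After --execute has been issued, the same tool reports per-partition completion status via --verify; a usage sketch against the same ensemble and plan file:
sh kafka-reassign-partitions.sh --zookeeper 10.6.72.38:2181,10.6.72.8:2181 --reassignment-json-file ../config/addrep_cpd-app-down.json --verify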
A second server.properties template, with values left blank for per-environment tuning:
broker.id=
listeners=PLAINTEXT://10.32.104.37:9092
num.network.threads=
num.io.threads=
socket.send.buffer.bytes=
socket.receive.buffer.bytes=
socket.request.max.bytes=
log.dirs=/var/data/kafka
num.partitions=
num.recovery.threads.per.data.dir=
log.retention.hours=
log.segment.bytes=
log.retention.check.interval.ms=
log.cleaner.enable=true
zookeeper.connect=10.32.106.42:2181,10.32.114.34:2181,10.32.104.37:2181
zookeeper.connection.timeout.ms=
delete.topic.enable=true
transaction.state.log.min.isr=
default.replication.factor=
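Once a broker configured from this template is up, its registration can be spot-checked with the zookeeper-shell tool bundled with Kafka (a quick sanity check, not part of the original steps):
sh bin/zookeeper-shell.sh 10.32.106.42:2181 ls /brokers/ids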
1. Create topics-to-move.json listing the topics to migrate:
{"topics":
[{"topic": "TestSing"}],
"version":1
}
2. Generate the reassignment plan (JSON) for moving the topic to the new brokers:
sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --topics-to-move-json-file topics-to-move.json --broker-list "3,4,5" --generate
3. Run the tool with the saved plan file to start the migration:
sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --reassignment-json-file config/testsing.json --execute
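Note that --generate only prints the current and the proposed assignments to stdout; the proposed JSON is what gets saved by hand as config/testsing.json for the step above. While the migration runs, the same plan file drives the progress check:
sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --reassignment-json-file config/testsing.json --verify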
Rolling Kafka's service logs by file size: log4j.properties
The change: comment out the default DailyRollingFileAppender and switch to a size-capped RollingFileAppender:
#log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=500MB
log4j.appender.kafkaAppender.MaxBackupIndex=5
The complete log4j.properties after the change:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=500MB
log4j.appender.kafkaAppender.MaxBackupIndex=5
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.stateChangeAppender.MaxFileSize=500MB
log4j.appender.stateChangeAppender.MaxBackupIndex=5
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=org.apache.log4j.RollingFileAppender
log4j.appender.requestAppender.MaxFileSize=500MB
log4j.appender.requestAppender.MaxBackupIndex=5
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.cleanerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.cleanerAppender.MaxFileSize=500MB
log4j.appender.cleanerAppender.MaxBackupIndex=5
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.MaxFileSize=500MB
log4j.appender.controllerAppender.MaxBackupIndex=5
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.authorizerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.authorizerAppender.MaxFileSize=500MB
log4j.appender.authorizerAppender.MaxBackupIndex=5
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka=INFO, kafkaAppender
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.controller=INFO, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.logger.state.change.logger=INFO, stateChangeAppender
log4j.additivity.state.change.logger=false
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
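All the appenders above write under ${kafka.logs.dir}. In the stock startup scripts that system property is derived from the LOG_DIR environment variable by kafka-run-class.sh, so the log destination can be redirected at launch time; a sketch, with /var/log/kafka as an arbitrary example path:
export LOG_DIR=/var/log/kafka
sh bin/kafka-server-start.sh -daemon config/server.properties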