Kafka 0.8: The Multiple Log Directory Mechanism
Kafka 0.7.2 defined log.dir as follows:

| Property | Default | Description |
| --- | --- | --- |
| log.dir | none | Specifies the root directory in which all log data is kept. |
In Kafka 0.8, log.dir was replaced by log.dirs, which the official documentation describes as follows:

| Property | Default | Description |
| --- | --- | --- |
| log.dirs | /tmp/kafka-logs | A comma-separated list of one or more directories in which Kafka data is stored. Each new partition that is created will be placed in the directory which currently has the fewest partitions. |
Starting with 0.8, multiple log directories can be configured, separated by commas. This is a big win in real deployments, because it lets a single broker spread its data across multiple disks.
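For example, a broker's server.properties might point one log directory at each physical disk (the paths below are made up for illustration):

```
# hypothetical layout: one log directory per physical disk
log.dirs=/disk1/kafka-logs,/disk2/kafka-logs,/disk3/kafka-logs
```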
The rest of this post walks through the source code to see how multiple log directories actually work.
1. When the broker starts, it loads the properties file named on the command line and passes the resulting Properties object to KafkaConfig:
```scala
object Kafka extends Logging {
  // in main(): load the configuration file named on the command line
  try {
    val props = Utils.loadProps(args(0))
    val serverConfig = new KafkaConfig(props)
    // ...
```
2. KafkaConfig parses the log.dirs string (falling back to log.dir, then to /tmp/kafka-logs) and splits it on commas into a Seq. The split pattern "\\s*,\\s*" means whitespace on either side of each comma is ignored:
```scala
/* the directories in which the log data is kept */
val logDirs = Utils.parseCsvList(props.getString("log.dirs", props.getString("log.dir", "/tmp/kafka-logs")))
require(logDirs.size > 0)
```
parseCsvList itself lives in kafka.utils.Utils:

```scala
/**
 * Parse a comma separated string into a sequence of strings.
 * Whitespace surrounding the comma will be removed.
 */
def parseCsvList(csvList: String): Seq[String] = {
  if(csvList == null || csvList.isEmpty)
    Seq.empty[String]
  else {
    csvList.split("\\s*,\\s*").filter(v => !v.equals(""))
  }
}
```
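As a quick sanity check, here is a sketch of what the parsing yields (the directory names are invented):

```scala
import kafka.utils.Utils

// whitespace around commas is stripped and empty entries are dropped
val dirs = Utils.parseCsvList("/disk1/kafka-logs, /disk2/kafka-logs ,,/disk3/kafka-logs")
// dirs == Seq("/disk1/kafka-logs", "/disk2/kafka-logs", "/disk3/kafka-logs")
```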
3. When KafkaServer constructs the LogManager, it wraps each configured path in a java.io.File and passes the resulting array in:
```scala
new LogManager(logDirs = config.logDirs.map(new File(_)).toArray,
               topicConfigs = configs,
               defaultConfig = defaultLogConfig,
               cleanerConfig = cleanerConfig,
               flushCheckMs = config.logFlushSchedulerIntervalMs,
               flushCheckpointMs = config.logFlushOffsetCheckpointIntervalMs,
               retentionCheckMs = config.logCleanupIntervalMs,
               scheduler = kafkaScheduler,
               time = time)
```
4. LogManager first validates the given directories: it checks for duplicates in the list, creates any directory that does not yet exist, and verifies that each path is a readable directory:
```scala
/**
 * Create and check validity of the given directories, specifically:
 * <ol>
 * <li> Ensure that there are no duplicates in the directory list
 * <li> Create each directory if it doesn't exist
 * <li> Check that each path is a readable directory
 * </ol>
 */
private def createAndValidateLogDirs(dirs: Seq[File]) {
  if(dirs.map(_.getCanonicalPath).toSet.size < dirs.size)
    throw new KafkaException("Duplicate log directory found: " + logDirs.mkString(", "))
  for(dir <- dirs) {
    if(!dir.exists) {
      info("Log directory '" + dir.getAbsolutePath + "' not found, creating it.")
      val created = dir.mkdirs()
      if(!created)
        throw new KafkaException("Failed to create data directory " + dir.getAbsolutePath)
    }
    if(!dir.isDirectory || !dir.canRead)
      throw new KafkaException(dir.getAbsolutePath + " is not a readable log directory.")
  }
}
```
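Note that the duplicate check compares canonical paths, so two different spellings of the same directory are still caught. A tiny illustration (the paths are made up):

```scala
import java.io.File

// "/data/../data/kafka-logs" resolves to the same canonical path as "/data/kafka-logs"
val a = new File("/data/kafka-logs")
val b = new File("/data/../data/kafka-logs")
println(a.getCanonicalPath == b.getCanonicalPath) // true, so this pair counts as a duplicate
```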
5. LogManager then acquires a file lock in every directory, preventing other processes from operating on the same directory:
```scala
/**
 * Lock all the given directories
 */
private def lockLogDirs(dirs: Seq[File]): Seq[FileLock] = {
  dirs.map { dir =>
    val lock = new FileLock(new File(dir, LockFile))
    if(!lock.tryLock())
      throw new KafkaException("Failed to acquire lock on file .lock in " + lock.file.getParentFile.getAbsolutePath +
                               ". A Kafka instance in another process or thread is using this directory.")
    lock
  }
}
```
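FileLock here is Kafka's small wrapper around java.nio file locking. A minimal sketch of the idea (this is not the actual kafka.utils.FileLock source):

```scala
import java.io.{File, RandomAccessFile}
import java.nio.channels.OverlappingFileLockException

// sketch: hold an OS-level exclusive lock on a marker file such as ".lock"
class SimpleFileLock(val file: File) {
  file.createNewFile() // create the lock file if it does not exist yet
  private val channel = new RandomAccessFile(file, "rw").getChannel

  // true if we got the lock; false if another process (or this JVM) already holds it
  def tryLock(): Boolean =
    try {
      channel.tryLock() != null
    } catch {
      case _: OverlappingFileLockException => false
    }
}
```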
6. Using the recovery-point-offset-checkpoint file in each directory, LogManager recovers and loads the partition directories beneath it:
```scala
/**
 * Recover and load all logs in the given data directories
 */
private def loadLogs(dirs: Seq[File]) {
  for(dir <- dirs) {
    val recoveryPoints = this.recoveryPointCheckpoints(dir).read
    /* load the logs */
    val subDirs = dir.listFiles()
    if(subDirs != null) {
      // on a clean shutdown Kafka leaves a .kafka_cleanshutdown marker file in the data
      // directory; if it is present, recovery can be skipped for the logs in this directory
      val cleanShutDownFile = new File(dir, Log.CleanShutdownFile)
      if(cleanShutDownFile.exists())
        info("Found clean shutdown file. Skipping recovery for all logs in data directory '%s'".format(dir.getAbsolutePath))
      for(dir <- subDirs) {
        if(dir.isDirectory) {
          info("Loading log '" + dir.getName + "'")
          val topicPartition = Log.parseTopicPartitionName(dir.getName)
          val config = topicConfigs.getOrElse(topicPartition.topic, defaultConfig)
          val log = new Log(dir,
                            config,
                            recoveryPoints.getOrElse(topicPartition, 0L),
                            scheduler,
                            time)
          val previous = this.logs.put(topicPartition, log)
          if(previous != null)
            throw new IllegalArgumentException("Duplicate log directories found: %s, %s!".format(log.dir.getAbsolutePath, previous.dir.getAbsolutePath))
        }
      }
      cleanShutDownFile.delete()
    }
  }
}
```
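The recovery-point-offset-checkpoint file itself is a small text file. As far as I can tell, its layout in 0.8 is a version line, an entry count, and then one `topic partition offset` triple per line, roughly like this (topic names and offsets invented):

```
0
2
my-topic 0 1024
my-topic 1 998
```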
7. When a new log needs to be created, it is placed in the directory that currently holds the fewest logs. The comment in the source explains this well:
```scala
/**
 * Choose the next directory in which to create a log. Currently this is done
 * by calculating the number of partitions in each directory and then choosing the
 * data directory with the fewest partitions.
 */
private def nextLogDir(): File = {
  if(logDirs.size == 1) {
    logDirs(0)
  } else {
    // count the number of logs in each parent directory (including 0 for empty directories)
    val logCounts = allLogs.groupBy(_.dir.getParent).mapValues(_.size)
    // seed every configured directory with a count of 0, so directories that
    // hold no logs yet still appear in the ranking
    val zeros = logDirs.map(dir => (dir.getPath, 0)).toMap
    var dirCounts = (zeros ++ logCounts).toBuffer
    // choose the directory with the least logs in it
    val leastLoaded = dirCounts.sortBy(_._2).head
    new File(leastLoaded._1)
  }
}
```
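To make the behavior concrete, here is a re-creation of the selection logic on invented counts, outside of LogManager:

```scala
// invented state: disk1 holds 3 logs, disk2 holds 1, disk3 holds none
val logCounts = Map("/disk1/kafka-logs" -> 3, "/disk2/kafka-logs" -> 1)
val zeros = Seq("/disk1/kafka-logs", "/disk2/kafka-logs", "/disk3/kafka-logs")
  .map(dir => (dir, 0)).toMap
val dirCounts = (zeros ++ logCounts).toBuffer
println(dirCounts.sortBy(_._2).head._1) // prints /disk3/kafka-logs
```

Note that the balancing is purely by partition count: neither partition size nor remaining disk space is considered, so heavily skewed partitions can still fill one disk before the others.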