Flume User Manual
The following notes are all taken from the official documentation:
1: The version used here is Flume 1.8.
Check the version: bin/flume-ng version
2: Configuration and startup
https://flume.apache.org/FlumeUserGuide.html#configuration
Defining the flow
# list the sources, sinks and channels for the agent
<Agent>.sources = <Source>
<Agent>.sinks = <Sink>
<Agent>.channels = <Channel1> <Channel2>

# set channel for source
<Agent>.sources.<Source>.channels = <Channel1> <Channel2> ...

# set channel for sink
<Agent>.sinks.<Sink>.channel = <Channel1>

Example:
# list the sources, sinks and channels for the agent
agent_foo.sources = avro-appserver-src-1
agent_foo.sinks = hdfs-sink-1
agent_foo.channels = mem-channel-1

# set channel for source
agent_foo.sources.avro-appserver-src-1.channels = mem-channel-1

# set channel for sink
agent_foo.sinks.hdfs-sink-1.channel = mem-channel-1
Using environment variables in configuration files
Flume has the ability to substitute environment variables in the configuration. For example:
a1.sources = r1
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = ${NC_PORT}
a1.sources.r1.channels = c1
NB: this currently works for values only, not for keys (i.e. only on the "right side" of the = sign in config lines).
This can be enabled via Java system properties on agent invocation by setting propertiesImplementation = org.apache.flume.node.EnvVarResolverProperties.
For example:
$ NC_PORT=44444 bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console -DpropertiesImplementation=org.apache.flume.node.EnvVarResolverProperties
Note the above is just an example, environment variables can be configured in other ways, including being set in conf/flume-env.sh.
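A minimal sketch of the conf/flume-env.sh approach (bin/flume-ng sources this file on startup; the exported port value is only an illustrative assumption):
# conf/flume-env.sh -- sourced by bin/flume-ng at startup
export NC_PORT=44444   # illustrative value for the ${NC_PORT} placeholder above
export JAVA_OPTS="$JAVA_OPTS -DpropertiesImplementation=org.apache.flume.node.EnvVarResolverProperties"
With that in place the agent can be started without repeating the propertiesImplementation flag on every invocation.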
Configuring individual components
# properties for sources
<Agent>.sources.<Source>.<someProperty> = <someValue>

# properties for channels
<Agent>.channels.<Channel>.<someProperty> = <someValue>

# properties for sinks
<Agent>.sinks.<Sink>.<someProperty> = <someValue>
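As a concrete illustration of this pattern, here is a hedged sketch of properties for the components named in the earlier agent_foo example; the capacity figures, bind address/port and HDFS path are placeholder assumptions, not values taken from the guide:
# properties for the memory channel
agent_foo.channels.mem-channel-1.type = memory
agent_foo.channels.mem-channel-1.capacity = 1000
agent_foo.channels.mem-channel-1.transactionCapacity = 100

# properties for the avro source
agent_foo.sources.avro-appserver-src-1.type = avro
agent_foo.sources.avro-appserver-src-1.bind = 0.0.0.0
agent_foo.sources.avro-appserver-src-1.port = 41414

# properties for the hdfs sink
agent_foo.sinks.hdfs-sink-1.type = hdfs
agent_foo.sinks.hdfs-sink-1.hdfs.path = hdfs://namenode/flume/webdata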
Adding multiple flows in an agent
# list the sources, sinks and channels for the agent
<Agent>.sources = <Source1> <Source2>
<Agent>.sinks = <Sink1> <Sink2>
<Agent>.channels = <Channel1> <Channel2>
# list the sources, sinks and channels in the agent
agent_foo.sources = avro-AppSrv-source1 exec-tail-source2
agent_foo.sinks = hdfs-Cluster1-sink1 avro-forward-sink2
agent_foo.channels = mem-channel-1 file-channel-2

# flow #1 configuration
agent_foo.sources.avro-AppSrv-source1.channels = mem-channel-1
agent_foo.sinks.hdfs-Cluster1-sink1.channel = mem-channel-1

# flow #2 configuration
agent_foo.sources.exec-tail-source2.channels = file-channel-2
agent_foo.sinks.avro-forward-sink2.channel = file-channel-2
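The two flows above still need component-type definitions before the agent will start. A sketch of what those might look like; the tail command, file-channel directories and HDFS path are illustrative assumptions:
# flow #1 component types
agent_foo.sources.avro-AppSrv-source1.type = avro
agent_foo.sources.avro-AppSrv-source1.bind = 0.0.0.0
agent_foo.sources.avro-AppSrv-source1.port = 41414
agent_foo.channels.mem-channel-1.type = memory
agent_foo.sinks.hdfs-Cluster1-sink1.type = hdfs
agent_foo.sinks.hdfs-Cluster1-sink1.hdfs.path = hdfs://namenode/flume/webdata

# flow #2 component types
agent_foo.sources.exec-tail-source2.type = exec
agent_foo.sources.exec-tail-source2.command = tail -F /var/log/app/app.log
agent_foo.channels.file-channel-2.type = file
agent_foo.channels.file-channel-2.checkpointDir = /var/flume/checkpoint
agent_foo.channels.file-channel-2.dataDirs = /var/flume/data
agent_foo.sinks.avro-forward-sink2.type = avro
agent_foo.sinks.avro-forward-sink2.hostname = 10.1.1.100
agent_foo.sinks.avro-forward-sink2.port = 10000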
Configuring a multi-agent flow
Weblog agent config:
# list sources, sinks and channels in the agent
agent_foo.sources = avro-AppSrv-source
agent_foo.sinks = avro-forward-sink
agent_foo.channels = file-channel

# define the flow
agent_foo.sources.avro-AppSrv-source.channels = file-channel
agent_foo.sinks.avro-forward-sink.channel = file-channel

# avro sink properties
agent_foo.sinks.avro-forward-sink.type = avro
agent_foo.sinks.avro-forward-sink.hostname = 10.1.1.100
agent_foo.sinks.avro-forward-sink.port = 10000

# configure other pieces
#...
HDFS agent config:
# list sources, sinks and channels in the agent
agent_foo.sources = avro-collection-source
agent_foo.sinks = hdfs-sink
agent_foo.channels = mem-channel

# define the flow
agent_foo.sources.avro-collection-source.channels = mem-channel
agent_foo.sinks.hdfs-sink.channel = mem-channel

# avro source properties
agent_foo.sources.avro-collection-source.type = avro
agent_foo.sources.avro-collection-source.bind = 10.1.1.100
agent_foo.sources.avro-collection-source.port = 10000

# configure other pieces
#...
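The avro sink of the weblog agent and the avro source of the HDFS agent must agree on hostname and port (10.1.1.100:10000 above). Assuming the two configs are saved as weblog-agent.conf and hdfs-agent.conf (the file names are illustrative assumptions), the agents could be started like this, collector first:
# start the HDFS (collector) agent first so the avro port is listening
bin/flume-ng agent --conf conf --conf-file hdfs-agent.conf --name agent_foo

# then start the weblog agent on the application host
bin/flume-ng agent --conf conf --conf-file weblog-agent.conf --name agent_foo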
Fan out flow
# List the sources, sinks and channels for the agent
<Agent>.sources = <Source1>
<Agent>.sinks = <Sink1> <Sink2>
<Agent>.channels = <Channel1> <Channel2>

# set list of channels for source (separated by space)
<Agent>.sources.<Source1>.channels = <Channel1> <Channel2>

# set channel for sinks
<Agent>.sinks.<Sink1>.channel = <Channel1>
<Agent>.sinks.<Sink2>.channel = <Channel2>

<Agent>.sources.<Source1>.selector.type = replicating
# Mapping for multiplexing selector
<Agent>.sources.<Source1>.selector.type = multiplexing
<Agent>.sources.<Source1>.selector.header = <someHeader>
<Agent>.sources.<Source1>.selector.mapping.<Value1> = <Channel1>
<Agent>.sources.<Source1>.selector.mapping.<Value2> = <Channel1> <Channel2>
<Agent>.sources.<Source1>.selector.mapping.<Value3> = <Channel2>
#...

<Agent>.sources.<Source1>.selector.default = <Channel2>
# list the sources, sinks and channels in the agent
agent_foo.sources = avro-AppSrv-source1
agent_foo.sinks = hdfs-Cluster1-sink1 avro-forward-sink2
agent_foo.channels = mem-channel-1 file-channel-2

# set channels for source
agent_foo.sources.avro-AppSrv-source1.channels = mem-channel-1 file-channel-2

# set channel for sinks
agent_foo.sinks.hdfs-Cluster1-sink1.channel = mem-channel-1
agent_foo.sinks.avro-forward-sink2.channel = file-channel-2

# channel selector configuration
agent_foo.sources.avro-AppSrv-source1.selector.type = multiplexing
agent_foo.sources.avro-AppSrv-source1.selector.header = State
agent_foo.sources.avro-AppSrv-source1.selector.mapping.CA = mem-channel-1
agent_foo.sources.avro-AppSrv-source1.selector.mapping.AZ = file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.mapping.NY = mem-channel-1 file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.default = mem-channel-1
# channel selector configuration
agent_foo.sources.avro-AppSrv-source1.selector.type = multiplexing
agent_foo.sources.avro-AppSrv-source1.selector.header = State
agent_foo.sources.avro-AppSrv-source1.selector.mapping.CA = mem-channel-1
agent_foo.sources.avro-AppSrv-source1.selector.mapping.AZ = file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.mapping.NY = mem-channel-1 file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.optional.CA = mem-channel-1 file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.mapping.AZ = file-channel-2
agent_foo.sources.avro-AppSrv-source1.selector.default = mem-channel-1
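The second block above also marks channels as optional for CA: a failed write to an optional channel is ignored rather than retried. The multiplexing selector routes on the value of the State header, so something upstream has to put that header on each event. One way (not shown in the guide excerpt) is a static interceptor on the source; the fixed value CA here is only an illustrative assumption:
# attach a fixed State header to every event from this source (illustrative)
agent_foo.sources.avro-AppSrv-source1.interceptors = i1
agent_foo.sources.avro-AppSrv-source1.interceptors.i1.type = static
agent_foo.sources.avro-AppSrv-source1.interceptors.i1.key = State
agent_foo.sources.avro-AppSrv-source1.interceptors.i1.value = CA
In practice the header usually comes from the upstream client or from a custom interceptor rather than a fixed value.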
Network streams
Flume supports the following mechanisms to read data from popular log stream types, such as:
- Avro
- Thrift
- Syslog
- Netcat
- exec:
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /opt/xfs/logs/tomcat/xfs-cs/logs/xfs_cs_41-seq
- spool: not yet worked out; see the sketch after this list for a rough example
- http:
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 5142
- tcp / syslog:
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140

# generate a test log entry:
echo "hello idoall.org syslog" | nc localhost 5140

Start the agent:
bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
# --name is the agent name defined in the configuration file
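As promised in the spool bullet above, here is a hedged sketch of a spooling-directory source; the directory path is an assumption. Files dropped into the directory are ingested and, by default, renamed with a .COMPLETED suffix once consumed:
# spooling directory source: watches a directory for new, immutable files (path is illustrative)
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /var/log/flume-spool
a1.sources.r1.fileHeader = true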