No fluff; straight to the point!

1. The type of a custom interceptor must be written as: fully qualified class name$inner class name — in other words, the name of the inner Builder class.
  For example: zhouls.bigdata.MySearchAndReplaceInterceptor$Builder

2. Why it is written this way
  Because the Interceptor interface also has a public inner interface, Builder, a custom interceptor must implement that Builder interface as well,
  that is, provide an inner class whose main job is to read the custom parameters from flume-conf.properties and pass them on to the custom interceptor.
3.
  My knowledge is limited and the description may not be entirely clear; see the Java documentation on interfaces and inner classes for the details.
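  A minimal skeleton (the class and package names here are placeholders, not the code used later in this post) showing how the outer interceptor class and its public static Builder inner class relate, and why the configured type ends in $Builder:

  package zhouls.bigdata;

  import java.util.List;

  import org.apache.flume.Context;
  import org.apache.flume.Event;
  import org.apache.flume.interceptor.Interceptor;

  // The outer class does the per-event work; the public static inner Builder
  // is what Flume instantiates, which is why the configured type is
  // "zhouls.bigdata.MyInterceptor$Builder".
  public class MyInterceptor implements Interceptor {

      private MyInterceptor() { }

      public void initialize() { }              // called once before any events

      public Event intercept(Event event) {     // per-event processing
          return event;
      }

      public List<Event> intercept(List<Event> events) {
          for (Event e : events) {
              intercept(e);
          }
          return events;
      }

      public void close() { }

      // Flume creates this Builder reflectively, hands it the properties from
      // flume-conf.properties via configure(), then calls build().
      public static class Builder implements Interceptor.Builder {
          public void configure(Context context) {
              // read custom parameters here, e.g. context.getString("searchReplace")
          }
          public Interceptor build() {
              return new MyInterceptor();
          }
      }
  }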

  Because the built-in interceptors are sometimes not enough, you need a custom interceptor for special business requirements.
The official documentation does not describe the steps for writing a custom interceptor, but you can use the built-in interceptors in the Flume source code as a reference:
flume-1.7/flume-ng-core/src/main/java/org/apache/flume/interceptor/*Interceptor.java

  Whether you use Flume's built-in interceptors or a custom one, the goal of this post is to show a clean, convenient way of organizing and working with them.

  1. [hadoop@master app]$ rm -rf flume
  2. [hadoop@master app]$ ln -s flume-1.7.0/ flume
  3. [hadoop@master app]$ ll
  4. lrwxrwxrwx hadoop hadoop Jul : flume -> flume-1.7.0/
  5. drwxrwxr-x hadoop hadoop Apr : flume-1.6.0
  6. drwxrwxr-x hadoop hadoop Apr : flume-1.7.0

  The Host Interceptor's use case: tag the data collected on a given host or server with that host, so events from the same machine stay grouped together.

   The Regex Extractor Interceptor's use case: extract fields from the event body with a regular expression and put them into the event headers (a detailed example follows below).

Here is a very practical tip.

  1. [hadoop@master flume-1.7.0]$ pwd
  2. /home/hadoop/app/flume-1.7.0
  3. [hadoop@master flume-1.7.0]$ ll
  4. total 148
  5. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 bin
  6. -rw-r--r-- 1 hadoop hadoop 77387 Oct 11 2016 CHANGELOG
  7. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 conf
  8. -rw-r--r-- 1 hadoop hadoop 6172 Sep 26 2016 DEVNOTES
  9. -rw-r--r-- 1 hadoop hadoop 2873 Sep 26 2016 doap_Flume.rdf
  10. drwxr-xr-x 10 hadoop hadoop 4096 Oct 13 2016 docs
  11. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 lib
  12. -rw-r--r-- 1 hadoop hadoop 27625 Oct 13 2016 LICENSE
  13. -rw-r--r-- 1 hadoop hadoop 249 Sep 26 2016 NOTICE
  14. -rw-r--r-- 1 hadoop hadoop 2520 Sep 26 2016 README.md
  15. -rw-r--r-- 1 hadoop hadoop 1585 Oct 11 2016 RELEASE-NOTES
  16. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 tools
  17. [hadoop@master flume-1.7.0]$ cp -r conf conf_HostInterceptor
  18. [hadoop@master flume-1.7.0]$ ll
  19. total 152
  20. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 bin
  21. -rw-r--r-- 1 hadoop hadoop 77387 Oct 11 2016 CHANGELOG
  22. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 conf
  23. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 11:59 conf_HostInterceptor
  24. -rw-r--r-- 1 hadoop hadoop 6172 Sep 26 2016 DEVNOTES
  25. -rw-r--r-- 1 hadoop hadoop 2873 Sep 26 2016 doap_Flume.rdf
  26. drwxr-xr-x 10 hadoop hadoop 4096 Oct 13 2016 docs
  27. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 lib
  28. -rw-r--r-- 1 hadoop hadoop 27625 Oct 13 2016 LICENSE
  29. -rw-r--r-- 1 hadoop hadoop 249 Sep 26 2016 NOTICE
  30. -rw-r--r-- 1 hadoop hadoop 2520 Sep 26 2016 README.md
  31. -rw-r--r-- 1 hadoop hadoop 1585 Oct 11 2016 RELEASE-NOTES
  32. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 tools
  33. [hadoop@master flume-1.7.0]$

  1. [hadoop@master flume-1.7.0]$ ll
  2. total 152
  3. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 bin
  4. -rw-r--r-- 1 hadoop hadoop 77387 Oct 11 2016 CHANGELOG
  5. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 conf
  6. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:01 conf_HostInterceptor
  7. -rw-r--r-- 1 hadoop hadoop 6172 Sep 26 2016 DEVNOTES
  8. -rw-r--r-- 1 hadoop hadoop 2873 Sep 26 2016 doap_Flume.rdf
  9. drwxr-xr-x 10 hadoop hadoop 4096 Oct 13 2016 docs
  10. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 lib
  11. -rw-r--r-- 1 hadoop hadoop 27625 Oct 13 2016 LICENSE
  12. -rw-r--r-- 1 hadoop hadoop 249 Sep 26 2016 NOTICE
  13. -rw-r--r-- 1 hadoop hadoop 2520 Sep 26 2016 README.md
  14. -rw-r--r-- 1 hadoop hadoop 1585 Oct 11 2016 RELEASE-NOTES
  15. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 tools
  16. [hadoop@master flume-1.7.0]$ cp -r conf conf_RegexExtractorInterceptor
  17. [hadoop@master flume-1.7.0]$ ll
  18. total 156
  19. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 bin
  20. -rw-r--r-- 1 hadoop hadoop 77387 Oct 11 2016 CHANGELOG
  21. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 conf
  22. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:01 conf_HostInterceptor
  23. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:03 conf_RegexExtractorInterceptor
  24. -rw-r--r-- 1 hadoop hadoop 6172 Sep 26 2016 DEVNOTES
  25. -rw-r--r-- 1 hadoop hadoop 2873 Sep 26 2016 doap_Flume.rdf
  26. drwxr-xr-x 10 hadoop hadoop 4096 Oct 13 2016 docs
  27. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 lib
  28. -rw-r--r-- 1 hadoop hadoop 27625 Oct 13 2016 LICENSE
  29. -rw-r--r-- 1 hadoop hadoop 249 Sep 26 2016 NOTICE
  30. -rw-r--r-- 1 hadoop hadoop 2520 Sep 26 2016 README.md
  31. -rw-r--r-- 1 hadoop hadoop 1585 Oct 11 2016 RELEASE-NOTES
  32. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 tools
  33. [hadoop@master flume-1.7.0]$

  1. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 bin
  2. -rw-r--r-- 1 hadoop hadoop 77387 Oct 11 2016 CHANGELOG
  3. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 conf
  4. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:01 conf_HostInterceptor
  5. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:03 conf_RegexExtractorInterceptor
  6. -rw-r--r-- 1 hadoop hadoop 6172 Sep 26 2016 DEVNOTES
  7. -rw-r--r-- 1 hadoop hadoop 2873 Sep 26 2016 doap_Flume.rdf
  8. drwxr-xr-x 10 hadoop hadoop 4096 Oct 13 2016 docs
  9. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 lib
  10. -rw-r--r-- 1 hadoop hadoop 27625 Oct 13 2016 LICENSE
  11. -rw-r--r-- 1 hadoop hadoop 249 Sep 26 2016 NOTICE
  12. -rw-r--r-- 1 hadoop hadoop 2520 Sep 26 2016 README.md
  13. -rw-r--r-- 1 hadoop hadoop 1585 Oct 11 2016 RELEASE-NOTES
  14. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 tools
  15. [hadoop@master flume-1.7.0]$ cp -r conf conf_SearchandReplaceInterceptor
  16. [hadoop@master flume-1.7.0]$ ll
  17. total 160
  18. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 bin
  19. -rw-r--r-- 1 hadoop hadoop 77387 Oct 11 2016 CHANGELOG
  20. drwxr-xr-x 2 hadoop hadoop 4096 Apr 20 12:00 conf
  21. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:01 conf_HostInterceptor
  22. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:03 conf_RegexExtractorInterceptor
  23. drwxr-xr-x 2 hadoop hadoop 4096 Jul 27 12:04 conf_SearchandReplaceInterceptor
  24. -rw-r--r-- 1 hadoop hadoop 6172 Sep 26 2016 DEVNOTES
  25. -rw-r--r-- 1 hadoop hadoop 2873 Sep 26 2016 doap_Flume.rdf
  26. drwxr-xr-x 10 hadoop hadoop 4096 Oct 13 2016 docs
  27. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 lib
  28. -rw-r--r-- 1 hadoop hadoop 27625 Oct 13 2016 LICENSE
  29. -rw-r--r-- 1 hadoop hadoop 249 Sep 26 2016 NOTICE
  30. -rw-r--r-- 1 hadoop hadoop 2520 Sep 26 2016 README.md
  31. -rw-r--r-- 1 hadoop hadoop 1585 Oct 11 2016 RELEASE-NOTES
  32. drwxrwxr-x 2 hadoop hadoop 4096 Apr 20 12:00 tools
  33. [hadoop@master flume-1.7.0]$

  

  You may well be wondering: why copy the conf directory like this, once for each of the three important built-in interceptors below?

  1. cp -r conf conf_HostInterceptor
  2. cp -r conf conf_SearchandReplaceInterceptor
  3. cp -r conf conf_RegexExtractorInterceptor

  Think about it: if you don't make the copies, things become hard to manage. In particular, as shown below, all agents would share a single log4j.properties, which makes troubleshooting the logs inconvenient.

  

  Whereas now:

  Doing it this way is much more convenient and orderly.


  You also need to make the following changes:

  1. [hadoop@master conf_HostInterceptor]$ pwd
  2. /home/hadoop/app/flume-1.7.0/conf_HostInterceptor
  3. [hadoop@master conf_HostInterceptor]$ ll
  4. total
  5. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties.template
  6. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  7. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  8. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  9. [hadoop@master conf_HostInterceptor]$ mv flume-conf.properties.template flume-conf.properties
  10. [hadoop@master conf_HostInterceptor]$ vim log4j.properties

  1. #flume.root.logger=DEBUG,console
  2. flume.root.logger=INFO,LOGFILE
  3. flume.log.dir=./logs
  4. flume.log.file=flume_HostInterceptor.log

  Likewise:

  1. [hadoop@master conf_RegexExtractorInterceptor]$ pwd
  2. /home/hadoop/app/flume-1.7.0/conf_RegexExtractorInterceptor
  3. [hadoop@master conf_RegexExtractorInterceptor]$ ll
  4. total
  5. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties.template
  6. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  7. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  8. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  9. [hadoop@master conf_RegexExtractorInterceptor]$ mv flume-conf.properties.template flume-conf.properties
  10. [hadoop@master conf_RegexExtractorInterceptor]$ vim log4j.properties

  1. #flume.root.logger=DEBUG,console
  2. flume.root.logger=INFO,LOGFILE
  3. flume.log.dir=./logs
  4. flume.log.file=flume_RegexExtractorInterceptor.log

  Likewise:

  1. [hadoop@master conf_SearchandReplaceInterceptor]$ pwd
  2. /home/hadoop/app/flume-1.7.0/conf_SearchandReplaceInterceptor
  3. [hadoop@master conf_SearchandReplaceInterceptor]$ ll
  4. total
  5. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties.template
  6. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  7. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  8. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  9. [hadoop@master conf_SearchandReplaceInterceptor]$ mv flume-conf.properties.template flume-conf.properties
  10. [hadoop@master conf_SearchandReplaceInterceptor]$ vim log4j.properties

  1. #flume.root.logger=DEBUG,console
  2. flume.root.logger=INFO,LOGFILE
  3. flume.log.dir=./logs
  4. flume.log.file=flume_SearchandReplaceInterceptor.log

Host Interceptor

  flume-conf.properties under conf_HostInterceptor:

  1. agent1.sources = r1
  2. agent1.sinks = k1
  3. agent1.channels = c1
  4.  
  5. # Describe/configure the source
  6. agent1.sources.r1.type = netcat
  7. agent1.sources.r1.bind = localhost
  8. agent1.sources.r1.port = 44444
  9. agent1.sources.r1.interceptors = i1
  10. agent1.sources.r1.interceptors.i1.type = host
  11. agent1.sources.r1.interceptors.i1.hostHeader = hostname
  12.  
  13. # Use a channel which buffers events in memory
  14. agent1.channels.c1.type = memory
  15. agent1.channels.c1.capacity =
  16. agent1.channels.c1.transactionCapacity =
  17.  
  18. # Bind the source and sink to the channel
  19. agent1.sources.r1.channels = c1
  20. agent1.sinks.k1.channel = c1
  21.  
  22. # Describe the sink
  23. agent1.sinks.k1.type = logger

Note that the startup command has to change accordingly:

  1. [hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_HostInterceptor/  --conf-file conf_HostInterceptor/flume-conf.properties --name agent1  -Dflume.root.logger=INFO,console

  1. SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-0.98.19/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  2. SLF4J: Found binding in [jar:file:/home/hadoop/app/hive-1.0.0/lib/hive-jdbc-1.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  3. SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  4. 2017-07-27 12:41:49,451 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:62)] Configuration provider starting
  5. 2017-07-27 12:41:50,137 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:134)] Reloading configuration file:conf_HostInterceptor/flume-conf.properties
  6. 2017-07-27 12:41:50,188 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
  7. 2017-07-27 12:41:50,189 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
  8. 2017-07-27 12:41:50,189 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: k1 Agent: agent1
  9. 2017-07-27 12:41:50,280 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [agent1]
  10. 2017-07-27 12:41:50,280 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:147)] Creating channels
  11. 2017-07-27 12:41:50,337 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel c1 type memory
  12. 2017-07-27 12:41:50,423 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:201)] Created channel c1
  13. 2017-07-27 12:41:50,425 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:41)] Creating instance of source r1, type netcat
  14. 2017-07-27 12:41:51,478 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: k1, type: logger
  15. 2017-07-27 12:41:51,490 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:116)] Channel c1 connected to [r1, k1]
  16. 2017-07-27 12:41:52,050 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:137)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@13f948e counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
  17. 2017-07-27 12:41:52,052 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:144)] Starting Channel c1
  18. 2017-07-27 12:41:53,484 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
  19. 2017-07-27 12:41:53,517 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: c1 started
  20. 2017-07-27 12:41:53,522 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:171)] Starting Sink k1
  21. 2017-07-27 12:41:53,524 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:182)] Starting Source r1
  22. 2017-07-27 12:41:53,531 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:155)] Source starting
  23. 2017-07-27 12:41:54,384 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:169)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]

  Wait for data to be collected.

  1. [hadoop@master ~]$ yum -y install telnet
  2. Loaded plugins: fastestmirror, refresh-packagekit, security
  3. You need to be root to perform this command.
  4. [hadoop@master ~]$ su root
  5. Password:
  6. [root@master hadoop]# yum -y install telnet
  7. Loaded plugins: fastestmirror, refresh-packagekit, security
  8. Loading mirror speeds from cached hostfile
  9. * base: mirrors.cqu.edu.cn
  10. * extras: mirrors.sohu.com

  Once that succeeds, type anything on this side, for example hello.

  1. [root@master ~]# telnet localhost 44444
  2. Trying ::1...
  3. telnet: connect to address ::1: Connection refused
  4. Trying 127.0.0.1...
  5. Connected to localhost.
  6. Escape character is '^]'.
  7. hello
  8. OK

  1. Event: { headers:{hostname=192.168.80.145} body: 68 65 6C 6C 6F 0D hello. }

  This is the Host Interceptor at work — the settings below add a hostname header to every event (a rough sketch of the behaviour follows the snippet):

  1. agent1.sources.r1.interceptors = i1
  2. agent1.sources.r1.interceptors.i1.type = host
  3. agent1.sources.r1.interceptors.i1.hostHeader = hostname
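  Conceptually, that is all the host interceptor does; the sketch below imitates the behaviour in plain Java (it is an illustration only, not Flume's actual HostInterceptor source):

  import java.net.InetAddress;
  import java.util.Map;

  import org.apache.flume.Event;
  import org.apache.flume.event.EventBuilder;

  // Rough illustration: stamp each event's headers with the agent host,
  // using the IP by default and the host name when useIP is false.
  public class HostHeaderSketch {
      public static void main(String[] args) throws Exception {
          Event event = EventBuilder.withBody("hello".getBytes());

          boolean useIP = true;                         // mirrors the useIP option above
          InetAddress addr = InetAddress.getLocalHost();
          String value = useIP ? addr.getHostAddress() : addr.getCanonicalHostName();

          Map<String, String> headers = event.getHeaders();
          headers.put("hostname", value);               // "hostname" = the configured hostHeader

          System.out.println(event.getHeaders());       // e.g. {hostname=192.168.80.145}
      }
  }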

  If you want the following effect instead,

  1. Event: { headers:{hostname=master} body: 7A 68 6F 75 6C 73 0D zhouls. }

  then configure:

  1. agent1.sources = r1
  2. agent1.sinks = k1
  3. agent1.channels = c1
  4.  
  5. # Describe/configure the source
  6. agent1.sources.r1.type = netcat
  7. agent1.sources.r1.bind = localhost
  8. agent1.sources.r1.port = 44444
  9. agent1.sources.r1.interceptors = i1
  10. agent1.sources.r1.interceptors.i1.type = host
  11. agent1.sources.r1.interceptors.i1.useIP = false
  12. agent1.sources.r1.interceptors.i1.hostHeader = hostname
  13.  
  14. # Use a channel which buffers events in memory
  15. agent1.channels.c1.type = memory
  16. agent1.channels.c1.capacity =
  17. agent1.channels.c1.transactionCapacity =
  18.  
  19. # Bind the source and sink to the channel
  20. agent1.sources.r1.channels = c1
  21. agent1.sinks.k1.channel = c1
  22.  
  23. # Describe the sink
  24. agent1.sinks.k1.type = logger

  1. [hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_HostInterceptor/ --conf-file conf_HostInterceptor/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console

  1. [root@master ~]# telnet localhost 44444
  2. Trying ::1...
  3. telnet: connect to address ::1: Connection refused
  4. Trying 127.0.0.1...
  5. Connected to localhost.
  6. Escape character is '^]'.
  7. zhouls
  8. OK

  1. Event: { headers:{hostname=master} body: 7A 68 6F 75 6C 73 0D zhouls. }

Regex Extractor Interceptor

  flume-conf.properties under conf_RegexExtractorInterceptor:

  1. [hadoop@master conf_RegexExtractorInterceptor]$ pwd
  2. /home/hadoop/app/flume-1.7.0/conf_RegexExtractorInterceptor
  3. [hadoop@master conf_RegexExtractorInterceptor]$ ll
  4. total
  5. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties
  6. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  7. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  8. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  9. [hadoop@master conf_RegexExtractorInterceptor]$ vim flume-conf.properties

  First, let's talk about this interceptor's use case.

  Suppose we have the following Flume test data:

  1. video_info
  2.  
  3. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  4. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  5. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  6. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  7. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  8. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  9. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  10. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  11. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  12. {"id":"","uid":"","lat":"53.530598","lnt":"-2.5620373","hots":,"title":"","status":"","topicId":"","end_time":"","watch_num":,"share_num":"","replay_url":null,"replay_num":,"start_time":"","timestamp":,"type":"video_info"}
  13.  
  14. userinfo
  15.  
  16. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  17. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  18. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  19. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  20. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  21. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  22. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  23. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  24. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  25. {"uid":"","nickname":"mick","usign":"","sex":,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":,"exp":,"anchor_level":,"anchor_exp":,"os":"android","timestamp":,"type":"userinfo"}
  26.  
  27. gift_record
  28.  
  29. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  30. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  31. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  32. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  33. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  34. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  35. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  36. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  37. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}
  38. {"send_id":"","good_id":"","video_id":"","gold":"","timestamp":,"type":"gift_record"}

  The above is data after Flume has collected it. Suppose it all sits in this test-data file; now I want to store it in different directories according to type.

  That is, video_info records go into a video_info directory, userinfo records into a userinfo directory, and gift_record records into a gift_record directory.

  So the scenario is: store events separately depending on the value of the type field in the data. This is where the Regex Extractor Interceptor comes in.

   How? It's actually simple: put the value of type into the event header, with a configuration like this:

  1. # Define the interceptor
  2. agent1.sources.r1.interceptors = i1
  3. # Set the interceptor type
  4. agent1.sources.r1.interceptors.i1.type = regex_extractor
  5. # Regular expression matching the data; this adds log_type="matched value" to the event header
  6. agent1.sources.r1.interceptors.i1.regex = "type":"(\\w+)"
  7. agent1.sources.r1.interceptors.i1.serializers = s1
  8. agent1.sources.r1.interceptors.i1.serializers.s1.name = log_type

  Why is the regex written this way?

  1. agent1.sources.r1.interceptors.i1.regex = "type":"(\\w+)"

  Because the content of the data dictates it (a quick check in code follows the examples below):

  1. "type":"video_info"
  2.  
  3. "type":"userinfo"
  4.  
  5. "type":"gift_record"

 

  1. # Name of the source
  2. agent1.sources = fileSource
  3. # Name of the channel; naming it after its type is recommended
  4. agent1.channels = memoryChannel
  5. # Name of the sink; naming it after its target is recommended
  6. agent1.sinks = hdfsSink
  7.  
  8. # Channel used by the source
  9. agent1.sources.fileSource.channels = memoryChannel
  10. # Channel used by the sink; note that the property here is "channel"
  11. agent1.sinks.hdfsSink.channel = memoryChannel
  12.  
  13. agent1.sources.fileSource.type = exec
  14. agent1.sources.fileSource.command = tail -F /usr/local/log/server.log
  15.  
  16. #------- memoryChannel configuration -------------------------
  17. # channel type
  18.  
  19. agent1.channels.memoryChannel.type = memory
  20. agent1.channels.memoryChannel.capacity =
  21. agent1.channels.memoryChannel.transactionCapacity =
  22. agent1.channels.memoryChannel.byteCapacityBufferPercentage =
  23. agent1.channels.memoryChannel.byteCapacity =
  24.  
  25. #--------- Interceptor configuration ------------------
  26. # Define the interceptor
  27. agent1.sources.fileSource.interceptors = i1
  28. # Set the interceptor type
  29. agent1.sources.fileSource.interceptors.i1.type = regex_extractor
  30. # Regular expression matching the data; this adds log_type="matched value" to the event header
  31. agent1.sources.fileSource.interceptors.i1.regex = "type":"(\\w+)"
  32. agent1.sources.fileSource.interceptors.i1.serializers = s1
  33. agent1.sources.fileSource.interceptors.i1.serializers.s1.name = log_type
  34.  
  35. #--------- hdfsSink configuration ------------------
  36. agent1.sinks.hdfsSink.type = hdfs
  37. # Note: output goes into the date/type sub-directories below (see the sketch after this configuration)
  38. agent1.sinks.hdfsSink.hdfs.path = hdfs://master:9000/data/types/%Y%m%d/%{log_type}
  39. agent1.sinks.hdfsSink.hdfs.writeFormat = Text
  40. agent1.sinks.hdfsSink.hdfs.fileType = DataStream
  41. agent1.sinks.hdfsSink.hdfs.callTimeout =
  42. agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
  43.  
  44. # Roll the temporary file into a target file when it reaches 52428800 bytes
  45. agent1.sinks.hdfsSink.hdfs.rollSize = 52428800
  46. # Roll the temporary file into a target file when this many events have been written
  47. agent1.sinks.hdfsSink.hdfs.rollCount =
  48. # Roll the temporary file into a target file every N seconds
  49. agent1.sinks.hdfsSink.hdfs.rollInterval =
  50.  
  51. # File prefix and suffix
  52. agent1.sinks.hdfsSink.hdfs.filePrefix=run
  53. agent1.sinks.hdfsSink.hdfs.fileSuffix=.data
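  Roughly speaking, the escapes in hdfs.path resolve as sketched below; this is only an illustration of the resulting directory layout, not Flume's actual implementation (which substitutes %Y%m%d from the event timestamp, here the local time since useLocalTimeStamp = true, and %{log_type} from the header set by regex_extractor):

  import java.text.SimpleDateFormat;
  import java.util.Date;
  import java.util.HashMap;
  import java.util.Map;

  // Illustration of how the hdfs.path escapes end up as a concrete directory.
  public class HdfsPathSketch {
      public static void main(String[] args) {
          Map<String, String> headers = new HashMap<String, String>();
          headers.put("log_type", "video_info");   // set by the regex_extractor interceptor

          String day = new SimpleDateFormat("yyyyMMdd").format(new Date());
          String path = "hdfs://master:9000/data/types/" + day + "/" + headers.get("log_type");

          System.out.println(path);  // e.g. hdfs://master:9000/data/types/20170727/video_info
      }
  }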

  The file being monitored is:

/usr/local/log/server.log

  1. [root@master local]# pwd
  2. /usr/local
  3. [root@master local]# ll
  4. total
  5. drwxr-xr-x. root root Sep bin
  6. drwxr-xr-x. root root Sep etc
  7. drwxr-xr-x. root root Sep games
  8. drwxr-xr-x. root root May : include
  9. drwxr-xr-x. root root May : lib
  10. drwxr-xr-x. root root Sep lib64
  11. drwxr-xr-x. root root Sep libexec
  12. drwxr-xr-x. root root Sep sbin
  13. drwxr-xr-x. root root May : share
  14. drwxr-xr-x. root root Sep src
  15. [root@master local]# mkdir log
  16. [root@master local]# cd log
  17. [root@master log]# pwd
  18. /usr/local/log
  19. [root@master log]# ll
  20. total
  21. [root@master log]#

  Then run:

  1. [hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_RegexExtractorInterceptor/ --conf-file conf_RegexExtractorInterceptor/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console

  Then, on my side, I use the following shell script to generate test data.

  producerLog.sh

  1. [root@master log]# pwd
  2. /usr/local/log
  3. [root@master log]# ll
  4. total
  5. [root@master log]# vim producerLog.sh

  1. #!/bin/bash
  2. log1='{"id":"14943445328940974610","uid":"840717325115457536","lat":"53.530598","lnt":"-2.5620373","hots":0,"title":"","status":"","topicId":"","end_time":"","watch_num":0,"share_num":"","replay_url":null,"replay_num":0,"start_time":"","timestamp":1494344571,"type":"video_info"}'
  3.  
  4. log2='{"uid":"861848974414839810","nickname":"mick","usign":"","sex":1,"birthday":"","face":"","big_face":"","email":"abc@qq.com","mobile":"","reg_type":"","last_login_time":"","reg_time":"","last_update_time":"","status":"","is_verified":"","verified_info":"","is_seller":"","level":1,"exp":0,"anchor_level":0,"anchor_exp":0,"os":"android","timestamp":1494344580,"type":"user_info"}'
  5.  
  6. log3='{"send_id":"834688818270961664","good_id":"223","video_id":"14943443045138661356","gold":"10","timestamp":1494344574,"type":"gift_record"}'
  7.  
  8. declare -i count
  9.  
  10. count=0
  11. while [ 'a' = 'a' ]
  12. do
  13. echo -e $log1 >> /usr/local/log/server.log
  14. echo -e $log2 >> /usr/local/log/server.log
  15. echo -e $log3 >> /usr/local/log/server.log
  16. count+=1
  17. # after 500 lines of each type, pause briefly (per the description below)
  18. if [ ${count} -eq 500 ]
  19. then
  20. count=0
  21. echo "sleep..."
  22. sleep 3
  23. fi
  24. done

  This shell script is simple: each pass of the loop appends one log1, one log2 and one log3 line, so each of them is written 500 times per batch, with a short (3-second) pause between batches.

  Next, create the server.log file:

  1. [root@master log]# pwd
  2. /usr/local/log
  3. [root@master log]# ll
  4. total
  5. -rw-r--r-- root root Jul : producerLog.sh
  6. [root@master log]# vim producerLog.sh
  7. [root@master log]# touch server.log
  8. [root@master log]# ll
  9. total
  10. -rw-r--r-- root root Jul : producerLog.sh
  11. -rw-r--r-- root root Jul : server.log
  12. [root@master log]# cat server.log
  13. [root@master log]#

  

  Then run the script to generate data:

  1. [root@master log]# pwd
  2. /usr/local/log
  3. [root@master log]# ll
  4. total
  5. -rw-r--r-- root root Jul : producerLog.sh
  6. -rw-r--r-- root root Jul : server.log
  7. [root@master log]# chmod 755 producerLog.sh
  8. [root@master log]# ll
  9. total
  10. -rwxr-xr-x root root Jul : producerLog.sh
  11. -rw-r--r-- root root Jul : server.log
  12. [root@master log]# ./producerLog.sh

  1. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:)] Block Under-replication detected. Rotating file.
  2. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:)] Closing hdfs://master:9000/data/types/20170727//run.1501137914366.data.tmp
  3. -- ::, (hdfs-hdfsSink-call-runner-) [INFO - org.apache.flume.sink.hdfs.BucketWriter$.call(BucketWriter.java:)] Renaming hdfs://master:9000/data/types/20170727/run.1501137914366.data.tmp to hdfs://master:9000/data/types/20170727/run.1501137914366.data
  4. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:)] Creating hdfs://master:9000/data/types/20170727//run.1501137914367.data.tmp
  5. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:)] Block Under-replication detected. Rotating file.
  6. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:)] Closing hdfs://master:9000/data/types/20170727/video_info/run.1501137883920.data.tmp
  7. -- ::, (hdfs-hdfsSink-call-runner-) [INFO - org.apache.flume.sink.hdfs.BucketWriter$.call(BucketWriter.java:)] Renaming hdfs://master:9000/data/types/20170727/video_info/run.1501137883920.data.tmp to hdfs://master:9000/data/types/20170727/video_info/run.1501137883920.data
  8. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:)] Creating hdfs://master:9000/data/types/20170727/video_info/run.1501137883921.data.tmp
  9. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:)] Block Under-replication detected. Rotating file.
  10. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:)] Closing hdfs://master:9000/data/types/20170727//run.1501137914367.data.tmp
  11. -- ::, (hdfs-hdfsSink-call-runner-) [INFO - org.apache.flume.sink.hdfs.BucketWriter$.call(BucketWriter.java:)] Renaming hdfs://master:9000/data/types/20170727/run.1501137914367.data.tmp to hdfs://master:9000/data/types/20170727/run.1501137914367.data
  12. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:)] Creating hdfs://master:9000/data/types/20170727//run.1501137914368.data.tmp
  13. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:)] Block Under-replication detected. Rotating file.
  14. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:)] Closing hdfs://master:9000/data/types/20170727/gift_record/run.1501137916399.data.tmp
  15. -- ::, (hdfs-hdfsSink-call-runner-) [INFO - org.apache.flume.sink.hdfs.BucketWriter$.call(BucketWriter.java:)] Renaming hdfs://master:9000/data/types/20170727/gift_record/run.1501137916399.data.tmp to hdfs://master:9000/data/types/20170727/gift_record/run.1501137916399.data
  16. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:)] Creating hdfs://master:9000/data/types/20170727/gift_record/run.1501137916400.data.tmp

Search and Replace Interceptor

  With the configuration above,

  the generated gift_record data is stored under /data/types/20170727/gift_record.

  But my requirement now is that

  the generated gift_record data should be stored under /data/types/20170727/giftRecord.

  So change the interceptor configuration to the following (a small sketch of the effect follows the snippet):

  1. agent1.sources.fileSource.interceptors = i1 i2 i3 i4
  2. agent1.sources.fileSource.interceptors.i1.type = search_replace
  3. agent1.sources.fileSource.interceptors.i1.searchPattern = "type":"gift_record"
  4. agent1.sources.fileSource.interceptors.i1.replaceString = "type":"giftRecord"
  5.  
  6. agent1.sources.fileSource.interceptors.i2.type = search_replace
  7. agent1.sources.fileSource.interceptors.i2.searchPattern = "type":"video_info"
  8. agent1.sources.fileSource.interceptors.i2.replaceString = "type":"videoInfo"
  9.  
  10. agent1.sources.fileSource.interceptors.i3.type = search_replace
  11. agent1.sources.fileSource.interceptors.i3.searchPattern = "type":"user_info"
  12. agent1.sources.fileSource.interceptors.i3.replaceString = "type":"userInfo"
  13.  
  14. agent1.sources.fileSource.interceptors.i4.type = regex_extractor
  15. agent1.sources.fileSource.interceptors.i4.regex = "type":"(\\w+)"
  16. agent1.sources.fileSource.interceptors.i4.serializers = s1
  17. agent1.sources.fileSource.interceptors.i4.serializers.s1.name = log_type
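  What the three search_replace interceptors do to an event body is an ordinary regex search-and-replace; a small illustration in plain Java (not Flume's SearchAndReplaceInterceptor source):

  // The event body is treated as text and each searchPattern is replaced,
  // so the regex_extractor that runs afterwards sees the new type values.
  public class SearchReplaceSketch {
      public static void main(String[] args) {
          String body = "{\"send_id\":\"\",\"gold\":\"10\",\"type\":\"gift_record\"}";

          body = body.replaceAll("\"type\":\"gift_record\"", "\"type\":\"giftRecord\"");
          body = body.replaceAll("\"type\":\"video_info\"", "\"type\":\"videoInfo\"");
          body = body.replaceAll("\"type\":\"user_info\"", "\"type\":\"userInfo\"");

          System.out.println(body);  // ... "type":"giftRecord"
      }
  }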

  1. [hadoop@master conf_SearchandReplaceInterceptor]$ pwd
  2. /home/hadoop/app/flume-1.7.0/conf_SearchandReplaceInterceptor
  3. [hadoop@master conf_SearchandReplaceInterceptor]$ ll
  4. total
  5. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties
  6. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  7. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  8. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  9. [hadoop@master conf_SearchandReplaceInterceptor]$ vim flume-conf.properties

  1. # Name of the source
  2. agent1.sources = fileSource
  3. # Name of the channel; naming it after its type is recommended
  4. agent1.channels = memoryChannel
  5. # Name of the sink; naming it after its target is recommended
  6. agent1.sinks = hdfsSink
  7.  
  8. # Channel used by the source
  9. agent1.sources.fileSource.channels = memoryChannel
  10. # Channel used by the sink; note that the property here is "channel"
  11. agent1.sinks.hdfsSink.channel = memoryChannel
  12.  
  13. agent1.sources.fileSource.type = exec
  14. agent1.sources.fileSource.command = tail -F /usr/local/log/server.log
  15.  
  16. #------- memoryChannel configuration -------------------------
  17. # channel type
  18.  
  19. agent1.channels.memoryChannel.type = memory
  20. agent1.channels.memoryChannel.capacity =
  21. agent1.channels.memoryChannel.transactionCapacity =
  22. agent1.channels.memoryChannel.byteCapacityBufferPercentage =
  23. agent1.channels.memoryChannel.byteCapacity =
  24.  
  25. #--------- Interceptor configuration ------------------
  26. agent1.sources.fileSource.interceptors = i1 i2 i3 i4
  27. agent1.sources.fileSource.interceptors.i1.type = search_replace
  28. agent1.sources.fileSource.interceptors.i1.searchPattern = "type":"gift_record"
  29. agent1.sources.fileSource.interceptors.i1.replaceString = "type":"giftRecord"
  30.  
  31. agent1.sources.fileSource.interceptors.i2.type = search_replace
  32. agent1.sources.fileSource.interceptors.i2.searchPattern = "type":"video_info"
  33. agent1.sources.fileSource.interceptors.i2.replaceString = "type":"videoInfo"
  34.  
  35. agent1.sources.fileSource.interceptors.i3.type = search_replace
  36. agent1.sources.fileSource.interceptors.i3.searchPattern = "type":"user_info"
  37. agent1.sources.fileSource.interceptors.i3.replaceString = "type":"userInfo"
  38.  
  39. agent1.sources.fileSource.interceptors.i4.type = regex_extractor
  40. agent1.sources.fileSource.interceptors.i4.regex = "type":"(\\w+)"
  41. agent1.sources.fileSource.interceptors.i4.serializers = s1
  42. agent1.sources.fileSource.interceptors.i4.serializers.s1.name = log_type
  43.  
  44. #--------- hdfsSink configuration ------------------
  45. agent1.sinks.hdfsSink.type = hdfs
  46. # Note: output goes into the date/type sub-directories below
  47. agent1.sinks.hdfsSink.hdfs.path = hdfs://master:9000/data/types/%Y%m%d/%{log_type}
  48. agent1.sinks.hdfsSink.hdfs.writeFormat = Text
  49. agent1.sinks.hdfsSink.hdfs.fileType = DataStream
  50. agent1.sinks.hdfsSink.hdfs.callTimeout =
  51. agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
  52.  
  53. # Roll the temporary file into a target file when it reaches 52428800 bytes
  54. agent1.sinks.hdfsSink.hdfs.rollSize = 52428800
  55. # Roll the temporary file into a target file when this many events have been written
  56. agent1.sinks.hdfsSink.hdfs.rollCount =
  57. # Roll the temporary file into a target file every N seconds
  58. agent1.sinks.hdfsSink.hdfs.rollInterval =
  59.  
  60. # File prefix and suffix
  61. agent1.sinks.hdfsSink.hdfs.filePrefix=run
  62. agent1.sinks.hdfsSink.hdfs.fileSuffix=.data

  Then run:

  1. [hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_SearchandReplaceInterceptor/ --conf-file conf_SearchandReplaceInterceptor/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console

  Here I ran into this error:

  1. -- ::, (lifecycleSupervisor--) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:)] Component type: SOURCE, name: fileSource started
  2. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:)] Serializer = TEXT, UseRawLocalFileSystem = false
  3. -- ::, (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:)] Creating hdfs://master:9000/data/types/20170729//run.1501294672792.data.tmp
  4. -- ::, (hdfs-hdfsSink-call-runner-) [WARN - org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java:)] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  5. -- ::, (pool--thread-) [ERROR - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:)] Failed while running command: tail -F /usr/local/log/server.log
  6. org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
  7. at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:)
  8. at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:)
  9. at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:)
  10. at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:)
  11. at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:)
  12. at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:)
  13. at java.util.concurrent.FutureTask.run(FutureTask.java:)
  14. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
  15. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
  16. at java.lang.Thread.run(Thread.java:)
  17. -- ::, (timedFlushExecService21-) [ERROR - org.apache.flume.source.ExecSource$ExecRunnable$.run(ExecSource.java:)] Exception occured when processing event batch
  18. org.apache.flume.ChannelException: java.lang.InterruptedException
  19. at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:)
  20. at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:)
  21. at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:)
  22. at org.apache.flume.source.ExecSource$ExecRunnable.access$(ExecSource.java:)
  23. at org.apache.flume.source.ExecSource$ExecRunnable$.run(ExecSource.java:)
  24. at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:)
  25. at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:)
  26. at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$(ScheduledThreadPoolExecutor.java:)
  27. at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:)

  Then generate data on the other side:

  1. [root@master log]# pwd
  2. /usr/local/log
  3. [root@master log]# ll
  4. total
  5. -rwxr-xr-x root root Jul : producerLog.sh
  6. -rw-r--r-- root root Jul : server.log
  7. [root@master log]# ./producerLog.sh
  8. sleep...
  9. sleep...
  10. sleep...

 Custom Flume Interceptors

1. The type of a custom interceptor must be written as: fully qualified class name$inner class name — in other words, the name of the inner Builder class.
  For example: zhouls.bigdata.MySearchAndReplaceInterceptor$Builder

2. Why it is written this way
  Because the Interceptor interface also has a public inner interface, Builder, a custom interceptor must implement that Builder interface as well,
  that is, provide an inner class whose main job is to read the custom parameters from flume-conf.properties and pass them on to the custom interceptor.
3.
  My knowledge is limited and the description may not be entirely clear; see the Java documentation on interfaces and inner classes for the details.

  Because the built-in interceptors are sometimes not enough, you need a custom interceptor for special business requirements.
  The official documentation does not describe the steps for writing a custom interceptor, but you can use the built-in interceptors in the Flume source code as a reference, for example:
  flume-1.7/flume-ng-core/src/main/java/org/apache/flume/interceptor/HostInterceptor.java

  

  Go to https://github.com/ and find the corresponding Flume source code; since my Flume is 1.7.0, I use the 1.7.0 code as the reference.

  The modified pom.xml is:

  1. <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  2. xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  3. <modelVersion>4.0.0</modelVersion>
  4.  
  5. <groupId>zhouls.bigdata</groupId>
  6. <artifactId>flumeDemo</artifactId>
  7. <version>0.0.1-SNAPSHOT</version>
  8. <packaging>jar</packaging>
  9.  
  10. <name>flumeDemo</name>
  11. <url>http://maven.apache.org</url>
  12.  
  13. <properties>
  14. <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  15. </properties>
  16.  
  17. <dependencies>
  18. <dependency>
  19. <groupId>junit</groupId>
  20. <artifactId>junit</artifactId>
  21. <version>4.12</version>
  22. <scope>test</scope>
  23. </dependency>
  24. <!-- This version of curator works against ZooKeeper 3.4.6 -->
  25. <dependency>
  26. <groupId>org.apache.curator</groupId>
  27. <artifactId>curator-framework</artifactId>
  28. <version>2.10.0</version>
  29. </dependency>
  30. <!-- https://mvnrepository.com/artifact/org.apache.flume/flume-ng-core -->
  31. <dependency>
  32. <groupId>org.apache.flume</groupId>
  33. <artifactId>flume-ng-core</artifactId>
  34. <version>1.7.0</version>
  35. </dependency>
  36.  
  37. </dependencies>
  38. </project>

 

  Then, referring to the sample code on GitHub, I wrote a custom Flume interceptor for our own business requirement.

  MySearchAndReplaceInterceptor.java

  1. package zhouls.bigdata.flumeDemo;
  2.  
  3. import com.google.common.base.Preconditions;
  4. import org.apache.commons.lang.StringUtils;
  5. import org.apache.flume.Context;
  6. import org.apache.flume.Event;
  7. import org.apache.flume.interceptor.Interceptor;
  8. import org.slf4j.Logger;
  9. import org.slf4j.LoggerFactory;
  10.  
  11. import java.util.HashMap;
  12. import java.util.List;
  13. import java.util.regex.Matcher;
  14. import java.util.regex.Pattern;
  15.  
  16. /**
  17. * Created by zhouls.
  18. *
  19. * Usage:
  20. * ======================================================
  21. * # Define the interceptor
  22. * agent.sources.kafkaSource.interceptors = i0
  23. * # Set the interceptor type
  24. * # gift_record:giftRecord means that gift_record in the log is replaced with giftRecord
  25. * agent.sources.kafkaSource.interceptors.i0.type = zhouls.bigdata.flumeDemo.MySearchAndReplaceInterceptor$Builder
  26. * agent.sources.kafkaSource.interceptors.i0.searchReplace = gift_record:giftRecord,video_info:videoInfo
  27. * ======================================================
  28. */
  29. public class MySearchAndReplaceInterceptor implements Interceptor {
  30.  
  31. private static final Logger logger = LoggerFactory
  32. .getLogger(MySearchAndReplaceInterceptor.class);
  33.  
  34. /**
  35. * The string describing the replacements to apply
  36. * Format: "key:value,key:value"
  37. */
  38. private final String search_replace;
  39. private String[] splits;
  40. private String[] key_value;
  41. private String key;
  42. private String value;
  43. private HashMap<String, String> hashMap = new HashMap<String, String>();
  44. private Pattern compile = Pattern.compile("\"type\":\"(\\w+)\"");
  45. private Matcher matcher;
  46. private String group;
  47.  
  48. private MySearchAndReplaceInterceptor(String search_replace) {
  49. this.search_replace = search_replace;
  50. }
  51.  
  52. /**
  53. * initialize() runs once, before any events are processed.
  54. * Load the configured pairs into a map for convenient lookup later.
  55. */
  56. public void initialize() {
  57. try{
  58. if(StringUtils.isNotBlank(search_replace)){
  59. splits = search_replace.split(",");
  60. for (String key_value_pair:splits) {
  61. key_value = key_value_pair.split(":");
  62. key = key_value[0];
  63. value = key_value[1];
  64. hashMap.put(key,value);
  65. }
  66. }
  67. }catch (Exception e){
  68. logger.error("数据格式错误,初始化失败。"+search_replace,e.getCause());
  69. }
  70.  
  71. }
  72. public void close() {
  73.  
  74. }
  75.  
  76. /**
  77. * The actual per-event processing logic
  78. * @param event
  79. * @return
  80. */
  81. public Event intercept(Event event) {
  82. try{
  83. String origBody = new String(event.getBody());
  84. matcher = compile.matcher(origBody);
  85. if(matcher.find()){
  86. group = matcher.group(1);  // capture group 1 is the type value, e.g. gift_record
  87. if(StringUtils.isNotBlank(group)){
  88. String newBody = origBody.replaceAll("\"type\":\""+group+"\"", "\"type\":\""+hashMap.get(group)+"\"");
  89. event.setBody(newBody.getBytes());
  90. }
  91. }
  92. }catch (Exception e){
  93. logger.error("拦截器处理失败!",e.getCause());
  94. }
  95. return event;
  96. }
  97.  
  98. public List<Event> intercept(List<Event> events) {
  99. for (Event event : events) {
  100. intercept(event);
  101. }
  102. return events;
  103. }
  104.  
  105. public static class Builder implements Interceptor.Builder {
  106. private static final String SEARCH_REPLACE_KEY = "searchReplace";
  107.  
  108. private String searchReplace;
  109.  
  110. public void configure(Context context) {
  111. searchReplace = context.getString(SEARCH_REPLACE_KEY);
  112. Preconditions.checkArgument(!StringUtils.isEmpty(searchReplace),
  113. "Must supply a valid search pattern " + SEARCH_REPLACE_KEY +
  114. " (may not be empty)");
  115. }
  116.  
  117. public Interceptor build() {
  118. Preconditions.checkNotNull(searchReplace,
  119. "Regular expression searchReplace required");
  120. return new MySearchAndReplaceInterceptor(searchReplace);
  121. }
  122.  
  123. }
  124. }
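  Before packaging it, you can sanity-check the interceptor locally without starting an agent. A minimal sketch (the test class name is made up; it assumes the class above and flume-ng-core are on the classpath):

  package zhouls.bigdata.flumeDemo;

  import org.apache.flume.Context;
  import org.apache.flume.Event;
  import org.apache.flume.event.EventBuilder;
  import org.apache.flume.interceptor.Interceptor;

  // Configure the Builder the same way flume-conf.properties would,
  // build the interceptor, and run a single event through it.
  public class MySearchAndReplaceInterceptorTest {
      public static void main(String[] args) {
          Context context = new Context();
          context.put("searchReplace", "gift_record:giftRecord,video_info:videoInfo,user_info:userInfo");

          Interceptor.Builder builder = new MySearchAndReplaceInterceptor.Builder();
          builder.configure(context);
          Interceptor interceptor = builder.build();
          interceptor.initialize();

          Event event = EventBuilder.withBody(
                  "{\"send_id\":\"\",\"gold\":\"10\",\"type\":\"gift_record\"}".getBytes());
          interceptor.intercept(event);

          System.out.println(new String(event.getBody()));  // expect ... "type":"giftRecord"
          interceptor.close();
      }
  }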

  

  Then export the MySearchAndReplaceInterceptor class as a jar.

  Alternatively, you can build the jar with Maven.

  Upload the jar to Flume 1.7.0's lib directory:

  1. [hadoop@master lib]$ rz
  2.  
  3. [hadoop@master lib]$ ls
  4. apache-log4j-extras-1.1.jar flume-file-channel-1.7..jar flume-taildir-source-1.7..jar kite-data-core-1.0..jar parquet-hive-bundle-1.4..jar
  5. async-1.4..jar flume-hdfs-sink-1.7..jar flume-thrift-source-1.7..jar kite-data-hbase-1.0..jar parquet-jackson-1.4..jar
  6. asynchbase-1.7..jar flume-hive-sink-1.7..jar flume-tools-1.7..jar kite-data-hive-1.0..jar protobuf-java-2.5..jar
  7. avro-1.7..jar flume-irc-sink-1.7..jar flume-twitter-source-1.7..jar kite-hadoop-compatibility-1.0..jar scala-library-2.10..jar
  8. avro-ipc-1.7..jar flume-jdbc-channel-1.7..jar gson-2.2..jar libthrift-0.9..jar serializer-2.7..jar
  9. commons-cli-1.2.jar flume-jms-source-1.7..jar guava-11.0..jar log4j-1.2..jar servlet-api-2.5-.jar
  10. commons-codec-1.8.jar flume-kafka-channel-1.7..jar httpclient-4.2..jar lz4-1.2..jar slf4j-api-1.6..jar
  11. commons-collections-3.2..jar flume-kafka-source-1.7..jar httpcore-4.1..jar mapdb-0.9..jar slf4j-log4j12-1.6..jar
  12. commons-compress-1.4..jar flume-ng-auth-1.7..jar irclib-1.10.jar metrics-core-2.2..jar snappy-java-1.1..jar
  13. commons-dbcp-1.4.jar flume-ng-configuration-1.7..jar jackson-annotations-2.3..jar mina-core-2.0..jar twitter4j-core-3.0..jar
  14. commons-io-2.1.jar flume-ng-core-1.7..jar jackson-core-2.3..jar MySearchAndReplaceInterceptor.jar twitter4j-media-support-3.0..jar
  15. commons-jexl-2.1..jar flume-ng-elasticsearch-sink-1.7..jar jackson-core-asl-1.9..jar netty-3.9..Final.jar twitter4j-stream-3.0..jar
  16. commons-lang-2.5.jar flume-ng-embedded-agent-1.7..jar jackson-databind-2.3..jar opencsv-2.3.jar velocity-1.7.jar
  17. commons-logging-1.1..jar flume-ng-hbase-sink-1.7..jar jackson-mapper-asl-1.9..jar paranamer-2.3.jar xalan-2.7..jar
  18. commons-pool-1.5..jar flume-ng-kafka-sink-1.7..jar jetty-6.1..jar parquet-avro-1.4..jar xercesImpl-2.9..jar
  19. curator-client-2.6..jar flume-ng-log4jappender-1.7..jar jetty-util-6.1..jar parquet-column-1.4..jar xml-apis-1.3..jar
  20. curator-framework-2.6..jar flume-ng-morphline-solr-sink-1.7..jar joda-time-2.1.jar parquet-common-1.4..jar xz-1.0.jar
  21. curator-recipes-2.6..jar flume-ng-node-1.7..jar jopt-simple-3.2.jar parquet-encoding-1.4..jar zkclient-0.7.jar
  22. derby-10.11.1.1.jar flume-ng-sdk-1.7..jar jsr305-1.3..jar parquet-format-2.0..jar
  23. flume-avro-source-1.7..jar flume-scribe-source-1.7..jar kafka_2.-0.9.0.1.jar parquet-generator-1.4..jar
  24. flume-dataset-sink-1.7..jar flume-spillable-memory-channel-1.7..jar kafka-clients-0.9.0.1.jar parquet-hadoop-1.4..jar
  25. [hadoop@master lib]$ pwd
  25. /home/hadoop/app/flume-1.7.0/lib
  27. [hadoop@master lib]$

  1. drwxr-xr-x hadoop hadoop Apr : conf
  2. drwxr-xr-x hadoop hadoop Jul : conf_HostInterceptor
  3. drwxr-xr-x hadoop hadoop Jul : conf_RegexExtractorInterceptor
  4. drwxr-xr-x hadoop hadoop Jul : conf_SearchandReplaceInterceptor
  5. -rw-r--r-- hadoop hadoop Sep DEVNOTES
  6. -rw-r--r-- hadoop hadoop Sep doap_Flume.rdf
  7. drwxr-xr-x hadoop hadoop Oct docs
  8. drwxrwxr-x hadoop hadoop Jul : lib
  9. -rw-r--r-- hadoop hadoop Oct LICENSE
  10. -rw-r--r-- hadoop hadoop Sep NOTICE
  11. -rw-r--r-- hadoop hadoop Sep README.md
  12. -rw-r--r-- hadoop hadoop Oct RELEASE-NOTES
  13. drwxrwxr-x hadoop hadoop Apr : tools
  14. [hadoop@master flume-1.7.0]$ cp -r conf conf_MySearchAndReplaceInterceptor
  15. [hadoop@master flume-1.7.0]$ ll
  16. total
  17. drwxr-xr-x hadoop hadoop Apr : bin
  18. -rw-r--r-- hadoop hadoop Oct CHANGELOG
  19. drwxr-xr-x hadoop hadoop Apr : conf
  20. drwxr-xr-x hadoop hadoop Jul : conf_HostInterceptor
  21. drwxr-xr-x hadoop hadoop Jul : conf_MySearchAndReplaceInterceptor
  22. drwxr-xr-x hadoop hadoop Jul : conf_RegexExtractorInterceptor
  23. drwxr-xr-x hadoop hadoop Jul : conf_SearchandReplaceInterceptor
  24. -rw-r--r-- hadoop hadoop Sep DEVNOTES
  25. -rw-r--r-- hadoop hadoop Sep doap_Flume.rdf
  26. drwxr-xr-x hadoop hadoop Oct docs
  27. drwxrwxr-x hadoop hadoop Jul : lib
  28. -rw-r--r-- hadoop hadoop Oct LICENSE
  29. -rw-r--r-- hadoop hadoop Sep NOTICE
  30. -rw-r--r-- hadoop hadoop Sep README.md
  31. -rw-r--r-- hadoop hadoop Oct RELEASE-NOTES
  32. drwxrwxr-x hadoop hadoop Apr : tools
  33. [hadoop@master flume-1.7.0]$

  Modify log4j.properties so the logs are easy to manage and inspect:

  1. [hadoop@master conf_MySearchAndReplaceInterceptor]$ pwd
  2. /home/hadoop/app/flume-1.7.0/conf_MySearchAndReplaceInterceptor
  3. [hadoop@master conf_MySearchAndReplaceInterceptor]$ ll
  4. total
  5. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties.template
  6. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  7. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  8. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  9. [hadoop@master conf_MySearchAndReplaceInterceptor]$ mv flume-conf.properties.template flume-conf.properties
  10. [hadoop@master conf_MySearchAndReplaceInterceptor]$ vim log4j.properties

  1. #flume.root.logger=DEBUG,console
  2. flume.root.logger=INFO,LOGFILE
  3. flume.log.dir=./logs
  4. flume.log.file=flume_MySearchAndReplaceInterceptor.log

  1. [hadoop@master conf_MySearchAndReplaceInterceptor]$ ll
  2. total
  3. -rw-r--r-- hadoop hadoop Jul : flume-conf.properties
  4. -rw-r--r-- hadoop hadoop Jul : flume-env.ps1.template
  5. -rw-r--r-- hadoop hadoop Jul : flume-env.sh.template
  6. -rw-r--r-- hadoop hadoop Jul : log4j.properties
  7. [hadoop@master conf_MySearchAndReplaceInterceptor]$ vim flume-conf.properties

  Then modify the Flume configuration file as follows.

  Note: do not write it with the searchReplace value wrapped in quotes.

  Unless your program actually needs the quote characters (""), do not add quotes; this interceptor does not need them, so adding quotes is wrong.
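  To see why the quotes matter, here is a tiny sketch mirroring the split logic in the interceptor above: with surrounding quotes the first key keeps a stray quote character and never matches gift_record in the map.

  public class QuotedValueSketch {
      public static void main(String[] args) {
          // Value as it would arrive with quotes in the properties file
          String withQuotes = "\"gift_record:giftRecord,video_info:videoInfo,user_info:userInfo\"";
          System.out.println(withQuotes.split(",")[0].split(":")[0]);   // "gift_record  <-- stray quote

          // Value without quotes: the key matches what the regex extracts
          String noQuotes = "gift_record:giftRecord,video_info:videoInfo,user_info:userInfo";
          System.out.println(noQuotes.split(",")[0].split(":")[0]);     // gift_record
      }
  }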

  1. # Name of the source
  2. agent1.sources = fileSource
  3. # Name of the channel; naming it after its type is recommended
  4. agent1.channels = memoryChannel
  5. # Name of the sink; naming it after its target is recommended
  6. agent1.sinks = hdfsSink
  7.  
  8. # Channel used by the source
  9. agent1.sources.fileSource.channels = memoryChannel
  10. # Channel used by the sink; note that the property here is "channel"
  11. agent1.sinks.hdfsSink.channel = memoryChannel
  12.  
  13. agent1.sources.fileSource.type = exec
  14. agent1.sources.fileSource.command = tail -F /usr/local/log/server.log
  15.  
  16. #------- memoryChannel configuration -------------------------
  17. # channel type
  18.  
  19. agent1.channels.memoryChannel.type = memory
  20. agent1.channels.memoryChannel.capacity =
  21. agent1.channels.memoryChannel.transactionCapacity =
  22. agent1.channels.memoryChannel.byteCapacityBufferPercentage =
  23. agent1.channels.memoryChannel.byteCapacity =
  24. #--------- Interceptor configuration ------------------
  25. # Define the interceptors
  26. agent1.sources.fileSource.interceptors = i1 i2
  27. # Set the interceptor type (the class name of the Builder inner class)
  28. agent1.sources.fileSource.interceptors.i1.type = zhouls.bigdata.flumeDemo.MySearchAndReplaceInterceptor$Builder
  29. agent1.sources.fileSource.interceptors.i1.searchReplace = gift_record:giftRecord,video_info:videoInfo,user_info:userInfo
  30. # Set the interceptor type
  31. agent1.sources.fileSource.interceptors.i2.type = regex_extractor
  32. # Regular expression matching the data; this adds log_type="matched value" to the event header
  33. agent1.sources.fileSource.interceptors.i2.regex = "type":"(\\w+)"
  34. agent1.sources.fileSource.interceptors.i2.serializers = s1
  35. agent1.sources.fileSource.interceptors.i2.serializers.s1.name = log_type
  36.  
  37.  
  38. #--------- hdfsSink configuration ------------------
  39. agent1.sinks.hdfsSink.type = hdfs
  40. # Note: output goes into the date/type sub-directories below
  41. agent1.sinks.hdfsSink.hdfs.path = hdfs://master:9000/data/types/%Y%m%d/%{log_type}
  42. agent1.sinks.hdfsSink.hdfs.writeFormat = Text
  43. agent1.sinks.hdfsSink.hdfs.fileType = DataStream
  44. agent1.sinks.hdfsSink.hdfs.callTimeout =
  45. agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
  46.  
  47. # Roll the temporary file into a target file when it reaches 52428800 bytes
  48. agent1.sinks.hdfsSink.hdfs.rollSize = 52428800
  49. # Roll the temporary file into a target file when this many events have been written
  50. agent1.sinks.hdfsSink.hdfs.rollCount =
  51. # Roll the temporary file into a target file every N seconds
  52. agent1.sinks.hdfsSink.hdfs.rollInterval =
  53.  
  54. # File prefix and suffix
  55. agent1.sinks.hdfsSink.hdfs.filePrefix=run
  56. agent1.sinks.hdfsSink.hdfs.fileSuffix=.data

  The key part added is the interceptor configuration:

  1. #--------- Interceptor configuration ------------------
  2. # Define the interceptors
  3. agent1.sources.fileSource.interceptors = i1 i2
  4. # Set the interceptor type (the class name of the Builder inner class)
  5. agent1.sources.fileSource.interceptors.i1.type = zhouls.bigdata.flumeDemo.MySearchAndReplaceInterceptor$Builder
  6. agent1.sources.fileSource.interceptors.i1.searchReplace = gift_record:giftRecord,video_info:videoInfo,user_info:userInfo
  7.  
  8. # Set the interceptor type
  9. agent1.sources.fileSource.interceptors.i2.type = regex_extractor
  10. # Regular expression matching the data; this adds log_type="matched value" to the event header
  11. agent1.sources.fileSource.interceptors.i2.regex = "type":"(\\w+)"
  12. agent1.sources.fileSource.interceptors.i2.serializers = s1
  13. agent1.sources.fileSource.interceptors.i2.serializers.s1.name = log_type

  In other words, gift_record is replaced with giftRecord,

         video_info with videoInfo,

         and user_info with userInfo.

  Then just start the agent:

  1. [hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_MySearchAndReplaceInterceptor/ --conf-file conf_MySearchAndReplaceInterceptor/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console

  Here I ran into this error:

  (lifecycleSupervisor--) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java)] Component type: SOURCE, name: fileSource started
  (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java)] Serializer = TEXT, UseRawLocalFileSystem = false
  (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java)] Creating hdfs://master:9000/data/types/20170729//run.1501294672792.data.tmp
  (hdfs-hdfsSink-call-runner-) [WARN - org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java)] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  (pool--thread-) [ERROR - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java)] Failed while running command: tail -F /usr/local/log/server.log
  org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
      at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java)
      at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java)
      at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java)
      at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java)
      at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
      at java.util.concurrent.FutureTask.run(FutureTask.java)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)
      at java.lang.Thread.run(Thread.java)
  (timedFlushExecService21-) [ERROR - org.apache.flume.source.ExecSource$ExecRunnable$.run(ExecSource.java)] Exception occured when processing event batch
  org.apache.flume.ChannelException: java.lang.InterruptedException
      at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java)
      at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java)
      at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java)
      at org.apache.flume.source.ExecSource$ExecRunnable.access$(ExecSource.java)
      at org.apache.flume.source.ExecSource$ExecRunnable$.run(ExecSource.java)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
      at java.util.concurrent.FutureTask.runAndReset(FutureTask.java)
      at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$(ScheduledThreadPoolExecutor.java)
      at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java)

  See this post for the fix:

Flume startup error org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight (solution, with screenshots)

  Along the way I also hit this error; see:

Flume startup error Caused by: java.lang.InterruptedException: Timed out before HDFS call was made. Your hdfs.callTimeout might be set too low or HDFS calls are taking too long. (solution, with screenshots)

  And I also hit this one; see:

Flume startup error [ERROR - org.apache.flume.sink.hdfs. Hit max consecutive under-replication rotations (30); will not continue rolling files under this path due to under-replication (solution, with screenshots)

  Next, run the log producer script to generate some test data:

  [root@master log]# ll
  total
  -rwxr-xr-x root root Jul : producerLog.sh
  -rw-r--r-- root root Jul : server.log
  [root@master log]# ./producerLog.sh

  Then check the output in HDFS.

 As for when the final target files appear: as far as I can tell, the temporary .tmp file has to reach the configured roll threshold before it is rolled over into a target file in the output directory, so it is simply a matter of letting it run and waiting.

  Some material also suggests a custom Flume interceptor for reading multi-line log entries, but adding that made no difference here.

  


