##########################################################################################################
##########################################################################################################

Flume installation: unpack the tarball, then edit the flume-env.sh configuration file and set JAVA_HOME.
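
For example (a sketch; the JDK path below is an assumption, point it at your own installation):

[hadoop@db01 flume-1.5.0]$ vi conf/flume-env.sh
export JAVA_HOME=/opt/cdh-5.3.6/jdk1.7.0_67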

Copy the Hadoop HDFS jars into Flume's lib directory (otherwise the HDFS sink cannot write data to HDFS).
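
A sketch of the copy (jar names and versions are assumptions; match them to your Hadoop build):

cp /opt/cdh-5.3.6/hadoop-2.5.0/share/hadoop/common/hadoop-common-2.5.0-cdh5.3.6.jar /opt/cdh-5.3.6/flume-1.5.0/lib/
cp /opt/cdh-5.3.6/hadoop-2.5.0/share/hadoop/hdfs/hadoop-hdfs-2.5.0-cdh5.3.6.jar /opt/cdh-5.3.6/flume-1.5.0/lib/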

Common flume-ng command-line options:

[hadoop@db01 flume-1.5.0]$ bin/flume-ng

commands:
  agent                     run a Flume agent

global options:
  --conf,-c <conf>          use configs in <conf> directory
  -Dproperty=value          sets a Java system property value

agent options:
  --name,-n <name>          the name of this agent (required)
  --conf-file,-f <file>     specify a config file (required if -z missing)

Examples:

bin/flume-ng agent --conf /opt/cdh-5.3.6/flume-1.5.0/conf --name agent-test --conf-file test.conf
bin/flume-ng agent -c /opt/cdh-5.3.6/flume-1.5.0/conf -n agent-test -f test.conf

********************************************************************************************************

First Flume example: netcat source → memory channel → logger sink

Define the configuration file /opt/cdh-5.3.6/flume-1.5.0/conf/a1.conf:

# The configuration file needs to define the sources,
# the channels and the sinks.

###################################
a1.sources = r1
a1.channels = c1
a1.sinks = k1

############define source#######################################
a1.sources.r1.type = netcat
a1.sources.r1.bind = db01
a1.sources.r1.port = 55555

#############define channel###################################
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

##########define sinks#########################
a1.sinks.k1.type = logger
a1.sinks.k1.maxBytesToLog = 1024

#######bind###############################
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Install telnet (used below to test the netcat source):

[root@db01 softwares]# rpm -ivh telnet-*
Preparing...                ########################################### [100%]
   1:telnet-server          ########################################### [ 50%]
   2:telnet                 ########################################### [100%]
[root@db01 softwares]#
[root@db01 softwares]#
[root@db01 softwares]# rpm -ivh xinetd-2.3.14-39.el6_4.x86_64.rpm
Preparing...                ########################################### [100%]
    package xinetd-2:2.3.14-39.el6_4.x86_64 is already installed
[root@db01 softwares]#
[root@db01 softwares]#
[root@db01 softwares]#
[root@db01 softwares]# /etc/rc.d/init.d/xinetd restart
Stopping xinetd:                                           [  OK  ]
Starting xinetd:                                           [  OK  ]

Start the Flume agent:

bin/flume-ng agent \
--conf /opt/cdh-5.3.6/flume-1.5.0/conf \
--name a1 \
--conf-file /opt/cdh-5.3.6/flume-1.5.0/conf/a1.conf \
-Dflume.root.logger=DEBUG,console

Test over telnet:

[root@db01 ~]# telnet db01 55555
Trying 192.168.100.231...
Connected to db01.
Escape character is '^]'.
hello flume
OK
chavin king   
OK
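
If telnet is unavailable, nc drives the same test (a sketch, assuming nc is installed; the OK lines are acks from the netcat source):

[root@db01 ~]# nc db01 55555
hello flume
OK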

------------ Log output ------------

2017-03-23 16:48:31,285 (netcat-handler-0) [DEBUG - org.apache.flume.source.NetcatSource$NetcatSocketHandler.run(NetcatSource.java:318)] Chars read = 13
2017-03-23 16:48:31,290 (netcat-handler-0) [DEBUG - org.apache.flume.source.NetcatSource$NetcatSocketHandler.run(NetcatSource.java:322)] Events processed = 1
2017-03-23 16:48:33,234 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 68 65 6C 6C 6F 20 66 6C 75 6D 65 0D             hello flume. }
2017-03-23 16:48:39,224 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:126)] Checking file:/opt/cdh-5.3.6/flume-1.5.0/conf/a1.conf for changes
2017-03-23 16:48:47,031 (netcat-handler-0) [DEBUG - org.apache.flume.source.NetcatSource$NetcatSocketHandler.run(NetcatSource.java:318)] Chars read = 13
2017-03-23 16:48:47,032 (netcat-handler-0) [DEBUG - org.apache.flume.source.NetcatSource$NetcatSocketHandler.run(NetcatSource.java:322)] Events processed = 1
2017-03-23 16:48:48,235 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 63 68 61 76 69 6E 20 6B 69 6E 67 0D             chavin king. }
2017-03-23 16:49:09,225 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:126)] Checking file:/opt/cdh-5.3.6/flume-1.5.0/conf/a1.conf for changes

***************************************************************************

Second Flume example: collecting the Hive log into HDFS (exec source → memory channel → HDFS sink)

Create the target directory on HDFS:

[hadoop@db01 hadoop-2.5.0]$ bin/hdfs dfs -mkdir -p /user/hadoop/flume/hive-logs/

The a2.conf file:

# The configuration file needs to define the sources,
# the channels and the sinks.

###################################
a2.sources = r2
a2.channels = c2
a2.sinks = k2

############define source#######################################
a2.sources.r2.type = exec
a2.sources.r2.command = tail -f /opt/cdh-5.3.6/hive-0.13.1/data/logs/hive.log
a2.sources.r2.shell = /bin/bash -c
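# Note (an optional tweak, not in the original config): tail -F instead of
# tail -f would reopen the file after Hive rolls hive.log, so ingestion
# survives daily log rotation.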

#############define channel###################################
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

##########define sinks#########################
a2.sinks.k2.type = hdfs

#a2.sinks.k2.hdfs.path = hdfs://db02:8020/user/hadoop/flume/hive-logs/
#For Hadoop HA, copy the Hadoop client configs into Flume's conf directory
#so the sink can resolve the nameservice (ns1):
#cp /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/core-site.xml /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/hdfs-site.xml /opt/cdh-5.3.6/flume-1.5.0/conf/
a2.sinks.k2.hdfs.path = hdfs://ns1/user/hadoop/flume/hive-logs/

a2.sinks.k2.hdfs.fileType = DataStream
a2.sinks.k2.hdfs.writeFormat = Text
a2.sinks.k2.hdfs.batchSize = 10
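# Optional roll settings (assumptions, not part of this example): by default
# the HDFS sink rolls a new file every 30 s, 1024 bytes, or 10 events,
# which produces many small files on HDFS.
#a2.sinks.k2.hdfs.rollInterval = 60
#a2.sinks.k2.hdfs.rollSize = 134217728
#a2.sinks.k2.hdfs.rollCount = 0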

#######bind###############################
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2

Test:
bin/flume-ng agent \
--conf /opt/cdh-5.3.6/flume-1.5.0/conf \
--name a2 \
--conf-file /opt/cdh-5.3.6/flume-1.5.0/conf/a2.conf \
-Dflume.root.logger=DEBUG,console
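
Once the agent is running and Hive appends to its log, the sink's output can be checked with:

[hadoop@db01 hadoop-2.5.0]$ bin/hdfs dfs -ls /user/hadoop/flume/hive-logs/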

******************************************************************************
Third Flume example: spooling directory source → file channel → HDFS sink

Edit the a3.conf file:

# The configuration file needs to define the sources,
# the channels and the sinks.

######define agent#############################
a3.sources = r3
a3.channels = c3
a3.sinks = k3

############define source#######################################
a3.sources.r3.type = spooldir
a3.sources.r3.spoolDir = /opt/cdh-5.3.6/flume-1.5.0/spoolinglogs
a3.sources.r3.ignorePattern = ^(.)*\\.log$
a3.sources.r3.fileSuffix = .delete
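# Spooldir semantics: files must be complete (no further writes) once they
# land in spoolDir; files matching ignorePattern (*.log here) are skipped,
# and fully ingested files are renamed with the .delete suffix instead of
# the default .COMPLETED, so they can be cleaned up later.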

#############define channel###################################
a3.channels.c3.type = file
a3.channels.c3.checkpointDir = /opt/cdh-5.3.6/flume-1.5.0/filechannel/checkpoint
a3.channels.c3.dataDirs = /opt/cdh-5.3.6/flume-1.5.0/filechannel/data

##########define sinks#########################
a3.sinks.k3.type = hdfs

#a3.sinks.k3.hdfs.path = hdfs://db02:8020/user/hadoop/flume/hive-logs/
a3.sinks.k3.hdfs.path = hdfs://ns1/user/hadoop/flume/splogs/%Y%m%d

a3.sinks.k3.hdfs.fileType = DataStream
a3.sinks.k3.hdfs.writeFormat = Text
a3.sinks.k3.hdfs.batchSize = 10
a3.sinks.k3.hdfs.useLocalTimeStamp = true
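# useLocalTimeStamp resolves the %Y%m%d escape in hdfs.path from the agent's
# clock; without it each event would need a "timestamp" header.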
#######bind###############################
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3

Test:
bin/flume-ng agent \
--conf /opt/cdh-5.3.6/flume-1.5.0/conf \
--name a3 \
--conf-file /opt/cdh-5.3.6/flume-1.5.0/conf/a3.conf \
-Dflume.root.logger=DEBUG,console
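
A quick end-to-end check (the file name is an assumption; the date directory follows the agent's local date):

cp /tmp/test.txt /opt/cdh-5.3.6/flume-1.5.0/spoolinglogs/
bin/hdfs dfs -ls /user/hadoop/flume/splogs/20170323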
