Integrating Flume with Kafka
 
Flume: a highly available, highly reliable, distributed system for collecting, aggregating, and moving large volumes of log data.
Kafka: a distributed streaming platform.
 
Goal: use Flume to collect application logs and forward them to Kafka.
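The pipeline built below is: application log file → Flume exec source (tail -F) → memory channel → Kafka sink → Kafka topic → console consumer.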
 
Part 1: Install and Deploy Kafka

Download

Download Kafka 1.0.0 (the latest release, and the current stable version, at the time of writing) from the Apache Kafka downloads page, and verify the download against the published checksums and KEYS. Kafka is built for multiple versions of Scala; this only matters if you are using Scala and want a build for the same Scala version you use. Otherwise any build should work (2.11 is recommended).
 
1. Extract: tar zxvf kafka_2.11-1.0.0.tgz
 
2. Move it into place: mv kafka_2.11-1.0.0 /usr/local/kafka2.11
 
3. Start ZooKeeper .... (a bundled option is sketched below)
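If you do not have a standalone ZooKeeper installation, the Kafka distribution bundles a convenience script and a default config that are sufficient for a single-node test:
#nohup bin/zookeeper-server-start.sh config/zookeeper.properties &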
 
4. Start Kafka:
#nohup bin/kafka-server-start.sh config/server.properties &
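To confirm the broker came up, check its log (by default Kafka writes it to logs/server.log under the installation directory; with nohup, console output also lands in nohup.out):
#tail logs/server.log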
 

5. Create a topic:

#bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --topic test
Created topic "test".
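You can also check the partition and replica assignment of the new topic:
#bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test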

6. List the topics:

# bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test

7. Test producing messages:

#bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Each line typed into the console producer becomes one message; type, for example: my test
 
8. Test consuming the messages:
#bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
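If everything is wired up correctly, the consumer should print the my test message sent in step 7.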
 
 
Part 2: Install and Deploy Flume
 
1. Download Flume:
 

Apache Flume is distributed under the Apache License, version 2.0. Download the binary release, apache-flume-1.8.0-bin.tar.gz, from an Apache mirror, and verify the integrity of the downloaded file against the checksums and PGP signature (.md5 / .sha1 / .asc) published on the main distribution server.
 
 
2. Extract: tar zxvf apache-flume-1.8.0-bin.tar.gz
 
3. Move it into place: mv apache-flume-1.8.0-bin /usr/local/flume1.8
 

4. Prerequisites:

Install Java, then set the Java and Flume environment variables by adding the following to `/etc/profile`:
 
export JAVA_HOME=/usr/java/jdk1.8.0_65
export FLUME_HOME=/usr/local/flume1.8
export PATH=$PATH:$JAVA_HOME/bin:$FLUME_HOME/bin
 

Run: source /etc/profile to apply the variables.
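A quick way to confirm the variables took effect is to ask Flume for its version:
# flume-ng version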

5. Create the log file to be collected:
/tmp/logs/kafka.log
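The /tmp/logs directory does not necessarily exist yet, so create it and an empty log file first:
# mkdir -p /tmp/logs
# touch /tmp/logs/kafka.log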
6. Configuration
Copy the configuration templates (the agent config is saved as conf/kafka.properties to match the start command in step 7; note that Flume 1.8 ships flume-env.sh.template, not a flume-env.properties.template):
# cp conf/flume-conf.properties.template conf/kafka.properties
# cp conf/flume-env.sh.template conf/flume-env.sh
Edit conf/kafka.properties as follows:
agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type=exec
# where to collect the log from
agent.sources.s1.command=tail -F /tmp/logs/kafka.log
agent.sources.s1.channels=c1
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100

agent.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address
agent.sinks.k1.brokerList=localhost:9092
# Kafka topic
agent.sinks.k1.topic=test
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder

agent.sinks.k1.channel=c1
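A note on the sink properties: brokerList, topic, and serializer.class are the pre-1.7 KafkaSink names. Flume 1.8 should still accept brokerList and topic for backward compatibility (logging deprecation warnings), while serializer.class dates from the Kafka 0.8 producer API and should have no effect here. A sketch of the same sink section using the property names documented for Flume 1.8:

agent.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
# 1.8-style property names
agent.sinks.k1.kafka.bootstrap.servers=localhost:9092
agent.sinks.k1.kafka.topic=test
agent.sinks.k1.channel=c1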

Verification

7. Start the Flume agent:

# bin/flume-ng agent --conf ./conf/ -f conf/kafka.properties -Dflume.root.logger=DEBUG,console -n agent
Runtime logs are written to the logs directory; the -Dflume.root.logger=DEBUG,console option used above keeps the agent in the foreground and prints logs to the console, so you can follow the run in detail and diagnose the cause when the service misbehaves (INFO is a quieter alternative).

8. Create a test log generator script, log_producer_test.sh:

#!/bin/bash
# append 1001 numbered test lines to the file Flume is tailing
for ((i=0; i<=1000; i++)); do
  echo "kafka_flume_test-$i" >> /tmp/logs/kafka.log
done
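Make the script executable before running it:
# chmod +x log_producer_test.sh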
 
9. Generate the logs:
./log_producer_test.sh

Watch the console consumer from Part 1, step 8: the kafka_flume_test-* lines should stream in as Flume tails the file and the Kafka sink publishes them to the test topic.
