0. Some of the steps below I have already collected into scripts; some of the commands are excerpted from those scripts.

1. Start Hadoop and YARN

$HADOOP_HOME/sbin/start-dfs.sh;$HADOOP_HOME/sbin/start-yarn.sh
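A quick sanity check (not part of the original steps): jps on each node should show the HDFS and YARN daemons.

# on the NameNode / ResourceManager host
jps   # expect NameNode, SecondaryNameNode, ResourceManager (plus DataNode/NodeManager if co-located)
# on the worker nodes
jps   # expect DataNode, NodeManager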

2. Start ZooKeeper

# the hostnames are mini1 to mini3, so we can simply loop over them
echo "start zkserver"
for i in 1 2 3
do
    ssh mini$i "source /etc/profile;$ZK_HOME/bin/zkServer.sh start"
done
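To confirm each node actually joined the ensemble, the same loop can query the status (one node should report leader, the others follower); this check is an addition, not part of the original script:

for i in 1 2 3
do
    ssh mini$i "source /etc/profile;$ZK_HOME/bin/zkServer.sh status"
done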

3. Start mysqld

service mysqld start

4. Start Kafka; it must be started on every node of the cluster

bin/kafka-server-start.sh config/server.properties
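Since every broker has to be started, the same ssh-loop pattern works. The Kafka install path and the topic settings below are assumptions, not taken from the original; the orderMq topic (used by the Flume sink and the KafkaSpout later) has to exist before data can flow.

# /home/hadoop/apps/kafka is an assumed install path; adjust to your environment
for i in 1 2 3
do
    ssh mini$i "source /etc/profile;cd /home/hadoop/apps/kafka && nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &"
done

# create the orderMq topic if it does not exist yet (partition/replication counts are assumptions)
bin/kafka-topics.sh --create --zookeeper mini1:2181 --replication-factor 3 --partitions 3 --topic orderMq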

5. Start Storm

Start the nimbus service on the machine specified by nimbus.host

nohup ./storm nimbus &

Start the UI service on the nimbus.host machine

nohup ./storm ui &

Start the supervisor service on the other machines

nohup ./storm supervisor &
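If you prefer to do this from one place, a small script in the same style as the ZooKeeper loop works; the Storm bin path is an assumption, with nimbus/ui on mini1 and supervisors on mini2 and mini3:

# /home/hadoop/apps/storm/bin is an assumed path; adjust to your environment
ssh mini1 "source /etc/profile;cd /home/hadoop/apps/storm/bin && nohup ./storm nimbus > nimbus.log 2>&1 & nohup ./storm ui > ui.log 2>&1 &"
for i in 2 3
do
    ssh mini$i "source /etc/profile;cd /home/hadoop/apps/storm/bin && nohup ./storm supervisor > supervisor.log 2>&1 &"
done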

6. Start Flume

#exec.conf

a1.channels = r1
a1.sources = c1
a1.sinks = k1

# a spooldir source also works if real-time delivery is not required:
#a1.sources.c1.type = spooldir
#a1.sources.c1.channels = r1
#a1.sources.c1.spoolDir = /opt/flumeSpool/
#a1.sources.c1.fileHeader = false

a1.sources.c1.type = exec
a1.sources.c1.command = tail -F /home/hadoop/kafkastudy/data/flume_sources/click_log/.log
a1.sources.c1.channels = r1

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = orderMq
a1.sinks.k1.brokerList = mini1:,mini2:,mini3:
a1.sinks.k1.requiredAcks =
a1.sinks.k1.batchSize =
a1.sinks.k1.channel = r1

a1.channels.r1.type = memory
a1.channels.r1.capacity =
a1.channels.r1.transactionCapacity =
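The numeric values above (broker ports, acks, batch size, channel capacities) did not survive; a filled-in sketch, assuming the default Kafka broker port 9092 and typical small capacities rather than the author's original numbers:

# only the lines whose values were lost are shown; merge them into exec.conf above
# 9092 is the Kafka default broker port; the remaining numbers are typical values (assumptions)
a1.sinks.k1.brokerList = mini1:9092,mini2:9092,mini3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.channels.r1.capacity = 10000
a1.channels.r1.transactionCapacity = 100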

bin/flume-ng agent --conf conf --conf-file conf/myconf/exec.conf --name a1 -Dflume.root.logger=INFO,console

7. Start the data-generation program

#!/bin/bash
# the loop bound was lost in the original; 100000 here is just a placeholder
for ((i=0; i<100000; i++))
do
    echo "msg-$i" >> /home/hadoop/kafkastudy/data/flume_sources/click_log/.log
done

8. Observe the topology in the Storm UI at mini1:8080

Summary

a. The link between the data generator and Flume: exec.conf configures Flume to tail a file, and that file is the one the data-generation program writes to, so it acts as the data source.

b. The link between Flume and Kafka is also configured in exec.conf (the KafkaSink writes to the orderMq topic). You can verify that messages arrive with Kafka's console consumer:

bin/kafka-console-consumer.sh --zookeeper mini1:2181 --topic orderMq

c. The link between Kafka and Storm exists because we run a program of our own on Storm (kafka2tostorm), which sets up a KafkaSpout and also contains our own business logic.

d. The topology code is as follows:

package kafkaAndStorm2;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class KafkaAndStormTopologyMain {
    public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException, InterruptedException {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        // the spout reads the orderMq topic and keeps its offsets under /mykafka in ZooKeeper
        SpoutConfig config = new SpoutConfig(new ZkHosts("mini1:2181,mini2:2181,mini3:2181"),
                "orderMq",
                "/mykafka",
                "kafkaSpout");
        topologyBuilder.setSpout("kafkaSpout", new KafkaSpout(config));
        topologyBuilder.setBolt("mybolt1", new MyKafkaBolt2()).shuffleGrouping("kafkaSpout");

        Config conf = new Config();
        // conf.setDebug(true); // enable to print debug information

        if (args != null && args.length > 0) {
            // submitted to the cluster: the first argument is used as the topology name
            StormSubmitter.submitTopology(args[0], conf, topologyBuilder.createTopology());
        } else {
            // no arguments: run in a local cluster for testing
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology("storm2kafka", conf, topologyBuilder.createTopology());
        }
    }
}
package kafkaAndStorm2;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

import java.util.Map;

public class MyKafkaBolt2 extends BaseRichBolt {
    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    }

    @Override
    public void execute(Tuple input) {
        // the KafkaSpout emits the raw message bytes as the first tuple field
        byte[] value = (byte[]) input.getValue(0);
        String msg = new String(value);
        System.out.println(Thread.currentThread().getId() + " msg " + msg);
    }

    @Override
    public void cleanup() {
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}

Maven dependencies; these may need small adjustments based on the error messages you see.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>cn.itcast.learn</groupId>
<artifactId>kafka2Strom</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>0.9.5</version>
<scope>provided</scope>
<!--<scope>provided</scope>-->
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>0.9.5</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.clojure</groupId>
<artifactId>clojure</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.8.2</artifactId>
<version>0.8.1</version>
<exclusions>
<exclusion>
<artifactId>jmxtools</artifactId>
<groupId>com.sun.jdmk</groupId>
</exclusion>
<exclusion>
<artifactId>jmxri</artifactId>
<groupId>com.sun.jmx</groupId>
</exclusion>
<exclusion>
<artifactId>jms</artifactId>
<groupId>javax.jms</groupId>
</exclusion>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.7.3</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>kafkaAndStorm2.KafkaAndStormTopologyMain</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<!-- <source>1.7</source>
<target>1.7</target>-->
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
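To get the topology onto the cluster, package the jar-with-dependencies (the assembly plugin above is bound to the package phase) and submit it with storm jar. The jar name below follows from the artifactId and version in the pom; the topology name argument is just an example, as a sketch:

mvn clean package
# the first program argument becomes the topology name (see KafkaAndStormTopologyMain)
storm jar target/kafka2Strom-1.0-SNAPSHOT-jar-with-dependencies.jar kafkaAndStorm2.KafkaAndStormTopologyMain orderMqTopology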
