I: The Storm Design Model

1. Topology

  A topology is Storm's abstraction of a job: it decomposes a real-time data analysis task into stages and wires them together as a directed graph.

  Nodes: the computing components, Spouts and Bolts.

  Edges: the data flow; data moves from one component to the next, and every edge has a direction.

2. Tuple

  Storm wraps each record in a tuple, which is essentially an ordered sequence of key-value pairs.

  This gives components a uniform, convenient way to pull fields out of the data.
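
  As a minimal illustration (fragments only, using the same Storm 0.9.x backtype API as the code below): a component declares field names once, emits plain value lists, and downstream components read values back by name.

  // In declareOutputFields(): name the fields of every tuple this component emits
  declarer.declare(new Fields("sentence"));

  // When emitting: a Values object is just the ordered list of field values
  collector.emit(new Values("hadoop oozie storm hive"));

  // In a downstream bolt's execute(): fetch a value by its field name
  String sentence = tuple.getStringByField("sentence");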

3. Spout

  The data collector.

  How does an endless stream of log records get into a topology for processing? The Spout pulls records from the data source, does light preprocessing, wraps each record in a tuple, and emits it to the downstream bolts.

4. Bolt

  The data processor: a bolt receives tuples, performs the actual computation (here, splitting, counting, and printing), and may emit new tuples to the next bolt.

  

II: Developing the WordCount Example

1. Sketch the topology's node-and-edge diagram

  The flow is sentenceSpout -> splitBolt -> countBolt -> printBolt.

2. Project structure

  
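  A plausible layout, inferred from the pom and the sections below (the log4j file location is an assumption; the original does not name it):

  storm/
  ├── pom.xml
  └── src/main/
      ├── assembly/src.xml
      ├── resources/log4j.properties   (assumed path for the log config in section 5)
      └── java/com/jun/it/
          ├── SentenceSpout.java
          ├── SplitBolt.java
          ├── CountBolt.java
          ├── PrintBolt.java
          └── WordCountTopology.java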

3. Modify the pom file

  One thing to watch: on the cluster, Storm's own jars are already on the classpath, so storm-core must not be bundled into the submitted jar. Enable the provided scope before packaging (or toggle it with a Maven profile, as sketched after the pom below).

  

  <?xml version="1.0" encoding="UTF-8"?>
  <project xmlns="http://maven.apache.org/POM/4.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.cj.it</groupId>
    <artifactId>storm</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
      <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      <hbase.version>0.98.6-cdh5.3.6</hbase.version>
      <hdfs.version>2.5.0-cdh5.3.6</hdfs.version>
      <storm.version>0.9.6</storm.version>
    </properties>

    <repositories>
      <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
      </repository>
      <repository>
        <id>alimaven</id>
        <name>aliyun maven</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
      </repository>
    </repositories>

    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
        <scope>test</scope>
      </dependency>
      <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>${storm.version}</version>
        <!-- Comment out the line below when running in the IDE;
             uncomment it when packaging for the cluster -->
        <!--<scope>provided</scope>-->
      </dependency>
      <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-hbase</artifactId>
        <version>${storm.version}</version>
        <exclusions>
          <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
          </exclusion>
        </exclusions>
      </dependency>

      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hdfs.version}</version>
      </dependency>
      <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-client</artifactId>
        <version>${hbase.version}</version>
        <exclusions>
          <exclusion>
            <artifactId>slf4j-log4j12</artifactId>
            <groupId>org.slf4j</groupId>
          </exclusion>
        </exclusions>
      </dependency>
      <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.6</version>
        <exclusions>
          <exclusion>
            <artifactId>slf4j-log4j12</artifactId>
            <groupId>org.slf4j</groupId>
          </exclusion>
        </exclusions>
      </dependency>
      <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>${storm.version}</version>
        <exclusions>
          <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
      <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.1.1</version>
        <exclusions>
          <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
          </exclusion>
          <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
      <dependency>
        <groupId>org.mockito</groupId>
        <artifactId>mockito-all</artifactId>
        <version>1.9.5</version>
        <scope>test</scope>
      </dependency>
      <dependency>
        <groupId>cz.mallat.uasparser</groupId>
        <artifactId>uasparser</artifactId>
        <version>0.6.1</version>
      </dependency>
    </dependencies>

    <build>
      <plugins>
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.3</version>
          <configuration>
            <source>1.7</source>
            <target>1.7</target>
          </configuration>
        </plugin>
        <plugin>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.4</version>
          <configuration>
            <descriptors>
              <descriptor>src/main/assembly/src.xml</descriptor>
            </descriptors>
            <descriptorRefs>
              <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
          </configuration>
          <executions>
            <execution>
              <id>make-assembly</id> <!-- this is used for inheritance merges -->
              <phase>package</phase> <!-- bind to the packaging phase -->
              <goals>
                <goal>single</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

  </project>
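
  Hand-editing the provided scope before each build is easy to forget. A hedged alternative, not in the original post: drive the scope from a property and flip it with a profile (the profile id "cluster" is a hypothetical name), activated with mvn clean package -Pcluster.

  <!-- Sketch: the scope defaults to compile for IDE runs -->
  <properties>
    <storm.scope>compile</storm.scope>
  </properties>

  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>${storm.version}</version>
    <scope>${storm.scope}</scope>
  </dependency>

  <profiles>
    <profile>
      <id>cluster</id> <!-- mvn clean package -Pcluster flips the scope to provided -->
      <properties>
        <storm.scope>provided</storm.scope>
      </properties>
    </profile>
  </profiles>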

4. src.xml

  The assembly descriptor referenced from the pom at src/main/assembly/src.xml; it tells the maven-assembly-plugin how to lay out the fat jar.

  <?xml version="1.0" encoding="UTF-8"?>
  <assembly
      xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
    <id>jar-with-dependencies</id>
    <formats>
      <format>jar</format>
    </formats>
    <includeBaseDirectory>false</includeBaseDirectory>
    <dependencySets>
      <dependencySet>
        <unpack>false</unpack>
        <scope>runtime</scope>
      </dependencySet>
    </dependencySets>
    <fileSets>
      <fileSet>
        <directory>/lib</directory>
      </fileSet>
    </fileSets>
  </assembly>
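
  With the descriptor in place and the assembly plugin bound to the package phase, one command builds the deployable jar; it lands in target/ as storm-1.0-SNAPSHOT-jar-with-dependencies.jar, the file used in the cluster section below.

  mvn clean package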

5. Log configuration

  A minimal log4j setup that sends INFO and above to the console:

  log4j.rootLogger=info,console

  log4j.appender.console=org.apache.log4j.ConsoleAppender
  log4j.appender.console.layout=org.apache.log4j.SimpleLayout

  # Logger for this project's package
  log4j.logger.com.jun.it=INFO

6. SentenceSpout.java

  package com.jun.it;

  import backtype.storm.spout.SpoutOutputCollector;
  import backtype.storm.task.TopologyContext;
  import backtype.storm.topology.OutputFieldsDeclarer;
  import backtype.storm.topology.base.BaseRichSpout;
  import backtype.storm.tuple.Fields;
  import backtype.storm.tuple.Values;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  import java.util.Map;
  import java.util.Random;

  public class SentenceSpout extends BaseRichSpout {
      private static final Logger logger = LoggerFactory.getLogger(SentenceSpout.class);
      private static final Random RANDOM = new Random();
      private SpoutOutputCollector collector;
      // Canned sentences used as test data
      private static final String[] SENTENCES = {
              "hadoop oozie storm hive",
              "hadoop spark sqoop hbase",
              "error flume yarn mapreduce"
      };

      // Initialize the collector
      @Override
      public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
          this.collector = spoutOutputCollector;
      }

      // Declare the output field name (the tuple's key)
      @Override
      public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
          outputFieldsDeclarer.declare(new Fields("sentence"));
      }

      // Assemble and emit one tuple per call
      @Override
      public void nextTuple() {
          String sentence = SENTENCES[RANDOM.nextInt(SENTENCES.length)];
          if (sentence.contains("error")) {
              logger.error("bad record: " + sentence);
          } else {
              this.collector.emit(new Values(sentence));
          }
          try {
              Thread.sleep(1000);
          } catch (Exception e) {
              e.printStackTrace();
          }
      }

      public SentenceSpout() {
          super();
      }

      @Override
      public void close() {
      }

      @Override
      public void activate() {
          super.activate();
      }

      @Override
      public void deactivate() {
          super.deactivate();
      }

      @Override
      public void ack(Object msgId) {
          super.ack(msgId);
      }

      @Override
      public void fail(Object msgId) {
          super.fail(msgId);
      }
  }
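
  The spout above emits tuples without message IDs, so Storm does no tracking and ack()/fail() never receive anything useful. A hedged sketch of a reliable variant (the pending map and UUID message IDs are illustrative choices, not part of the original code):

  package com.jun.it;

  import backtype.storm.spout.SpoutOutputCollector;
  import backtype.storm.task.TopologyContext;
  import backtype.storm.topology.OutputFieldsDeclarer;
  import backtype.storm.topology.base.BaseRichSpout;
  import backtype.storm.tuple.Fields;
  import backtype.storm.tuple.Values;

  import java.util.Map;
  import java.util.Random;
  import java.util.UUID;
  import java.util.concurrent.ConcurrentHashMap;

  public class ReliableSentenceSpout extends BaseRichSpout {
      private static final String[] SENTENCES = {
              "hadoop oozie storm hive",
              "hadoop spark sqoop hbase"
      };
      private static final Random RANDOM = new Random();
      private SpoutOutputCollector collector;
      // Emitted but not yet acked, keyed by message ID
      private Map<String, String> pending;

      @Override
      public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
          this.collector = collector;
          this.pending = new ConcurrentHashMap<>();
      }

      @Override
      public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("sentence"));
      }

      @Override
      public void nextTuple() {
          String sentence = SENTENCES[RANDOM.nextInt(SENTENCES.length)];
          String msgId = UUID.randomUUID().toString();
          pending.put(msgId, sentence);
          // Passing a message ID makes Storm track the tuple tree
          this.collector.emit(new Values(sentence), msgId);
      }

      @Override
      public void ack(Object msgId) {
          pending.remove(msgId); // fully processed downstream; forget it
      }

      @Override
      public void fail(Object msgId) {
          String sentence = pending.get(msgId);
          if (sentence != null) {
              this.collector.emit(new Values(sentence), msgId); // replay the failed tuple
          }
      }
  }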

7. SplitBolt.java

  package com.jun.it;

  import backtype.storm.task.OutputCollector;
  import backtype.storm.task.TopologyContext;
  import backtype.storm.topology.IRichBolt;
  import backtype.storm.topology.OutputFieldsDeclarer;
  import backtype.storm.tuple.Fields;
  import backtype.storm.tuple.Tuple;
  import backtype.storm.tuple.Values;

  import java.util.Map;

  public class SplitBolt implements IRichBolt {
      private OutputCollector collector;

      @Override
      public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
          this.collector = outputCollector;
      }

      // Split each sentence into words and emit one tuple per word
      @Override
      public void execute(Tuple tuple) {
          String sentence = tuple.getStringByField("sentence");
          if (sentence != null && !"".equals(sentence)) {
              String[] words = sentence.split(" ");
              for (String word : words) {
                  this.collector.emit(new Values(word));
              }
          }
      }

      @Override
      public void cleanup() {
      }

      @Override
      public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
          outputFieldsDeclarer.declare(new Fields("word"));
      }

      @Override
      public Map<String, Object> getComponentConfiguration() {
          return null;
      }
  }
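
  The emits above are unanchored and this IRichBolt never calls ack, which is fine while the spout emits without message IDs. If you switch to a reliable spout like the sketch in section 6, execute would need to anchor and ack, roughly as follows (a fragment of the method only):

  @Override
  public void execute(Tuple tuple) {
      String sentence = tuple.getStringByField("sentence");
      if (sentence != null && !"".equals(sentence)) {
          for (String word : sentence.split(" ")) {
              // Anchoring on the input links each word into the input's tuple tree
              this.collector.emit(tuple, new Values(word));
          }
      }
      this.collector.ack(tuple); // report the input as fully processed
  }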

8. CountBolt.java

  package com.jun.it;

  import backtype.storm.task.OutputCollector;
  import backtype.storm.task.TopologyContext;
  import backtype.storm.topology.IRichBolt;
  import backtype.storm.topology.OutputFieldsDeclarer;
  import backtype.storm.tuple.Fields;
  import backtype.storm.tuple.Tuple;
  import backtype.storm.tuple.Values;

  import java.util.HashMap;
  import java.util.Map;

  public class CountBolt implements IRichBolt {
      // Task-local running counts; fieldsGrouping on "word" routes the same
      // word to the same task, so counting locally stays correct
      private Map<String, Integer> counts;
      private OutputCollector collector;

      @Override
      public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
          this.collector = outputCollector;
          counts = new HashMap<>();
      }

      // Increment the word's count and emit the updated (word, count) pair
      @Override
      public void execute(Tuple tuple) {
          String word = tuple.getStringByField("word");
          int count = 1;
          if (counts.containsKey(word)) {
              count = counts.get(word) + 1;
          }
          counts.put(word, count);
          this.collector.emit(new Values(word, count));
      }

      @Override
      public void cleanup() {
      }

      @Override
      public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
          outputFieldsDeclarer.declare(new Fields("word", "count"));
      }

      @Override
      public Map<String, Object> getComponentConfiguration() {
          return null;
      }
  }

9. PrintBolt.java

  package com.jun.it;

  import backtype.storm.task.OutputCollector;
  import backtype.storm.task.TopologyContext;
  import backtype.storm.topology.IRichBolt;
  import backtype.storm.topology.OutputFieldsDeclarer;
  import backtype.storm.tuple.Tuple;

  import java.util.Map;

  public class PrintBolt implements IRichBolt {
      @Override
      public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
      }

      // Terminal bolt: print each (word, count) pair to stdout
      @Override
      public void execute(Tuple tuple) {
          String word = tuple.getStringByField("word");
          int count = tuple.getIntegerByField("count");
          System.out.println("word:" + word + ", count:" + count);
      }

      @Override
      public void cleanup() {
      }

      // Emits nothing, so there are no output fields to declare
      @Override
      public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
      }

      @Override
      public Map<String, Object> getComponentConfiguration() {
          return null;
      }
  }

10. WordCountTopology.java

  package com.jun.it;

  import backtype.storm.Config;
  import backtype.storm.LocalCluster;
  import backtype.storm.StormSubmitter;
  import backtype.storm.generated.AlreadyAliveException;
  import backtype.storm.generated.InvalidTopologyException;
  import backtype.storm.topology.TopologyBuilder;
  import backtype.storm.tuple.Fields;

  public class WordCountTopology {
      private static final String SENTENCE_SPOUT = "sentenceSpout";
      private static final String SPLIT_BOLT = "splitBolt";
      private static final String COUNT_BOLT = "countBolt";
      private static final String PRINT_BOLT = "printBolt";

      public static void main(String[] args) {
          // Wire the graph: spout -> split -> count -> print
          TopologyBuilder topologyBuilder = new TopologyBuilder();
          topologyBuilder.setSpout(SENTENCE_SPOUT, new SentenceSpout());
          topologyBuilder.setBolt(SPLIT_BOLT, new SplitBolt()).shuffleGrouping(SENTENCE_SPOUT);
          topologyBuilder.setBolt(COUNT_BOLT, new CountBolt()).fieldsGrouping(SPLIT_BOLT, new Fields("word"));
          topologyBuilder.setBolt(PRINT_BOLT, new PrintBolt()).globalGrouping(COUNT_BOLT);
          Config config = new Config();
          if (args == null || args.length == 0) {
              // No arguments: run in an in-process local cluster
              LocalCluster localCluster = new LocalCluster();
              localCluster.submitTopology("wordcount", config, topologyBuilder.createTopology());
          } else {
              // With arguments: submit to the real cluster under the given name
              config.setNumWorkers(1);
              try {
                  StormSubmitter.submitTopology(args[0], config, topologyBuilder.createTopology());
              } catch (AlreadyAliveException e) {
                  e.printStackTrace();
              } catch (InvalidTopologyException e) {
                  e.printStackTrace();
              }
          }
      }
  }
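
  As written, the local branch runs until you kill the JVM. If you prefer a bounded local run, here is a sketch of the usual pattern (the 30-second window is an arbitrary choice):

  LocalCluster localCluster = new LocalCluster();
  localCluster.submitTopology("wordcount", config, topologyBuilder.createTopology());
  backtype.storm.utils.Utils.sleep(30000); // let the topology process for a while
  localCluster.killTopology("wordcount");  // stop the topology
  localCluster.shutdown();                 // tear down the in-process cluster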

III: Running Locally

1. Prerequisites

  You might expect to have to start Storm first, but it turns out no running Storm installation is needed: LocalCluster spins up an in-process cluster.

  Just run the main method.

2. Result

  PrintBolt's output, each word with its running count, scrolls in the console.

IV: Running on the Cluster

1. Package the jar in IDEA

  Deploy the artifact that bundles the dependencies: storm-1.0-SNAPSHOT-jar-with-dependencies.jar.

2. Upload the jar to /opt/datas on the server

  

3. Run

   bin/storm jar /opt/datas/storm-1.0-SNAPSHOT-jar-with-dependencies.jar com.jun.it.WordCountTopology wordcount
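
  The topology keeps running on the cluster until it is explicitly killed; stop it with the same CLI:

   bin/storm kill wordcount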

  

4. The Storm UI

  The running topology appears in the Storm UI, with per-component statistics for the spout and bolts.
