Big Data Learning: Kafka + Storm + HDFS Integration
1 Requirements
Kafka, Storm, and HDFS together form a framework combination commonly used for streaming data. The task is to implement the following requirement in code: Kafka receives random sentences and feeds them into Storm; a Storm cluster counts how many times each word occurs in the sentences (word count); and the results are written to HDFS.
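Stripped of Kafka, Storm, and HDFS, the counting logic being distributed here is just a word-to-count map. A standalone plain-Java sketch over one hard-coded sample sentence (the class name and the sentence are illustrative only, not part of the project):

import java.util.HashMap;
import java.util.Map;

// Standalone sketch of the word-count logic, independent of Kafka, Storm, and HDFS.
public class WordCountSketch {
    public static void main(String[] args) {
        String sentence = "the cow jumped over the moon and the cow came back";
        Map<String, Integer> counts = new HashMap<>();
        for (String word : sentence.split(" ")) {
            Integer current = counts.get(word);
            counts.put(word, current == null ? 1 : current + 1); // increment, starting at 1 for new words
        }
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            System.out.println(entry.getKey() + ":" + entry.getValue()); // e.g. "the:3"
        }
    }
}

The topology below distributes exactly this: a split bolt turns sentences into (word, 1) pairs, and a count bolt maintains the map.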
1 pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>bigdata</groupId>
    <artifactId>homework</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
            <!--<scope>provided</scope>-->
            <version>1.2.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-kafka-client</artifactId>
            <!--<scope>provided</scope>-->
            <version>1.2.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-hdfs</artifactId>
            <version>1.0.2</version>
            <exclusions>
                <exclusion>
                    <groupId>io.confluent</groupId>
                    <artifactId>kafka-avro-serializer</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.2.1</version>
                <configuration>
                    <createDependencyReducedPom>true</createDependencyReducedPom>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>storm.StormTopologyDriver</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <skip>true</skip>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
2 PullWords.java
package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.*;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * @Description Kafka consumer
 */
public class PullWords {

    private KafkaConsumer<String, String> consumer;
    private AtomicBoolean isAutoCommit;
    // Kafka topic
    private final static String TOPIC = "wordCount";

    public PullWords() {
        isAutoCommit = new AtomicBoolean(false); // manual offset commit by default
        Properties props = new Properties();
        // bootstrap.servers must point at the Kafka brokers (typically port 9092), not at ZooKeeper
        props.put("bootstrap.servers", "mini1:9092,mini2:9092,mini3:9092");
        props.put("group.id", "wordCount"); // all consumers in the group cooperate to consume the subscribed topic
        if (isAutoCommit.get()) {
            props.put("enable.auto.commit", "true");      // enable automatic offset commits
            props.put("auto.commit.interval.ms", "1000"); // how often offsets are auto-committed
        } else {
            props.put("enable.auto.commit", "false");
        }
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(TOPIC));
    }

    public void subscribe(String... topic) {
        consumer.subscribe(Arrays.asList(topic));
    }

    public ConsumerRecords<String, String> pull() {
        ConsumerRecords<String, String> records = consumer.poll(100);
        consumer.commitSync();
        return records;
    }

    // Keep polling until at least one record arrives, then commit the offsets and return the batch
    public ConsumerRecords<String, String> pullOneOrMore() {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(10);
            if (records != null && !records.isEmpty()) {
                consumer.commitSync();
                return records;
            }
        }
    }

    public void close() {
        consumer.close();
    }
}
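For completeness, a throwaway driver for this consumer might look like the following sketch (the ConsumerDemo class and its fixed ten-batch loop are illustrative additions, not part of the original project):

package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Hypothetical driver: prints whatever arrives on the subscribed topic.
public class ConsumerDemo {
    public static void main(String[] args) {
        PullWords pullWords = new PullWords();
        try {
            for (int i = 0; i < 10; i++) { // pull a few batches and stop
                ConsumerRecords<String, String> records = pullWords.pullOneOrMore();
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        } finally {
            pullWords.close();
        }
    }
}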
3 PushWords.java
package kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;
import java.util.concurrent.Future;

/**
 * @Description Kafka producer
 * @Author hongzw@citycloud.com.cn
 * @Date 2019-02-16 7:08 PM
 */
public class PushWords {

    private Producer<String, String> producer;
    // Kafka topic; the KafkaSpout in StormTopologyDriver subscribes to "wordCount",
    // so the producer must write to that same topic for the pipeline to connect
    private final static String TOPIC = "wordCount";

    public PushWords() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "storm01:9092,storm02:9092,storm03:9092");
        props.put("acks", "all");
        props.put("retries", 0); // do not retry failed requests
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    // Send a sentence to the Kafka cluster; send() is asynchronous
    public Future<RecordMetadata> push(String key, String words) {
        return producer.send(new ProducerRecord<>(TOPIC, key, words));
    }

    public void close() {
        producer.close();
    }
}
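The requirement calls for Kafka to receive random sentences. A minimal driver for that might look like the sketch below (the SentenceProducer class, the sample sentences, the loop count, and the sleep interval are hypothetical; only PushWords comes from the code above):

package kafka;

import java.util.Random;

// Hypothetical driver: pushes randomly chosen sentences into Kafka via PushWords.
public class SentenceProducer {
    private static final String[] SENTENCES = {
            "the cow jumped over the moon",
            "an apple a day keeps the doctor away",
            "four score and seven years ago",
            "snow white and the seven dwarfs",
            "i am at two with nature"
    };

    public static void main(String[] args) throws InterruptedException {
        PushWords pushWords = new PushWords();
        Random random = new Random();
        try {
            for (int i = 0; i < 100; i++) {
                String sentence = SENTENCES[random.nextInt(SENTENCES.length)];
                pushWords.push(String.valueOf(i), sentence); // key = message index, value = sentence
                Thread.sleep(500);                           // throttle so the stream is easy to watch
            }
        } finally {
            pushWords.close();
        }
    }
}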
4 WordCount.java
package storm;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCount extends BaseBasicBolt {

    // In-memory counts for the current batch; cleared every time a snapshot is emitted
    Map<String, Integer> wordCountMap = new HashMap<>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        String word = tuple.getValueByField("word").toString();
        Integer count = Integer.valueOf(tuple.getValueByField("count").toString());
        Integer current = wordCountMap.get(word);
        if (current == null) {
            wordCountMap.put(word, count);
        } else {
            wordCountMap.put(word, current + count);
        }
        // Once the map holds more than 20 distinct words, emit a "word:count" snapshot to the hdfsBolt
        if (wordCountMap.size() > 20) {
            List<Object> list = new ArrayList<>();
            for (Map.Entry<String, Integer> entry : wordCountMap.entrySet()) {
                list.add(entry.getKey() + ":" + entry.getValue());
            }
            wordCountMap.clear();
            if (list.size() > 0) {
                basicOutputCollector.emit(new Values(list));
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("total"));
    }
}
5 WordCountSplit.java
package storm;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountSplit extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        // The KafkaSpout emits the message payload in the "value" field; split it into words
        String[] words = tuple.getStringByField("value").split(" ");
        for (String word : words) {
            collector.emit(new Values(word, 1));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}
6 StormTopologyDriver.java
package storm;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class StormTopologyDriver {

    public static void main(String[] args) throws InvalidTopologyException, AuthorizationException, AlreadyAliveException {
        TopologyBuilder topologyBuilder = new TopologyBuilder();

        // KafkaSpoutConfig takes the Kafka broker bootstrap servers (typically port 9092, not ZooKeeper) and the topic
        KafkaSpoutConfig.Builder builder = new KafkaSpoutConfig.Builder("mini1:9092", "wordCount");
        builder.setProp("group.id", "wordCount");
        builder.setProp("enable.auto.commit", "true");
        builder.setProp("auto.commit.interval.ms", "1000");
        builder.setProp("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        builder.setProp("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        topologyBuilder.setSpout("kafkaSpout", new KafkaSpout<>(builder.build()));
        topologyBuilder.setBolt("wordCountSplit", new WordCountSplit()).shuffleGrouping("kafkaSpout");
        topologyBuilder.setBolt("wordCount", new WordCount()).shuffleGrouping("wordCountSplit");

        // Persist the results to HDFS
        // Separate output fields with a comma
        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter(",");
        // Sync to HDFS every 5 tuples
        SyncPolicy syncPolicy = new CountSyncPolicy(5);
        // Rotate to a new output file once the current one reaches 1 MB
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(1.0f, FileSizeRotationPolicy.Units.MB);
        // Output directory on HDFS
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm");
        HdfsBolt hdfsBolt = new HdfsBolt().withFsUrl("hdfs://mini1:9000").withFileNameFormat(fileNameFormat)
                .withRecordFormat(format).withRotationPolicy(rotationPolicy).withSyncPolicy(syncPolicy);
        topologyBuilder.setBolt("hdfsBolt", hdfsBolt).shuffleGrouping("wordCount");

        Config config = new Config();
        config.setNumWorkers(2);

        // Local mode (for debugging):
        // LocalCluster localCluster = new LocalCluster();
        // localCluster.submitTopology("countWords", config, topologyBuilder.createTopology());

        // Cluster mode
        StormSubmitter.submitTopology("countWords", config, topologyBuilder.createTopology());
    }
}
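To run this in cluster mode, build the fat jar with mvn package (the shade plugin in the pom.xml above sets storm.StormTopologyDriver as the main class) and submit it with Storm's CLI, along the lines of: storm jar homework-1.0-SNAPSHOT.jar storm.StormTopologyDriver. The jar name here follows Maven's default artifactId-version naming; exact paths depend on your environment. For local debugging, comment out the StormSubmitter line and uncomment the LocalCluster block instead.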