A Trident Implementation for Storm-Kafka Integration


Integrating Kafka

First, add the storm-kafka module to the project's Maven dependencies:
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>${storm.version}</version>
</dependency>
The spout is configured against the ZooKeeper ensemble, which serves both broker discovery and offset storage:

// ZooKeeper connect string, including Kafka's chroot path
String zks = "192.168.1.1xx:2181,192.168.1.1xx:2181,192.168.1.1xx:2181/kafka";
String topic = "log-storm";
BrokerHosts brokerHosts = new ZkHosts(zks);
// Offsets are stored under "/" + topic; a random consumer id means offsets are not reused across restarts
SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, topic, "/" + topic, UUID.randomUUID().toString());
// Deserialize each Kafka message into a single string field named "str"
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
spoutConfig.zkServers = Arrays.asList("192.168.1.1xx", "192.168.1.1xx", "192.168.1.1xx");
spoutConfig.zkPort = 2181;
The extraction bolt reads each message, finds the trace marker, and emits the remainder of the line:

public class LogExtractorBolt extends BaseRichBolt {

    // TRACE_CONST is the marker searched for in each message; its value is not shown in the original post
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            String msgBody = input.getString(0);
            int traceIndex = msgBody.indexOf(TRACE_CONST);
            if (traceIndex >= 0) {
                // Keep only the content after the trace marker
                String completeLog = msgBody.substring(traceIndex + TRACE_CONST.length());
                // Note: this emit is unanchored, so a downstream failure will not replay the Kafka tuple
                collector.emit(new Values(completeLog));
            }
            collector.ack(input);
        } catch (Exception e) {
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("log"));
    }
}
The classic (non-Trident) topology wires the pieces together. KafkaSpoutNoMetrics is the author's spout variant whose source is not shown; the stock storm-kafka KafkaSpout can stand in for it:

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafka-reader", new KafkaSpoutNoMetrics(spoutConfig), 3);
builder.setBolt("log-extractor", new LogExtractorBolt(), 2).shuffleGrouping("kafka-reader");
builder.setBolt("log-splitter", new LogSplitterBolt(), 2).shuffleGrouping("log-extractor");
// fieldsGrouping on "md" routes tuples with the same key to the same Memcached writer
builder.setBolt("memcached-store", new MemcachedBolt()).fieldsGrouping("log-splitter", new Fields("md"));

Config config = new Config();
config.setNumWorkers(1);
String name = "LogStormProcessor";
StormSubmitter.submitTopologyWithProgressBar(name, config, builder.createTopology());
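
Before submitting to a real cluster, the topology can be exercised in-process. This is a minimal local-mode sketch, not part of the original post, using Storm's LocalCluster:

// Local-mode test run (sketch; assumes storm-core's local mode is available)
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("LogStormProcessor-local", new Config(), builder.createTopology());
Thread.sleep(60000); // let the topology consume for a while
cluster.killTopology("LogStormProcessor-local");
cluster.shutdown();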
Using Storm Trident

The same pipeline can be rewritten with Trident. A first attempt created the stream from the Kafka spout and then tried to each() over a field named "event", which Storm rejects when the topology is built:
Stream stream = tridentTopology.newStream("event", kafkaSpout);
Exception in thread "main" java.lang.IllegalArgumentException: Trying to select non-existent field: 'event' from stream containing fields fields: <[str]>
at org.apache.storm.trident.Stream.projectionValidation(Stream.java:853)
at org.apache.storm.trident.Stream.each(Stream.java:320)
at com.zhen.log.processor.trident.Main.main(Main.java:48)
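
The name passed to newStream() is the stream's id and has nothing to do with the tuple's field names; those come from the spout's scheme. StringScheme declares a single field named "str" (which is what the stack trace shows the stream contained), while the default raw scheme declares "bytes". The each() call must therefore select the scheme's declared field. A sketch, assuming string payloads are wanted instead of raw bytes:

// Declare StringScheme explicitly so the Trident spout emits a "str" field
TridentKafkaConfig stringConfig = new TridentKafkaConfig(brokerHosts, topic);
stringConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
Stream strStream = tridentTopology.newStream("event", new OpaqueTridentKafkaSpout(stringConfig));
// downstream each() calls would then select new Fields("str") instead of new Fields("bytes")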

The working version keeps the default raw scheme, so the spout's single output field is "bytes", and the processing chain selects that field:

TridentKafkaConfig kafkaConfig = new TridentKafkaConfig(brokerHosts, topic);
OpaqueTridentKafkaSpout kafkaSpout = new OpaqueTridentKafkaSpout(kafkaConfig);
TridentTopology tridentTopology = new TridentTopology();

Stream stream = tridentTopology.newStream("event", kafkaSpout);
Stream logStream = stream.each(new Fields("bytes"), new LogExtractorFunction(), new Fields("log"))
        .each(new Fields("log"), new LogSplitterFunction(), new Fields("logObject"))
        .each(new Fields("logObject"), new LogTypeFilter("TRACE"));
Trident aggregations are expressed through aggregator interfaces; the simplest is CombinerAggregator, where init() extracts a value from each tuple, combine() merges two partial values, and zero() supplies the identity for empty partitions:

public interface CombinerAggregator<T> extends Serializable {
    T init(TridentTuple tuple);
    T combine(T val1, T val2);
    T zero();
}
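
The post does not show LogCombinerAggregator. A minimal sketch, assuming the per-key "statistic" is a simple occurrence count, could look like this:

// Hypothetical sketch: counts matching log objects per key
public class LogCombinerAggregator implements CombinerAggregator<Long> {

    @Override
    public Long init(TridentTuple tuple) {
        return 1L; // each tuple contributes one occurrence
    }

    @Override
    public Long combine(Long val1, Long val2) {
        return val1 + val2; // merge partial counts from different partitions
    }

    @Override
    public Long zero() {
        return 0L; // identity value for empty partitions
    }
}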
The grouped aggregation persists a per-key statistic into Memcached. The complete Trident stream definition, including a side branch that archives the extracted logs locally, is then (servers is assumed to be the Memcached server address list expected by MemcachedState):

Stream logStream = stream.each(new Fields("bytes"), new LogExtractorFunction(), new Fields("log"))
        .each(new Fields("log"), new LogSplitterFunction(), new Fields("logObject"))
        .each(new Fields("logObject"), new LogTypeFilter("TRACE"));

// Side branch: archive the extracted logs to the local file system
logStream.each(new Fields("log"), new LocalFileSaveFunction(), new Fields());

// Derive a grouping key, group, and maintain the per-key statistic in Memcached
logStream
        .each(new Fields("logObject"), new LogGroupFunction(), new Fields("key"))
        .groupBy(new Fields("key"))
        .persistentAggregate(MemcachedState.nonTransactional(servers), new Fields("logObject"),
                new LogCombinerAggregator(), new Fields("statistic"));

Note that pairing an OpaqueTridentKafkaSpout with a non-transactional state forfeits Trident's exactly-once state guarantee: replayed batches may double-count.
The Memcached-backed state is implemented on top of Twitter's finagle-memcached client, so the build needs:

<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>finagle-memcached_2.9.2</artifactId>
    <version>6.20.0</version>
</dependency>
Resolving it can fail when the transitive com.twitter.common.zookeeper artifacts are absent from the configured repositories:

[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project log-storm-processor: Could not resolve dependencies for project com.zhen:log-storm-processor:jar:1.0.0-SNAPSHOT: The following artifacts could not be resolved: com.twitter.common.zookeeper:server-set:jar:1.0.83, com.twitter.common.zookeeper:client:jar:0.0.60, com.twitter.common.zookeeper:group:jar:0.0.78: Failure to find com.twitter.common.zookeeper:server-set:jar:1.0.83 in http://192.168.1.14:8081/nexus/content/repositories/releases/ was cached in the local repository, resolution will not be reattempted until the update interval of nexus-releases has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
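
These com.twitter.common.zookeeper artifacts are published to Twitter's own public Maven repository rather than to Central. A common fix, assuming the build (or the Nexus proxy) is allowed to reach external repositories, is to add that repository to the POM:

<repository>
    <id>twitter</id>
    <url>https://maven.twttr.com</url>
</repository>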
