2 Submitting and Running a Storm Topology
This post covers:
.Storm in single-node mode: packaging the topology and deploying it to the Storm cluster
.A diagram of Storm's parallelism mechanism
.Storm concepts
.Appendix: PPT notes
Package the topology and deploy it to the Storm cluster. Here, Storm is running in single-node mode.
weekend110-storm -> Export -> JAR file ->
Of course, the prerequisites are already in place: ZooKeeper and the Storm cluster have been started.
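For reference, here is a minimal sketch of what the cn.itcast.stormdemo.TopoMain driver class might look like. Only the class name and the topology name demotopo are confirmed by this post; the spout/bolt class names come from the PPT notes at the end, the groupings are assumptions, and the parallelism hints are chosen so the numbers line up with what the Storm UI and submit log below report (4 workers, 12 executors, 16 tasks, topology.acker.executors=0, topology.debug=true).

package cn.itcast.stormdemo;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class TopoMain {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // Component classes and wiring are assumed from the PPT notes, not the author's exact code.
        builder.setSpout("randomSpout", new RandomWordSpout(), 4).setNumTasks(8); // 4 executors, 8 tasks
        builder.setBolt("boltA", new BoltA(), 4).shuffleGrouping("randomSpout");  // 4 executors
        builder.setBolt("boltB", new BoltB(), 4).shuffleGrouping("boltA");        // 4 executors

        Config conf = new Config();
        conf.setNumWorkers(4);   // topology.workers=4 in the submit log
        conf.setNumAckers(0);    // topology.acker.executors=0 in the submit log
        conf.setDebug(true);     // topology.debug=true in the submit log

        // "demotopo" is the topology name that appears in the StormSubmitter log below.
        StormSubmitter.submitTopology("demotopo", conf, builder.createTopology());
    }
}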
Upload the exported jar
sftp> cd /home/hadoop/
sftp> put c:/d
demotop.jar Documents and Settings/
sftp> put c:/demotop.jar
Uploading demotop.jar to /home/hadoop/demotop.jar
100% 8KB 8KB/s 00:00:00
c:/demotop.jar: 9199 bytes transferred in 0 seconds (8 KB/s)
sftp>
Create the output directory
/home/hadoop/stormoutput/
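The UUID-named files that appear under this directory later in the post suggest that the topology's final bolt appends each tuple to its own randomly named file. Below is a possible sketch of such a file-writing bolt; this is an assumption based on the observed output, not the author's actual BoltB code.

package cn.itcast.stormdemo;

import java.io.FileWriter;
import java.io.IOException;
import java.util.Map;
import java.util.UUID;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

public class BoltB extends BaseBasicBolt {

    private FileWriter writer;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        try {
            // Each bolt instance opens its own UUID-named file, which would explain
            // the four files seen under /home/hadoop/stormoutput/ below.
            writer = new FileWriter("/home/hadoop/stormoutput/" + UUID.randomUUID());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        try {
            // Write the incoming value (e.g. "XIAOMI_itisok") as one line.
            writer.write(tuple.getString(0) + "\n");
            writer.flush();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing to emit downstream.
    }
}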
[hadoop@weekend110 ~]$ cd /home/hadoop/app/apache-storm-0.9.2-incubating/
[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd bin
[hadoop@weekend110 bin]$ ls
storm storm.cmd storm-config.cmd
[hadoop@weekend110 bin]$ mkdir -p /home/hadoop/stormoutput/
[hadoop@weekend110 bin]$ ./storm jar ~/demotop.jar cn.itcast.stormdemo.TopoMain
Running: /home/hadoop/app/jdk1.7.0_65/bin/java -client -Dstorm.options= -Dstorm.home=/home/hadoop/app/apache-storm-0.9.2-incubating -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /home/hadoop/app/apache-storm-0.9.2-incubating/lib/hiccup-0.3.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/log4j-over-slf4j-1.6.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/chill-java-0.3.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/httpcore-4.3.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/zookeeper-3.4.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-core-1.1.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clj-time-0.4.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/storm-core-0.9.2-incubating.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/httpclient-4.3.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/curator-framework-2.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/minlog-1.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-codec-1.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/core.incubator-0.1.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.macro-0.1.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-servlet-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/netty-3.6.3.Final.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-lang-2.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/logback-core-1.0.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jgrapht-core-0.9.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/slf4j-api-1.6.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/kryo-2.21.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clout-1.0.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/objenesis-1.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.cli-0.2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jetty-6.1.26.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/joda-time-2.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/servlet-api-2.5-20081211.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/carbonite-1.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/curator-client-2.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-jetty-adapter-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-exec-1.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/reflectasm-1.07-shaded.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-logging-1.1.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clojure-1.5.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/guava-13.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/disruptor-2.10.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/compojure-1.1.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/netty-3.2.2.Final.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/servlet-api-2.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clj-stacktrace-0.2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-io-2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/logback-classic-1.0.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-devel-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/snakeyaml-1.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jline-2.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.logging-0.2.3.jar:/home/h
adoop/app/apache-storm-0.9.2-incubating/lib/asm-4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/json-simple-1.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jetty-util-6.1.26.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/math.numeric-tower-0.0.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-fileupload-1.2.1.jar:/home/hadoop/demotop.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/conf:/home/hadoop/app/apache-storm-0.9.2-incubating/bin -Dstorm.jar=/home/hadoop/demotop.jar cn.itcast.stormdemo.TopoMain
2495 [main] INFO backtype.storm.StormSubmitter - Jar not uploaded to master yet. Submitting jar...
2566 [main] INFO backtype.storm.StormSubmitter - Uploading topology jar /home/hadoop/demotop.jar to assigned location: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm/nimbus/inbox/stormjar-67666aeb-2578-43c5-a328-e91d30b25a36.jar
2664 [main] INFO backtype.storm.StormSubmitter - Successfully uploaded topology jar to assigned location: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm/nimbus/inbox/stormjar-67666aeb-2578-43c5-a328-e91d30b25a36.jar
2665 [main] INFO backtype.storm.StormSubmitter - Submitting topology demotopo in distributed mode with conf {"topology.workers":4,"topology.acker.executors":0,"topology.debug":true}
4171 [main] INFO backtype.storm.StormSubmitter - Finished submitting topology: demotopo
[hadoop@weekend110 bin]$
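As the log shows, storm jar uploads the topology jar to Nimbus's inbox and then submits the topology in distributed mode. Once submission finishes, the running topology can be inspected in the Storm UI (with ui.port set to 8080 in the configuration below, that is typically http://weekend110:8080).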
Storm UI
Cluster Summary
Version | Nimbus uptime | Supervisors | Used slots | Free slots | Total slots | Executors | Tasks
0.9.2-incubating | 5h 4m 2s | 1 | 4 | 0 | 4 | 12 | 16
Topology summary
Name | Id | Status | Uptime | Num workers | Num executors | Num tasks
demotopo | demotopo-1-1476517821 | ACTIVE | 55s | 4 | 12 | 16
Supervisor summary
Id | Host | Uptime | Slots | Used slots
3a41e7dd-0160-4ad0-bad5-096cdba4647e | weekend110 | 5h 2m 51s | 4 | 4
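The 4 slots reported here correspond to the four worker ports listed under supervisor.slots.ports (6700,6701,6702,6703) in the Nimbus Configuration below; since the topology requested 4 workers (topology.workers=4 in the submit conf), all 4 slots are in use and none are free.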
Nimbus Configuration
Key | Value
dev.zookeeper.path | /tmp/dev-storm-zookeeper
topology.tick.tuple.freq.secs |
topology.builtin.metrics.bucket.size.secs | 60
topology.fall.back.on.java.serialization | true
topology.max.error.report.per.interval | 5
zmq.linger.millis | 5000
topology.skip.missing.kryo.registrations | false
storm.messaging.netty.client_worker_threads | 1
ui.childopts | -Xmx768m
storm.zookeeper.session.timeout | 20000
nimbus.reassign | true
topology.trident.batch.emit.interval.millis | 500
storm.messaging.netty.flush.check.interval.ms | 10
nimbus.monitor.freq.secs | 10
logviewer.childopts | -Xmx128m
java.library.path | /usr/local/lib:/opt/local/lib:/usr/lib
topology.executor.send.buffer.size | 1024
storm.local.dir | /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm
storm.messaging.netty.buffer_size | 5242880
supervisor.worker.start.timeout.secs | 120
topology.enable.message.timeouts | true
nimbus.cleanup.inbox.freq.secs | 600
nimbus.inbox.jar.expiration.secs | 3600
drpc.worker.threads | 64
topology.worker.shared.thread.pool.size | 4
nimbus.host | weekend110
storm.messaging.netty.min_wait_ms | 100
storm.zookeeper.port | 2181
transactional.zookeeper.port |
topology.executor.receive.buffer.size | 1024
transactional.zookeeper.servers |
storm.zookeeper.root | /storm
storm.zookeeper.retry.intervalceiling.millis | 30000
supervisor.enable | true
storm.messaging.netty.server_worker_threads | 1
storm.zookeeper.servers | weekend110
transactional.zookeeper.root | /transactional
topology.acker.executors |
topology.transfer.buffer.size | 1024
topology.worker.childopts |
drpc.queue.size | 128
worker.childopts | -Xmx768m
supervisor.heartbeat.frequency.secs | 5
topology.error.throttle.interval.secs | 10
zmq.hwm | 0
drpc.port | 3772
supervisor.monitor.frequency.secs | 3
drpc.childopts | -Xmx768m
topology.receiver.buffer.size | 8
task.heartbeat.frequency.secs | 3
topology.tasks |
storm.messaging.netty.max_retries | 30
topology.spout.wait.strategy | backtype.storm.spout.SleepSpoutWaitStrategy
nimbus.thrift.max_buffer_size | 1048576
topology.max.spout.pending |
storm.zookeeper.retry.interval | 1000
topology.sleep.spout.wait.strategy.time.ms | 1
nimbus.topology.validator | backtype.storm.nimbus.DefaultTopologyValidator
supervisor.slots.ports | 6700,6701,6702,6703
topology.debug | false
nimbus.task.launch.secs | 120
nimbus.supervisor.timeout.secs | 60
topology.message.timeout.secs | 30
task.refresh.poll.secs | 10
topology.workers | 1
supervisor.childopts | -Xmx256m
nimbus.thrift.port | 6627
topology.stats.sample.rate | 0.05
worker.heartbeat.frequency.secs | 1
topology.tuple.serializer | backtype.storm.serialization.types.ListDelegateSerializer
topology.disruptor.wait.strategy | com.lmax.disruptor.BlockingWaitStrategy
topology.multilang.serializer | backtype.storm.multilang.JsonSerializer
nimbus.task.timeout.secs | 30
storm.zookeeper.connection.timeout | 15000
topology.kryo.factory | backtype.storm.serialization.DefaultKryoFactory
drpc.invocations.port | 3773
logviewer.port | 8000
zmq.threads | 1
storm.zookeeper.retry.times | 5
topology.worker.receiver.thread.count | 1
storm.thrift.transport | backtype.storm.security.auth.SimpleTransportPlugin
topology.state.synchronization.timeout.secs | 60
supervisor.worker.timeout.secs | 30
nimbus.file.copy.expiration.secs | 600
storm.messaging.transport | backtype.storm.messaging.netty.Context
logviewer.appender.name | A1
storm.messaging.netty.max_wait_ms | 1000
drpc.request.timeout.secs | 600
storm.local.mode.zmq | false
ui.port | 8080
nimbus.childopts | -Xmx1024m
storm.cluster.mode | distributed
topology.max.task.parallelism |
storm.messaging.netty.transfer.batch.size | 262144
[hadoop@weekend110 apache-storm-0.9.2-incubating]$ jps
4065 worker
2116 QuorumPeerMain
4067 worker
4236 Jps
3220 supervisor
3160 nimbus
4059 worker
3210 core
4061 worker
[hadoop@weekend110 apache-storm-0.9.2-incubating]$
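In this jps output, nimbus is the master daemon, supervisor is the node daemon, core is the Storm UI process, QuorumPeerMain is ZooKeeper, and the four worker processes are the four workers requested for the demotopo topology.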
If this were a 3-node distributed Storm cluster, the nimbus, UI, supervisor, and worker processes would be spread across the three nodes instead of all showing up in a single host's jps output as they do here.
[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd /home/hadoop/stormoutput/
[hadoop@weekend110 stormoutput]$ ll
total 32
-rw-rw-r--. 1 hadoop hadoop 7741 Oct 15 15:57 148996a9-4c34-498b-8199-5c887cd4a7f0
-rw-rw-r--. 1 hadoop hadoop 7683 Oct 15 15:57 4a71fb82-1562-45dd-886c-b5610a202fd0
-rw-rw-r--. 1 hadoop hadoop 7681 Oct 15 15:57 71b93a13-4b79-460f-a1c9-b454d24e925d
-rw-rw-r--. 1 hadoop hadoop 7744 Oct 15 15:57 b20451ec-9bdd-4f92-a295-814a69b1a6e8
[hadoop@weekend110 stormoutput]$
This produces four output files.
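Presumably each file is written by one instance of the file-writing bolt, with every instance opening its own UUID-named file as in the sketch earlier, which would explain why the number of files matches that bolt's parallelism.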
[hadoop@weekend110 stormoutput]$ tail -f 148996a9-4c34-498b-8199-5c887cd4a7f0
XIAOMI_itisok
MOTO_itisok
MOTO_itisok
MOTO_itisok
MOTO_itisok
MATE_itisok
MEIZU_itisok
XIAOMI_itisok
SONY_itisok
MATE_itisok
MEIZU_itisok
IPHONE_itisok
MEIZU_itisok
XIAOMI_itisok
MATE_itisok
MOTO_itisok
MOTO_itisok
SONY_itisok
MEIZU_itisok
MOTO_itisok
MATE_itisok
MEIZU_itisok
MATE_itisok
SUMSUNG_itisok
MATE_itisok
MATE_itisok
MEIZU_itisok
SONY_itisok
MEIZU_itisok
MATE_itisok
MOTO_itisok
SONY_itisok
XIAOMI_itisok
SONY_itisok
MOTO_itisok
MATE_itisok
IPHONE_itisok
SONY_itisok
XIAOMI_itisok
SUMSUNG_itisok
SUMSUNG_itisok
SONY_itisok
MEIZU_itisok
IPHONE_itisok
MATE_itisok
MATE_itisok
MOTO_itisok
XIAOMI_itisok
SUMSUNG_itisok
MATE_itisok
MOTO_itisok
MATE_itisok
SUMSUNG_itisok
SONY_itisok
XIAOMI_itisok
IPHONE_itisok
SUMSUNG_itisok
MEIZU_itisok
MOTO_itisok
SUMSUNG_itisok
MOTO_itisok
MATE_itisok
XIAOMI_itisok
MOTO_itisok
IPHONE_itisok
MATE_itisok
SONY_itisok
XIAOMI_itisok
IPHONE_itisok
IPHONE_itisok
XIAOMI_itisok
SONY_itisok
MATE_itisok
MOTO_itisok
SUMSUNG_itisok
SONY_itisok
MATE_itisok
XIAOMI_itisok
SONY_itisok
XIAOMI_itisok
[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd bin/
[hadoop@weekend110 bin]$ clear
[hadoop@weekend110 bin]$ cd /home/hadoop/stormoutput/
[hadoop@weekend110 stormoutput]$ clear
[hadoop@weekend110 stormoutput]$ pwd
/home/hadoop/stormoutput
[hadoop@weekend110 stormoutput]$ ll
total 64
-rw-rw-r--. 1 hadoop hadoop 12868 Oct 15 16:00 148996a9-4c34-498b-8199-5c887cd4a7f0
-rw-rw-r--. 1 hadoop hadoop 12885 Oct 15 16:00 4a71fb82-1562-45dd-886c-b5610a202fd0
-rw-rw-r--. 1 hadoop hadoop 12863 Oct 15 16:00 71b93a13-4b79-460f-a1c9-b454d24e925d
-rw-rw-r--. 1 hadoop hadoop 12903 Oct 15 16:00 b20451ec-9bdd-4f92-a295-814a69b1a6e8
[hadoop@weekend110 stormoutput]$ tail -f 4a71fb82-1562-45dd-886c-b5610a202fd0
MEIZU_itisok
SONY_itisok
MOTO_itisok
MOTO_itisok
MOTO_itisok
SONY_itisok
MEIZU_itisok
SUMSUNG_itisok
XIAOMI_itisok
XIAOMI_itisok
MEIZU_itisok
SONY_itisok
SUMSUNG_itisok
XIAOMI_itisok
SONY_itisok
MEIZU_itisok
SUMSUNG_itisok
MEIZU_itisok
SUMSUNG_itisok
IPHONE_itisok
SUMSUNG_itisok
SONY_itisok
MOTO_itisok
XIAOMI_itisok
SONY_itisok
MOTO_itisok
SONY_itisok
MOTO_itisok
MATE_itisok
MOTO_itisok
MATE_itisok
MEIZU_itisok
MATE_itisok
SONY_itisok
SUMSUNG_itisok
MATE_itisok
XIAOMI_itisok
SUMSUNG_itisok
SUMSUNG_itisok
SUMSUNG_itisok
MATE_itisok
SONY_itisok
MEIZU_itisok
[hadoop@weekend110 stormoutput]$ pwd
/home/hadoop/stormoutput
[hadoop@weekend110 stormoutput]$ ll
total 64
-rw-rw-r--. 1 hadoop hadoop 14265 Oct 15 16:01 148996a9-4c34-498b-8199-5c887cd4a7f0
-rw-rw-r--. 1 hadoop hadoop 14282 Oct 15 16:01 4a71fb82-1562-45dd-886c-b5610a202fd0
-rw-rw-r--. 1 hadoop hadoop 14263 Oct 15 16:01 71b93a13-4b79-460f-a1c9-b454d24e925d
-rw-rw-r--. 1 hadoop hadoop 14334 Oct 15 16:01 b20451ec-9bdd-4f92-a295-814a69b1a6e8
[hadoop@weekend110 stormoutput]$ tail -f 71b93a13-4b79-460f-a1c9-b454d24e925d
MEIZU_itisok
SUMSUNG_itisok
SUMSUNG_itisok
SUMSUNG_itisok
MOTO_itisok
SUMSUNG_itisok
MOTO_itisok
SONY_itisok
SUMSUNG_itisok
IPHONE_itisok
MOTO_itisok
SUMSUNG_itisok
MATE_itisok
MATE_itisok
MOTO_itisok
MOTO_itisok
IPHONE_itisok
XIAOMI_itisok
XIAOMI_itisok
SUMSUNG_itisok
XIAOMI_itisok
MOTO_itisok
SONY_itisok
SUMSUNG_itisok
IPHONE_itisok
IPHONE_itisok
MEIZU_itisok
SONY_itisok
MOTO_itisok
SUMSUNG_itisok
IPHONE_itisok
XIAOMI_itisok
MEIZU_itisok
MOTO_itisok
MEIZU_itisok
XIAOMI_itisok
IPHONE_itisok
SONY_itisok
MATE_itisok
[hadoop@weekend110 stormoutput]$ pwd
/home/hadoop/stormoutput
[hadoop@weekend110 stormoutput]$ ll
total 64
-rw-rw-r--. 1 hadoop hadoop 15994 Oct 15 16:02 148996a9-4c34-498b-8199-5c887cd4a7f0
-rw-rw-r--. 1 hadoop hadoop 15985 Oct 15 16:02 4a71fb82-1562-45dd-886c-b5610a202fd0
-rw-rw-r--. 1 hadoop hadoop 15989 Oct 15 16:02 71b93a13-4b79-460f-a1c9-b454d24e925d
-rw-rw-r--. 1 hadoop hadoop 16051 Oct 15 16:02 b20451ec-9bdd-4f92-a295-814a69b1a6e8
[hadoop@weekend110 stormoutput]$ tail -f b20451ec-9bdd-4f92-a295-814a69b1a6e8
XIAOMI_itisok
XIAOMI_itisok
MEIZU_itisok
SUMSUNG_itisok
XIAOMI_itisok
MOTO_itisok
MATE_itisok
SUMSUNG_itisok
SUMSUNG_itisok
MATE_itisok
IPHONE_itisok
XIAOMI_itisok
MEIZU_itisok
IPHONE_itisok
SUMSUNG_itisok
XIAOMI_itisok
SUMSUNG_itisok
MEIZU_itisok
SONY_itisok
MEIZU_itisok
MOTO_itisok
SONY_itisok
MOTO_itisok
MOTO_itisok
MOTO_itisok
MEIZU_itisok
MOTO_itisok
XIAOMI_itisok
SUMSUNG_itisok
MATE_itisok
MOTO_itisok
MATE_itisok
XIAOMI_itisok
MEIZU_itisok
SONY_itisok
MOTO_itisok
MOTO_itisok
SUMSUNG_itisok
SONY_itisok
XIAOMI_itisok
XIAOMI_itisok
SUMSUNG_itisok
MOTO_itisok
SUMSUNG_itisok
MOTO_itisok
As you can see, the data is split randomly among the output files.
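This random distribution is what a shuffle grouping between components produces; in the topology wiring it would look something like the following (component names assumed, as in the earlier sketch):

builder.setBolt("boltB", new BoltB(), 4).shuffleGrouping("boltA");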
The diagram of Storm's parallelism mechanism is shown below.
Storm concepts
Going deeper into Storm:
Implementing distributed shared locks
The implementation mechanism and development patterns of transactional topologies
Integration with other frameworks in concrete scenarios (input: flume/activeMQ/kafka, i.e. distributed message queue systems; output: redis/hbase/mysql cluster)
Note that Storm rarely works in isolation; in real business systems data flows in and data flows out:
Input: distributed message queue systems such as flume/activeMQ/kafka.
Currently, Storm + Kafka is the golden combination (see the sketch below).
Output: for example, redis/hbase/mysql cluster.
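As an illustration of the Storm + Kafka combination, a KafkaSpout from the storm-kafka module can be plugged in as the topology's input. This is only a sketch: it assumes the storm-kafka dependency is on the classpath, and the ZooKeeper address, topic name, ZK root and consumer id below are placeholders, not values used anywhere in this post.

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class KafkaTopoSketch {
    public static void main(String[] args) throws Exception {
        // ZooKeeper that Kafka registers its brokers in (placeholder address).
        BrokerHosts hosts = new ZkHosts("weekend110:2181");
        // Topic, ZK root for spout offsets, and consumer id are all placeholders.
        SpoutConfig spoutConf = new SpoutConfig(hosts, "order_topic", "/kafka-spout", "demotopo-kafka");
        spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme()); // treat each Kafka message as a plain string

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafkaSpout", new KafkaSpout(spoutConf), 2);
        builder.setBolt("boltA", new BoltA(), 4).shuffleGrouping("kafkaSpout");

        Config conf = new Config();
        conf.setNumWorkers(4);
        StormSubmitter.submitTopology("kafkademo", conf, builder.createTopology());
    }
}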
Appendix: PPT notes
conf.setNumWorkers(4) sets 4 workers to run all components of the entire topology.
builder.setBolt("boltA", new BoltA(), 4) ----> boltA runs with 4 executor threads in total.
builder.setBolt("boltB", new BoltB(), 4) ----> boltB runs with 4 executor threads in total.
builder.setSpout("randomSpout", new RandomSpout(), 2) ----> randomSpout runs with 2 executor threads in total.
-----So the total number of executor threads running all components of the topology is 4 + 4 + 2 = 10.
----With 4 workers, the load may end up distributed like this: worker-1 gets 2 threads, worker-2 gets 2 threads, worker-3 gets 3 threads, worker-4 gets 3 threads.
If you explicitly set the number of concurrent task instances for a component:
builder.setSpout("randomspout", new RandomWordSpout(), 4).setNumTasks(8);
----then for this component each executor thread runs 8/4 = 2 tasks.
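Putting these notes together, the wiring for the two examples might look like this as a sketch; only the parallelism hints and task counts come from the notes above, while the groupings and class wiring are assumptions:

// Example 1: parallelism hints only (by default, tasks == executors).
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("randomSpout", new RandomSpout(), 2);                   // 2 executors
builder.setBolt("boltA", new BoltA(), 4).shuffleGrouping("randomSpout"); // 4 executors
builder.setBolt("boltB", new BoltB(), 4).shuffleGrouping("boltA");       // 4 executors
// Total: 2 + 4 + 4 = 10 executor threads, spread across the 4 workers from conf.setNumWorkers(4).

// Example 2: an explicit task count; 4 executors share 8 tasks, so each executor runs 8/4 = 2 tasks.
TopologyBuilder builder2 = new TopologyBuilder();
builder2.setSpout("randomspout", new RandomWordSpout(), 4).setNumTasks(8);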