The main contents of this post:

  . Storm in single-machine mode: packaging the topology and deploying it to the Storm cluster

  . Storm's parallelism mechanism diagram

  . Storm-related concepts

  . Attached PPT

Package the topology and deploy it to the Storm cluster. Here my Storm cluster runs in single-machine mode (all daemons on one node).

In the IDE, export the project as a JAR: weekend110-storm  ->   Export   ->   JAR file   ->

Of course, the prerequisites are already in place: ZooKeeper and the Storm cluster have been started.

Upload the exported jar:

sftp> cd /home/hadoop/

sftp> put c:/d

demotop.jar           Documents and Settings/

sftp> put c:/demotop.jar

Uploading demotop.jar to /home/hadoop/demotop.jar

100% 8KB      8KB/s 00:00:00

c:/demotop.jar: 9199 bytes transferred in 0 seconds (8 KB/s)

sftp>

Create the output directory the topology will write to:

/home/hadoop/stormoutput/

[hadoop@weekend110 ~]$ cd /home/hadoop/app/apache-storm-0.9.2-incubating/

[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd bin

[hadoop@weekend110 bin]$ ls

storm  storm.cmd  storm-config.cmd

[hadoop@weekend110 bin]$ mkdir -p /home/hadoop/stormoutput/

[hadoop@weekend110 bin]$ ./storm jar ~/demotop.jar cn.itcast.stormdemo.TopoMain

Running: /home/hadoop/app/jdk1.7.0_65/bin/java -client -Dstorm.options= -Dstorm.home=/home/hadoop/app/apache-storm-0.9.2-incubating -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /home/hadoop/app/apache-storm-0.9.2-incubating/lib/hiccup-0.3.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/log4j-over-slf4j-1.6.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/chill-java-0.3.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/httpcore-4.3.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/zookeeper-3.4.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-core-1.1.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clj-time-0.4.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/storm-core-0.9.2-incubating.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/httpclient-4.3.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/curator-framework-2.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/minlog-1.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-codec-1.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/core.incubator-0.1.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.macro-0.1.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-servlet-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/netty-3.6.3.Final.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-lang-2.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/logback-core-1.0.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jgrapht-core-0.9.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/slf4j-api-1.6.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/kryo-2.21.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clout-1.0.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/objenesis-1.2.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.cli-0.2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jetty-6.1.26.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/joda-time-2.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/servlet-api-2.5-20081211.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/carbonite-1.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/curator-client-2.4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-jetty-adapter-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-exec-1.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/reflectasm-1.07-shaded.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-logging-1.1.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clojure-1.5.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/guava-13.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/disruptor-2.10.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/compojure-1.1.3.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/netty-3.2.2.Final.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/servlet-api-2.5.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/clj-stacktrace-0.2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-io-2.4.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/logback-classic-1.0.6.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/ring-devel-0.3.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/snakeyaml-1.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jline-2.11.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/tools.logging-0.2.3.jar:/home/h
adoop/app/apache-storm-0.9.2-incubating/lib/asm-4.0.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/json-simple-1.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/jetty-util-6.1.26.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/math.numeric-tower-0.0.1.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/lib/commons-fileupload-1.2.1.jar:/home/hadoop/demotop.jar:/home/hadoop/app/apache-storm-0.9.2-incubating/conf:/home/hadoop/app/apache-storm-0.9.2-incubating/bin -Dstorm.jar=/home/hadoop/demotop.jar cn.itcast.stormdemo.TopoMain

2495 [main] INFO  backtype.storm.StormSubmitter - Jar not uploaded to master yet. Submitting jar...

2566 [main] INFO  backtype.storm.StormSubmitter - Uploading topology jar /home/hadoop/demotop.jar to assigned location: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm/nimbus/inbox/stormjar-67666aeb-2578-43c5-a328-e91d30b25a36.jar

2664 [main] INFO  backtype.storm.StormSubmitter - Successfully uploaded topology jar to assigned location: /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm/nimbus/inbox/stormjar-67666aeb-2578-43c5-a328-e91d30b25a36.jar

2665 [main] INFO  backtype.storm.StormSubmitter - Submitting topology demotopo in distributed mode with conf {"topology.workers":4,"topology.acker.executors":0,"topology.debug":true}

4171 [main] INFO  backtype.storm.StormSubmitter - Finished submitting topology: demotopo

[hadoop@weekend110 bin]$
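The log shows the topology being submitted as demotopo with the conf {"topology.workers":4,"topology.acker.executors":0,"topology.debug":true}. The source of cn.itcast.stormdemo.TopoMain is not reproduced in this post; a minimal hedged sketch of a main class that would produce this submission looks roughly like the following. The component classes (RandomWordSpout, BoltA, BoltB) are the ones named in the PPT section at the end of the post, their implementations are not shown here, and the groupings are assumptions.

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

// Hedged sketch of what cn.itcast.stormdemo.TopoMain presumably does.
public class TopoMain {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // parallelism hints omitted here; they are discussed in the PPT section below
        builder.setSpout("randomSpout", new RandomWordSpout());
        builder.setBolt("boltA", new BoltA()).shuffleGrouping("randomSpout");
        builder.setBolt("boltB", new BoltB()).shuffleGrouping("boltA");

        Config conf = new Config();
        conf.setNumWorkers(4);   // "topology.workers":4 in the submission log
        conf.setNumAckers(0);    // "topology.acker.executors":0
        conf.setDebug(true);     // "topology.debug":true

        // Submits under the name that appears in the log and in the Storm UI.
        StormSubmitter.submitTopology("demotopo", conf, builder.createTopology());
    }
}

Running it through bin/storm jar, as above, sets -Dstorm.jar to the uploaded jar so that StormSubmitter can upload it to nimbus's inbox (the "Uploading topology jar ... nimbus/inbox" lines) before submitting the topology.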

http://weekend110:8080/

Storm UI

Cluster Summary

Version: 0.9.2-incubating
Nimbus uptime: 5h 4m 2s
Supervisors: 1
Used slots: 4
Free slots: 0
Total slots: 4
Executors: 12
Tasks: 16

Topology summary

Name: demotopo
Id: demotopo-1-1476517821
Status: ACTIVE
Uptime: 55s
Num workers: 4
Num executors: 12
Num tasks: 16

Supervisor summary

Id: 3a41e7dd-0160-4ad0-bad5-096cdba4647e
Host: weekend110
Uptime: 5h 2m 51s
Slots: 4
Used slots: 4

Nimbus Configuration (key = value)

dev.zookeeper.path = /tmp/dev-storm-zookeeper
topology.tick.tuple.freq.secs =
topology.builtin.metrics.bucket.size.secs = 60
topology.fall.back.on.java.serialization = true
topology.max.error.report.per.interval = 5
zmq.linger.millis = 5000
topology.skip.missing.kryo.registrations = false
storm.messaging.netty.client_worker_threads = 1
ui.childopts = -Xmx768m
storm.zookeeper.session.timeout = 20000
nimbus.reassign = true
topology.trident.batch.emit.interval.millis = 500
storm.messaging.netty.flush.check.interval.ms = 10
nimbus.monitor.freq.secs = 10
logviewer.childopts = -Xmx128m
java.library.path = /usr/local/lib:/opt/local/lib:/usr/lib
topology.executor.send.buffer.size = 1024
storm.local.dir = /home/hadoop/data/apache-storm-0.9.2-incubating/tmp/storm
storm.messaging.netty.buffer_size = 5242880
supervisor.worker.start.timeout.secs = 120
topology.enable.message.timeouts = true
nimbus.cleanup.inbox.freq.secs = 600
nimbus.inbox.jar.expiration.secs = 3600
drpc.worker.threads = 64
topology.worker.shared.thread.pool.size = 4
nimbus.host = weekend110
storm.messaging.netty.min_wait_ms = 100
storm.zookeeper.port = 2181
transactional.zookeeper.port =
topology.executor.receive.buffer.size = 1024
transactional.zookeeper.servers =
storm.zookeeper.root = /storm
storm.zookeeper.retry.intervalceiling.millis = 30000
supervisor.enable = true
storm.messaging.netty.server_worker_threads = 1
storm.zookeeper.servers = weekend110
transactional.zookeeper.root = /transactional
topology.acker.executors =
topology.transfer.buffer.size = 1024
topology.worker.childopts =
drpc.queue.size = 128
worker.childopts = -Xmx768m
supervisor.heartbeat.frequency.secs = 5
topology.error.throttle.interval.secs = 10
zmq.hwm = 0
drpc.port = 3772
supervisor.monitor.frequency.secs = 3
drpc.childopts = -Xmx768m
topology.receiver.buffer.size = 8
task.heartbeat.frequency.secs = 3
topology.tasks =
storm.messaging.netty.max_retries = 30
topology.spout.wait.strategy = backtype.storm.spout.SleepSpoutWaitStrategy
nimbus.thrift.max_buffer_size = 1048576
topology.max.spout.pending =
storm.zookeeper.retry.interval = 1000
topology.sleep.spout.wait.strategy.time.ms = 1
nimbus.topology.validator = backtype.storm.nimbus.DefaultTopologyValidator
supervisor.slots.ports = 6700,6701,6702,6703
topology.debug = false
nimbus.task.launch.secs = 120
nimbus.supervisor.timeout.secs = 60
topology.message.timeout.secs = 30
task.refresh.poll.secs = 10
topology.workers = 1
supervisor.childopts = -Xmx256m
nimbus.thrift.port = 6627
topology.stats.sample.rate = 0.05
worker.heartbeat.frequency.secs = 1
topology.tuple.serializer = backtype.storm.serialization.types.ListDelegateSerializer
topology.disruptor.wait.strategy = com.lmax.disruptor.BlockingWaitStrategy
topology.multilang.serializer = backtype.storm.multilang.JsonSerializer
nimbus.task.timeout.secs = 30
storm.zookeeper.connection.timeout = 15000
topology.kryo.factory = backtype.storm.serialization.DefaultKryoFactory
drpc.invocations.port = 3773
logviewer.port = 8000
zmq.threads = 1
storm.zookeeper.retry.times = 5
topology.worker.receiver.thread.count = 1
storm.thrift.transport = backtype.storm.security.auth.SimpleTransportPlugin
topology.state.synchronization.timeout.secs = 60
supervisor.worker.timeout.secs = 30
nimbus.file.copy.expiration.secs = 600
storm.messaging.transport = backtype.storm.messaging.netty.Context
logviewer.appender.name = A1
storm.messaging.netty.max_wait_ms = 1000
drpc.request.timeout.secs = 600
storm.local.mode.zmq = false
ui.port = 8080
nimbus.childopts = -Xmx1024m
storm.cluster.mode = distributed
topology.max.task.parallelism =
storm.messaging.netty.transfer.batch.size = 262144

[hadoop@weekend110 apache-storm-0.9.2-incubating]$ jps

4065 worker

2116 QuorumPeerMain

4067 worker

4236 Jps

3220 supervisor

3160 nimbus

4059 worker

3210 core

4061 worker

[hadoop@weekend110 apache-storm-0.9.2-incubating]$

On a 3-node distributed Storm cluster, the same set of processes (nimbus, the UI "core" process, supervisors, and workers) would simply be spread across the three nodes instead of all running on weekend110.


[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd /home/hadoop/stormoutput/

[hadoop@weekend110 stormoutput]$ ll

total 32

-rw-rw-r--. 1 hadoop hadoop 7741 Oct 15 15:57 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 7683 Oct 15 15:57 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 7681 Oct 15 15:57 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 7744 Oct 15 15:57 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$

The topology produces four output files.


[hadoop@weekend110 stormoutput]$ tail -f 148996a9-4c34-498b-8199-5c887cd4a7f0

XIAOMI_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

MATE_itisok

MEIZU_itisok

XIAOMI_itisok

SONY_itisok

MATE_itisok

MEIZU_itisok

IPHONE_itisok

MEIZU_itisok

XIAOMI_itisok

MATE_itisok

MOTO_itisok

MOTO_itisok

SONY_itisok

MEIZU_itisok

MOTO_itisok

MATE_itisok

MEIZU_itisok

MATE_itisok

SUMSUNG_itisok

MATE_itisok

MATE_itisok

MEIZU_itisok

SONY_itisok

MEIZU_itisok

MATE_itisok

MOTO_itisok

SONY_itisok

XIAOMI_itisok

SONY_itisok

MOTO_itisok

MATE_itisok

IPHONE_itisok

SONY_itisok

XIAOMI_itisok

SUMSUNG_itisok

SUMSUNG_itisok

SONY_itisok

MEIZU_itisok

IPHONE_itisok

MATE_itisok

MATE_itisok

MOTO_itisok

XIAOMI_itisok

SUMSUNG_itisok

MATE_itisok

MOTO_itisok

MATE_itisok

SUMSUNG_itisok

SONY_itisok

XIAOMI_itisok

IPHONE_itisok

SUMSUNG_itisok

MEIZU_itisok

MOTO_itisok

SUMSUNG_itisok

MOTO_itisok

MATE_itisok

XIAOMI_itisok

MOTO_itisok

IPHONE_itisok

MATE_itisok

SONY_itisok

XIAOMI_itisok

IPHONE_itisok

IPHONE_itisok

XIAOMI_itisok

SONY_itisok

MATE_itisok

MOTO_itisok

SUMSUNG_itisok

SONY_itisok

MATE_itisok

XIAOMI_itisok

SONY_itisok

XIAOMI_itisok

[hadoop@weekend110 apache-storm-0.9.2-incubating]$ cd bin/

[hadoop@weekend110 bin]$ clear

[hadoop@weekend110 bin]$ cd /home/hadoop/stormoutput/

[hadoop@weekend110 stormoutput]$ clear

[hadoop@weekend110 stormoutput]$ pwd

/home/hadoop/stormoutput

[hadoop@weekend110 stormoutput]$ ll

total 64

-rw-rw-r--. 1 hadoop hadoop 12868 Oct 15 16:00 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 12885 Oct 15 16:00 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 12863 Oct 15 16:00 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 12903 Oct 15 16:00 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$ tail -f 4a71fb82-1562-45dd-886c-b5610a202fd0

MEIZU_itisok

SONY_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

SONY_itisok

MEIZU_itisok

SUMSUNG_itisok

XIAOMI_itisok

XIAOMI_itisok

MEIZU_itisok

SONY_itisok

SUMSUNG_itisok

XIAOMI_itisok

SONY_itisok

MEIZU_itisok

SUMSUNG_itisok

MEIZU_itisok

SUMSUNG_itisok

IPHONE_itisok

SUMSUNG_itisok

SONY_itisok

MOTO_itisok

XIAOMI_itisok

SONY_itisok

MOTO_itisok

SONY_itisok

MOTO_itisok

MATE_itisok

MOTO_itisok

MATE_itisok

MEIZU_itisok

MATE_itisok

SONY_itisok

SUMSUNG_itisok

MATE_itisok

XIAOMI_itisok

SUMSUNG_itisok

SUMSUNG_itisok

SUMSUNG_itisok

MATE_itisok

SONY_itisok

MEIZU_itisok

[hadoop@weekend110 stormoutput]$ pwd

/home/hadoop/stormoutput

[hadoop@weekend110 stormoutput]$ ll

total 64

-rw-rw-r--. 1 hadoop hadoop 14265 Oct 15 16:01 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 14282 Oct 15 16:01 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 14263 Oct 15 16:01 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 14334 Oct 15 16:01 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$ tail -f 71b93a13-4b79-460f-a1c9-b454d24e925d

MEIZU_itisok

SUMSUNG_itisok

SUMSUNG_itisok

SUMSUNG_itisok

MOTO_itisok

SUMSUNG_itisok

MOTO_itisok

SONY_itisok

SUMSUNG_itisok

IPHONE_itisok

MOTO_itisok

SUMSUNG_itisok

MATE_itisok

MATE_itisok

MOTO_itisok

MOTO_itisok

IPHONE_itisok

XIAOMI_itisok

XIAOMI_itisok

SUMSUNG_itisok

XIAOMI_itisok

MOTO_itisok

SONY_itisok

SUMSUNG_itisok

IPHONE_itisok

IPHONE_itisok

MEIZU_itisok

SONY_itisok

MOTO_itisok

SUMSUNG_itisok

IPHONE_itisok

XIAOMI_itisok

MEIZU_itisok

MOTO_itisok

MEIZU_itisok

XIAOMI_itisok

IPHONE_itisok

SONY_itisok

MATE_itisok

[hadoop@weekend110 stormoutput]$ pwd

/home/hadoop/stormoutput

[hadoop@weekend110 stormoutput]$ ll

total 64

-rw-rw-r--. 1 hadoop hadoop 15994 Oct 15 16:02 148996a9-4c34-498b-8199-5c887cd4a7f0

-rw-rw-r--. 1 hadoop hadoop 15985 Oct 15 16:02 4a71fb82-1562-45dd-886c-b5610a202fd0

-rw-rw-r--. 1 hadoop hadoop 15989 Oct 15 16:02 71b93a13-4b79-460f-a1c9-b454d24e925d

-rw-rw-r--. 1 hadoop hadoop 16051 Oct 15 16:02 b20451ec-9bdd-4f92-a295-814a69b1a6e8

[hadoop@weekend110 stormoutput]$ tail -f b20451ec-9bdd-4f92-a295-814a69b1a6e8

XIAOMI_itisok

XIAOMI_itisok

MEIZU_itisok

SUMSUNG_itisok

XIAOMI_itisok

MOTO_itisok

MATE_itisok

SUMSUNG_itisok

SUMSUNG_itisok

MATE_itisok

IPHONE_itisok

XIAOMI_itisok

MEIZU_itisok

IPHONE_itisok

SUMSUNG_itisok

XIAOMI_itisok

SUMSUNG_itisok

MEIZU_itisok

SONY_itisok

MEIZU_itisok

MOTO_itisok

SONY_itisok

MOTO_itisok

MOTO_itisok

MOTO_itisok

MEIZU_itisok

MOTO_itisok

XIAOMI_itisok

SUMSUNG_itisok

MATE_itisok

MOTO_itisok

MATE_itisok

XIAOMI_itisok

MEIZU_itisok

SONY_itisok

MOTO_itisok

MOTO_itisok

SUMSUNG_itisok

SONY_itisok

XIAOMI_itisok

XIAOMI_itisok

SUMSUNG_itisok

MOTO_itisok

SUMSUNG_itisok

MOTO_itisok

As you can see, the data is distributed randomly across the four output files.
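The spout and bolt source code is not reproduced in this post. Purely as a hedged sketch based on the observed output (random phone-brand strings with an "_itisok" suffix appended to UUID-named files under /home/hadoop/stormoutput/), a spout-plus-writer-bolt pair that would reproduce this output looks roughly as follows. The class and field names are guesses, not the actual cn.itcast.stormdemo code, and the real demo chains two bolts (boltA and boltB, per the PPT section below); here the transformation and the file writing are collapsed into one bolt for brevity.

import java.io.FileWriter;
import java.util.Map;
import java.util.Random;
import java.util.UUID;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

// Hypothetical reconstruction of the demo spout: emits a random phone brand
// (the brand strings are the ones seen in the output files above).
class RandomWordSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private String[] words = {"IPHONE", "XIAOMI", "MEIZU", "SONY", "SUMSUNG", "MOTO", "MATE"};
    private Random random = new Random();

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    public void nextTuple() {
        // Pick a brand at random; the grouping then spreads the tuples over
        // the downstream bolt instances.
        collector.emit(new Values(words[random.nextInt(words.length)]));
        Utils.sleep(500);
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}

// Hypothetical writer bolt: appends "_itisok" and writes each line to a
// UUID-named file under /home/hadoop/stormoutput/.
class WriterBolt extends BaseRichBolt {
    private FileWriter writer;

    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        try {
            writer = new FileWriter("/home/hadoop/stormoutput/" + UUID.randomUUID());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void execute(Tuple input) {
        try {
            writer.write(input.getString(0) + "_itisok" + "\n");
            writer.flush();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt, no output stream
    }
}

Assuming the tuples reach the writer-bolt instances through shuffle grouping, each of the four UUID-named files receives a random subset of the brands, which matches the mixing seen in the tail output above.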

Storm's parallelism mechanism diagram:

Storm-related concepts

Topics for deeper study of Storm:

Implementing a distributed shared lock

The implementation mechanism and development pattern of transactional topologies

Integration with other frameworks in concrete scenarios (entry points: Flume / ActiveMQ / Kafka (distributed message-queue systems); exit points: Redis / HBase / MySQL cluster)

Note that Storm rarely stands alone: in a real business system, data has to flow in and flow out.

Entry points: distributed message-queue systems such as Flume, ActiveMQ, or Kafka.

At present, Storm + Kafka is the golden combination.

Exit points: e.g. Redis, HBase, or a MySQL cluster.
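As an illustration of the Kafka entry point, wiring a Kafka topic in with the storm-kafka module of this Storm generation would look roughly like the sketch below. The topic name, zkRoot, and consumer id are made-up placeholders, and the ZooKeeper address simply reuses this cluster's weekend110:2181; none of these values come from an actual Kafka setup in this post.

import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

// Hedged sketch: a Kafka topic as the topology's entry point; downstream
// bolts would parse each message and write results out to Redis / HBase / MySQL.
public class KafkaEntrySketch {
    public static TopologyBuilder build() {
        ZkHosts hosts = new ZkHosts("weekend110:2181");            // placeholder ZK address
        SpoutConfig spoutConf = new SpoutConfig(hosts,
                "demo-topic", "/kafka-spout", "demo-consumer");    // placeholder topic, zkRoot, id
        spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme()); // deserialize messages as strings

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafkaSpout", new KafkaSpout(spoutConf), 2);
        return builder;
    }
}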

Attached PPT:

conf.setNumWorkers(4) sets 4 workers to run all the components of the whole topology.

builder.setBolt("boltA", new BoltA(), 4)  ----> the boltA component gets 4 executor threads in total

builder.setBolt("boltB", new BoltB(), 4)  ----> the boltB component gets 4 executor threads in total

builder.setSpout("randomSpout", new RandomSpout(), 2)  ----> the randomSpout component gets 2 executor threads in total

----- which means the total number of executor threads running all the components of the topology is 4 + 4 + 2 = 10

---- with 4 workers, the load may end up distributed like this: worker-1 runs 2 threads, worker-2 runs 2 threads, worker-3 runs 3 threads, worker-4 runs 3 threads

To specify the number of concurrent task instances for a component:

builder.setSpout("randomspout", new RandomWordSpout(), 4).setNumTasks(8);

---- which means each executor thread of this component will run 8/4 = 2 tasks
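For reference, the running demotopo above is consistent with this second scheme: with the spout at parallelism 4 plus setNumTasks(8) and each bolt at parallelism 4, the totals are 4 + 4 + 4 = 12 executors and 8 + 4 + 4 = 16 tasks spread over 4 workers, which matches the Executors/Tasks numbers in the Storm UI's Cluster Summary. A fragment in the slide's style (the groupings are assumptions, and RandomWordSpout/BoltA/BoltB are the placeholder component classes):

// Parallelism wiring consistent with the 12 executors / 16 tasks shown in the UI.
builder.setSpout("randomspout", new RandomWordSpout(), 4).setNumTasks(8); // 4 executors, 8 tasks
builder.setBolt("boltA", new BoltA(), 4).shuffleGrouping("randomspout");  // 4 executors, 4 tasks
builder.setBolt("boltB", new BoltB(), 4).shuffleGrouping("boltA");        // 4 executors, 4 tasks
conf.setNumWorkers(4); // the 12 executors run inside 4 worker processes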
