A Stream is the core data model in Trident, and can be thought of as an unbounded sequence of tuples that
is processed as a series of small batches.

A stream is partitioned across the nodes in the cluster, and operations are applied to a stream
in parallel across each partition.

There are five types of operations that can be performed on streams in Trident:

1. **Partition-Local Operations** - Operations that are applied locally to each partition and do not involve network transfer.
2. **Repartitioning Operations** - Operations that change how tuples are partitioned across tasks (thus causing network transfer), but do not change the content of the stream.
3. **Aggregation Operations** - Operations that aggregate the tuples of a stream and *may* repartition it (thus causing network transfer).
4. **Grouping Operations** - Operations that may repartition a stream on specific fields and group together tuples whose field values are equal.
5. **Merge and Join Operations** - Operations that combine different streams together.
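To make the first two categories concrete, here is a small plain-Java sketch (an illustration only, not the Trident API; all names here are made up) of a partition-local operation versus a repartitioning operation, with the "stream" modeled as a list of partitions:

```java
import java.util.*;
import java.util.stream.*;

public class PartitionSketch {
    // Partition-local: applied independently to each partition; no tuple
    // ever crosses a partition boundary, so no network transfer is needed.
    static List<List<Integer>> partitionLocal(List<List<Integer>> parts) {
        return parts.stream()
                .map(p -> p.stream().map(x -> x * 10).collect(Collectors.toList()))
                .collect(Collectors.toList());
    }

    // Repartitioning: tuples move between partitions (network transfer in a
    // real cluster), but the multiset of tuples is unchanged.
    static List<List<Integer>> repartitionByHash(List<List<Integer>> parts, int n) {
        List<List<Integer>> out = new ArrayList<>();
        for (int i = 0; i < n; i++) out.add(new ArrayList<>());
        for (List<Integer> p : parts)
            for (int t : p) out.get(Math.floorMod(Integer.hashCode(t), n)).add(t);
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> stream = List.of(List.of(1, 2), List.of(3));
        System.out.println(partitionLocal(stream));       // [[10, 20], [30]]
        System.out.println(repartitionByHash(stream, 2)); // content preserved, layout changed
    }
}
```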

1. Generate the data source by retrieving data from another system, using a class that implements the "IRichSpout" interface

public class TopologyBuilder {
    private final Map<String, IRichBolt> _bolts = new HashMap<>();
    private final Map<String, IRichSpout> _spouts = new HashMap<>();
    private final Map<String, ComponentCommon> commons = new HashMap<>();
    private final Map<String, Set<String>> _componentToSharedMemory = new HashMap<>();
    private final Map<String, SharedMemory> _sharedMemory = new HashMap<>();
    private boolean hasStatefulBolt = false;
    private Map<String, StateSpoutSpec> _stateSpouts = new HashMap<>();
    private List<ByteBuffer> _workerHooks = new ArrayList<>();
}
TopologyBuilder builder = new TopologyBuilder();
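The builder simply accumulates components into those maps under unique ids. A tiny stand-in (plain Java with hypothetical names, not Storm itself) showing the id-uniqueness rule that setSpout and setBolt, shown later, enforce via validateUnusedId:

```java
import java.util.*;

public class BuilderSketch {
    private final Map<String, Object> spouts = new HashMap<>();
    private final Map<String, Object> bolts = new HashMap<>();

    // Mirrors validateUnusedId: a component id may be used at most once,
    // across spouts and bolts alike.
    private void validateUnusedId(String id) {
        if (spouts.containsKey(id) || bolts.containsKey(id)) {
            throw new IllegalArgumentException("Component id " + id + " already used");
        }
    }

    public void setSpout(String id, Object spout) {
        validateUnusedId(id);
        spouts.put(id, spout);
    }

    public void setBolt(String id, Object bolt) {
        validateUnusedId(id);
        bolts.put(id, bolt);
    }

    public static void main(String[] args) {
        BuilderSketch b = new BuilderSketch();
        b.setSpout("user_spout", new Object());
        try {
            b.setBolt("user_spout", new Object()); // same id -> rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```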
public class UserSpout implements IRichSpout {
    private static final long serialVersionUID = -6993660253781550888L;
    boolean isDistributed;
    SpoutOutputCollector collector;
    // The source of data
    public static final List<Values> rows = Lists.newArrayList(
            new Values(1, "peter", System.currentTimeMillis()),
            new Values(2, "bob", System.currentTimeMillis()),
            new Values(3, "alice", System.currentTimeMillis()));

    public UserSpout() {
        this(true);
    }

    public UserSpout(boolean isDistributed) {
        this.isDistributed = isDistributed;
    }

    public boolean isDistributed() {
        return this.isDistributed;
    }

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void close() { }

    // Generate the data source retrieved from the other system
    @Override
    public void nextTuple() {
        final Random rand = new Random();
        // nextInt(rows.size()) picks a random index 0..rows.size()-1;
        // the original nextInt(rows.size() - 1) could never emit the last row.
        final Values row = rows.get(rand.nextInt(rows.size()));
        this.collector.emit(row);
        Thread.yield();
    }

    @Override
    public void ack(Object msgId) { }

    @Override
    public void fail(Object msgId) { }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("user_id", "user_name", "create_date"));
    }

    @Override
    public void activate() { }

    @Override
    public void deactivate() { }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}

2. Now let's take a look at the setSpout method, which registers our spout and builds its ComponentCommon

//source
builder.setSpout(USER_SPOUT, this.userSpout, 1);

public SpoutDeclarer setSpout(String id, IRichSpout spout, Number parallelism_hint) throws IllegalArgumentException {
    validateUnusedId(id);
    initCommon(id, spout, parallelism_hint);
    _spouts.put(id, spout);
    return new SpoutGetter(id);
}

private void initCommon(String id, IComponent component, Number parallelism) throws IllegalArgumentException {
    ComponentCommon common = new ComponentCommon();
    common.set_inputs(new HashMap<GlobalStreamId, Grouping>());
    if (parallelism != null) {
        int dop = parallelism.intValue();
        if (dop < 1) {
            throw new IllegalArgumentException("Parallelism must be positive.");
        }
        common.set_parallelism_hint(dop);
    }
    Map<String, Object> conf = component.getComponentConfiguration();
    if (conf != null) {
        common.set_json_conf(JSONValue.toJSONString(conf));
    }
    commons.put(id, common);
}
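The only validation initCommon performs is on the parallelism hint: a null Number means "no hint", anything below 1 is rejected, and because intValue() truncates, a fractional hint such as 2.9 silently becomes 2. A standalone sketch of just that logic (a hypothetical helper, not Storm code):

```java
import java.util.*;

public class InitCommonSketch {
    // Mirrors initCommon's parallelism handling: null means "no hint",
    // values below 1 are rejected, fractional Numbers are truncated.
    static OptionalInt parallelismHint(Number parallelism) {
        if (parallelism == null) return OptionalInt.empty();
        int dop = parallelism.intValue();
        if (dop < 1) throw new IllegalArgumentException("Parallelism must be positive.");
        return OptionalInt.of(dop);
    }

    public static void main(String[] args) {
        System.out.println(parallelismHint(3));    // OptionalInt[3]
        System.out.println(parallelismHint(null)); // OptionalInt.empty
        try {
            parallelismHint(0);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());    // Parallelism must be positive.
        }
    }
}
```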

3. What is ComponentCommon?

ComponentCommon is a struct generated by the Thrift compiler. It holds the metadata that every component shares: its inputs (a map from GlobalStreamId to Grouping), its output streams, an optional parallelism hint, and an optional JSON-serialized component configuration.

/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/**
* Autogenerated by Thrift Compiler (0.11.0)
*
* DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
* @generated
*/
package org.apache.storm.generated;

@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.11.0)")
public class ComponentCommon implements org.apache.storm.thrift.TBase<ComponentCommon, ComponentCommon._Fields>, java.io.Serializable, Cloneable, Comparable<ComponentCommon> {
private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ComponentCommon");
private static final org.apache.storm.thrift.protocol.TField INPUTS_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("inputs", org.apache.storm.thrift.protocol.TType.MAP, (short)1);
private static final org.apache.storm.thrift.protocol.TField STREAMS_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("streams", org.apache.storm.thrift.protocol.TType.MAP, (short)2);
private static final org.apache.storm.thrift.protocol.TField PARALLELISM_HINT_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("parallelism_hint", org.apache.storm.thrift.protocol.TType.I32, (short)3);
private static final org.apache.storm.thrift.protocol.TField JSON_CONF_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("json_conf", org.apache.storm.thrift.protocol.TType.STRING, (short)4);
private static final org.apache.storm.thrift.scheme.SchemeFactory STANDARD_SCHEME_FACTORY = new ComponentCommonStandardSchemeFactory();
private static final org.apache.storm.thrift.scheme.SchemeFactory TUPLE_SCHEME_FACTORY = new ComponentCommonTupleSchemeFactory();
private java.util.Map<GlobalStreamId,Grouping> inputs; // required
private java.util.Map<java.lang.String,StreamInfo> streams; // required
private int parallelism_hint; // optional
private java.lang.String json_conf; // optional

/** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
public enum _Fields implements org.apache.storm.thrift.TFieldIdEnum {
INPUTS((short)1, "inputs"),
STREAMS((short)2, "streams"),
PARALLELISM_HINT((short)3, "parallelism_hint"),
JSON_CONF((short)4, "json_conf");

private static final java.util.Map<java.lang.String, _Fields> byName = new java.util.HashMap<java.lang.String, _Fields>();

static {
for (_Fields field : java.util.EnumSet.allOf(_Fields.class)) {
byName.put(field.getFieldName(), field);
}
}

/**
* Find the _Fields constant that matches fieldId, or null if its not found.
*/
public static _Fields findByThriftId(int fieldId) {
switch(fieldId) {
case 1: // INPUTS
return INPUTS;
case 2: // STREAMS
return STREAMS;
case 3: // PARALLELISM_HINT
return PARALLELISM_HINT;
case 4: // JSON_CONF
return JSON_CONF;
default:
return null;
}
}

/**
* Find the _Fields constant that matches fieldId, throwing an exception
* if it is not found.
*/
public static _Fields findByThriftIdOrThrow(int fieldId) {
_Fields fields = findByThriftId(fieldId);
if (fields == null) throw new java.lang.IllegalArgumentException("Field " + fieldId + " doesn't exist!");
return fields;
}

/**
* Find the _Fields constant that matches name, or null if its not found.
*/
public static _Fields findByName(java.lang.String name) {
return byName.get(name);
}

private final short _thriftId;
private final java.lang.String _fieldName;

_Fields(short thriftId, java.lang.String fieldName) {
_thriftId = thriftId;
_fieldName = fieldName;
}

public short getThriftFieldId() {
return _thriftId;
}

public java.lang.String getFieldName() {
return _fieldName;
}
}

// isset id assignments
private static final int __PARALLELISM_HINT_ISSET_ID = 0;
private byte __isset_bitfield = 0;
private static final _Fields optionals[] = {_Fields.PARALLELISM_HINT,_Fields.JSON_CONF};
public static final java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> metaDataMap;
static {
java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> tmpMap = new java.util.EnumMap<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData>(_Fields.class);
tmpMap.put(_Fields.INPUTS, new org.apache.storm.thrift.meta_data.FieldMetaData("inputs", org.apache.storm.thrift.TFieldRequirementType.REQUIRED,
new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP,
new org.apache.storm.thrift.meta_data.StructMetaData(org.apache.storm.thrift.protocol.TType.STRUCT, GlobalStreamId.class),
new org.apache.storm.thrift.meta_data.StructMetaData(org.apache.storm.thrift.protocol.TType.STRUCT, Grouping.class))));
tmpMap.put(_Fields.STREAMS, new org.apache.storm.thrift.meta_data.FieldMetaData("streams", org.apache.storm.thrift.TFieldRequirementType.REQUIRED,
new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP,
new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING),
new org.apache.storm.thrift.meta_data.StructMetaData(org.apache.storm.thrift.protocol.TType.STRUCT, StreamInfo.class))));
tmpMap.put(_Fields.PARALLELISM_HINT, new org.apache.storm.thrift.meta_data.FieldMetaData("parallelism_hint", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL,
new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.I32)));
tmpMap.put(_Fields.JSON_CONF, new org.apache.storm.thrift.meta_data.FieldMetaData("json_conf", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL,
new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING)));
metaDataMap = java.util.Collections.unmodifiableMap(tmpMap);
org.apache.storm.thrift.meta_data.FieldMetaData.addStructMetaDataMap(ComponentCommon.class, metaDataMap);
}

public ComponentCommon() {
}

public ComponentCommon(
java.util.Map<GlobalStreamId,Grouping> inputs,
java.util.Map<java.lang.String,StreamInfo> streams)
{
this();
this.inputs = inputs;
this.streams = streams;
}

/**
* Performs a deep copy on <i>other</i>.
*/
public ComponentCommon(ComponentCommon other) {
__isset_bitfield = other.__isset_bitfield;
if (other.is_set_inputs()) {
java.util.Map<GlobalStreamId,Grouping> __this__inputs = new java.util.HashMap<GlobalStreamId,Grouping>(other.inputs.size());
for (java.util.Map.Entry<GlobalStreamId, Grouping> other_element : other.inputs.entrySet()) {
GlobalStreamId other_element_key = other_element.getKey();
Grouping other_element_value = other_element.getValue();
GlobalStreamId __this__inputs_copy_key = new GlobalStreamId(other_element_key);
Grouping __this__inputs_copy_value = new Grouping(other_element_value);
__this__inputs.put(__this__inputs_copy_key, __this__inputs_copy_value);
}
this.inputs = __this__inputs;
}
if (other.is_set_streams()) {
java.util.Map<java.lang.String,StreamInfo> __this__streams = new java.util.HashMap<java.lang.String,StreamInfo>(other.streams.size());
for (java.util.Map.Entry<java.lang.String, StreamInfo> other_element : other.streams.entrySet()) {
java.lang.String other_element_key = other_element.getKey();
StreamInfo other_element_value = other_element.getValue();
java.lang.String __this__streams_copy_key = other_element_key;
StreamInfo __this__streams_copy_value = new StreamInfo(other_element_value);
__this__streams.put(__this__streams_copy_key, __this__streams_copy_value);
}
this.streams = __this__streams;
}
this.parallelism_hint = other.parallelism_hint;
if (other.is_set_json_conf()) {
this.json_conf = other.json_conf;
}
}

public ComponentCommon deepCopy() {
return new ComponentCommon(this);
}

@Override
public void clear() {
this.inputs = null;
this.streams = null;
set_parallelism_hint_isSet(false);
this.parallelism_hint = 0;
this.json_conf = null;
}

public int get_inputs_size() {
return (this.inputs == null) ? 0 : this.inputs.size();
}

public void put_to_inputs(GlobalStreamId key, Grouping val) {
if (this.inputs == null) {
this.inputs = new java.util.HashMap<GlobalStreamId,Grouping>();
}
this.inputs.put(key, val);
}

public java.util.Map<GlobalStreamId,Grouping> get_inputs() {
return this.inputs;
}

public void set_inputs(java.util.Map<GlobalStreamId,Grouping> inputs) {
this.inputs = inputs;
}

public void unset_inputs() {
this.inputs = null;
}

/** Returns true if field inputs is set (has been assigned a value) and false otherwise */
public boolean is_set_inputs() {
return this.inputs != null;
}

public void set_inputs_isSet(boolean value) {
if (!value) {
this.inputs = null;
}
}

public int get_streams_size() {
return (this.streams == null) ? 0 : this.streams.size();
}

public void put_to_streams(java.lang.String key, StreamInfo val) {
if (this.streams == null) {
this.streams = new java.util.HashMap<java.lang.String,StreamInfo>();
}
this.streams.put(key, val);
}

public java.util.Map<java.lang.String,StreamInfo> get_streams() {
return this.streams;
}

public void set_streams(java.util.Map<java.lang.String,StreamInfo> streams) {
this.streams = streams;
}

public void unset_streams() {
this.streams = null;
}

/** Returns true if field streams is set (has been assigned a value) and false otherwise */
public boolean is_set_streams() {
return this.streams != null;
}

public void set_streams_isSet(boolean value) {
if (!value) {
this.streams = null;
}
}

public int get_parallelism_hint() {
return this.parallelism_hint;
}

public void set_parallelism_hint(int parallelism_hint) {
this.parallelism_hint = parallelism_hint;
set_parallelism_hint_isSet(true);
}

public void unset_parallelism_hint() {
__isset_bitfield = org.apache.storm.thrift.EncodingUtils.clearBit(__isset_bitfield, __PARALLELISM_HINT_ISSET_ID);
}

/** Returns true if field parallelism_hint is set (has been assigned a value) and false otherwise */
public boolean is_set_parallelism_hint() {
return org.apache.storm.thrift.EncodingUtils.testBit(__isset_bitfield, __PARALLELISM_HINT_ISSET_ID);
}

public void set_parallelism_hint_isSet(boolean value) {
__isset_bitfield = org.apache.storm.thrift.EncodingUtils.setBit(__isset_bitfield, __PARALLELISM_HINT_ISSET_ID, value);
}

public java.lang.String get_json_conf() {
return this.json_conf;
}

public void set_json_conf(java.lang.String json_conf) {
this.json_conf = json_conf;
}

public void unset_json_conf() {
this.json_conf = null;
}

/** Returns true if field json_conf is set (has been assigned a value) and false otherwise */
public boolean is_set_json_conf() {
return this.json_conf != null;
}

public void set_json_conf_isSet(boolean value) {
if (!value) {
this.json_conf = null;
}
}

public void setFieldValue(_Fields field, java.lang.Object value) {
switch (field) {
case INPUTS:
if (value == null) {
unset_inputs();
} else {
set_inputs((java.util.Map<GlobalStreamId,Grouping>)value);
}
break;

case STREAMS:
if (value == null) {
unset_streams();
} else {
set_streams((java.util.Map<java.lang.String,StreamInfo>)value);
}
break;

case PARALLELISM_HINT:
if (value == null) {
unset_parallelism_hint();
} else {
set_parallelism_hint((java.lang.Integer)value);
}
break;

case JSON_CONF:
if (value == null) {
unset_json_conf();
} else {
set_json_conf((java.lang.String)value);
}
break;
}
}

public java.lang.Object getFieldValue(_Fields field) {
switch (field) {
case INPUTS:
return get_inputs();
case STREAMS:
return get_streams();
case PARALLELISM_HINT:
return get_parallelism_hint();
case JSON_CONF:
return get_json_conf();
}
throw new java.lang.IllegalStateException();
}

/** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
public boolean isSet(_Fields field) {
if (field == null) {
throw new java.lang.IllegalArgumentException();
}

switch (field) {
case INPUTS:
return is_set_inputs();
case STREAMS:
return is_set_streams();
case PARALLELISM_HINT:
return is_set_parallelism_hint();
case JSON_CONF:
return is_set_json_conf();
}
throw new java.lang.IllegalStateException();
}

@Override
public boolean equals(java.lang.Object that) {
if (that == null)
return false;
if (that instanceof ComponentCommon)
return this.equals((ComponentCommon)that);
return false;
}

public boolean equals(ComponentCommon that) {
if (that == null)
return false;
if (this == that)
return true;

boolean this_present_inputs = true && this.is_set_inputs();
boolean that_present_inputs = true && that.is_set_inputs();
if (this_present_inputs || that_present_inputs) {
if (!(this_present_inputs && that_present_inputs))
return false;
if (!this.inputs.equals(that.inputs))
return false;
}

boolean this_present_streams = true && this.is_set_streams();
boolean that_present_streams = true && that.is_set_streams();
if (this_present_streams || that_present_streams) {
if (!(this_present_streams && that_present_streams))
return false;
if (!this.streams.equals(that.streams))
return false;
}

boolean this_present_parallelism_hint = true && this.is_set_parallelism_hint();
boolean that_present_parallelism_hint = true && that.is_set_parallelism_hint();
if (this_present_parallelism_hint || that_present_parallelism_hint) {
if (!(this_present_parallelism_hint && that_present_parallelism_hint))
return false;
if (this.parallelism_hint != that.parallelism_hint)
return false;
}

boolean this_present_json_conf = true && this.is_set_json_conf();
boolean that_present_json_conf = true && that.is_set_json_conf();
if (this_present_json_conf || that_present_json_conf) {
if (!(this_present_json_conf && that_present_json_conf))
return false;
if (!this.json_conf.equals(that.json_conf))
return false;
}

return true;
}

@Override
public int hashCode() {
int hashCode = 1;
hashCode = hashCode * 8191 + ((is_set_inputs()) ? 131071 : 524287);
if (is_set_inputs())
hashCode = hashCode * 8191 + inputs.hashCode();
hashCode = hashCode * 8191 + ((is_set_streams()) ? 131071 : 524287);
if (is_set_streams())
hashCode = hashCode * 8191 + streams.hashCode();
hashCode = hashCode * 8191 + ((is_set_parallelism_hint()) ? 131071 : 524287);
if (is_set_parallelism_hint())
hashCode = hashCode * 8191 + parallelism_hint;
hashCode = hashCode * 8191 + ((is_set_json_conf()) ? 131071 : 524287);
if (is_set_json_conf())
hashCode = hashCode * 8191 + json_conf.hashCode();
return hashCode;
}

@Override
public int compareTo(ComponentCommon other) {
if (!getClass().equals(other.getClass())) {
return getClass().getName().compareTo(other.getClass().getName());
}

int lastComparison = 0;
lastComparison = java.lang.Boolean.valueOf(is_set_inputs()).compareTo(other.is_set_inputs());
if (lastComparison != 0) {
return lastComparison;
}
if (is_set_inputs()) {
lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.inputs, other.inputs);
if (lastComparison != 0) {
return lastComparison;
}
}
lastComparison = java.lang.Boolean.valueOf(is_set_streams()).compareTo(other.is_set_streams());
if (lastComparison != 0) {
return lastComparison;
}
if (is_set_streams()) {
lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.streams, other.streams);
if (lastComparison != 0) {
return lastComparison;
}
}
lastComparison = java.lang.Boolean.valueOf(is_set_parallelism_hint()).compareTo(other.is_set_parallelism_hint());
if (lastComparison != 0) {
return lastComparison;
}
if (is_set_parallelism_hint()) {
lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.parallelism_hint, other.parallelism_hint);
if (lastComparison != 0) {
return lastComparison;
}
}
lastComparison = java.lang.Boolean.valueOf(is_set_json_conf()).compareTo(other.is_set_json_conf());
if (lastComparison != 0) {
return lastComparison;
}
if (is_set_json_conf()) {
lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.json_conf, other.json_conf);
if (lastComparison != 0) {
return lastComparison;
}
}
return 0;
}

public _Fields fieldForId(int fieldId) {
return _Fields.findByThriftId(fieldId);
}

public void read(org.apache.storm.thrift.protocol.TProtocol iprot) throws org.apache.storm.thrift.TException {
scheme(iprot).read(iprot, this);
}

public void write(org.apache.storm.thrift.protocol.TProtocol oprot) throws org.apache.storm.thrift.TException {
scheme(oprot).write(oprot, this);
}

@Override
public java.lang.String toString() {
java.lang.StringBuilder sb = new java.lang.StringBuilder("ComponentCommon(");
boolean first = true;
sb.append("inputs:");
if (this.inputs == null) {
sb.append("null");
} else {
sb.append(this.inputs);
}
first = false;
if (!first) sb.append(", ");
sb.append("streams:");
if (this.streams == null) {
sb.append("null");
} else {
sb.append(this.streams);
}
first = false;
if (is_set_parallelism_hint()) {
if (!first) sb.append(", ");
sb.append("parallelism_hint:");
sb.append(this.parallelism_hint);
first = false;
}
if (is_set_json_conf()) {
if (!first) sb.append(", ");
sb.append("json_conf:");
if (this.json_conf == null) {
sb.append("null");
} else {
sb.append(this.json_conf);
}
first = false;
}
sb.append(")");
return sb.toString();
}

public void validate() throws org.apache.storm.thrift.TException {
// check for required fields
if (!is_set_inputs()) {
throw new org.apache.storm.thrift.protocol.TProtocolException("Required field 'inputs' is unset! Struct:" + toString());
}
if (!is_set_streams()) {
throw new org.apache.storm.thrift.protocol.TProtocolException("Required field 'streams' is unset! Struct:" + toString());
}
// check for sub-struct validity
}

private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
try {
write(new org.apache.storm.thrift.protocol.TCompactProtocol(new org.apache.storm.thrift.transport.TIOStreamTransport(out)));
} catch (org.apache.storm.thrift.TException te) {
throw new java.io.IOException(te);
}
}

private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, java.lang.ClassNotFoundException {
try {
// it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
__isset_bitfield = 0;
read(new org.apache.storm.thrift.protocol.TCompactProtocol(new org.apache.storm.thrift.transport.TIOStreamTransport(in)));
} catch (org.apache.storm.thrift.TException te) {
throw new java.io.IOException(te);
}
}

private static class ComponentCommonStandardSchemeFactory implements org.apache.storm.thrift.scheme.SchemeFactory {
public ComponentCommonStandardScheme getScheme() {
return new ComponentCommonStandardScheme();
}
}

private static class ComponentCommonStandardScheme extends org.apache.storm.thrift.scheme.StandardScheme<ComponentCommon> {

public void read(org.apache.storm.thrift.protocol.TProtocol iprot, ComponentCommon struct) throws org.apache.storm.thrift.TException {
org.apache.storm.thrift.protocol.TField schemeField;
iprot.readStructBegin();
while (true)
{
schemeField = iprot.readFieldBegin();
if (schemeField.type == org.apache.storm.thrift.protocol.TType.STOP) {
break;
}
switch (schemeField.id) {
case 1: // INPUTS
if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
{
org.apache.storm.thrift.protocol.TMap _map24 = iprot.readMapBegin();
struct.inputs = new java.util.HashMap<GlobalStreamId,Grouping>(2*_map24.size);
GlobalStreamId _key25;
Grouping _val26;
for (int _i27 = 0; _i27 < _map24.size; ++_i27)
{
_key25 = new GlobalStreamId();
_key25.read(iprot);
_val26 = new Grouping();
_val26.read(iprot);
struct.inputs.put(_key25, _val26);
}
iprot.readMapEnd();
}
struct.set_inputs_isSet(true);
} else {
org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
}
break;
case 2: // STREAMS
if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
{
org.apache.storm.thrift.protocol.TMap _map28 = iprot.readMapBegin();
struct.streams = new java.util.HashMap<java.lang.String,StreamInfo>(2*_map28.size);
java.lang.String _key29;
StreamInfo _val30;
for (int _i31 = 0; _i31 < _map28.size; ++_i31)
{
_key29 = iprot.readString();
_val30 = new StreamInfo();
_val30.read(iprot);
struct.streams.put(_key29, _val30);
}
iprot.readMapEnd();
}
struct.set_streams_isSet(true);
} else {
org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
}
break;
case 3: // PARALLELISM_HINT
if (schemeField.type == org.apache.storm.thrift.protocol.TType.I32) {
struct.parallelism_hint = iprot.readI32();
struct.set_parallelism_hint_isSet(true);
} else {
org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
}
break;
case 4: // JSON_CONF
if (schemeField.type == org.apache.storm.thrift.protocol.TType.STRING) {
struct.json_conf = iprot.readString();
struct.set_json_conf_isSet(true);
} else {
org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
}
break;
default:
org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
}
iprot.readFieldEnd();
}
iprot.readStructEnd();
struct.validate();
}

public void write(org.apache.storm.thrift.protocol.TProtocol oprot, ComponentCommon struct) throws org.apache.storm.thrift.TException {
struct.validate();
oprot.writeStructBegin(STRUCT_DESC);
if (struct.inputs != null) {
oprot.writeFieldBegin(INPUTS_FIELD_DESC);
{
oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, struct.inputs.size()));
for (java.util.Map.Entry<GlobalStreamId, Grouping> _iter32 : struct.inputs.entrySet())
{
_iter32.getKey().write(oprot);
_iter32.getValue().write(oprot);
}
oprot.writeMapEnd();
}
oprot.writeFieldEnd();
}
if (struct.streams != null) {
oprot.writeFieldBegin(STREAMS_FIELD_DESC);
{
oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.streams.size()));
for (java.util.Map.Entry<java.lang.String, StreamInfo> _iter33 : struct.streams.entrySet())
{
oprot.writeString(_iter33.getKey());
_iter33.getValue().write(oprot);
}
oprot.writeMapEnd();
}
oprot.writeFieldEnd();
}
if (struct.is_set_parallelism_hint()) {
oprot.writeFieldBegin(PARALLELISM_HINT_FIELD_DESC);
oprot.writeI32(struct.parallelism_hint);
oprot.writeFieldEnd();
}
if (struct.json_conf != null) {
if (struct.is_set_json_conf()) {
oprot.writeFieldBegin(JSON_CONF_FIELD_DESC);
oprot.writeString(struct.json_conf);
oprot.writeFieldEnd();
}
}
oprot.writeFieldStop();
oprot.writeStructEnd();
}
}

private static class ComponentCommonTupleSchemeFactory implements org.apache.storm.thrift.scheme.SchemeFactory {
public ComponentCommonTupleScheme getScheme() {
return new ComponentCommonTupleScheme();
}
}

private static class ComponentCommonTupleScheme extends org.apache.storm.thrift.scheme.TupleScheme<ComponentCommon> {

@Override
public void write(org.apache.storm.thrift.protocol.TProtocol prot, ComponentCommon struct) throws org.apache.storm.thrift.TException {
org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
{
oprot.writeI32(struct.inputs.size());
for (java.util.Map.Entry<GlobalStreamId, Grouping> _iter34 : struct.inputs.entrySet())
{
_iter34.getKey().write(oprot);
_iter34.getValue().write(oprot);
}
}
{
oprot.writeI32(struct.streams.size());
for (java.util.Map.Entry<java.lang.String, StreamInfo> _iter35 : struct.streams.entrySet())
{
oprot.writeString(_iter35.getKey());
_iter35.getValue().write(oprot);
}
}
java.util.BitSet optionals = new java.util.BitSet();
if (struct.is_set_parallelism_hint()) {
optionals.set(0);
}
if (struct.is_set_json_conf()) {
optionals.set(1);
}
oprot.writeBitSet(optionals, 2);
if (struct.is_set_parallelism_hint()) {
oprot.writeI32(struct.parallelism_hint);
}
if (struct.is_set_json_conf()) {
oprot.writeString(struct.json_conf);
}
}

@Override
public void read(org.apache.storm.thrift.protocol.TProtocol prot, ComponentCommon struct) throws org.apache.storm.thrift.TException {
org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
{
org.apache.storm.thrift.protocol.TMap _map36 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
struct.inputs = new java.util.HashMap<GlobalStreamId,Grouping>(2*_map36.size);
GlobalStreamId _key37;
Grouping _val38;
for (int _i39 = 0; _i39 < _map36.size; ++_i39)
{
_key37 = new GlobalStreamId();
_key37.read(iprot);
_val38 = new Grouping();
_val38.read(iprot);
struct.inputs.put(_key37, _val38);
}
}
struct.set_inputs_isSet(true);
{
org.apache.storm.thrift.protocol.TMap _map40 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
struct.streams = new java.util.HashMap<java.lang.String,StreamInfo>(2*_map40.size);
java.lang.String _key41;
StreamInfo _val42;
for (int _i43 = 0; _i43 < _map40.size; ++_i43)
{
_key41 = iprot.readString();
_val42 = new StreamInfo();
_val42.read(iprot);
struct.streams.put(_key41, _val42);
}
}
struct.set_streams_isSet(true);
java.util.BitSet incoming = iprot.readBitSet(2);
if (incoming.get(0)) {
struct.parallelism_hint = iprot.readI32();
struct.set_parallelism_hint_isSet(true);
}
if (incoming.get(1)) {
struct.json_conf = iprot.readString();
struct.set_json_conf_isSet(true);
}
}
}

private static <S extends org.apache.storm.thrift.scheme.IScheme> S scheme(org.apache.storm.thrift.protocol.TProtocol proto) {
return (org.apache.storm.thrift.scheme.StandardScheme.class.equals(proto.getScheme()) ? STANDARD_SCHEME_FACTORY : TUPLE_SCHEME_FACTORY).getScheme();
}
}
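One detail worth pulling out of this generated code: reference fields (inputs, streams, json_conf) use null to mean "unset", but the primitive parallelism_hint cannot be null, so Thrift tracks its set/unset state with a bit in __isset_bitfield. A minimal standalone illustration of the same technique (a hypothetical class, not the generated one):

```java
public class IssetSketch {
    private static final int PARALLELISM_HINT_ISSET_ID = 0;
    private byte issetBitfield = 0;
    private int parallelismHint; // primitive: 0 is a legal value, so we need the bit

    public void setParallelismHint(int v) {
        parallelismHint = v;
        issetBitfield |= (1 << PARALLELISM_HINT_ISSET_ID); // mark "has been assigned"
    }

    public void unsetParallelismHint() {
        issetBitfield &= ~(1 << PARALLELISM_HINT_ISSET_ID);
    }

    public boolean isSetParallelismHint() {
        return (issetBitfield & (1 << PARALLELISM_HINT_ISSET_ID)) != 0;
    }

    public static void main(String[] args) {
        IssetSketch s = new IssetSketch();
        System.out.println(s.isSetParallelismHint()); // false
        s.setParallelismHint(0);                      // 0 is now explicitly set
        System.out.println(s.isSetParallelismHint()); // true
    }
}
```

This is why the serialized form can distinguish "parallelism hint of 0" from "no parallelism hint at all", which a bare int could not.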

4. We also need a bolt, extending BaseRichBolt

public abstract class BaseTickTupleAwareRichBolt extends BaseRichBolt {
    @Override
    public void execute(final Tuple tuple) {
        if (TupleUtils.isTick(tuple)) {
            onTickTuple(tuple);
        } else {
            process(tuple);
        }
    }

    protected void onTickTuple(final Tuple tuple) {
    }

    protected abstract void process(final Tuple tuple);
}
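TupleUtils.isTick decides the routing above. As far as I know, tick tuples are emitted by Storm's system component on a dedicated tick stream when topology.tick.tuple.freq.secs is set in the component config; the sketch below imitates that dispatch with a stand-in Tuple type (the string constants here are assumptions, check Constants in your Storm version):

```java
public class TickDispatchSketch {
    // Stand-in for Storm's Tuple: just the two ids isTick() looks at.
    record Tuple(String sourceComponent, String sourceStreamId, String value) {}

    static boolean isTick(Tuple t) {
        // In Storm these would be Constants.SYSTEM_COMPONENT_ID /
        // Constants.SYSTEM_TICK_STREAM_ID; the literals are assumptions here.
        return "__system".equals(t.sourceComponent()) && "__tick".equals(t.sourceStreamId());
    }

    static String execute(Tuple t) {
        // Same shape as BaseTickTupleAwareRichBolt.execute: route ticks away
        // from the normal processing path.
        return isTick(t) ? "onTickTuple" : "process";
    }

    public static void main(String[] args) {
        System.out.println(execute(new Tuple("__system", "__tick", null)));      // onTickTuple
        System.out.println(execute(new Tuple("user_spout", "default", "bob"))); // process
    }
}
```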
public abstract class AbstractJdbcBolt extends BaseTickTupleAwareRichBolt {
    private static final Logger LOG = LoggerFactory.getLogger(AbstractJdbcBolt.class);

    static {
        DriverManager.getDrivers();
    }

    protected OutputCollector collector;
    protected transient JdbcClient jdbcClient;
    protected String configKey;
    protected Integer queryTimeoutSecs;
    protected ConnectionProvider connectionProvider;

    public AbstractJdbcBolt(final ConnectionProvider connectionProviderParam) {
        Validate.notNull(connectionProviderParam);
        this.connectionProvider = connectionProviderParam;
    }

    @Override
    public void prepare(final Map<String, Object> map, final TopologyContext topologyContext,
            final OutputCollector outputCollector) {
        this.collector = outputCollector;
        connectionProvider.prepare();
        if (queryTimeoutSecs == null) {
            String msgTimeout = map.get(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS).toString();
            queryTimeoutSecs = Integer.parseInt(msgTimeout);
        }
        this.jdbcClient = new JdbcClient(connectionProvider, queryTimeoutSecs);
    }

    @Override
    public void cleanup() {
        connectionProvider.cleanup();
    }
}
public class JdbcLookupBolt extends AbstractJdbcBolt {
private static final Logger LOG = LoggerFactory.getLogger(JdbcLookupBolt.class);
private String selectQuery;
private JdbcLookupMapper jdbcLookupMapper;

public JdbcLookupBolt(ConnectionProvider connectionProvider, String selectQuery, JdbcLookupMapper jdbcLookupMapper) {
super(connectionProvider);
Validate.notNull(selectQuery);
Validate.notNull(jdbcLookupMapper);
this.selectQuery = selectQuery;
this.jdbcLookupMapper = jdbcLookupMapper;
}

public JdbcLookupBolt withQueryTimeoutSecs(int queryTimeoutSecs) {
this.queryTimeoutSecs = queryTimeoutSecs;
return this;
}

@Override
protected void process(Tuple tuple) {
try {
List<Column> columns = jdbcLookupMapper.getColumns(tuple);
List<List<Column>> result = jdbcClient.select(this.selectQuery, columns);
if (result != null && result.size() != 0) {
for (List<Column> row : result) {
List<Values> values = jdbcLookupMapper.toTuple(tuple, row);
for (Values value : values) {
collector.emit(tuple, value);
}
}
}
this.collector.ack(tuple);
} catch (Exception e) {
this.collector.reportError(e);
this.collector.fail(tuple);
}
}

@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
jdbcLookupMapper.declareOutputFields(outputFieldsDeclarer);
}
}
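The heart of `process` is the ack/fail contract: every output tuple is emitted anchored to the input tuple, and the input is acked only if the whole lookup succeeds; any exception fails it so the spout can replay. A minimal sketch of that contract with a recording collector (plain Java stand-ins, not the real Storm `OutputCollector`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stand-in collector that records outcomes instead of talking to Storm.
class RecordingCollector {
    final List<String> acked = new ArrayList<>();
    final List<String> failed = new ArrayList<>();
    void ack(String tuple) { acked.add(tuple); }
    void fail(String tuple) { failed.add(tuple); }
}

public class AckFailDemo {
    // Mirrors JdbcLookupBolt.process: ack on success, fail on any exception
    // so Storm's at-least-once replay can kick in.
    static void process(String tuple, Consumer<String> work, RecordingCollector collector) {
        try {
            work.accept(tuple);
            collector.ack(tuple);
        } catch (Exception e) {
            collector.fail(tuple);
        }
    }

    public static void main(String[] args) {
        RecordingCollector c = new RecordingCollector();
        process("good", t -> {});
        process("bad", t -> { throw new RuntimeException("db error"); }, c);
        System.out.println("acked=" + c.acked + " failed=" + c.failed);
    }
}
```

Because emits are anchored (`collector.emit(tuple, value)` in the real bolt), downstream failures of the derived tuples also fail the original input, preserving end-to-end reliability.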

4. Now we add one or more bolts to the topology

builder.setBolt(LOOKUP_BOLT, departmentLookupBolt, 1).shuffleGrouping(USER_SPOUT);
public BoltDeclarer setBolt(String id, IRichBolt bolt, Number parallelism_hint) throws IllegalArgumentException {
validateUnusedId(id);
initCommon(id, bolt, parallelism_hint);
_bolts.put(id, bolt);
return new BoltGetter(id);
}
private void initCommon(String id, IComponent component, Number parallelism) throws IllegalArgumentException {
ComponentCommon common = new ComponentCommon();
common.set_inputs(new HashMap<GlobalStreamId, Grouping>());
if (parallelism != null) {
int dop = parallelism.intValue();
if (dop < 1) {
throw new IllegalArgumentException("Parallelism must be positive.");
}
common.set_parallelism_hint(dop);
}
Map<String, Object> conf = component.getComponentConfiguration();
if (conf != null) {
common.set_json_conf(JSONValue.toJSONString(conf));
}
commons.put(id, common);
}
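`initCommon` (together with `validateUnusedId`) enforces two invariants: component ids must be unique, and a parallelism hint, when given, must be positive. A stand-alone sketch of that validation logic (the `ComponentRegistry` class is illustrative, not part of Storm):

```java
import java.util.HashMap;
import java.util.Map;

public class ComponentRegistry {
    private final Map<String, Integer> components = new HashMap<>();

    // Register a component, rejecting duplicate ids and non-positive
    // parallelism, as TopologyBuilder.validateUnusedId/initCommon do.
    void register(String id, Integer parallelism) {
        if (components.containsKey(id)) {
            throw new IllegalArgumentException("Component id already used: " + id);
        }
        if (parallelism != null && parallelism < 1) {
            throw new IllegalArgumentException("Parallelism must be positive.");
        }
        // Treat "no hint" as one executor for this sketch.
        components.put(id, parallelism == null ? 1 : parallelism);
    }

    int parallelismOf(String id) { return components.get(id); }

    public static void main(String[] args) {
        ComponentRegistry reg = new ComponentRegistry();
        reg.register("user-spout", null);
        reg.register("lookup-bolt", 4);
        System.out.println(reg.parallelismOf("lookup-bolt")); // 4
    }
}
```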

5. The last step is to build the topology

builder.createTopology();
public StormTopology createTopology() {
Map<String, Bolt> boltSpecs = new HashMap<>();
Map<String, SpoutSpec> spoutSpecs = new HashMap<>();
maybeAddCheckpointSpout();
for (String boltId : _bolts.keySet()) {
IRichBolt bolt = _bolts.get(boltId);
bolt = maybeAddCheckpointTupleForwarder(bolt);
ComponentCommon common = getComponentCommon(boltId, bolt);
try {
maybeAddCheckpointInputs(common);
boltSpecs.put(boltId, new Bolt(ComponentObject.serialized_java(Utils.javaSerialize(bolt)), common));
} catch (RuntimeException wrapperCause) {
if (wrapperCause.getCause() != null && NotSerializableException.class.equals(wrapperCause.getCause().getClass())) {
throw new IllegalStateException(
"Bolt '" + boltId + "' contains a non-serializable field of type " + wrapperCause.getCause().getMessage() + ", " +
"which was instantiated prior to topology creation. " + wrapperCause.getCause().getMessage() + " " +
"should be instantiated within the prepare method of '" + boltId + " at the earliest.", wrapperCause);
}
throw wrapperCause;
}
}
for (String spoutId : _spouts.keySet()) {
IRichSpout spout = _spouts.get(spoutId);
ComponentCommon common = getComponentCommon(spoutId, spout);
try {
spoutSpecs.put(spoutId, new SpoutSpec(ComponentObject.serialized_java(Utils.javaSerialize(spout)), common));
} catch (RuntimeException wrapperCause) {
if (wrapperCause.getCause() != null && NotSerializableException.class.equals(wrapperCause.getCause().getClass())) {
throw new IllegalStateException(
"Spout '" + spoutId + "' contains a non-serializable field of type " + wrapperCause.getCause().getMessage() + ", " +
"which was instantiated prior to topology creation. " + wrapperCause.getCause().getMessage() + " " +
"should be instantiated within the prepare method of '" + spoutId + " at the earliest.", wrapperCause);
}
throw wrapperCause;
}
}
StormTopology stormTopology = new StormTopology(spoutSpecs, boltSpecs, new HashMap<>());
stormTopology.set_worker_hooks(_workerHooks);
if (!_componentToSharedMemory.isEmpty()) {
stormTopology.set_component_to_shared_memory(_componentToSharedMemory);
stormTopology.set_shared_memory(_sharedMemory);
}
return Utils.addVersions(stormTopology);
}
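`createTopology` serializes every spout and bolt with Java serialization, which is why a non-serializable field (say, a live connection created in the constructor instead of in `prepare()`) fails at build time with the IllegalStateException shown above. The underlying check can be reproduced with plain `java.io`; the `GoodBolt`/`BadBolt` classes here are illustrative, not Storm types:

```java
import java.io.ByteArrayOutputStream;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializabilityCheck {
    // Returns null if the object serializes cleanly, otherwise the offending
    // class name, mimicking how createTopology surfaces NotSerializableException.
    static String check(Object component) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(component);
            return null;
        } catch (NotSerializableException e) {
            return e.getMessage(); // the non-serializable class's name
        } catch (Exception e) {
            return e.toString();
        }
    }

    static class GoodBolt implements Serializable { int n = 1; }
    static class BadBolt implements Serializable { Thread worker = new Thread(); }

    public static void main(String[] args) {
        System.out.println(check(new GoodBolt())); // null: safe to ship to workers
        System.out.println(check(new BadBolt()));  // java.lang.Thread
    }
}
```

This is why heavyweight, non-serializable resources (connections, threads, clients) should be created in `prepare()`/`open()`, which runs on the worker after deserialization, never in the component's constructor.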
