A Study of Flume's Avro Sink and Avro Source, Part 1: Avro Source
Question: what kind of RPC service does Avro Source provide, and how does it provide it?
Question 1.1: how does AvroSource start a Netty server to serve RPC calls?
From the avro-rpc-quickstart project on GitHub we know that a NettyServer serving a particular RPC protocol can be started as follows. Is this how Flume's AvroSource provides its RPC service?
server = new NettyServer(new SpecificResponder(Mail.class, new MailImpl()), new InetSocketAddress(65111));
The code in AvroSource that creates the NettyServer is:
Responder responder = new SpecificResponder(AvroSourceProtocol.class, this);
NioServerSocketChannelFactory socketChannelFactory = initSocketChannelFactory();
ChannelPipelineFactory pipelineFactory = initChannelPipelineFactory();
server = new NettyServer(responder, new InetSocketAddress(bindAddress, port),
    socketChannelFactory, pipelineFactory, null);
So AvroSource does use Avro's NettyServer class directly to stand up a Netty server, but through a different constructor that additionally takes a ChannelFactory and a ChannelPipelineFactory.
What kind of ChannelFactory does AvroSource use, then?
The implementation of initSocketChannelFactory() is:
private NioServerSocketChannelFactory initSocketChannelFactory() {
  NioServerSocketChannelFactory socketChannelFactory;
  if (maxThreads <= 0) {
    socketChannelFactory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(), Executors.newCachedThreadPool());
  } else {
    socketChannelFactory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newFixedThreadPool(maxThreads));
  }
  return socketChannelFactory;
}
So the reason for specifying a ChannelFactory is that AvroSource's "threads" configuration parameter determines the maximum number of worker threads, which is the maximum number of threads handling RPC requests.
See the Javadoc of NioServerSocketChannelFactory:
A ServerSocketChannelFactory which creates a server-side NIO-based ServerSocketChannel. It utilizes the non-blocking I/O mode which was introduced with NIO to serve many number of concurrent connections efficiently.
How threads work
There are two types of threads in a NioServerSocketChannelFactory; one is boss thread and the other is worker thread.
Boss threads
Each bound ServerSocketChannel has its own boss thread. For example, if you opened two server ports such as 80 and 443, you will have two boss threads. A boss thread accepts incoming connections until the port is unbound. Once a connection is accepted successfully, the boss thread passes the accepted Channel to one of the worker threads that the NioServerSocketChannelFactory manages.
Worker threads
One NioServerSocketChannelFactory can have one or more worker threads. A worker thread performs non-blocking read and write for one or more Channels in a non-blocking mode.
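In Flume terms, the worker-pool bound is driven by the source's "threads" property. A minimal agent configuration enabling it might look like the following sketch (the agent name a1 and the component names r1/c1 are arbitrary placeholders):

```properties
# Sketch of an agent config using the Avro source.
a1.sources = r1
a1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
# Caps maxThreads, i.e. the Netty worker pool; when unset (<= 0),
# AvroSource falls back to an unbounded cached thread pool.
a1.sources.r1.threads = 8
```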
What is ChannelPipelineFactory for, and why does AvroSource supply a custom one as well?
The Javadoc of ChannelPipeline says:
A list of ChannelHandlers which handles or intercepts ChannelEvents of a Channel. ChannelPipeline implements an advanced form of the Intercepting Filter pattern to give a user full control over how an event is handled and how the ChannelHandlers in the pipeline interact with each other.
So this provides an advanced, composable form of interceptor chain. Let's see what kind of ChannelPipelineFactory AvroSource uses:
private ChannelPipelineFactory initChannelPipelineFactory() {
  ChannelPipelineFactory pipelineFactory;
  boolean enableCompression = compressionType.equalsIgnoreCase("deflate");
  if (enableCompression || enableSsl) {
    pipelineFactory = new SSLCompressionChannelPipelineFactory(
        enableCompression, enableSsl, keystore,
        keystorePassword, keystoreType);
  } else {
    pipelineFactory = new ChannelPipelineFactory() {
      @Override
      public ChannelPipeline getPipeline() throws Exception {
        return Channels.pipeline();
      }
    };
  }
  return pipelineFactory;
}
So if compression or SSL is enabled, AvroSource uses SSLCompressionChannelPipelineFactory, a private static inner class of AvroSource. Otherwise it creates a fresh, empty pipeline via Channels.pipeline(), which apparently does nothing on its own.
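The intercepting-filter idea behind ChannelPipeline can be illustrated with a minimal, Netty-free sketch. All names here are invented for illustration; a real ChannelPipeline carries ChannelHandlers and ChannelEvents rather than string transforms:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class MiniPipeline {
    // A toy illustration of the intercepting-filter pattern: each handler
    // sees the event and may transform it before passing it downstream.
    private final List<UnaryOperator<String>> handlers = new ArrayList<>();

    MiniPipeline addLast(UnaryOperator<String> handler) {
        handlers.add(handler);
        return this;
    }

    String fire(String event) {
        for (UnaryOperator<String> h : handlers) {
            event = h.apply(event);
        }
        return event;
    }

    public static void main(String[] args) {
        MiniPipeline p = new MiniPipeline()
                .addLast(s -> s.trim())          // a "decoder" stage
                .addLast(String::toUpperCase);   // a "business logic" stage
        System.out.println(p.fire("  hello  ")); // prints HELLO
    }
}
```

An empty pipeline, like the one Channels.pipeline() returns here, simply passes events through untouched, leaving only the handlers that Avro's NettyServer itself installs.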
Question 1.2: the server is up, but what RPC service does it actually expose?
The key is this line:
Responder responder = new SpecificResponder(AvroSourceProtocol.class, this);
Consulting Avro's API shows that SpecificResponder's two arguments are a protocol and an implementation of that protocol. It looks like the AvroSource class itself implements AvroSourceProtocol. Yes, AvroSource is declared as:
public class AvroSource extends AbstractSource implements EventDrivenSource, Configurable, AvroSourceProtocol
So let's look at how AvroSourceProtocol is defined. It lives in the flume-ng-sdk module under src/main/avro, defined by flume.avdl. An .avdl file is a protocol written in Avro IDL; placing it in that particular directory is a convention of avro-maven-plugin.
The avdl looks like this:
@namespace("org.apache.flume.source.avro")
protocol AvroSourceProtocol {
  enum Status {
    OK, FAILED, UNKNOWN
  }

  record AvroFlumeEvent {
    map<string> headers;
    bytes body;
  }

  Status append( AvroFlumeEvent event );
  Status appendBatch( array<AvroFlumeEvent> events );
}
It defines an enum, used as the return value of append and appendBatch, representing the Source's result for a delivered message: one of OK, FAILED, or UNKNOWN.
It defines a record type, AvroFlumeEvent, matching Flume's definition of an Event: the headers are a set of key-value pairs, i.e. a map, and the body is a byte array.
It defines two methods: append for a single AvroFlumeEvent, and appendBatch for a batch of them.
From this avdl, Avro generates three Java files: the enum Status, the class AvroFlumeEvent, and the interface AvroSourceProtocol. AvroSource implements the AvroSourceProtocol interface, exposing append and appendBatch as remotely callable methods.
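For orientation, the generated artifacts look roughly like the following. This is a hand-written approximation, not the actual generated source; the real generated classes additionally carry Avro schema plumbing (and, in older Avro versions, throws clauses for AvroRemoteException) that is omitted here:

```java
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Approximations of the three artifacts Avro generates from flume.avdl.
enum Status { OK, FAILED, UNKNOWN }

class AvroFlumeEvent {
    Map<CharSequence, CharSequence> headers = Collections.emptyMap();
    ByteBuffer body = ByteBuffer.allocate(0);
}

interface AvroSourceProtocol {
    Status append(AvroFlumeEvent event);
    Status appendBatch(List<AvroFlumeEvent> events);
}
```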
The append method is implemented as:
@Override
public Status append(AvroFlumeEvent avroEvent) {
  logger.debug("Avro source {}: Received avro event: {}", getName(),
      avroEvent);
  sourceCounter.incrementAppendReceivedCount();
  sourceCounter.incrementEventReceivedCount();

  Event event = EventBuilder.withBody(avroEvent.getBody().array(),
      toStringMap(avroEvent.getHeaders()));

  try {
    getChannelProcessor().processEvent(event);
  } catch (ChannelException ex) {
    logger.warn("Avro source " + getName() + ": Unable to process event. " +
        "Exception follows.", ex);
    return Status.FAILED;
  }

  sourceCounter.incrementAppendAcceptedCount();
  sourceCounter.incrementEventAcceptedCount();
  return Status.OK;
}
This method takes the received AvroFlumeEvent object and converts it into an Event. The conversion only bridges the mismatched data types: avroEvent.getBody() returns a ByteBuffer, and avroEvent.getHeaders() returns a Map<CharSequence, CharSequence>.
Once the Event is built, the message is handed to the ChannelProcessor associated with this Source for processing.
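The header conversion can be sketched with an Avro-free stand-in. This is an approximation for illustration, not Flume's actual toStringMap source:

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderConversion {
    // Sketch of the conversion append() needs: Avro hands header keys and
    // values back as CharSequence (typically org.apache.avro.util.Utf8),
    // while Flume's Event API expects Map<String, String>.
    static Map<String, String> toStringMap(Map<CharSequence, CharSequence> in) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<CharSequence, CharSequence> e : in.entrySet()) {
            out.put(e.getKey().toString(), e.getValue().toString());
        }
        return out;
    }
}
```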
The appendBatch implementation is very similar to append.