ElasticSearch-hadoop saveToEs source code analysis

The call path through the classes is:

EsSpark ->
EsRDDWriter ->
RestService ->
RestRepository ->
RestClient

Their roles (a minimal usage sketch of the chain follows the list):

  • EsSpark: the entry point for reading from and writing to ES
  • EsRDDWriter: calls RestService to create a PartitionWriter, which performs the data writes to ES
  • RestService: responsible for creating the RestRepository and the PartitionWriter
  • RestRepository: the high-level bulk abstraction; underneath it relies on NetworkClient to issue the real HTTP bulk requests
  • RestClient: executes the individual REST calls, bulk among them, against the selected ES node
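
Before tracing each class, here is a minimal usage sketch of the entry point; the app name, node address, index/type name, and document contents are illustrative placeholders, not taken from the traced source:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.elasticsearch.spark._ // adds saveToEs to RDDs via an implicit conversion

    object SaveToEsExample {
      def main(args: Array[String]): Unit = {
        // es.nodes points at the ES cluster; address and app name are placeholders
        val conf = new SparkConf().setAppName("es-write").set("es.nodes", "localhost:9200")
        val sc = new SparkContext(conf)

        val docs = sc.makeRDD(Seq(
          Map("title" -> "doc-1", "views" -> 10),
          Map("title" -> "doc-2", "views" -> 20)))

        // delegates to EsSpark.saveToEs(rdd, "index/type"), the entry point traced below
        docs.saveToEs("spark/docs")
        sc.stop()
      }
    }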

The source trace for each class follows:

https://github.com/elastic/elasticsearch-hadoop/blob/2.1/spark/core/main/scala/org/elasticsearch/spark/rdd/EsSpark.scala

    def saveToEs(rdd: RDD[_], resource: String) {
      saveToEs(rdd, Map(ES_RESOURCE_WRITE -> resource))
    }

    def saveToEs(rdd: RDD[_], resource: String, cfg: Map[String, String]) {
      saveToEs(rdd, collection.mutable.Map(cfg.toSeq: _*) += (ES_RESOURCE_WRITE -> resource))
    }

    def saveToEs(rdd: RDD[_], cfg: Map[String, String]) {
      CompatUtils.warnSchemaRDD(rdd, LogFactory.getLog("org.elasticsearch.spark.rdd.EsSpark"))
      if (rdd == null || rdd.partitions.length == 0) {
        return
      }
      // layer the settings: SparkConf values first, then the per-call cfg on top
      val sparkCfg = new SparkSettingsManager().load(rdd.sparkContext.getConf)
      val config = new PropertiesSettings().load(sparkCfg.save())
      config.merge(cfg.asJava)
      // one write task per partition; EsRDDWriter.write is the task body
      rdd.sparkContext.runJob(rdd, new EsRDDWriter(config.save()).write _)
    }
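
Note the settings layering: the SparkConf values are loaded first and the per-call cfg map is merged on top, so per-call keys win; config.save() then serializes everything into a properties string, which is what lets EsRDDWriter travel to the executors. A toy model of the precedence (keys and values illustrative):

    val sparkLevel = Map("es.nodes" -> "node1:9200", "es.batch.size.entries" -> "1000")
    val perCall    = Map("es.batch.size.entries" -> "500", "es.resource.write" -> "spark/docs")
    val effective  = sparkLevel ++ perCall // the later map wins, mirroring config.merge(cfg)
    assert(effective("es.batch.size.entries") == "500")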

https://github.com/elastic/elasticsearch-hadoop/blob/2.1/spark/core/main/scala/org/elasticsearch/spark/rdd/EsRDDWriter.scala

    def write(taskContext: TaskContext, data: Iterator[T]) {
      // one writer per partition, identified by the partition id
      val writer = RestService.createWriter(settings, taskContext.partitionId, -1, log)
      taskContext.addOnCompleteCallback(() => writer.close())
      if (runtimeMetadata) {
        writer.repository.addRuntimeFieldExtractor(metaExtractor)
      }
      while (data.hasNext) {
        writer.repository.writeToIndex(processData(data))
      }
    }
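
The write method above is invoked by runJob once per partition, on the executors, so every partition gets its own writer (and, as shown next, its own pinned ES node). A self-contained sketch of that pattern, with a println standing in for the real writer:

    import org.apache.spark.{SparkContext, TaskContext}

    // runJob executes the function once per partition on the executors;
    // taskContext.partitionId is what EsRDDWriter passes on as currentSplit
    def writePartition(ctx: TaskContext, data: Iterator[String]): Unit =
      println(s"partition ${ctx.partitionId()} writing ${data.size} records")

    def run(sc: SparkContext): Unit = {
      val rdd = sc.parallelize(1 to 100, numSlices = 4).map(_.toString)
      sc.runJob(rdd, writePartition _)
    }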

https://github.com/elastic/elasticsearch-hadoop/blob/2.1/mr/src/main/java/org/elasticsearch/hadoop/rest/RestService.java

    public static PartitionWriter createWriter(Settings settings, int currentSplit, int totalSplits, Log log) {
        Version.logVersion();
        InitializationUtils.discoverEsVersion(settings, log);
        InitializationUtils.discoverNodesIfNeeded(settings, log);
        InitializationUtils.filterNonClientNodesIfNeeded(settings, log);
        InitializationUtils.filterNonDataNodesIfNeeded(settings, log);
        List<String> nodes = SettingsUtils.discoveredOrDeclaredNodes(settings);
        // check invalid splits (applicable when running in non-MR environments) - in this case fall back to Random..
        int selectedNode = (currentSplit < 0) ? new Random().nextInt(nodes.size()) : currentSplit % nodes.size();
        // select the appropriate nodes first, to spread the load before-hand
        SettingsUtils.pinNode(settings, nodes.get(selectedNode));
        Resource resource = new Resource(settings, false);
        log.info(String.format("Writing to [%s]", resource));
        // single index vs multi indices
        IndexExtractor iformat = ObjectUtils.instantiate(settings.getMappingIndexExtractorClassName(), settings);
        iformat.compile(resource.toString());
        RestRepository repository = (iformat.hasPattern() ? initMultiIndices(settings, currentSplit, resource, log)
                : initSingleIndex(settings, currentSplit, resource, log));
        return new PartitionWriter(settings, currentSplit, totalSplits, repository);
    }
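
The key line here is the node selection: each partition is pinned to a single node via currentSplit % nodes.size, spreading bulk traffic across the cluster up front, while a negative split (non-MR environments) falls back to a random node. A small sketch of just that arithmetic (node addresses are placeholders):

    def selectNode(nodes: Seq[String], currentSplit: Int): String = {
      val idx =
        if (currentSplit < 0) scala.util.Random.nextInt(nodes.size) // fall back to random
        else currentSplit % nodes.size                              // round-robin by partition id
      nodes(idx)
    }

    val nodes = Seq("node1:9200", "node2:9200", "node3:9200")
    // partitions 0..5 pin to node1, node2, node3, node1, node2, node3
    (0 to 5).map(selectNode(nodes, _))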

https://github.com/elastic/elasticsearch-hadoop/blob/2.1/mr/src/main/java/org/elasticsearch/hadoop/rest/RestRepository.java

    /**
     * Writes the objects to index.
     *
     * @param object object to add to the index
     */
    public void writeToIndex(Object object) {
        Assert.notNull(object, "no object data given");
        lazyInitWriting();
        doWriteToIndex(command.write(object));
    }
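
command.write(object) turns a single record into its bulk-API representation, an action/metadata line followed by the JSON source, and doWriteToIndex appends those bytes to the in-memory buffer. A hand-rolled illustration of the shape of that payload (the real serialization lives in the bulk command classes, and this simplified action line is an assumption for illustration):

    // each bulk entry is two newline-terminated JSON lines: action + source
    def toBulkEntry(index: String, docJson: String): String =
      s"""{"index":{"_index":"$index"}}""" + "\n" + docJson + "\n"

    toBulkEntry("spark", """{"title":"doc-1","views":10}""")
    // => {"index":{"_index":"spark"}}
    //    {"title":"doc-1","views":10}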
    private void doWriteToIndex(BytesRef payload) {
        // check space first
        if (payload.length() > ba.available()) {
            if (autoFlush) {
                flush();
            }
            else {
                throw new EsHadoopIllegalStateException(
                        String.format("Auto-flush disabled and bulk buffer full; disable manual flush or increase capacity [current size %s]; bailing out", ba.capacity()));
            }
        }

        data.copyFrom(payload);
        payload.reset();

        dataEntries++;
        if (bufferEntriesThreshold > 0 && dataEntries >= bufferEntriesThreshold) {
            if (autoFlush) {
                flush();
            }
            else {
                // handle the corner case of manual flush that occurs only after the buffer is completely full (think size of 1)
                if (dataEntries > bufferEntriesThreshold) {
                    throw new EsHadoopIllegalStateException(
                            String.format(
                                    "Auto-flush disabled and maximum number of entries surpassed; disable manual flush or increase capacity [current size %s]; bailing out",
                                    bufferEntriesThreshold));
                }
            }
        }
    }
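
So a single write can trigger a flush through two independent limits: the byte capacity of the buffer (es.batch.size.bytes) and the entry count (es.batch.size.entries); with auto-flush disabled, overflowing either one raises an error instead. A simplified, stand-alone model of that policy (all names here are illustrative, not the library's):

    class BulkBuffer(capacityBytes: Int, entriesThreshold: Int, autoFlush: Boolean,
                     doFlush: Seq[Array[Byte]] => Unit) {
      private val entries = scala.collection.mutable.ArrayBuffer.empty[Array[Byte]]
      private var usedBytes = 0

      def add(payload: Array[Byte]): Unit = {
        // flush (or fail) when either the byte capacity or the entry threshold is hit
        if (usedBytes + payload.length > capacityBytes || entries.size >= entriesThreshold) {
          if (!autoFlush) sys.error("buffer full and auto-flush disabled; bailing out")
          flush()
        }
        entries += payload
        usedBytes += payload.length
      }

      def flush(): Unit = {
        if (entries.nonEmpty) doFlush(entries.toSeq)
        entries.clear()
        usedBytes = 0
      }
    }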
    public void flush() {
        BitSet bulk = tryFlush();
        if (!bulk.isEmpty()) {
            throw new EsHadoopException(String.format("Could not write all entries [%s/%s] (maybe ES was overloaded?). Bailing out...", bulk.cardinality(), bulk.size()));
        }
    }
    public BitSet tryFlush() {
        if (log.isDebugEnabled()) {
            log.debug(String.format("Sending batch of [%d] bytes/[%s] entries", data.length(), dataEntries));
        }

        BitSet bulkResult = EMPTY;

        try {
            // double check data - it might be a false flush (called on clean-up)
            if (data.length() > 0) {
                bulkResult = client.bulk(resourceW, data);
                executedBulkWrite = true;
            }
        } catch (EsHadoopException ex) {
            hadWriteErrors = true;
            throw ex;
        }

        // discard the data buffer, only if it was properly sent/processed
        //if (bulkResult.isEmpty()) {
        // always discard data since there's no code path that uses the in flight data
        discard();
        //}

        return bulkResult;
    }

https://github.com/elastic/elasticsearch-hadoop/blob/2.1/mr/src/main/java/org/elasticsearch/hadoop/rest/RestClient.java

    public BitSet bulk(Resource resource, TrackingBytesArray data) {
        Retry retry = retryPolicy.init();
        int httpStatus = 0;
        boolean isRetry = false;

        do {
            // NB: dynamically get the stats since the transport can change
            long start = network.transportStats().netTotalTime;
            Response response = execute(PUT, resource.bulk(), data);
            long spent = network.transportStats().netTotalTime - start;

            stats.bulkTotal++;
            stats.docsSent += data.entries();
            stats.bulkTotalTime += spent;
            // bytes will be counted by the transport layer
            if (isRetry) {
                stats.docsRetried += data.entries();
                stats.bytesRetried += data.length();
                stats.bulkRetries++;
                stats.bulkRetriesTotalTime += spent;
            }
            isRetry = true;

            httpStatus = (retryFailedEntries(response, data) ? HttpStatus.SERVICE_UNAVAILABLE : HttpStatus.OK);
        } while (data.length() > 0 && retry.retry(httpStatus));

        return data.leftoversPosition();
    }
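
The loop above re-sends only what failed: retryFailedEntries prunes acknowledged documents from the TrackingBytesArray, so each retry carries just the rejected subset, and the returned BitSet marks whatever could never be written. A schematic model of that behaviour (the names and the send function are illustrative, not the library's API):

    // send returns the entries ES rejected; retries shrink the payload each round
    @annotation.tailrec
    def bulkWithRetry(pending: Seq[String], retriesLeft: Int,
                      send: Seq[String] => Seq[String]): Seq[String] =
      if (pending.isEmpty || retriesLeft == 0) pending // leftovers, like leftoversPosition()
      else bulkWithRetry(send(pending), retriesLeft - 1, send)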
