The following is from the official documentation:

Multithreaded Pipeline Overview

A multithreaded pipeline is a pipeline with an origin that supports parallel execution, enabling one pipeline to run in multiple threads.

Multithreaded pipelines enable processing high volumes of data in a single pipeline on one Data Collector, thus taking full advantage of all available CPUs on the Data Collector machine. When using multithreaded pipelines, make sure to allocate sufficient resources to the pipeline and Data Collector.

A multithreaded pipeline honors the configured delivery guarantee for the pipeline, but does not guarantee the order in which batches of data are processed.

How It Works

When you configure a multithreaded pipeline, you specify the number of threads that the origin should use to generate batches of data. You can also configure the maximum number of pipeline runners that Data Collector uses to perform pipeline processing.

A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors and destinations in the pipeline and represents all pipeline processing after the origin.

Origins perform multithreaded processing based on the origin systems they work with, but the following is true for all origins that generate multithreaded pipelines:

When you start the pipeline, the origin creates a number of threads based on the multithreaded property configured in the origin, and Data Collector creates a number of pipeline runners based on the pipeline Max Runners property to perform pipeline processing. Each thread connects to the origin system, creates a batch of data, and passes the batch to an available pipeline runner.
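As a rough illustration of this hand-off, the sketch below models origin threads and pipeline runners with plain Java executors. None of it is Data Collector code: the MultithreadedPipelineSketch class, the Batch record, and the process method are hypothetical stand-ins, and the two counts stand in for the origin's multithreaded property and the pipeline's Max Runners property.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Conceptual model only, not the Data Collector SDK: N origin threads each
// build batches and pass them to whichever pipeline runner is free.
public class MultithreadedPipelineSketch {

    // A batch is just a list of records here; real SDC batches carry records.
    record Batch(int originThread, List<String> records) {}

    public static void main(String[] args) throws InterruptedException {
        int numThreads = 5;   // origin multithreaded property (e.g. Max Concurrent Requests)
        int maxRunners = 5;   // pipeline Max Runners (here: one runner per thread)

        // The runner pool plays the role of the sourceless pipeline instances.
        ExecutorService runners = Executors.newFixedThreadPool(maxRunners);

        // Each origin thread reads from the origin system and creates batches.
        ExecutorService originThreads = Executors.newFixedThreadPool(numThreads);
        for (int t = 0; t < numThreads; t++) {
            final int threadId = t;
            originThreads.submit(() -> {
                for (int b = 0; b < 3; b++) {
                    Batch batch = new Batch(threadId, List.of("record-" + threadId + "-" + b));
                    // Hand the batch to an available pipeline runner; the runner
                    // then performs all processing after the origin.
                    runners.submit(() -> process(batch));
                }
            });
        }

        originThreads.shutdown();
        originThreads.awaitTermination(10, TimeUnit.SECONDS);
        runners.shutdown();
        runners.awaitTermination(10, TimeUnit.SECONDS);
    }

    static void process(Batch batch) {
        // Stand-in for the processors and destinations. Because runners work
        // independently, the order in which batches finish is not guaranteed.
        System.out.println("runner " + Thread.currentThread().getName()
            + " processed batch from origin thread " + batch.originThread());
    }
}

Running the sketch prints completions in a different order each time, which mirrors why a multithreaded pipeline cannot guarantee the order in which batches reach destinations.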

Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.
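To make the idle behavior concrete, here is a minimal sketch of what an idle runner does, assuming a queue-based hand-off like the previous example. The Runner Idle Time property name comes from the documentation, but the polling loop, the 5-second value, and the opt-out handling are illustrative assumptions, not Data Collector internals.

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Models an idle pipeline runner: wait up to runnerIdleTimeSec for a batch and
// emit an empty batch when none arrives. A non-positive value is treated here
// as opting out of empty batches. All values are illustrative only.
public class RunnerIdleTimeSketch {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<List<String>> incoming = new LinkedBlockingQueue<>();
        long runnerIdleTimeSec = 5;    // illustrative; set via the Runner Idle Time property

        for (int i = 0; i < 3; i++) {  // a few iterations for demonstration
            if (runnerIdleTimeSec <= 0) {
                // Opted out of empty batches: block until real data arrives.
                List<String> batch = incoming.take();
                System.out.println("processing batch of " + batch.size() + " records");
                continue;
            }
            List<String> batch = incoming.poll(runnerIdleTimeSec, TimeUnit.SECONDS);
            if (batch == null) {
                System.out.println("no data within idle time -> processing empty batch");
            } else {
                System.out.println("processing batch of " + batch.size() + " records");
            }
        }
    }
}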

Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline runners, the order in which batches are written to destinations is not guaranteed.

For example, take the following multithreaded pipeline. The HTTP Server origin processes HTTP POST and PUT requests passed from HTTP clients. When you configure the origin, you specify the number of threads to use - in this case, the Max Concurrent Requests property:

Let's say you configure the pipeline to opt out of the Max Runners property. When you do this, Data Collector creates one pipeline runner for each origin thread.

With Max Concurrent Requests set to 5, when you start the pipeline the origin creates five threads and Data Collector creates five pipeline runners. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing.

Conceptually, the multithreaded pipeline looks like this:

Each pipeline runner performs the processing associated with the rest of the pipeline. After a batch is written to pipeline destinations - in this case, Azure Data Lake Store 1 and 2 - the pipeline runner becomes available for another batch of data. Each batch is processed and written as quickly as possible, independently from batches processed by other pipeline runners, so the write-order of the batches can differ from the read-order.

At any given moment, the five pipeline runners can each process a batch, so this multithreaded pipeline processes up to five batches at a time. When incoming data slows, the pipeline runners sit idle, available for use as soon as the data flow increases.
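To exercise all five origin threads from the client side, you could issue several POST requests concurrently. The following is a minimal sketch that assumes the HTTP Server origin listens on port 8000 and expects an application ID in the X-SDC-APPLICATION-ID header; the host, port, header value, and payload are placeholders that must match the actual origin configuration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sends five POST requests in parallel to an HTTP Server origin so that each
// origin thread can pick up a request. Endpoint and header values are assumed.
public class HttpOriginLoadSketch {

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(5);

        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.submit(() -> {
                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://sdc-host:8000/"))      // origin listening port (assumed)
                    .header("X-SDC-APPLICATION-ID", "myAppId")     // application ID (assumed)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"event\":" + id + "}"))
                    .build();
                try {
                    HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("request " + id + " -> " + response.statusCode());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }
}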
