StreamSets Origin Overview
An origin is the source entry point of a StreamSets pipeline; a pipeline can use only one origin. Pipelines running in different execution modes support different origins:
- Standalone mode
- Cluster mode
- Edge mode (agent)
- Development mode (for building and testing pipelines)
Standalone mode origins
In standalone pipelines, you can use the following origins:
- Amazon S3 - Reads objects from Amazon S3.
- Amazon SQS Consumer - Reads data from queues in Amazon Simple Queue Services (SQS).
- Azure IoT/Event Hub Consumer - Reads data from Microsoft Azure Event Hub. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- CoAP Server - Listens on a CoAP endpoint and processes the contents of all authorized CoAP requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- Directory - Reads fully-written files from a directory. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- Elasticsearch - Reads data from an Elasticsearch cluster. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- File Tail - Reads lines of data from an active file after reading related archived files in the directory.
- Google BigQuery - Executes a query job and reads the result from Google BigQuery.
- Google Cloud Storage - Reads fully written objects from Google Cloud Storage.
- Google Pub/Sub Subscriber - Consumes messages from a Google Pub/Sub subscription. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- Hadoop FS Standalone - Reads fully-written files from HDFS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- HTTP Client - Reads data from a streaming HTTP resource URL.
- HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- HTTP to Kafka (Deprecated) - Listens on an HTTP endpoint and writes the contents of all authorized HTTP POST requests directly to Kafka.
- JDBC Multitable Consumer - Reads database data from multiple tables through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- JDBC Query Consumer - Reads database data using a user-defined SQL query through a JDBC connection.
- JMS Consumer - Reads messages from JMS.
- Kafka Consumer - Reads messages from a single Kafka topic.
- Kafka Multitopic Consumer - Reads messages from multiple Kafka topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- Kinesis Consumer - Reads data from Kinesis Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- MapR DB CDC - Reads changed MapR DB data that has been written to MapR Streams. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- MapR DB JSON - Reads JSON documents from MapR DB JSON tables.
- MapR FS - Reads files from MapR FS.
- MapR FS Standalone - Reads fully-written files from MapR FS. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- MapR Multitopic Streams Consumer - Reads messages from multiple MapR Streams topics. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- MapR Streams Consumer - Reads messages from MapR Streams.
- MongoDB - Reads documents from MongoDB.
- MongoDB Oplog - Reads entries from a MongoDB Oplog.
- MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
- MySQL Binary Log - Reads MySQL binary logs to generate change data capture records.
- Omniture - Reads web usage reports from the Omniture reporting API.
- OPC UA Client - Reads data from an OPC UA server.
- Oracle CDC Client - Reads LogMiner redo logs to generate change data capture records.
- PostgreSQL CDC Client - Reads PostgreSQL WAL data to generate change data capture records.
- RabbitMQ Consumer - Reads messages from RabbitMQ.
- Redis Consumer - Reads messages from Redis.
- REST Service - Listens on an HTTP endpoint, parses the contents of all authorized requests, and sends responses back to the originating REST API. Creates multiple threads to enable parallel processing in a multithreaded pipeline. Use as part of a microservice pipeline.
- Salesforce - Reads data from Salesforce.
- SDC RPC - Reads data from an SDC RPC destination in an SDC RPC pipeline.
- SDC RPC to Kafka (Deprecated) - Reads data from an SDC RPC destination in an SDC RPC pipeline and writes it to Kafka.
- SFTP/FTP Client - Reads files from an SFTP or FTP server.
- SQL Server CDC Client - Reads data from Microsoft SQL Server CDC tables. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- SQL Server Change Tracking - Reads data from Microsoft SQL Server change tracking tables and generates the latest version of each record. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- TCP Server - Listens at the specified ports and processes incoming data over TCP/IP connections. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- UDP Multithreaded Source - Reads messages from one or more UDP ports. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
- UDP Source - Reads messages from one or more UDP ports.
- UDP to Kafka (Deprecated) - Reads messages from one or more UDP ports and writes the data to Kafka.
- WebSocket Client - Reads data from a WebSocket server endpoint.
- WebSocket Server - Listens on a WebSocket endpoint and processes the contents of all authorized WebSocket client requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
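
Several of the listen-type origins above (HTTP Server, REST Service, WebSocket Server) are driven entirely by external clients. As a quick illustration, here is a minimal Python sketch that pushes JSON records to an HTTP Server origin. The port and the application ID are assumptions that must match the origin's configuration; the origin authorizes requests by the application ID, passed here in the X-SDC-APPLICATION-ID header.

```python
import json
import requests

# Assumed values -- must match the HTTP Server origin configuration
# ("HTTP Listening Port" and "Application ID").
SDC_HTTP_ORIGIN = "http://localhost:8000"
APPLICATION_ID = "my-app-id"  # hypothetical application ID

records = [{"id": 1, "status": "ok"}, {"id": 2, "status": "error"}]

# Send one JSON object per line; the origin parses each line as a record.
resp = requests.post(
    SDC_HTTP_ORIGIN,
    headers={
        "X-SDC-APPLICATION-ID": APPLICATION_ID,
        "Content-Type": "application/json",
    },
    data="\n".join(json.dumps(r) for r in records),
)
resp.raise_for_status()
```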
Cluster mode origins
In cluster pipelines, you can use the following origins:
- Hadoop FS - Reads data from HDFS, Amazon S3, or other file systems using the Hadoop FileSystem interface.
- Kafka Consumer - Reads messages from Kafka. Use the cluster version of the origin.
- MapR FS - Reads data from MapR FS.
- MapR Streams Consumer - Reads messages from MapR Streams.
Edge mode origins
In edge pipelines, you can use the following origins:
- Directory - Reads fully-written files from a directory.
- File Tail - Reads lines of data from an active file after reading related archived files in the directory.
- HTTP Client - Reads data from a streaming HTTP resource URL.
- HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests.
- MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
- System Metrics - Reads system metrics from the edge device where SDC Edge is installed.
- WebSocket Client - Reads data from a WebSocket server endpoint.
- Windows Event Log - Reads data from a Microsoft Windows event log located on a Windows machine.
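
On the producing side, an edge pipeline whose origin is the MQTT Subscriber simply needs messages published to the broker topic it subscribes to. Below is a minimal sketch using the paho-mqtt client library; the broker address and topic name are assumptions that must match the origin's configuration.

```python
import json
import paho.mqtt.client as mqtt

# Assumed values -- must match the MQTT Subscriber origin configuration.
BROKER_HOST = "localhost"
BROKER_PORT = 1883
TOPIC = "sensors/temperature"  # hypothetical topic name

client = mqtt.Client()
client.connect(BROKER_HOST, BROKER_PORT)

# Publish one JSON reading; the MQTT Subscriber origin on SDC Edge
# receives it from the broker and feeds it into the edge pipeline.
payload = json.dumps({"device": "edge-01", "temp_c": 21.5})
client.publish(TOPIC, payload, qos=1)
client.disconnect()
```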
Development origins
To help create or test pipelines, you can use the following development origins:
- Dev Data Generator
- Dev Random Source
- Dev Raw Data Source
- Dev SDC RPC with Buffering
- Dev Snapshot Replaying
- Sensor Reader
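
Test pipelines built with these development origins can also be driven from scripts rather than the UI. The sketch below uses SDC's REST API to start a pipeline and poll its status; the URL, credentials, and pipeline ID are assumptions for illustration and must be adjusted to your installation.

```python
import requests

# Assumed values -- adjust to your SDC installation.
SDC_URL = "http://localhost:18630"     # default SDC port
AUTH = ("admin", "admin")              # default credentials
PIPELINE_ID = "myDevPipeline"          # hypothetical pipeline ID
HEADERS = {"X-Requested-By": "sdc"}    # SDC rejects POSTs without this header

# Start the pipeline, then read back its current status.
requests.post(
    f"{SDC_URL}/rest/v1/pipeline/{PIPELINE_ID}/start",
    auth=AUTH, headers=HEADERS,
).raise_for_status()

status = requests.get(
    f"{SDC_URL}/rest/v1/pipeline/{PIPELINE_ID}/status",
    auth=AUTH, headers=HEADERS,
).json()
print(status.get("status"))  # e.g. "RUNNING"
```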