The flume sink code path in Kudu: https://github.com/apache/kudu/tree/master/java/kudu-flume-sink. The producer that kudu-flume-sink uses by default is org.apache.kudu.flume.sink.SimpleKuduOperationsProducer: public List<Operation> getOperations(Event event) throws FlumeException { try…
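To give a feel for what such a producer does, here is a minimal sketch in the spirit of SimpleKuduOperationsProducer. It is not the upstream class: the class name SketchKuduOperationsProducer, the initialize method shape, and the single STRING column "payload" are assumptions for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;
import org.apache.flume.Event;
import org.apache.flume.FlumeException;
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Operation;

// Hypothetical producer sketch: turns each Flume event body into one Kudu insert.
// Assumes the target table has a single STRING column named "payload".
public class SketchKuduOperationsProducer {
  private KuduTable table; // handed to the producer by the sink once it opens the table

  public void initialize(KuduTable table) {
    this.table = table;
  }

  public List<Operation> getOperations(Event event) throws FlumeException {
    try {
      Insert insert = table.newInsert();
      // Store the raw event body; a real producer would parse it and map columns.
      insert.getRow().addString("payload",
          new String(event.getBody(), StandardCharsets.UTF_8));
      return Collections.<Operation>singletonList(insert);
    } catch (Exception e) {
      throw new FlumeException("failed to build Kudu operation from event", e);
    }
  }
}
```

The idea is simply one Kudu Operation per Flume Event; a custom producer would replace the body-to-one-column mapping with real parsing logic.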
Application 1: syncing Kafka data into Kudu. 1 Prepare the Kafka topic # bin/kafka-topics.sh --zookeeper $zk:2181/kafka --create --topic test_sync --partitions 2 --replication-factor 2 WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could coll…
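A Flume agent wiring that topic into a Kudu table might look roughly like the following properties sketch. The component names (a1, r1, c1, k1), broker list, master address, and table name are placeholders; check the property keys against the Kafka source and kudu-flume-sink documentation for your versions.

```
# Hypothetical agent: Kafka source -> memory channel -> Kudu sink
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1

a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = broker1:9092,broker2:9092
a1.sources.r1.kafka.topics = test_sync
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = org.apache.kudu.flume.sink.KuduSink
a1.sinks.k1.masterAddresses = kudu-master:7051
a1.sinks.k1.tableName = test_sync
a1.sinks.k1.producer = org.apache.kudu.flume.sink.SimpleKuduOperationsProducer
a1.sinks.k1.channel = c1
```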
Flume sink core class structure 1 The core interface Sink: org.apache.flume.Sink /** * <p>Requests the sink to attempt to consume data from attached channel</p> * <p><strong>Note</strong>: This method should be consuming from the channel * within the bounds…
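As a concrete illustration of that contract, here is a minimal custom sink sketch that consumes from its channel inside a transaction, which is the standard pattern for implementing process(). The class name LoggingSink and the "print the body" behavior are made up for illustration.

```java
import java.nio.charset.StandardCharsets;
import org.apache.flume.Channel;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.sink.AbstractSink;

// Hypothetical sink: takes one event per process() call and prints its body.
public class LoggingSink extends AbstractSink {
  @Override
  public Status process() throws EventDeliveryException {
    Channel channel = getChannel();
    Transaction tx = channel.getTransaction();
    try {
      tx.begin();
      Event event = channel.take();   // may be null if the channel is empty
      if (event == null) {
        tx.commit();
        return Status.BACKOFF;        // nothing to do; let the sink runner back off
      }
      System.out.println(new String(event.getBody(), StandardCharsets.UTF_8));
      tx.commit();
      return Status.READY;
    } catch (Throwable t) {
      tx.rollback();
      throw new EventDeliveryException("failed to deliver event", t);
    } finally {
      tx.close();
    }
  }
}
```

The Kudu sink follows the same begin/take/commit-or-rollback shape, except that the taken events are handed to the operations producer and flushed to Kudu before the transaction commits.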
Core enums public enum ServerState { LOOKING, FOLLOWING, LEADING, OBSERVING; } ZooKeeper server states: a server is LOOKING right after startup, a follower is FOLLOWING, the leader is LEADING, and an observer is OBSERVING: public enum LearnerType { PARTICIPANT, OBSERVER; } In short, the core class for starting ZooKeeper is QuorumPeerMain; after starting it loads the configuration,…
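A small sketch of how those two enums relate, drawn from the description above rather than from the ZooKeeper source: a PARTICIPANT moves between LOOKING, FOLLOWING and LEADING, while an OBSERVER only ever serves in the OBSERVING state after election.

```java
import java.util.EnumSet;

// Hypothetical helper illustrating which server states a peer of each LearnerType can reach.
public class PeerStates {
  enum ServerState { LOOKING, FOLLOWING, LEADING, OBSERVING }
  enum LearnerType { PARTICIPANT, OBSERVER }

  static EnumSet<ServerState> reachableStates(LearnerType type) {
    // Participants take part in leader election; observers never vote or lead.
    return type == LearnerType.PARTICIPANT
        ? EnumSet.of(ServerState.LOOKING, ServerState.FOLLOWING, ServerState.LEADING)
        : EnumSet.of(ServerState.LOOKING, ServerState.OBSERVING);
  }

  public static void main(String[] args) {
    System.out.println("PARTICIPANT -> " + reachableStates(LearnerType.PARTICIPANT));
    System.out.println("OBSERVER    -> " + reachableStates(LearnerType.OBSERVER));
  }
}
```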
Related articles: Big Data Series: Installing Kafka; Big Data Series: Flume -- several different Sources; Big Data Series: Flume + HDFS. Some core Flume concepts (component / what it does): Agent -- runs Flume in a JVM; each machine runs one agent, but a single agent can contain multiple sources and sinks (a sketch of such an agent follows). Client -- produces data; runs in an independent thread. Source -- collects data from the Client and passes it to the Channel. Sink -- collects data from the Channel and performs the relevant operations,…
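To make the "one agent, many sources and sinks" point concrete, here is a sketch of a single agent definition with two sources and two sinks in one JVM process. All names, paths and ports are placeholders; see the Kafka-to-Kudu sketch earlier for an end-to-end example.

```
# Hypothetical agent with two independent source -> channel -> sink pipelines.
agent1.sources  = s1 s2
agent1.channels = c1 c2
agent1.sinks    = k1 k2

agent1.sources.s1.type = netcat
agent1.sources.s1.bind = 0.0.0.0
agent1.sources.s1.port = 44444
agent1.sources.s1.channels = c1

agent1.sources.s2.type = exec
agent1.sources.s2.command = tail -F /var/log/app.log
agent1.sources.s2.channels = c2

agent1.channels.c1.type = memory
agent1.channels.c2.type = memory

agent1.sinks.k1.type = logger
agent1.sinks.k1.channel = c1

agent1.sinks.k2.type = hdfs
agent1.sinks.k2.hdfs.path = hdfs:///flume/events
agent1.sinks.k2.channel = c2
```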
To add or remove Kudu data disks you cannot simply change fs_data_dirs and restart; doing so fails with: Check failed: _s.ok() Bad status: Already present: FS layout already exists; not overwriting existing layout: FSManager roots already exist: /data0/kudu/data The official explanation is as follows: When Kudu starts, it checks each configured…
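For reference, the usual procedure (in recent Kudu releases; check the directory-configuration docs for your version) is to stop the node and rewrite the on-disk layout with the kudu CLI before restarting with the new flag value. The paths below are placeholders.

```
# Hypothetical example: add /data1/kudu/data to an existing tablet server.
# Stop the tablet server first, then rewrite the FS layout:
kudu fs update_dirs \
  --fs_wal_dir=/data0/kudu/wal \
  --fs_data_dirs=/data0/kudu/data,/data1/kudu/data
# Then restart the tablet server with the same fs_data_dirs value.
```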
Counting word frequencies in a file is the hello-world application of big data; here is how simple the implementations are: 1 Single-machine on Linux egrep -o "\b[[:alpha:]]+\b" test_word.log|sort|uniq -c|sort -rn|head -10 2 Distributed with Spark (Scala) val sparkConf = new SparkConf() val sc = new SparkContext(sparkConf) sc.textFile("test_wo…
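The Scala snippet above is cut off; as an equivalent sketch, here is the same top-10 word count written against the Spark 2.x Java API (the input file name and app name are placeholders).

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("word-count");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<String> lines = sc.textFile("test_word.log");
    // Split on non-letters, count each word.
    JavaPairRDD<String, Integer> counts = lines
        .flatMap(line -> Arrays.asList(line.split("[^a-zA-Z]+")).iterator())
        .filter(word -> !word.isEmpty())
        .mapToPair(word -> new Tuple2<>(word, 1))
        .reduceByKey(Integer::sum);

    // Sort by count descending and print the top 10, like the shell pipeline above.
    counts.mapToPair(t -> new Tuple2<>(t._2(), t._1()))
        .sortByKey(false)
        .take(10)
        .forEach(t -> System.out.println(t._2() + "\t" + t._1()));

    sc.stop();
  }
}
```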
impala 2.12 Official site: http://impala.apache.org/ 1 Introduction Apache Impala is the open source, native analytic database for Apache Hadoop. Impala is shipped by Cloudera, MapR, Oracle, and Amazon. Impala is an open-source analytic database on Hadoop, developed in C++ and Java. Do BI-style Queries on Hadoop Im…
tpc Official site: http://www.tpc.org/ 1 Introduction The TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry. TPC (The Transaction Processing Perform…
Spark knowledge points 1. Spark basics 1. What is Spark? A general-purpose parallel computing framework in the style of Hadoop MapReduce, open-sourced by UC Berkeley's AMPLab. Spark implements distributed computing based on the MapReduce model and has the advantages of Hadoop MapReduce, but unlike MapReduce, intermediate job output and results can be kept in memory, so repeated reads and writes of HDFS are no longer needed; this makes Spark better suited to algorithms that need iterative map/reduce passes, such as data mining and machine learning (a small sketch follows). 2. Spark vs Hadoop (Spar…
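A tiny sketch of the "keep intermediate results in memory" point, using the Spark Java API (the input path and filter strings are placeholders): caching an RDD lets several actions reuse it without re-reading the input from HDFS.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CacheExample {
  public static void main(String[] args) {
    JavaSparkContext sc =
        new JavaSparkContext(new SparkConf().setAppName("cache-example"));

    // Filter once, keep the result in memory across the two actions below,
    // instead of re-reading and re-filtering the input from HDFS for each job.
    JavaRDD<String> errors = sc.textFile("hdfs:///logs/app.log")
        .filter(line -> line.contains("ERROR"))
        .cache();

    long total = errors.count();                                        // first job: materializes and caches
    long timeouts = errors.filter(l -> l.contains("timeout")).count();  // second job: served from the cache

    System.out.println("errors=" + total + " timeouts=" + timeouts);
    sc.stop();
  }
}
```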