The Flume sink code lives in the Kudu repository at:

https://github.com/apache/kudu/tree/master/java/kudu-flume-sink

The default producer used by kudu-flume-sink is

org.apache.kudu.flume.sink.SimpleKuduOperationsProducer

  public List<Operation> getOperations(Event event) throws FlumeException {
    try {
      Insert insert = table.newInsert();
      PartialRow row = insert.getRow();
      row.addBinary(payloadColumn, event.getBody());
      return Collections.singletonList((Operation) insert);
    } catch (Exception e) {
      throw new FlumeException("Failed to create Kudu Insert object", e);
    }
  }

It simply writes the entire event body into a single payload column.

To support JSON-formatted data, custom development is required:

package com.cloudera.kudu;
public class JsonKuduOperationsProducer implements KuduOperationsProducer {

An implementation has already been shared online: https://cloud.tencent.com/developer/article/1158194

However, that code has a few inconveniences: 1) null values are not allowed; 2) timestamp types are not well supported; 3) every value must be a string, which is then parsed according to the Kudu column type, so you either have to keep this in mind when producing the data or modify the code yourself.

The code, modified to address these issues, is as follows:

JsonKuduOperationsProducer.java

package com.cloudera.kudu;

import com.google.common.collect.Lists;
import com.google.common.base.Preconditions;
import org.json.JSONObject;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.FlumeException;
import org.apache.flume.annotations.InterfaceAudience;
import org.apache.flume.annotations.InterfaceStability;
import org.apache.kudu.ColumnSchema;
import org.apache.kudu.Schema;
import org.apache.kudu.Type;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Operation;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.flume.sink.KuduOperationsProducer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.nio.charset.Charset;
import java.text.SimpleDateFormat;
import java.util.List;
import java.util.TimeZone;
import java.util.function.Function;

@InterfaceAudience.Public
@InterfaceStability.Evolving
public class JsonKuduOperationsProducer implements KuduOperationsProducer {
  private static final Logger logger = LoggerFactory.getLogger(JsonKuduOperationsProducer.class);

  private static final String INSERT = "insert";
  private static final String UPSERT = "upsert";
  private static final List<String> validOperations = Lists.newArrayList(UPSERT, INSERT);

  public static final String ENCODING_PROP = "encoding";
  public static final String DEFAULT_ENCODING = "utf-8";
  public static final String OPERATION_PROP = "operation";
  public static final String DEFAULT_OPERATION = UPSERT;
  public static final String SKIP_MISSING_COLUMN_PROP = "skipMissingColumn";
  public static final boolean DEFAULT_SKIP_MISSING_COLUMN = false;
  public static final String SKIP_BAD_COLUMN_VALUE_PROP = "skipBadColumnValue";
  public static final boolean DEFAULT_SKIP_BAD_COLUMN_VALUE = false;
  public static final String WARN_UNMATCHED_ROWS_PROP = "skipUnmatchedRows";
  public static final boolean DEFAULT_WARN_UNMATCHED_ROWS = true;

  private KuduTable table;
  private Charset charset;
  private String operation;
  private boolean skipMissingColumn;
  private boolean skipBadColumnValue;
  private boolean warnUnmatchedRows;

  public JsonKuduOperationsProducer() {
  }

  @Override
  public void configure(Context context) {
    String charsetName = context.getString(ENCODING_PROP, DEFAULT_ENCODING);
    try {
      charset = Charset.forName(charsetName);
    } catch (IllegalArgumentException e) {
      throw new FlumeException(
          String.format("Invalid or unsupported charset %s", charsetName), e);
    }
    operation = context.getString(OPERATION_PROP, DEFAULT_OPERATION).toLowerCase();
    Preconditions.checkArgument(
        validOperations.contains(operation),
        "Unrecognized operation '%s'",
        operation);
    skipMissingColumn = context.getBoolean(SKIP_MISSING_COLUMN_PROP,
        DEFAULT_SKIP_MISSING_COLUMN);
    skipBadColumnValue = context.getBoolean(SKIP_BAD_COLUMN_VALUE_PROP,
        DEFAULT_SKIP_BAD_COLUMN_VALUE);
    warnUnmatchedRows = context.getBoolean(WARN_UNMATCHED_ROWS_PROP,
        DEFAULT_WARN_UNMATCHED_ROWS);
  }

  @Override
  public void initialize(KuduTable table) {
    this.table = table;
  }

  @Override
  public List<Operation> getOperations(Event event) throws FlumeException {
    String raw = new String(event.getBody(), charset);
    logger.info("get raw: " + raw);
    List<Operation> ops = Lists.newArrayList();
    if (raw != null && !raw.isEmpty()) {
      JSONObject json = null;
      // just pass if it is not a json
      try {
        json = new JSONObject(raw);
      } catch (Exception e) {
        e.printStackTrace();
      }
      if (json != null) {
        Schema schema = table.getSchema();
        Operation op;
        switch (operation) {
          case UPSERT:
            op = table.newUpsert();
            break;
          case INSERT:
            op = table.newInsert();
            break;
          default:
            throw new FlumeException(
                String.format("Unrecognized operation type '%s' in getOperations(): " +
                    "this should never happen!", operation));
        }
        // just record the error event into log and pass
        try {
          PartialRow row = op.getRow();
          for (ColumnSchema col : schema.getColumns()) {
            try {
              if (json.has(col.getName()) && json.get(col.getName()) != null) {
                coerceAndSet(json.get(col.getName()), col.getName(), col.getType(),
                    col.isKey(), col.isNullable(), col.getDefaultValue(), row);
              } else if (col.isKey() || !col.isNullable()) {
                // key and non-nullable columns must be present in the JSON
                throw new RuntimeException(
                    "column : " + col.getName() + " is null or not exists in " + row);
              }
            } catch (NumberFormatException e) {
              String msg = String.format(
                  "Raw value '%s' couldn't be parsed to type %s for column '%s'",
                  raw, col.getType(), col.getName());
              logOrThrow(skipBadColumnValue, msg, e);
            } catch (IllegalArgumentException e) {
              String msg = String.format(
                  "Column '%s' has no matching group in '%s'",
                  col.getName(), raw);
              logOrThrow(skipMissingColumn, msg, e);
            }
          }
          ops.add(op);
        } catch (Exception e) {
          logger.error("get error [" + e.getMessage() + "]: " + raw, e);
        }
      }
    }
    return ops;
  }

  // Resolves the value for a column: when the raw value is null, falls back to the
  // column default (key and non-nullable columns must be present); otherwise parses
  // the value from its string form with the supplied converter.
  protected <T> T getValue(T defaultValue, Object val, boolean isKey, boolean isNullable,
      Object columnDefaultValue, boolean compressException, Function<String, T> fromStr) {
    T result = defaultValue;
    try {
      if (val == null) {
        if (isKey || !isNullable) {
          throw new RuntimeException("column is key or not nullable");
        }
        if (columnDefaultValue != null && !"null".equals(columnDefaultValue)) {
          if (columnDefaultValue instanceof String) {
            result = fromStr.apply((String) columnDefaultValue);
          } else {
            result = (T) columnDefaultValue;
          }
        }
      } else {
        // parse from the value's string form
        result = fromStr.apply(val.toString());
      }
    } catch (Exception e) {
      if (compressException) e.printStackTrace();
      else throw e;
    }
    return result;
  }

  // Supported timestamp patterns, all parsed as UTC so the value written to Kudu
  // matches the time in the message.
  private SimpleDateFormat[] sdfs = new SimpleDateFormat[]{
      new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.000'Z'"),
      new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'"),
      new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
  };
  {
    for (SimpleDateFormat sdf : sdfs) sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
  }

  // Coerces the raw JSON value to the Kudu column type and sets it on the row.
  private void coerceAndSet(Object rawVal, String colName, Type type, boolean isKey,
      boolean isNullable, Object defaultValue, PartialRow row) throws NumberFormatException {
    switch (type) {
      case INT8:
        // also accept a boolean and map it to 1/0
        row.addByte(colName, (rawVal != null && rawVal instanceof Boolean)
            ? ((Boolean) rawVal ? (byte) 1 : (byte) 0)
            : this.getValue((byte) 0, rawVal, isKey, isNullable, defaultValue,
                this.skipBadColumnValue, (String str) -> Byte.parseByte(str)));
        break;
      case INT16:
        row.addShort(colName, this.getValue((short) 0, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> Short.parseShort(str)));
        break;
      case INT32:
        row.addInt(colName, this.getValue(0, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> Integer.parseInt(str)));
        break;
      case INT64:
        row.addLong(colName, this.getValue(0L, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> Long.parseLong(str)));
        break;
      case BINARY:
        row.addBinary(colName, rawVal == null ? new byte[0] : rawVal.toString().getBytes(charset));
        break;
      case STRING:
        row.addString(colName, rawVal == null ? "" : rawVal.toString());
        break;
      case BOOL:
        row.addBoolean(colName, this.getValue(false, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> Boolean.parseBoolean(str)));
        break;
      case FLOAT:
        row.addFloat(colName, this.getValue(0f, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> Float.parseFloat(str)));
        break;
      case DOUBLE:
        row.addDouble(colName, this.getValue(0d, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> Double.parseDouble(str)));
        break;
      case UNIXTIME_MICROS:
        Long value = this.<Long>getValue(null, rawVal, isKey, isNullable, defaultValue,
            this.skipBadColumnValue, (String str) -> {
              Long result = null;
              if (str != null && !"".equals(str)) {
                boolean isPatternOk = false;
                // handle: yyyy-MM-dd HH:mm:ss and ISO-like patterns
                if (str.contains("-") && str.contains(":")) {
                  for (SimpleDateFormat sdf : sdfs) {
                    try {
                      result = sdf.parse(str).getTime() * 1000;
                      isPatternOk = true;
                      break;
                    } catch (Exception e) {
                      // try the next pattern
                    }
                  }
                }
                // handle: epoch in seconds (10 digits), milliseconds (13) or microseconds (16)
                if (!isPatternOk && (str.length() == 10 || str.length() == 13 || str.length() == 16)) {
                  result = Long.parseLong(str);
                  if (str.length() == 10) result *= 1000000;
                  if (str.length() == 13) result *= 1000;
                }
              }
              return result;
            });
        if (value != null) row.addLong(colName, value);
        break;
      default:
        logger.warn("got unknown type {} for column '{}'-- ignoring this column", type, colName);
    }
  }

  private void logOrThrow(boolean log, String msg, Exception e) throws FlumeException {
    if (log) {
      logger.warn(msg, e);
    } else {
      throw new FlumeException(msg, e);
    }
  }

  @Override
  public void close() {
  }
}

The JsonStr2Map class has been removed. The core is getValue working together with coerceAndSet: default values are supported, null is supported, values of any JSON type can be passed (they are adapted automatically), boolean can be converted to byte, and timestamp columns accept patterns such as yyyy-MM-dd HH:mm:ss plus epoch values in seconds, milliseconds, and microseconds (four formats in total); seconds and milliseconds are automatically converted to microseconds.
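For illustration, assume a hypothetical table with columns id (INT64, key), name (STRING, nullable), enabled (INT8), remark (STRING, nullable), and created_at (UNIXTIME_MICROS); the names are made up. Event bodies like the following are then ingested as-is: nullable non-key columns may simply be omitted, the JSON boolean is mapped to 1/0 for the INT8 column, numeric values no longer have to be quoted, and the timestamp may be either a date-time string or an epoch value in seconds, milliseconds, or microseconds:

{"id": 1001, "name": "foo", "enabled": true, "created_at": "2019-01-01 08:00:00"}
{"id": 1002, "enabled": false, "remark": "ok", "created_at": 1546300800}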

Note that the SimpleDateFormat instances have their time zone set to UTC. This keeps the time written to Kudu consistent with the time in the message; otherwise the value would be shifted according to the time zone. For example, with Asia/Shanghai the time written to Kudu would be off by eight hours relative to the time in the message.
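A standalone sketch (not part of the producer) illustrating the point: the same message timestamp parsed with UTC versus Asia/Shanghai yields epoch values eight hours apart.

import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class TimestampZoneDemo {
  public static void main(String[] args) throws Exception {
    String msgTime = "2019-01-01 08:00:00";

    SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    utc.setTimeZone(TimeZone.getTimeZone("UTC"));
    SimpleDateFormat shanghai = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    shanghai.setTimeZone(TimeZone.getTimeZone("Asia/Shanghai"));

    long utcMicros = utc.parse(msgTime).getTime() * 1000;        // what the producer writes
    long localMicros = shanghai.parse(msgTime).getTime() * 1000; // without the UTC setting

    // prints 28800000000 microseconds, i.e. 8 hours
    System.out.println(utcMicros - localMicros);
  }
}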

Package the class into a jar and put it under $FLUME_HOME/lib.
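A minimal agent configuration sketch for wiring the Kudu sink to this producer. The agent, sink and channel names, the master addresses, the table name, and the batch size below are placeholders; the property keys follow the kudu-flume-sink documentation and should be verified against the version you build against.

# Sink section of a Flume agent config (agent1/sink1/channel1 and the
# addresses/table below are placeholders)
agent1.sinks.sink1.type = org.apache.kudu.flume.sink.KuduSink
agent1.sinks.sink1.masterAddresses = kudu-master-1:7051,kudu-master-2:7051
agent1.sinks.sink1.tableName = impala::default.test_table
agent1.sinks.sink1.batchSize = 50
agent1.sinks.sink1.producer = com.cloudera.kudu.JsonKuduOperationsProducer
agent1.sinks.sink1.producer.operation = upsert
agent1.sinks.sink1.channel = channel1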
