[Apache Atlas] Atlas Architecture and a Brief Source Code Analysis
Apache Atlas Architecture Diagram
Atlas supports ingesting metadata from multiple sources, including Hive, HBase, and Storm.
Type System
Type
Atlas defines a hierarchy of metadata types:
├── AtlasBaseTypeDef
│   ├── AtlasEnumDef
│   └── AtlasStructDef
│       ├── AtlasBusinessMetadataDef
│       ├── AtlasClassificationDef
│       ├── AtlasEntityDef
│       └── AtlasRelationshipDef
├── AtlasStructType
│   ├── AtlasBusinessMetadataType
│   ├── AtlasClassificationType
│   ├── AtlasRelationshipType
│   └── AtlasEntityType
│       └── AtlasRootEntityType
├── AtlasType
│   ├── AtlasArrayType
│   ├── AtlasBigDecimalType
│   ├── AtlasBigIntegerType
│   ├── AtlasByteType
│   ├── AtlasDateType
│   ├── AtlasDoubleType
│   ├── AtlasEnumType
│   ├── AtlasFloatType
│   ├── AtlasIntType
│   ├── AtlasLongType
│   ├── AtlasMapType
│   ├── AtlasObjectIdType
│   ├── AtlasShortType
│   ├── AtlasStringType
│   └── AtlasStructType
│       ├── AtlasBusinessMetadataType
│       ├── AtlasClassificationType
│       ├── AtlasEntityType
│       └── AtlasRelationshipType
├── AtlasTypeDefStore
│   └── AtlasTypeDefGraphStore
│       └── AtlasTypeDefGraphStoreV2
└── StructTypeDefinition
    └── HierarchicalTypeDefinition
        ├── ClassTypeDefinition
        └── TraitTypeDefinition
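To make the hierarchy concrete, here is a minimal sketch of defining a new entity type; it assumes the model classes under org.apache.atlas.model.typedef, and the type name demo_dataset is invented for illustration:

import java.util.Collections;

import org.apache.atlas.model.typedef.AtlasEntityDef;
import org.apache.atlas.model.typedef.AtlasStructDef.AtlasAttributeDef;
import org.apache.atlas.model.typedef.AtlasTypesDef;

public class TypeDefSketch {
    public static AtlasTypesDef buildSampleTypesDef() {
        // AtlasEntityDef extends AtlasStructDef, which extends AtlasBaseTypeDef
        AtlasEntityDef entityDef = new AtlasEntityDef("demo_dataset", "a demo dataset type", "1.0");
        entityDef.setSuperTypes(Collections.singleton("DataSet"));

        // attribute definitions are described in the Attributes section below
        entityDef.addAttribute(new AtlasAttributeDef("retentionDays", "int"));

        // AtlasTypesDef is the container that AtlasTypeDefStore persists
        AtlasTypesDef typesDef = new AtlasTypesDef();
        typesDef.getEntityDefs().add(entityDef);

        return typesDef;
    }
}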
Entity
An entity is a concrete instance of a type.
AtlasEntity
├── AtlasEntityExtInfo
│   ├── AtlasEntitiesWithExtInfo
│   └── AtlasEntityWithExtInfo
├── AtlasEntityStore
│   └── AtlasEntityStoreV2
├── AtlasEntityStream
│   └── AtlasEntityStreamForImport
├── AtlasEntityType
│   └── AtlasRootEntityType
└── IAtlasEntityChangeNotifier
    ├── AtlasEntityChangeNotifier
    └── EntityChangeNotifierNop
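A minimal sketch of building such an instance (the attribute values are invented; hive_db is the Hive database type used throughout this article):

import org.apache.atlas.model.instance.AtlasEntity;
import org.apache.atlas.model.instance.AtlasEntity.AtlasEntityWithExtInfo;

public class EntitySketch {
    public static AtlasEntityWithExtInfo buildHiveDb() {
        AtlasEntity db = new AtlasEntity("hive_db");         // the type name drives validation
        db.setAttribute("name", "demo_db");
        db.setAttribute("clusterName", "primary");
        db.setAttribute("qualifiedName", "demo_db@primary"); // unique within the type
        return new AtlasEntityWithExtInfo(db);               // wrapper that can carry referred entities
    }
}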
Attributes
Attributes define the fields of a model type.
AtlasAttributeDef
└── AtlasRelationshipAttributeDef
Fields of AtlasAttributeDef:
private String name;
private String typeName;
private boolean isOptional;
private Cardinality cardinality;
private int valuesMinCount;
private int valuesMaxCount;
private boolean isUnique;
private boolean isIndexable;
private boolean includeInNotification;
private String defaultValue;
private String description;
private int searchWeight = DEFAULT_SEARCHWEIGHT;
private IndexType indexType = null;
private List<AtlasConstraintDef> constraints;
private Map<String, String> options;
private String displayName;
Concrete examples from the Hive model:
db:
"name": "db",
"typeName": "hive_db",
"isOptional": false,
"isIndexable": true,
"isUnique": false,
"cardinality": "SINGLE"
columns:
"name": "columns",
"typeName": "array<hive_column>",
"isOptional": true,
"isIndexable": true,
"isUnique": false,
"constraints": [ { "type": "ownedRef" } ]
- isComposite - whether the attribute is part of a composite (ownership) relationship
- isIndexable - whether the attribute is indexed for search
- isUnique - whether the attribute's value must be unique
- multiplicity - whether the attribute is required, optional, or multi-valued (a construction sketch follows this list)
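A sketch of building the db attribute from the JSON above, assuming the AtlasAttributeDef setters that mirror the field list shown earlier:

import org.apache.atlas.model.typedef.AtlasStructDef.AtlasAttributeDef;
import org.apache.atlas.model.typedef.AtlasStructDef.AtlasAttributeDef.Cardinality;

public class AttributeDefSketch {
    public static AtlasAttributeDef dbAttribute() {
        // mirrors the "db" JSON definition above
        AtlasAttributeDef attr = new AtlasAttributeDef("db", "hive_db");
        attr.setIsOptional(false);
        attr.setIsIndexable(true);
        attr.setIsUnique(false);
        attr.setCardinality(Cardinality.SINGLE);
        return attr;
    }
}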
System-specific types and their significance
Referenceable
This type represents all entities that can be searched for using a unique attribute called qualifiedName.
├── Referenceable
├── ReferenceableDeserializer
├── ReferenceableSerializer
└── V1SearchReferenceableSerializer
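A hedged sketch of resolving an entity by this unique attribute through the V2 client; the server URL, credentials, and qualified name are assumptions for a local sandbox:

import java.util.Collections;

import org.apache.atlas.AtlasClientV2;
import org.apache.atlas.model.instance.AtlasEntity.AtlasEntityWithExtInfo;

public class QualifiedNameLookup {
    public static void main(String[] args) throws Exception {
        AtlasClientV2 client = new AtlasClientV2(
                new String[] { "http://localhost:21000" },
                new String[] { "admin", "admin" });

        // hive_db ultimately extends Referenceable, so qualifiedName is its unique key
        AtlasEntityWithExtInfo db = client.getEntityByAttribute(
                "hive_db", Collections.singletonMap("qualifiedName", "demo_db@primary"));

        System.out.println(db.getEntity().getGuid());
    }
}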
Hooks
Taking Hive metadata collection as an example, the collection process is analyzed below.
Full import
import-hive.sh
"${JAVA_BIN}" ${JAVA_PROPERTIES} -cp "${CP}" org.apache.atlas.hive.bridge.HiveMetaStoreBridge $IMPORT_ARGS
importTables
└── importDatabases [addons/hive-bridge/src/main/java/org/apache/atlas/hive/bridge/HiveMetaStoreBridge.java +295]
    └── importHiveMetadata [addons/hive-bridge/src/main/java/org/apache/atlas/hive/bridge/HiveMetaStoreBridge.java +289]
From there, the call chain proceeds:
importTables -> importTable -> registerInstances
AtlasEntitiesWithExtInfo ret = null;
// POST the entities to Atlas through the REST API
EntityMutationResponse response = atlasClientV2.createEntities(entities);
List<AtlasEntityHeader> createdEntities = response.getEntitiesByOperation(EntityMutations.EntityOperation.CREATE);

if (CollectionUtils.isNotEmpty(createdEntities)) {
    ret = new AtlasEntitiesWithExtInfo();

    for (AtlasEntityHeader createdEntity : createdEntities) {
        // fetch the full entity (plus its referred entities) for each created header
        AtlasEntityWithExtInfo entity = atlasClientV2.getEntityByGuid(createdEntity.getGuid());

        ret.addEntity(entity.getEntity());

        if (MapUtils.isNotEmpty(entity.getReferredEntities())) {
            for (Map.Entry<String, AtlasEntity> entry : entity.getReferredEntities().entrySet()) {
                ret.addReferredEntity(entry.getKey(), entry.getValue());
            }
        }

        LOG.info("Created {} entity: name={}, guid={}", entity.getEntity().getTypeName(), entity.getEntity().getAttribute(ATTRIBUTE_QUALIFIED_NAME), entity.getEntity().getGuid());
    }
}
The database and table metadata is pushed to Atlas through these HTTP POST requests.
atlasClientV2 exposes many HTTP endpoints.
Atlas HTTP client APIs:
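A sketch of a few representative calls; the endpoint URL and admin credentials are assumptions for a local sandbox:

import org.apache.atlas.AtlasClientV2;
import org.apache.atlas.model.SearchFilter;
import org.apache.atlas.model.instance.AtlasEntity.AtlasEntitiesWithExtInfo;
import org.apache.atlas.model.instance.EntityMutationResponse;
import org.apache.atlas.model.typedef.AtlasTypesDef;

public class AtlasClientSketch {
    public static void main(String[] args) throws Exception {
        AtlasClientV2 client = new AtlasClientV2(
                new String[] { "http://localhost:21000" },
                new String[] { "admin", "admin" });

        // read every registered type definition
        AtlasTypesDef allTypes = client.getAllTypeDefs(new SearchFilter());
        System.out.println("entity types: " + allTypes.getEntityDefs().size());

        // create entities, as registerInstances does above
        // (populate the container with real entities before calling)
        AtlasEntitiesWithExtInfo entities = new AtlasEntitiesWithExtInfo();
        EntityMutationResponse response = client.createEntities(entities);
        System.out.println(response);
    }
}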
Real-time listening
HiveHook implements ExecuteWithHookContext
ExecuteWithHookContext is a new interface that the Pre/Post Execute Hook can run with the HookContext.
HiveHook implements the run() method to handle Hive events.
The Hive event classes:
BaseHiveEvent
├── AlterTableRename
├── CreateHiveProcess
├── DropDatabase
├── DropTable
├── CreateDatabase
│   └── AlterDatabase
└── CreateTable
    └── AlterTable
        └── AlterTableRenameCol
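These event classes are instantiated inside HiveHook.run(). A stripped-down sketch of a hook implementing the same interface (this is not Atlas's HiveHook, only a minimal illustration of the entry point):

import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookContext;

public class LoggingHiveHook implements ExecuteWithHookContext {
    @Override
    public void run(HookContext hookContext) throws Exception {
        // the operation name maps onto the event classes above, e.g. CREATEDATABASE
        System.out.println("hive operation: " + hookContext.getOperationName());
    }
}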
Taking CREATE DATABASE as an example, the flow is:
// build the hook context from the Hive hook invocation
AtlasHiveHookContext context =
        new AtlasHiveHookContext(this, oper, hookContext, getKnownObjects(), isSkipTempTables());

// handle the create-database event and extract the database metadata
event = new CreateDatabase(context);

if (event != null) {
    final UserGroupInformation ugi = hookContext.getUgi() == null ? Utils.getUGI() : hookContext.getUgi();

    super.notifyEntities(ActiveEntityFilter.apply(event.getNotificationMessages()), ugi);
}
public enum HookNotificationType {
TYPE_CREATE, TYPE_UPDATE, ENTITY_CREATE, ENTITY_PARTIAL_UPDATE, ENTITY_FULL_UPDATE, ENTITY_DELETE,
ENTITY_CREATE_V2, ENTITY_PARTIAL_UPDATE_V2, ENTITY_FULL_UPDATE_V2, ENTITY_DELETE_V2
}
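The messages returned by event.getNotificationMessages() are V2 hook messages. A sketch of how a create-database payload maps onto ENTITY_CREATE_V2 (the entities argument is assumed to be the database metadata extracted by the event):

import org.apache.atlas.model.instance.AtlasEntity.AtlasEntitiesWithExtInfo;
import org.apache.atlas.model.notification.HookNotification;
import org.apache.atlas.model.notification.HookNotification.EntityCreateRequestV2;

public class HookMessageSketch {
    public static HookNotification createDbMessage(String user, AtlasEntitiesWithExtInfo entities) {
        // ENTITY_CREATE_V2 carries full AtlasEntity payloads, unlike the V1 message types
        return new EntityCreateRequestV2(user, entities);
    }
}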
// resolve the user that performed the operation
if (context.isMetastoreHook()) {
    try {
        ugi = SecurityUtils.getUGI();
    } catch (Exception e) {
        // do nothing
    }
} else {
    ret = getHiveUserName();

    if (StringUtils.isEmpty(ret)) {
        ugi = getUgi();
    }
}

if (ugi != null) {
    ret = ugi.getShortUserName();
}

if (StringUtils.isEmpty(ret)) {
    try {
        ret = UserGroupInformation.getCurrentUser().getShortUserName();
    } catch (IOException e) {
        LOG.warn("Failed for UserGroupInformation.getCurrentUser() ", e);
        ret = System.getProperty("user.name");
    }
}
Key points: the hook obtains the entity information and passes along the hook message type and the operating user.
notifyEntities shows that other components, such as HBase and Impala, also send their messages through this method:
public static void notifyEntities(List<HookNotification> messages, UserGroupInformation ugi, int maxRetries) {
    if (executor == null) { // send synchronously
        notifyEntitiesInternal(messages, maxRetries, ugi, notificationInterface, logFailedMessages, failedMessagesLogger);
    } else {
        executor.submit(new Runnable() {
            @Override
            public void run() {
                notifyEntitiesInternal(messages, maxRetries, ugi, notificationInterface, logFailedMessages, failedMessagesLogger);
            }
        });
    }
}
The notification framework:
NotificationInterface
├── AtlasFileSpool
└── AbstractNotification
    ├── KafkaNotification
    └── Spooler
Writing the data into Kafka:
@Override
public void sendInternal(NotificationType notificationType, List<String> messages) throws NotificationException {
    KafkaProducer producer = getOrCreateProducer(notificationType);

    sendInternalToProducer(producer, notificationType, messages);
}
Messages are routed to a topic based on their NotificationType:
private static final Map<NotificationType, String> PRODUCER_TOPIC_MAP = new HashMap<NotificationType, String>() {
    {
        put(NotificationType.HOOK, ATLAS_HOOK_TOPIC);
        put(NotificationType.ENTITIES, ATLAS_ENTITIES_TOPIC);
    }
};
NOTIFICATION_HOOK_TOPIC_NAME("atlas.notification.hook.topic.name", "ATLAS_HOOK"),
NOTIFICATION_ENTITIES_TOPIC_NAME("atlas.notification.entities.topic.name", "ATLAS_ENTITIES"),
Data is written mainly into two topics: ATLAS_HOOK and ATLAS_ENTITIES.
ATLAS_HOOK carries hook event messages; the metadata for the create-database event above is written to this topic.
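To see these messages in flight, a plain Kafka consumer can tail the topic; a sketch, assuming a local broker and an arbitrary group id:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtlasHookTopicPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "atlas-hook-peek");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("ATLAS_HOOK"));

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value()); // JSON-serialized HookNotification
            }
        }
    }
}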
How is a database uniquely identified?
public String getQualifiedName(Database db) {
    return getDatabaseName(db) + QNAME_SEP_METADATA_NAMESPACE + getMetadataNamespace();
}
The qualified name dbName@clusterName (for example, demo_db@primary) establishes uniqueness.
Extended applications
A use case that refreshes Impala metadata based on the Hive hook:
AutoRefreshImpala: https://github.com/Observe-secretly/AutoRefreshImpala