Hive SQL Parsing Process

SQL -> AST (Abstract Syntax Tree) -> Task (MapRedTask, FetchTask) -> QueryPlan (a collection of Tasks) -> Job (Yarn)
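
The whole pipeline can be driven programmatically through Driver.run, which internally performs the compile and execute phases described below. The following is a minimal sketch of such embedded usage (an addition to the original post), assuming Hive 2.x client libraries and a hive-site.xml on the classpath; it is illustrative only.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;
import org.apache.hadoop.hive.ql.processors.CommandProcessorResponse;
import org.apache.hadoop.hive.ql.session.SessionState;

public class DriverPipelineSketch {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();           // picks up hive-site.xml from the classpath
    SessionState.start(conf);                 // Driver requires an active SessionState

    Driver driver = new Driver(conf);
    // run() = compile() (SQL -> AST -> Task -> QueryPlan) + execute() (QueryPlan -> Job)
    CommandProcessorResponse resp = driver.run("SELECT 1");
    if (resp.getResponseCode() != 0) {
      throw new RuntimeException("query failed: " + resp.getErrorMessage());
    }

    List<String> results = new ArrayList<String>();
    driver.getResults(results);               // rows come back through the FetchTask
    for (String row : results) {
      System.out.println(row);
    }
    driver.close();
  }
}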

SQL parsing happens in two places:

  • once during compilation before the SQL is executed, in Driver.compile, in order to build the QueryPlan;
  • once for explain, in ExplainSemanticAnalyzer.analyzeInternal, in order to build an ExplainTask;

SQL Execution Process

1 The compile phase (SQL -> AST (Abstract Syntax Tree) -> QueryPlan)

org.apache.hadoop.hive.ql.Driver

  public int compile(String command, boolean resetTaskIds, boolean deferClose) {
    ...
    ParseDriver pd = new ParseDriver();
    ASTNode tree = pd.parse(command, ctx);
    tree = ParseUtils.findRootNonNullToken(tree);
    ...
    BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(queryState, tree);
    ...
    sem.analyze(tree, ctx);
    ...
    // Record any ACID compliant FileSinkOperators we saw so we can add our transaction ID to
    // them later.
    acidSinks = sem.getAcidFileSinks();

    LOG.info("Semantic Analysis Completed");

    // validate the plan
    sem.validate();
    acidInQuery = sem.hasAcidInQuery();
    perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.ANALYZE);

    if (isInterrupted()) {
      return handleInterruption("after analyzing query.");
    }

    // get the output schema
    schema = getSchema(sem, conf);
    plan = new QueryPlan(queryStr, sem, perfLogger.getStartTime(PerfLogger.DRIVER_RUN), queryId,
        queryState.getHiveOperation(), schema);
    ...

The compile phase first uses ParseDriver to convert the SQL into an ASTNode, then has a BaseSemanticAnalyzer analyze the ASTNode, and finally passes the BaseSemanticAnalyzer into the QueryPlan constructor to create the QueryPlan;

1) Converting the SQL into an ASTNode (SQL -> AST (Abstract Syntax Tree))

org.apache.hadoop.hive.ql.parse.ParseDriver

  public ASTNode parse(String command, Context ctx, boolean setTokenRewriteStream)
      throws ParseException {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Parsing command: " + command);
    }

    HiveLexerX lexer = new HiveLexerX(new ANTLRNoCaseStringStream(command));
    TokenRewriteStream tokens = new TokenRewriteStream(lexer);
    if (ctx != null) {
      if (setTokenRewriteStream) {
        ctx.setTokenRewriteStream(tokens);
      }
      lexer.setHiveConf(ctx.getConf());
    }
    HiveParser parser = new HiveParser(tokens);
    if (ctx != null) {
      parser.setHiveConf(ctx.getConf());
    }
    parser.setTreeAdaptor(adaptor);
    HiveParser.statement_return r = null;
    try {
      r = parser.statement();
    } catch (RecognitionException e) {
      e.printStackTrace();
      throw new ParseException(parser.errors);
    }

    if (lexer.getErrors().size() == 0 && parser.errors.size() == 0) {
      LOG.debug("Parse Completed");
    } else if (lexer.getErrors().size() != 0) {
      throw new ParseException(lexer.getErrors());
    } else {
      throw new ParseException(parser.errors);
    }

    ASTNode tree = (ASTNode) r.getTree();
    tree.setUnknownTokenBoundaries();
    return tree;
  }
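
As a quick illustration of this step, the sketch below (an addition to the original post, using the Hive 2.x parser API shown above; the SQL text and table name are made up) parses a statement with the Context-less parse(String) overload and prints the resulting AST:

import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.ParseDriver;

public class ParseSketch {
  public static void main(String[] args) throws Exception {
    ParseDriver pd = new ParseDriver();
    // parse(String) is the Context-less overload of the method shown above
    ASTNode tree = pd.parse("SELECT key, count(1) FROM src GROUP BY key");
    // dump() renders the tree with its TOK_* node names (TOK_QUERY, TOK_FROM, TOK_INSERT, ...)
    System.out.println(tree.dump());
  }
}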

2) The analyze step (AST (Abstract Syntax Tree) -> Task)

org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer

  public void analyze(ASTNode ast, Context ctx) throws SemanticException {
    initCtx(ctx);
    init(true);
    analyzeInternal(ast);
  }

analyzeInternal is an abstract method implemented by different subclasses, such as DDLSemanticAnalyzer, SemanticAnalyzer, UpdateDeleteSemanticAnalyzer, and ExplainSemanticAnalyzer;
The main job of analyzeInternal is to turn the ASTNode into Tasks, including any optimization; the process is fairly involved, so its code is not shown here;
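
To see the AST -> Task step in isolation, here is a hedged sketch (not from the original post) that strings the previous pieces together; it assumes Hive 2.1, where QueryState still exposes a public QueryState(HiveConf) constructor, plus an active SessionState and a table src registered in the metastore:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Context;
import org.apache.hadoop.hive.ql.QueryState;
import org.apache.hadoop.hive.ql.exec.Task;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
import org.apache.hadoop.hive.ql.parse.ParseDriver;
import org.apache.hadoop.hive.ql.parse.ParseUtils;
import org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory;
import org.apache.hadoop.hive.ql.session.SessionState;

public class AnalyzeSketch {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    SessionState.start(conf);

    Context ctx = new Context(conf);
    ASTNode tree = new ParseDriver().parse("SELECT count(1) FROM src", ctx);
    tree = ParseUtils.findRootNonNullToken(tree);

    // The factory chooses the analyzer from the root token: TOK_QUERY -> SemanticAnalyzer
    // (or CalcitePlanner), TOK_EXPLAIN -> ExplainSemanticAnalyzer, most DDL tokens -> DDLSemanticAnalyzer
    QueryState queryState = new QueryState(conf);
    BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(queryState, tree);
    sem.analyze(tree, ctx);

    for (Task<?> task : sem.getAllRootTasks()) {
      // with the MR execution engine this prints MapRedTask for the query above
      System.out.println(task.getClass().getSimpleName());
    }
  }
}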

3) Creating the QueryPlan (Task -> QueryPlan)

org.apache.hadoop.hive.ql.QueryPlan

  public QueryPlan(String queryString, BaseSemanticAnalyzer sem, Long startTime, String queryId,
      HiveOperation operation, Schema resultSchema) {
    this.queryString = queryString;

    rootTasks = new ArrayList<Task<? extends Serializable>>(sem.getAllRootTasks());
    reducerTimeStatsPerJobList = new ArrayList<ReducerTimeStatsPerJob>();
    fetchTask = sem.getFetchTask();
    // Note that inputs and outputs can be changed when the query gets executed
    inputs = sem.getAllInputs();
    outputs = sem.getAllOutputs();
    linfo = sem.getLineageInfo();
    tableAccessInfo = sem.getTableAccessInfo();
    columnAccessInfo = sem.getColumnAccessInfo();
    idToTableNameMap = new HashMap<String, String>(sem.getIdToTableNameMap());

    this.queryId = queryId == null ? makeQueryId() : queryId;
    query = new org.apache.hadoop.hive.ql.plan.api.Query();
    query.setQueryId(this.queryId);
    query.putToQueryAttributes("queryString", this.queryString);
    queryProperties = sem.getQueryProperties();
    queryStartTime = startTime;
    this.operation = operation;
    this.autoCommitValue = sem.getAutoCommitValue();
    this.resultSchema = resultSchema;
  }

As you can see, the constructor simply copies content out of the BaseSemanticAnalyzer; the most important parts are sem.getAllRootTasks and sem.getFetchTask;
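
These root tasks form a DAG (each Task exposes its children via getChildTasks), which is exactly the structure Driver.execute later schedules. The following helper is a small sketch (an addition, using the Hive 2.x Task/QueryPlan accessors named above) that prints that DAG:

import java.io.Serializable;
import java.util.List;

import org.apache.hadoop.hive.ql.QueryPlan;
import org.apache.hadoop.hive.ql.exec.Task;

public class PlanWalker {
  // Print every task reachable from the plan's root tasks. Note that in a DAG a task
  // may have several parents, so this simple walk can print the same task more than once.
  public static void dump(QueryPlan plan) {
    for (Task<? extends Serializable> root : plan.getRootTasks()) {
      dumpTask(root, 0);
    }
    if (plan.getFetchTask() != null) {
      System.out.println("fetch: " + plan.getFetchTask().getClass().getSimpleName());
    }
  }

  private static void dumpTask(Task<? extends Serializable> task, int depth) {
    StringBuilder indent = new StringBuilder();
    for (int i = 0; i < depth; i++) {
      indent.append("  ");
    }
    System.out.println(indent + task.getId() + " " + task.getClass().getSimpleName());
    List<Task<? extends Serializable>> children = task.getChildTasks();
    if (children != null) {
      for (Task<? extends Serializable> child : children) {
        dumpTask(child, depth + 1);
      }
    }
  }
}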

2 The execute phase (QueryPlan -> Job)

org.apache.hadoop.hive.ql.Driver

  public int execute(boolean deferClose) throws CommandNeedRetryException {
    ...
    // Add root Tasks to runnable
    for (Task<? extends Serializable> tsk : plan.getRootTasks()) {
      // This should never happen, if it does, it's a bug with the potential to produce
      // incorrect results.
      assert tsk.getParentTasks() == null || tsk.getParentTasks().isEmpty();
      driverCxt.addToRunnable(tsk);
    }
    ...
    // Loop while you either have tasks running, or tasks queued up
    while (driverCxt.isRunning()) {
      // Launch upto maxthreads tasks
      Task<? extends Serializable> task;
      while ((task = driverCxt.getRunnable(maxthreads)) != null) {
        TaskRunner runner = launchTask(task, queryId, noName, jobname, jobs, driverCxt);
        if (!runner.isRunning()) {
          break;
        }
      }
    ...

  private TaskRunner launchTask(Task<? extends Serializable> tsk, String queryId, boolean noName,
      String jobname, int jobs, DriverContext cxt) throws HiveException {
    ...
    TaskRunner tskRun = new TaskRunner(tsk, tskRes);
    ...
    tskRun.start();
    ...
    tskRun.runSequential();
    ...

In Driver.execute the Tasks are taken out of the QueryPlan and launched one by one via launchTask. launchTask wraps each Task in a TaskRunner; when hive.exec.parallel is enabled and the task can run in parallel, the runner is started as a separate thread (tskRun.start()), otherwise TaskRunner.runSequential is called in the current thread, which is why both calls appear in the excerpt above. Next, TaskRunner:

org.apache.hadoop.hive.ql.exec.TaskRunner

  public void runSequential() {
    int exitVal = -101;
    try {
      exitVal = tsk.executeTask();
    ...

This directly calls Task.executeTask:

org.apache.hadoop.hive.ql.exec.Task

  public int executeTask() {
    ...
    int retval = execute(driverContext);
    ...

Here execute is an abstract method implemented by subclasses such as DDLTask and MapRedTask; we focus on MapRedTask, since most Tasks are MapRedTasks:

org.apache.hadoop.hive.ql.exec.mr.MapRedTask

  public int execute(DriverContext driverContext) {
    ...
    if (!runningViaChild) {
      // we are not running this mapred task via child jvm
      // so directly invoke ExecDriver
      return super.execute(driverContext);
    }
    ...

Here it simply calls the parent class's method, i.e. ExecDriver.execute:

org.apache.hadoop.hive.ql.exec.mr.ExecDriver

  protected transient JobConf job;
  ...
  public int execute(DriverContext driverContext) {
    ...
    JobClient jc = null;

    MapWork mWork = work.getMapWork();
    ReduceWork rWork = work.getReduceWork();
    ...
    if (mWork.getNumMapTasks() != null) {
      job.setNumMapTasks(mWork.getNumMapTasks().intValue());
    }
    ...
    job.setNumReduceTasks(rWork != null ? rWork.getNumReduceTasks().intValue() : 0);
    job.setReducerClass(ExecReducer.class);
    ...
    jc = new JobClient(job);
    ...
    rj = jc.submitJob(job);
    this.jobID = rj.getJobID();
    ...

Here the Task is turned into a MapReduce Job and submitted to Yarn for execution;
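
For comparison, the sketch below (an addition, not Hive code) strips away Hive and shows the bare org.apache.hadoop.mapred submission that ExecDriver wraps; in the real code the JobConf additionally carries ExecMapper/ExecReducer, the input/output formats, and the serialized MapWork/ReduceWork that the task side deserializes. Paths and task counts here are made up:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitSketch {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf();
    job.setJobName("plain mapred submission sketch");
    job.setNumMapTasks(2);       // ExecDriver takes this from MapWork.getNumMapTasks()
    job.setNumReduceTasks(1);    // ExecDriver takes this from ReduceWork.getNumReduceTasks()
    // default identity mapper/reducer; Hive would set ExecMapper.class / ExecReducer.class here
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    JobClient jc = new JobClient(job);
    RunningJob rj = jc.submitJob(job);      // asynchronous submission, as in ExecDriver
    System.out.println("submitted " + rj.getID());
    rj.waitForCompletion();                 // ExecDriver instead monitors progress itself
    System.out.println("success = " + rj.isSuccessful());
  }
}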

SQL Explain Process

The other place where SQL is parsed is explain; ExplainSemanticAnalyzer converts the ASTNode into an ExplainTask:

org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer

  public void analyzeInternal(ASTNode ast) throws SemanticException {
    ...
    ctx.setExplain(true);
    ctx.setExplainLogical(logical);

    // Create a semantic analyzer for the query
    ASTNode input = (ASTNode) ast.getChild(0);
    BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(queryState, input);
    sem.analyze(input, ctx);
    sem.validate();

    ctx.setResFile(ctx.getLocalTmpPath());
    List<Task<? extends Serializable>> tasks = sem.getAllRootTasks();
    if (tasks == null) {
      tasks = Collections.emptyList();
    }

    FetchTask fetchTask = sem.getFetchTask();
    if (fetchTask != null) {
      // Initialize fetch work such that operator tree will be constructed.
      fetchTask.getWork().initializeForFetch(ctx.getOpContext());
    }

    ParseContext pCtx = null;
    if (sem instanceof SemanticAnalyzer) {
      pCtx = ((SemanticAnalyzer)sem).getParseContext();
    }

    boolean userLevelExplain = !extended
        && !formatted
        && !dependency
        && !logical
        && !authorize
        && (HiveConf.getBoolVar(ctx.getConf(), HiveConf.ConfVars.HIVE_EXPLAIN_USER) && HiveConf
            .getVar(conf, HiveConf.ConfVars.HIVE_EXECUTION_ENGINE).equals("tez"));
    ExplainWork work = new ExplainWork(ctx.getResFile(),
        pCtx,
        tasks,
        fetchTask,
        sem,
        extended,
        formatted,
        dependency,
        logical,
        authorize,
        userLevelExplain,
        ctx.getCboInfo());

    work.setAppendTaskType(
        HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVEEXPLAINDEPENDENCYAPPENDTASKTYPES));

    ExplainTask explTask = (ExplainTask) TaskFactory.get(work, conf);

    fieldList = explTask.getResultSchema();
    rootTasks.add(explTask);
  }
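
Because the ExplainTask is added to rootTasks, an EXPLAIN statement then flows through the same Driver.execute path as any other query, and the rendered plan can be fetched back as result rows. A minimal sketch (same embedded-Driver assumptions as in the first example; the table name src is illustrative):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;
import org.apache.hadoop.hive.ql.session.SessionState;

public class ExplainSketch {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    SessionState.start(conf);

    Driver driver = new Driver(conf);
    driver.run("EXPLAIN SELECT count(1) FROM src");

    List<String> plan = new ArrayList<String>();
    driver.getResults(plan);   // each row is one line of the plan the ExplainTask wrote to the res file
    for (String line : plan) {
      System.out.println(line);
    }
    driver.close();
  }
}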
