EXPLAIN Syntax

Hive provides an EXPLAIN command that shows the execution plan for a query. The syntax for this statement is as follows:

EXPLAIN [EXTENDED|DEPENDENCY|AUTHORIZATION] query
  Note: AUTHORIZATION is supported as of Hive 0.14.0 via HIVE-5961.

The use of EXTENDED in the EXPLAIN statement produces extra information about the operators in the plan. This is typically physical information like file names.
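
For example, a minimal sketch of such a request (the query itself is illustrative; any query can be used):

EXPLAIN EXTENDED
  SELECT src.key, src.value FROM src WHERE src.key = '100';

The resulting plan contains the same stages as a plain EXPLAIN, annotated with physical details such as file names, as described above.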

A Hive query is converted into a sequence of stages (more precisely, a directed acyclic graph of stages). These stages may be map/reduce stages, or they may be stages that perform metastore or file system operations such as move and rename. The explain output consists of three parts:

  • The Abstract Syntax Tree for the query
  • The dependencies between the different stages of the plan
  • The description of each of the stages

The description of each stage shows a sequence of operators together with the metadata associated with the operators. The metadata may include things like the filter expressions for a FilterOperator, the select expressions for a SelectOperator, or the output file names for a FileSinkOperator.

As an example, consider the following EXPLAIN query:

EXPLAIN
FROM src INSERT OVERWRITE TABLE dest_g1 SELECT src.key, sum(substr(src.value,4)) GROUP BY src.key;

The output of this statement contains the following parts:

  • The Abstract Syntax Tree

    ABSTRACT SYNTAX TREE:
      (TOK_QUERY (TOK_FROM (TOK_TABREF src)) (TOK_INSERT (TOK_DESTINATION (TOK_TAB dest_g1)) (TOK_SELECT (TOK_SELEXPR (TOK_COLREF src key)) (TOK_SELEXPR (TOK_FUNCTION sum (TOK_FUNCTION substr (TOK_COLREF src value) 4)))) (TOK_GROUPBY (TOK_COLREF src key))))
  • The Dependency Graph

    STAGE DEPENDENCIES:
      Stage-1 is a root stage
      Stage-2 depends on stages: Stage-1
      Stage-0 depends on stages: Stage-2

    This shows that Stage-1 is the root stage, Stage-2 is executed after Stage-1 is done and Stage-0 is executed after Stage-2 is done.

  • The plans of each Stage

    STAGE PLANS:
      Stage: Stage-1
        Map Reduce
          Alias -> Map Operator Tree:
            src
                Reduce Output Operator
                  key expressions:
                        expr: key
                        type: string
                  sort order: +
                  Map-reduce partition columns:
                        expr: rand()
                        type: double
                  tag: -1
                  value expressions:
                        expr: substr(value, 4)
                        type: string
          Reduce Operator Tree:
            Group By Operator
              aggregations:
                    expr: sum(UDFToDouble(VALUE.0))
              keys:
                    expr: KEY.0
                    type: string
              mode: partial1
              File Output Operator
                compressed: false
                table:
                    input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                    output format: org.apache.hadoop.mapred.SequenceFileOutputFormat
                    name: binary_table
     
      Stage: Stage-2
        Map Reduce
          Alias -> Map Operator Tree:
            /tmp/hive-zshao/67494501/106593589.10001
              Reduce Output Operator
                key expressions:
                      expr: 0
                      type: string
                sort order: +
                Map-reduce partition columns:
                      expr: 0
                      type: string
                tag: -1
                value expressions:
                      expr: 1
                      type: double
          Reduce Operator Tree:
            Group By Operator
              aggregations:
                    expr: sum(VALUE.0)
              keys:
                    expr: KEY.0
                    type: string
              mode: final
              Select Operator
                expressions:
                      expr: 0
                      type: string
                      expr: 1
                      type: double
                Select Operator
                  expressions:
                        expr: UDFToInteger(0)
                        type: int
                        expr: 1
                        type: double
                  File Output Operator
                    compressed: false
                    table:
                        input format: org.apache.hadoop.mapred.TextInputFormat
                        output format: org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
                        serde: org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe
                        name: dest_g1
     
      Stage: Stage-0
        Move Operator
          tables:
                replace: true
                table:
                    input format: org.apache.hadoop.mapred.TextInputFormat
                    output format: org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat
                    serde: org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe
                    name: dest_g1

    In this example there are two map/reduce stages (Stage-1 and Stage-2) and one file system stage (Stage-0). Stage-0 simply moves the results from a temporary directory to the directory corresponding to the table dest_g1.

A map/reduce stage itself consists of two parts:

  • A mapping from table alias to Map Operator Tree - This mapping tells the mappers which operator tree to call in order to process the rows from a particular table or from the result of a previous map/reduce stage. In Stage-1 of the above example, the rows from the src table are processed by an operator tree rooted at a Reduce Output Operator. Similarly, in Stage-2 the rows produced by Stage-1 are processed by another operator tree rooted at another Reduce Output Operator. Each of these Reduce Output Operators partitions the data among the reducers according to the criteria shown in the metadata.
  • A Reduce Operator Tree - This is the operator tree that processes all the rows on the reducer side of the map/reduce job. In Stage-1, for example, the Reduce Operator Tree carries out a partial aggregation, whereas the Reduce Operator Tree in Stage-2 computes the final aggregation from the partial aggregates computed in Stage-1.

The use of DEPENDENCY in the EXPLAIN statement produces extra information about the inputs in the plan. It shows various attributes for the inputs. For example, for a query like:

EXPLAIN DEPENDENCY
  SELECT key, count(1) FROM srcpart WHERE ds IS NOT NULL GROUP BY key

the following output is produced:

{"input_partitions":[{"partitionName":"default@srcpart@ds=2008-04-08/hr=11"},{"partitionName":"default@srcpart@ds=2008-04-08/hr=12"},{"partitionName":"default@srcpart@ds=2008-04-09/hr=11"},{"partitionName":"default@srcpart@ds=2008-04-09/hr=12"}],"input_tables":[{"tablename":"default@srcpart","tabletype":"MANAGED_TABLE"}]}

The inputs contain both the tables and the partitions. Note that the table is present even if none of the partitions is accessed in the query.
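
As a sketch of this behavior, a query whose partition predicate matches nothing (the date below is hypothetical) would be expected to still list the table:

EXPLAIN DEPENDENCY
  SELECT key, count(1) FROM srcpart WHERE ds = '2999-12-31' GROUP BY key;

Here input_partitions would be empty, while default@srcpart would still appear under input_tables.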

The dependencies also show the parents when a table is accessed via a view. Consider the following queries:

CREATE VIEW V1 AS SELECT key, value from src;
EXPLAIN DEPENDENCY SELECT * FROM V1;

The following output is produced:

{"input_partitions":[],"input_tables":[{"tablename":"default@v1","tabletype":"VIRTUAL_VIEW"},{"tablename":"default@src","tabletype":"MANAGED_TABLE","tableParents":"[default@v1]"}]}

As above, the inputs contain the view V1 and the table 'src' that the view V1 refers to.

All the parents are shown if a table is being accessed via multiple parents.

CREATE VIEW V2 AS SELECT ds, key, value FROM srcpart WHERE ds IS NOT NULL;
CREATE VIEW V4 AS
  SELECT src1.key, src2.value as value1, src3.value as value2
  FROM V1 src1 JOIN V2 src2 on src1.key = src2.key JOIN src src3 ON src2.key = src3.key;
EXPLAIN DEPENDENCY SELECT * FROM V4;

The following output is produced.

{"input_partitions":[{"partitionParents":"[default@v2]","partitionName":"default@srcpart@ds=2008-04-08/hr=11"},{"partitionParents":"[default@v2]","partitionName":"default@srcpart@ds=2008-04-08/hr=12"},{"partitionParents":"[default@v2]","partitionName":"default@srcpart@ds=2008-04-09/hr=11"},{"partitionParents":"[default@v2]","partitionName":"default@srcpart@ds=2008-04-09/hr=12"}],"input_tables":[{"tablename":"default@v4","tabletype":"VIRTUAL_VIEW"},{"tablename":"default@v2","tabletype":"VIRTUAL_VIEW","tableParents":"[default@v4]"},{"tablename":"default@v1","tabletype":"VIRTUAL_VIEW","tableParents":"[default@v4]"},{"tablename":"default@src","tabletype":"MANAGED_TABLE","tableParents":"[default@v4, default@v1]"},{"tablename":"default@srcpart","tabletype":"MANAGED_TABLE","tableParents":"[default@v2]"}]}

As can be seen, src is being accessed via parents v1 and v4.

The use of AUTHORIZATION in the EXPLAIN statement shows all the entities that need to be authorized in order to execute the query, as well as any authorization failures. For example, for a query like:

EXPLAIN AUTHORIZATION
  SELECT * FROM src JOIN srcpart;

the following output is produced:

INPUTS:
  default@srcpart
  default@src
  default@srcpart@ds=2008-04-08/hr=11
  default@srcpart@ds=2008-04-08/hr=12
  default@srcpart@ds=2008-04-09/hr=11
  default@srcpart@ds=2008-04-09/hr=12
OUTPUTS:
  hdfs://localhost:9000/tmp/.../-mr-10000
CURRENT_USER:
  navis
OPERATION:
  QUERY
AUTHORIZATION_FAILURES:
  Permission denied: Principal [name=navis, type=USER] does not have following privileges for operation QUERY [[SELECT] on Object [type=TABLE_OR_VIEW, name=default.src], [SELECT] on Object [type=TABLE_OR_VIEW, name=default.srcpart]]

With the FORMATTED keyword, the output is returned in JSON format.
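
For instance, the authorization query above could be re-run as follows (a sketch, assuming the FORMATTED keyword is combined with AUTHORIZATION in the same statement):

EXPLAIN FORMATTED AUTHORIZATION
  SELECT * FROM src JOIN srcpart;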

"OUTPUTS":["hdfs://localhost:9000/tmp/.../-mr-10000"],"INPUTS":["default@srcpart","default@src","default@srcpart@ds=2008-04-08/hr=11","default@srcpart@ds=2008-04-08/hr=12","default@srcpart@ds=2008-04-09/hr=11","default@srcpart@ds=2008-04-09/hr=12"],"OPERATION":"QUERY","CURRENT_USER":"navis","AUTHORIZATION_FAILURES":["Permission denied: Principal [name=navis, type=USER] does not have following privileges for operation QUERY [[SELECT] on Object [type=TABLE_OR_VIEW, name=default.src], [SELECT] on Object [type=TABLE_OR_VIEW, name=default.srcpart]]"]}
 
