To see how Hive's logging is configured, look at /home/hadoop/hive/conf/hive-log4j2.properties:

# list of properties
property.hive.log.level = INFO
property.hive.root.logger = DRFA
property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
property.hive.log.file = hive.log
property.hive.perflogger.log.level = INFO

The property.hive.log.dir entry determines the log directory. Running echo ${sys:java.io.tmpdir} in the shell prints nothing, because ${sys:...} is a Log4j2 lookup resolved from Java system properties, not a shell variable. The log actually ends up under /tmp/<user name>/hive.log, here /tmp/hadoop/hive.log:

[hadoop@master hadoop]$ pwd
/tmp/hadoop
[hadoop@master hadoop]$ ls -rlt
total
-rw-rw-r--. hadoop hadoop Jan : hive.log.--
-rw-rw-r--. hadoop hadoop Mar : hive.log.--
-rw-rw-r--. hadoop hadoop Apr : hive.log.--
-rw-rw-r--. hadoop hadoop Apr : stderr
-rw-rw-r--. hadoop hadoop Apr : hive.log
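To confirm how those ${sys:...} lookups resolve, you can query the corresponding Java system properties from inside the Hive CLI. The commands below are illustrative; on this setup they should resolve to /tmp and hadoop, which is exactly the /tmp/hadoop directory shown above.

hive> set system:java.io.tmpdir;
system:java.io.tmpdir=/tmp
hive> set system:user.name;
system:user.name=hadoop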

Create a logs directory under the Hive installation directory to hold the log files:

[hadoop@master hive]$ pwd
/home/hadoop/hive
[hadoop@master hive]$ mkdir logs

Then edit hive-log4j2.properties and point the log directory at it:

property.hive.log.dir = /home/hadoop/hive/logs
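One way to apply that change without opening an editor is a sed one-liner (a sketch; editing the file by hand works just as well):

[hadoop@master hive]$ sed -i 's|^property\.hive\.log\.dir.*|property.hive.log.dir = /home/hadoop/hive/logs|' conf/hive-log4j2.properties
[hadoop@master hive]$ grep 'log.dir' conf/hive-log4j2.properties
property.hive.log.dir = /home/hadoop/hive/logs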

Exit any running Hive session so the new configuration takes effect, then start Hive again and run a few commands:

[hadoop@master sbin]$ hive

Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive .X releases.
hive> show databases;
OK
db_hive
default
Time taken: 3.857 seconds, Fetched: row(s)
hive> use db_hive;
OK
Time taken: 0.024 seconds
hive> exit ;

After restarting Hive, the log file shows up in the new directory:

[hadoop@master logs]$ pwd
/home/hadoop/hive/logs
[hadoop@master logs]$ ls -rlt
total
-rw-rw-r--. hadoop hadoop Apr : hive.log
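You can follow new entries as they are written with tail (illustrative command):

[hadoop@master logs]$ tail -f /home/hadoop/hive/logs/hive.log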

This confirms that Hive now writes its log to the new path.

You can inspect the variables visible to Hive with the set command:

hive> set;

...
...
system:sun.java.launcher=SUN_STANDARD
system:sun.jnu.encoding=UTF-
system:sun.management.compiler=HotSpot -Bit Tiered Compilers
system:sun.os.patch.level=unknown
system:user.country=US
system:user.dir=/home/hadoop/hadoop-2.7./sbin
system:user.home=/home/hadoop
system:user.language=en
system:user.name=hadoop
system:user.timezone=America/Los_Angeles
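The full listing from set; is long. If you only care about a few entries, you can run it non-interactively and filter the output; the command below is a sketch, not part of the original session:

[hadoop@master ~]$ hive -S -e "set;" | grep -i log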

Properties can also be set when starting Hive:

[hadoop@master sbin]$ hive -help
usage: hive
-d,--define <key=value> Variable substitution to apply to Hive
commands. e.g. -d A=B or --define A=B
--database <databasename> Specify the database to use
-e <quoted-query-string> SQL from command line
-f <filename> SQL from files
-H,--help Print help information
--hiveconf <property=value> Use value for given property
--hivevar <key=value> Variable substitution to apply to Hive
commands. e.g. --hivevar A=B
-i <filename> Initialization SQL file
-S,--silent Silent mode in interactive shell
-v,--verbose Verbose mode (echo executed SQL to the
console)
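A few illustrative combinations of these options (file paths and variable names are examples, not taken from the original session):

# run a single statement and exit
[hadoop@master ~]$ hive -S -e "show databases;"

# run a script file against a specific database
[hadoop@master ~]$ hive --database db_hive -f /home/hadoop/queries.sql

# pass a variable and reference it with ${hivevar:...} substitution
[hadoop@master ~]$ hive --hivevar tbl=u2 -e 'use db_hive; select * from ${hivevar:tbl} limit 10;'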
[hadoop@master sbin]$ hive --hiveconf hive.root.logger=INFO,console
Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j2.properties Async: true
--02T22::, INFO [main] SessionState:
Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j2.properties Async: true
--02T22::, WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive-on-MR is deprecated in Hive and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive .X releases.
hive> use db_hive;
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Compiling command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f): use db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: ObjectStore, initialize called
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: Initialized ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: Added admin role in metastore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: Added public role in metastore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: No user is added in admin role, since config is empty
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_all_functions
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_functions
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Semantic Analysis Completed
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed compiling command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f); Time taken: 3.036 seconds
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Executing command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f): use db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=ad332b5e-6ec9--bef1-253a5ab59e59, clientType=HIVECLI]
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Cleaning up thread local RawStore...
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Done cleaning up thread local RawStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Starting task [Stage-:DDL] in serial mode
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: ObjectStore, initialize called
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: Initialized ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed executing command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f); Time taken: 0.175 seconds
OK
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: OK
Time taken: 3.222 seconds
hive> show tables;
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Compiling command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5): show tables
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Semantic Analysis Completed
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] exec.ListSinkOperator: Initializing operator LIST_SINK[]
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed compiling command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5); Time taken: 0.076 seconds
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Executing command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5): show tables
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Starting task [Stage-:DDL] in serial mode
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_tables: db=db_hive pat=.*
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_tables: db=db_hive pat=.*
OK
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed executing command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5); Time taken: 0.028 seconds
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: OK
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] mapred.FileInputFormat: Total input paths to process :
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] exec.ListSinkOperator: Closing operator LIST_SINK[]
u2
u4
Time taken: 0.105 seconds, Fetched: row(s)

With the root logger redirected to the console at INFO level, the output is considerably more detailed.
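Because --hiveconf hive.root.logger=... only applies to that invocation, you can adjust the console verbosity per session. The two commands below are illustrative, not from the original session:

# only warnings and errors on the console
[hadoop@master ~]$ hive --hiveconf hive.root.logger=WARN,console

# maximum detail when troubleshooting
[hadoop@master ~]$ hive --hiveconf hive.root.logger=DEBUG,console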
