Hive Log Operations
To see where Hive writes its logs, look at /home/hadoop/hive/conf/hive-log4j2.properties:
# list of properties
property.hive.log.level = INFO
property.hive.root.logger = DRFA
property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
property.hive.log.file = hive.log
property.hive.perflogger.log.level = INFO
property.hive.log.dir is what determines the log directory. Note that ${sys:java.io.tmpdir} is a log4j2 system-property lookup, not a shell variable, so running echo ${sys:java.io.tmpdir} in the shell prints nothing. On this setup java.io.tmpdir resolves to /tmp and ${sys:user.name} to the user running Hive, so the log actually ends up at /tmp/hadoop/hive.log, as the listing and the quick check below show:
[hadoop@master hadoop]$ pwd
/tmp/hadoop
[hadoop@master hadoop]$ ls -rlt
total
-rw-rw-r--. hadoop hadoop Jan : hive.log.--
-rw-rw-r--. hadoop hadoop Mar : hive.log.--
-rw-rw-r--. hadoop hadoop Apr : hive.log.--
-rw-rw-r--. hadoop hadoop Apr : stderr
-rw-rw-r--. hadoop hadoop Apr : hive.log
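If you want to confirm what those two lookups resolve to on your machine, you can dump the JVM's default system properties from the shell. This is only a quick sketch and assumes the same java binary that Hive uses is on the PATH:

# print the JVM defaults and keep only the two properties we care about
java -XshowSettings:properties -version 2>&1 | grep -E 'java.io.tmpdir|user.name'

On a typical Linux box this prints java.io.tmpdir = /tmp and user.name = hadoop, which matches the /tmp/hadoop directory above.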
Create a logs directory under the Hive installation directory to hold the log files:
[hadoop@master hive]$ pwd
/home/hadoop/hive
[hadoop@master hive]$ mkdir logs
Then edit hive-log4j2.properties and point the log directory at it (a scripted alternative is sketched below):
property.hive.log.dir = /home/hadoop/hive/logs
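If you would rather script the change than edit the file by hand, a sed one-liner does the job. This is a sketch that assumes the paths used above; the cp keeps a backup of the original file:

# back up the config, rewrite the log dir line, then verify the change
cd /home/hadoop/hive/conf
cp hive-log4j2.properties hive-log4j2.properties.bak
sed -i 's|^property.hive.log.dir.*|property.hive.log.dir = /home/hadoop/hive/logs|' hive-log4j2.properties
grep '^property.hive.log.dir' hive-log4j2.properties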
Exit Hive if it is running, then start it again so the new log directory takes effect:
[hadoop@master sbin]$ hive
Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
OK
db_hive
default
Time taken: 3.857 seconds, Fetched: row(s)
hive> use db_hive;
OK
Time taken: 0.024 seconds
hive> exit ;
After restarting Hive and running a few statements, the log file shows up in the new directory:
[hadoop@master logs]$ pwd
/home/hadoop/hive/logs
[hadoop@master logs]$ ls -rlt
total
-rw-rw-r--. hadoop hadoop Apr : hive.log
Hive now writes its log to the new path.
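With the log under /home/hadoop/hive/logs it is easy to watch while running queries, for example:

# follow the log in another terminal while issuing HiveQL in the CLI
tail -f /home/hadoop/hive/logs/hive.log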
You can also inspect the variables visible inside Hive with set; (a tip on filtering the output follows the listing):
hive> set;
...
...
system:sun.java.launcher=SUN_STANDARD
system:sun.jnu.encoding=UTF-
system:sun.management.compiler=HotSpot -Bit Tiered Compilers
system:sun.os.patch.level=unknown
system:user.country=US
system:user.dir=/home/hadoop/hadoop-2.7./sbin
system:user.home=/home/hadoop
system:user.language=en
system:user.name=hadoop
system:user.timezone=America/Los_Angeles
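Inside the CLI, set <name>; prints a single variable, and the full dump can also be produced non-interactively and filtered with grep. A small sketch:

# print one variable from inside the Hive CLI:
#   hive> set system:user.name;
# or dump everything from the shell in silent mode and filter it:
hive -S -e 'set;' | grep '^system:user'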
Properties and variables can also be set when Hive is started; hive -help lists the options, and a few example invocations follow the help output:
[hadoop@master sbin]$ hive -help
usage: hive
-d,--define <key=value> Variable substitution to apply to Hive
commands. e.g. -d A=B or --define A=B
--database <databasename> Specify the database to use
-e <quoted-query-string> SQL from command line
-f <filename> SQL from files
-H,--help Print help information
--hiveconf <property=value> Use value for given property
--hivevar <key=value> Variable substitution to apply to Hive
commands. e.g. --hivevar A=B
-i <filename> Initialization SQL file
-S,--silent Silent mode in interactive shell
-v,--verbose Verbose mode (echo executed SQL to the
console)
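A few typical invocations are shown below. The database db_hive and table u2 come from the session above; /tmp/query.hql and the variable name target_db are hypothetical placeholders:

# substitute a variable into an ad-hoc statement with --hivevar
hive --hivevar target_db=db_hive -e 'use ${hivevar:target_db}; show tables;'
# run a script file against a specific database (the file path is just an example)
hive --database db_hive -f /tmp/query.hql
# override a configuration property for this session only
hive --hiveconf hive.cli.print.header=true -e 'select * from u2 limit 5;'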
[hadoop@master sbin]$ hive --hiveconf hive.root.logger=INFO,console
Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j2.properties Async: true
--02T22::, INFO [main] SessionState:
Logging initialized using configuration in file:/home/hadoop/hive/conf/hive-log4j2.properties Async: true
--02T22::, WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> use db_hive;
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Compiling command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f): use db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: ObjectStore, initialize called
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: Initialized ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: Added admin role in metastore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: Added public role in metastore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: No user is added in admin role, since config is empty
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_all_functions
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_functions
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Semantic Analysis Completed
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed compiling command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f); Time taken: 3.036 seconds
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Executing command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f): use db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=ad332b5e-6ec9--bef1-253a5ab59e59, clientType=HIVECLI]
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Cleaning up thread local RawStore...
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Done cleaning up thread local RawStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Starting task [Stage-:DDL] in serial mode
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: ObjectStore, initialize called
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.ObjectStore: Initialized ObjectStore
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed executing command(queryId=hadoop_20190402225652_7939edc0-f75c--8bc3-a2b793b0251f); Time taken: 0.175 seconds
OK
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: OK
Time taken: 3.222 seconds
hive> show tables;
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Compiling command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5): show tables
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Semantic Analysis Completed
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] exec.ListSinkOperator: Initializing operator LIST_SINK[]
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed compiling command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5); Time taken: 0.076 seconds
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Executing command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5): show tables
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Starting task [Stage-:DDL] in serial mode
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: db_hive
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] metastore.HiveMetaStore: : get_tables: db=db_hive pat=.*
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_tables: db=db_hive pat=.*
OK
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: Completed executing command(queryId=hadoop_20190402225706_0f585a6e-ca17--8d99-3260e7caabf5); Time taken: 0.028 seconds
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] ql.Driver: OK
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] mapred.FileInputFormat: Total input paths to process :
--02T22::, INFO [ad332b5e-6ec9--bef1-253a5ab59e59 main] exec.ListSinkOperator: Closing operator LIST_SINK[]
u2
u4
Time taken: 0.105 seconds, Fetched: row(s)
As you can see, with hive.root.logger=INFO,console the log output is quite detailed and goes straight to the terminal.
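This console logging is handy for one-off troubleshooting and does not require any change to hive-log4j2.properties. A sketch of two variations (the output file path is just an example):

# more verbose one-off session, console only
hive --hiveconf hive.root.logger=DEBUG,console
# keep a copy of the console output in a file as well
hive --hiveconf hive.root.logger=INFO,console 2>&1 | tee /tmp/hive-console.log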