### beeline help
0: jdbc:hive2://100.69.216.40:10001> !help
!addlocaldriverjar Add driver jar file in the beeline client side.
!addlocaldrivername Add driver name that needs to be supported in the beeline client side.
!all Execute the specified SQL against all the current connections
!autocommit Set autocommit mode on or off
!batch Start or execute a batch of statements
!brief Set verbose mode off
!call Execute a callable statement
!close Close the current connection to the database
!closeall Close all current open connections
!columns List all the columns for the specified table
!commit Commit the current transaction (if autocommit is off)
!connect Open a new connection to the database.
!dbinfo Give metadata information about the database
!describe Describe a table
!dropall Drop all tables in the current database
!exportedkeys List all the exported keys for the specified table
!go Select the current connection
!help Print a summary of command usage
!history Display the command history
!importedkeys List all the imported keys for the specified table
!indexes List all the indexes for the specified table
!isolation Set the transaction isolation for this connection
!list List the current connections
!manual Display the BeeLine manual
!metadata Obtain metadata information
!nativesql Show the native SQL for the specified statement
!nullemptystring Set to true to get historic behavior of printing null as empty string. Default is false.
!outputformat Set the output format for displaying results (table, vertical, csv2, dsv, tsv2, xmlattrs, xmlelements, and deprecated formats (csv, tsv))
!primarykeys List all the primary keys for the specified table
!procedures List all the procedures
!properties Connect to the database specified in the properties file(s)
!quit Exits the program
!reconnect Reconnect to the database
!record Record all output to the specified file
!rehash Fetch table and column names for command completion
!rollback Roll back the current transaction (if autocommit is off)
!run Run a script from the specified file
!save Save the current variables and aliases
!scan Scan for installed JDBC drivers
!script Start saving a script to a file
!set Set a beeline variable
!sh Execute a shell command
!sql Execute a SQL command
!tables List all the tables in the database
!typeinfo Display the type map for the current connection
!verbose Set verbose mode on

Comments, bug reports, and patches go to ???
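To make the list concrete, here is a minimal interactive session sketch using a few of these commands; the host, username, password, and recording path are hypothetical placeholders:

$ beeline
beeline> !connect jdbc:hive2://localhost:10000 username password
0: jdbc:hive2://localhost:10000> !set outputformat vertical
0: jdbc:hive2://localhost:10000> !record /tmp/session.out
0: jdbc:hive2://localhost:10000> !tables
0: jdbc:hive2://localhost:10000> !record
0: jdbc:hive2://localhost:10000> !quit

Here `!record /tmp/session.out` starts capturing output to the file and a bare `!record` stops it.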
### beeline command-line help
[cc@host bin]$ beeline --help
Usage: java org.apache.hive.cli.beeline.BeeLine
-u <database url> the JDBC URL to connect to
-r reconnect to last saved connect url (in conjunction with !save)
-n <username> the username to connect as
-p <password> the password to connect as
-d <driver class> the driver class to use
-i <init file> script file for initialization
-e <query> query that should be executed
-f <exec file> script file that should be executed
-w (or) --password-file <password file> the password file to read password from
--hiveconf property=value Use value for given property
--hivevar name=value hive variable name and value
These are Hive-specific settings: variables
can be set at session level and referenced in Hive
commands or queries.
--property-file=<property-file> the file to read connection properties (url, driver, user, password) from
--color=[true/false] control whether color is used for display
--showHeader=[true/false] show column names in query results
--headerInterval=ROWS; the interval between which headers are displayed
--fastConnect=[true/false] skip building table/column list for tab-completion
--autoCommit=[true/false] enable/disable automatic transaction commit
--verbose=[true/false] show verbose error messages and debug info
--showWarnings=[true/false] display connection warnings
--showNestedErrs=[true/false] display nested errors
--numberFormat=[pattern] format numbers using DecimalFormat pattern
--force=[true/false] continue running script even after errors
--maxWidth=MAXWIDTH the maximum width of the terminal
--maxColumnWidth=MAXCOLWIDTH the maximum width to use when displaying columns
--silent=[true/false] be more silent
--autosave=[true/false] automatically save preferences
--outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv] format mode for result display
Note that csv and tsv are deprecated - use csv2, tsv2 instead
--truncateTable=[true/false] truncate table column when it exceeds length
--delimiterForDSV=DELIMITER specify the delimiter for delimiter-separated values output format (default: |)
--isolation=LEVEL set the transaction isolation level
--nullemptystring=[true/false] set to true to get historic behavior of printing null as empty string
--help display this message

Examples:
1. Connect using simple authentication to HiveServer2 on localhost:
   $ beeline -u jdbc:hive2://localhost:10000 username password
2. Connect using simple authentication to HiveServer2 on hs2.local, using -n for username and -p for password:
   $ beeline -n username -p password -u jdbc:hive2://hs2.local:10012
3. Connect using Kerberos authentication with hive/localhost@mydomain.com as HiveServer2 principal:
   $ beeline -u "jdbc:hive2://hs2.local:10013/default;principal=hive/localhost@mydomain.com"
4. Connect using SSL connection to HiveServer2 on localhost at 10000:
   $ beeline "jdbc:hive2://localhost:10000/default;ssl=true;sslTrustStore=/usr/local/truststore;trustStorePassword=mytruststorepassword"
5. Connect using LDAP authentication:
   $ beeline -u jdbc:hive2://hs2.local:10013/default <ldap-username> <ldap-password>
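A common scripted use combines several of these flags to run one query non-interactively and capture the result as CSV. A minimal sketch, assuming a HiveServer2 at localhost:10000; the user, table, and output file are hypothetical:

$ beeline -u jdbc:hive2://localhost:10000 -n cc \
    --silent=true --showHeader=true --outputformat=csv2 \
    -e 'SELECT id, name FROM demo.users LIMIT 10' > users.csv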
### spark-shell help
scala> :help
All commands can be abbreviated, e.g. :he instead of :help.
Those marked with a * have more detailed help, e.g. :help imports.

:cp <path> add a jar or directory to the classpath
:help [command] print this summary or command-specific help
:history [num] show the history (optional num is commands to show)
:h? <string> search the history
:imports [name name ...] show import history, identifying sources of names
:implicits [-v] show the implicits in scope
:javap <path|class> disassemble a file or class name
:load <path> load and interpret a Scala file
:paste enter paste mode: all input up to ctrl-D compiled together
:quit exit the repl
:replay reset execution and replay all previous commands
:reset reset the repl to its initial state, forgetting all session entries
:sh <command line> run a shell command (result is implicitly => List[String])
:silent disable/enable automatic printing of results
:fallback disable/enable advanced repl changes; these fix some issues but may
introduce others. This mode will be removed once these fixes stabilize
:type [-v] <expr> display the type of an expression without evaluating it
:warnings show the suppressed warnings from the most recent line which had any
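As a quick illustration, :paste compiles several definitions together as one unit and :type reports an expression's type without evaluating it. A sketch of such a session (Point and dist are made-up examples):

scala> :paste
// Entering paste mode (ctrl-D to finish)

case class Point(x: Double, y: Double)
def dist(a: Point, b: Point): Double =
  math.sqrt(math.pow(a.x - b.x, 2) + math.pow(a.y - b.y, 2))

// Exiting paste mode, now interpreting.

scala> :type dist(Point(0, 0), Point(3, 4))
Double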
### hive command help 1
[cc@host bin]$ hive --help
Parameters parsed:
--auxpath : Auxiliary jars
--config : Hive configuration directory
--service : Starts specific service/component. cli is default
Parameters used:
HADOOP_HOME or HADOOP_PREFIX : Hadoop install directory
HIVE_OPT : Hive options
For help on a particular service:
./hive --service serviceName --help
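For example, the same launcher can start other components in place of the default cli; the service names below assume a standard Hive distribution:

$ hive --service hiveserver2        # start HiveServer2
$ hive --service metastore          # start the Hive metastore service
$ hive --service cli                # equivalent to plain `hive`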
### hive command help 2
[cc@host bin]$ hive -help
usage: hive
-d,--define <key=value> Variable substitution to apply to hive
commands. e.g. -d A=B or --define A=B
--database <databasename> Specify the database to use
-e <quoted-query-string> SQL from command line
-f <filename> SQL from files
-H,--help Print help information
--hiveconf <property=value> Use value for given property
--hivevar <key=value> Variable substitution to apply to hive
commands. e.g. --hivevar A=B
-i <filename> Initialization SQL file
-S,--silent Silent mode in interactive shell
-v,--verbose Verbose mode (echo executed SQL to the
console)
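Putting these options together, a sketch of ad-hoc and scripted invocations; the database, table, and file paths are hypothetical:

$ hive -S --hivevar dt=20180308 \
    -e 'SELECT count(*) FROM logs WHERE dt = ${hivevar:dt}'
$ hive --database demo -i /tmp/init.sql -f /tmp/etl.sql

The first command substitutes the --hivevar value via ${hivevar:dt} and suppresses progress output with -S; the second selects a database, runs an initialization file, then executes the script.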

ref: https://blog.csdn.net/maizi1045/article/details/79481686
