The Hadoop Ecosystem - Deploying Sqoop and Basic Usage

Author: Yin Zhengjie

Copyright notice: This is an original work. Please do not repost it; violators will be held legally responsible.

  Sqoop (pronounced like "scoop") is an open-source tool used mainly to move data between Hadoop (Hive) and traditional databases (MySQL, PostgreSQL, ...). It can import data from a relational database (e.g. MySQL, Oracle, Postgres) into HDFS, and it can also export data from HDFS back into a relational database.
  The Sqoop project started in 2009, originally as a third-party Hadoop module. Later, to make deployment easier for users and iteration faster for developers, Sqoop became an independent Apache project. For details see: http://sqoop.apache.org/
  Note: this post deploys Sqoop on top of a high-availability cluster. For HA cluster deployment, see: https://www.cnblogs.com/yinzhengjie/p/9154265.html

I. Deploying the Sqoop Tool

1>. Download the Sqoop package (download URL: http://mirrors.hust.edu.cn/apache/sqoop/1.4.7/; the latest release is recommended, and as of 2018-06-14 that is 1.4.7.)
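
The download can also be scripted; a minimal sketch, assuming the mirror above still serves the 1.4.7 binary tarball (the file name follows the Apache release; ~/data is an assumption matching the prompt in the next step):

# download the Sqoop 1.4.7 binary distribution into the directory used in the next step
cd ~/data
wget http://mirrors.hust.edu.cn/apache/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz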

2>. Extract the tarball and create a symbolic link

[yinzhengjie@s101 data]$ tar zxf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /soft/
[yinzhengjie@s101 data]$ ln -s /soft/sqoop-1.4.7.bin__hadoop-2.6.0/ /soft/sqoop
[yinzhengjie@s101 data]$

3>. Configure the environment variables and apply them

[yinzhengjie@s101 ~]$ sudo vi /etc/profile
[sudo] password for yinzhengjie:
[yinzhengjie@s101 ~]$ tail -3 /etc/profile
#ADD SQOOP
SQOOP_HOME=/soft/sqoop
PATH=$PATH:$SQOOP_HOME/bin
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ source /etc/profile
[yinzhengjie@s101 ~]$
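
A quick sanity check (not in the original post) to confirm the new variables are visible in the current shell:

echo $SQOOP_HOME    # expected: /soft/sqoop
which sqoop         # expected: /soft/sqoop/bin/sqoop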

4>. Create the sqoop-env.sh configuration file

[yinzhengjie@s101 ~]$ cp /soft/sqoop/conf/sqoop-env-template.sh  /soft/sqoop/conf/sqoop-env.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ more /soft/sqoop/conf/sqoop-env.sh | grep -v ^# | grep -v ^$
export HADOOP_COMMON_HOME=/soft/hadoop
export HADOOP_MAPRED_HOME=/soft/hadoop
export HBASE_HOME=/soft/hbase
export HIVE_HOME=/soft/hive
export ZOOCFGDIR=/soft/zk/conf
[yinzhengjie@s101 ~]$

5>. Place the MySQL driver under sqoop/lib

[yinzhengjie@s101 ~]$ cp /soft/hive/lib/mysql-connector-java-5.1..jar /soft/sqoop/lib/
[yinzhengjie@s101 ~]$

6>. Verify the installation with sqoop version

[yinzhengjie@s101 ~]$ sqoop version
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec :: STD
[yinzhengjie@s101 ~]$

II. Basic Usage

1>. Connect to a MySQL database from the sqoop command line

[yinzhengjie@s101 ~]$ sqoop list-databases --connect jdbc:mysql://s101 --username root -P
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
information_schema
hive
mysql
performance_schema
[yinzhengjie@s101 ~]$

2>. View sqoop help

[yinzhengjie@s101 ~]$ sqoop help
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop help

[yinzhengjie@s101 ~]$ sqoop import --help
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
--connect <jdbc-uri> Specify JDBC
connect
string
--connection-manager <class-name> Specify
connection
manager
class name
--connection-param-file <properties-file> Specify
connection
parameters
file
--driver <class-name> Manually
specify JDBC
driver class
to use
--hadoop-home <hdir> Override
$HADOOP_MAPR
ED_HOME_ARG
--hadoop-mapred-home <dir> Override
$HADOOP_MAPR
ED_HOME_ARG
--help Print usage
instructions
--metadata-transaction-isolation-level <isolationlevel> Defines the
transaction
isolation
level for
metadata
queries. For
more details
check
java.sql.Con
nection
javadoc or
the JDBC
specificaito
n
--oracle-escaping-disabled <boolean> Disable the
escaping
mechanism of
the
Oracle/OraOo
p connection
managers
-P Read
password
from console
--password <password> Set
authenticati
on password
--password-alias <password-alias> Credential
provider
password
alias
--password-file <password-file> Set
authenticati
on password
file path
--relaxed-isolation Use
read-uncommi
tted
isolation
for imports
--skip-dist-cache Skip copying
jars to
distributed
cache
--temporary-rootdir <rootdir> Defines the
temporary
root
directory
for the
import
--throw-on-error Rethrow a
RuntimeExcep
tion on
error
occurred
during the
job
--username <username> Set
authenticati
on username
--verbose Print more
information
while
working

Import control arguments:
--append Imports data
in append
mode
--as-avrodatafile Imports data
to Avro data
files
--as-parquetfile Imports data
to Parquet
files
--as-sequencefile Imports data
to
SequenceFile
s
--as-textfile Imports data
as plain
text
(default)
--autoreset-to-one-mapper Reset the
number of
mappers to
one mapper
if no split
key
available
--boundary-query <statement> Set boundary
query for
retrieving
max and min
value of the
primary key
--columns <col,col,col...> Columns to
import from
table
--compression-codec <codec> Compression
codec to use
for import
--delete-target-dir Imports data
in delete
mode
--direct Use direct
import fast
path
--direct-split-size <n> Split the
input stream
every 'n'
bytes when
importing in
direct mode
-e,--query <statement> Import
results of
SQL
'statement'
--fetch-size <n> Set number
'n' of rows
to fetch
from the
database
when more
rows are
needed
--inline-lob-limit <n> Set the
maximum size
for an
inline LOB
-m,--num-mappers <n> Use 'n' map
tasks to
import in
parallel
--mapreduce-job-name <name> Set name for
generated
mapreduce
job
--merge-key <column> Key column
to use to
join results
--split-by <column-name> Column of
the table
used to
split work
units
--split-limit <size> Upper Limit
of rows per
split for
split
columns of
Date/Time/Ti
mestamp and
integer
types. For
date or
timestamp
fields it is
calculated
in seconds.
split-limit
should be
greater than
--table <table-name> Table to
read
--target-dir <dir> HDFS plain
table
destination
--validate Validate the
copy using
the
configured
validator
--validation-failurehandler <validation-failurehandler> Fully
qualified
class name
for
ValidationFa
ilureHandler
--validation-threshold <validation-threshold> Fully
qualified
class name
for
ValidationTh
reshold
--validator <validator> Fully
qualified
class name
for the
Validator
--warehouse-dir <dir> HDFS parent
for table
destination
--where <where clause> WHERE clause
to use
during
import
-z,--compress Enable
compression

Incremental import arguments:
--check-column <column> Source column to check for incremental
change
--incremental <import-type> Define an incremental import of type
'append' or 'lastmodified'
--last-value <value> Last imported value in the incremental
check column

Output line formatting arguments:
--enclosed-by <char> Sets a required field enclosing
character
--escaped-by <char> Sets the escape character
--fields-terminated-by <char> Sets the field separator character
--lines-terminated-by <char> Sets the end-of-line character
--mysql-delimiters Uses MySQL's default delimiter set:
fields: , lines: \n escaped-by: \
optionally-enclosed-by: '
--optionally-enclosed-by <char> Sets a field enclosing character

Input parsing arguments:
--input-enclosed-by <char> Sets a required field encloser
--input-escaped-by <char> Sets the input escape
character
--input-fields-terminated-by <char> Sets the input field separator
--input-lines-terminated-by <char> Sets the input end-of-line
char
--input-optionally-enclosed-by <char> Sets a field enclosing
character

Hive arguments:
--create-hive-table Fail if the target hive
table exists
--external-table-dir <hdfs path> Sets where the external
table is in HDFS
--hive-database <database-name> Sets the database name to
use when importing to hive
--hive-delims-replacement <arg> Replace Hive record \0x01
and row delimiters (\n\r)
from imported string fields
with user-defined string
--hive-drop-import-delims Drop Hive record \0x01 and
row delimiters (\n\r) from
imported string fields
--hive-home <dir> Override $HIVE_HOME
--hive-import Import tables into Hive
(Uses Hive's default
delimiters if none are
set.)
--hive-overwrite Overwrite existing data in
the Hive table
--hive-partition-key <partition-key> Sets the partition key to
use when importing to hive
--hive-partition-value <partition-value> Sets the partition value to
use when importing to hive
--hive-table <table-name> Sets the table name to use
when importing to hive
--map-column-hive <arg> Override mapping for
specific column to hive
types.

HBase arguments:
--column-family <family> Sets the target column family for the
import
--hbase-bulkload Enables HBase bulk loading
--hbase-create-table If specified, create missing HBase tables
--hbase-row-key <col> Specifies which input column to use as the
row key
--hbase-table <table> Import to <table> in HBase

HCatalog arguments:
--hcatalog-database <arg> HCatalog database name
--hcatalog-home <hdir> Override $HCAT_HOME
--hcatalog-partition-keys <partition-key> Sets the partition
keys to use when
importing to hive
--hcatalog-partition-values <partition-value> Sets the partition
values to use when
importing to hive
--hcatalog-table <arg> HCatalog table name
--hive-home <dir> Override $HIVE_HOME
--hive-partition-key <partition-key> Sets the partition key
to use when importing
to hive
--hive-partition-value <partition-value> Sets the partition
value to use when
importing to hive
--map-column-hive <arg> Override mapping for
specific column to
hive types.

HCatalog import specific options:
--create-hcatalog-table Create HCatalog before import
--drop-and-create-hcatalog-table Drop and Create HCatalog before
import
--hcatalog-storage-stanza <arg> HCatalog storage stanza for table
creation

Accumulo arguments:
--accumulo-batch-size <size> Batch size in bytes
--accumulo-column-family <family> Sets the target column family for
the import
--accumulo-create-table If specified, create missing
Accumulo tables
--accumulo-instance <instance> Accumulo instance name.
--accumulo-max-latency <latency> Max write latency in milliseconds
--accumulo-password <password> Accumulo password.
--accumulo-row-key <col> Specifies which input column to
use as the row key
--accumulo-table <table> Import to <table> in Accumulo
--accumulo-user <user> Accumulo user name.
--accumulo-visibility <vis> Visibility token to be applied to
all rows imported
--accumulo-zookeepers <zookeepers> Comma-separated list of
zookeepers (host:port)

Code generation arguments:
--bindir <dir> Output directory for
compiled objects
--class-name <name> Sets the generated class
name. This overrides
--package-name. When
combined with --jar-file,
sets the input class.
--escape-mapping-column-names <boolean> Disable special characters
escaping in column names
--input-null-non-string <null-str> Input null non-string
representation
--input-null-string <null-str> Input null string
representation
--jar-file <file> Disable code generation; use
specified jar
--map-column-java <arg> Override mapping for
specific columns to java
types
--null-non-string <null-str> Null non-string
representation
--null-string <null-str> Null string representation
--outdir <dir> Output directory for
generated code
--package-name <name> Put auto-generated classes
in this package

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied
after a '--' on the command line.
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop import --help

3>. List tables with sqoop

[yinzhengjie@s101 ~]$ sqoop list-tables --connect jdbc:mysql://s101/yinzhengjie --username root -P
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Classmate
word
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop list-tables --connect jdbc:mysql://s101/yinzhengjie --username root -P

4>. List databases with Sqoop

[yinzhengjie@s101 ~]$ sqoop list-databases --connect jdbc:mysql://s101 --username root -P
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
information_schema
hive
mysql
performance_schema
yinzhengjie
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop list-databases --connect jdbc:mysql://s101 --username root -P

III. Importing Data into HDFS with Sqoop (HDFS, YARN, MySQL, and related services must be running)

1>. Grant privileges in the database

mysql> grant all PRIVILEGES on *.* to root@'s101' identified by 'yinzhengjie';
Query OK, rows affected (0.31 sec)
mysql> grant all PRIVILEGES on *.* to root@'s102' identified by 'yinzhengjie';
Query OK, rows affected (0.02 sec)
mysql> grant all PRIVILEGES on *.* to root@'s103' identified by 'yinzhengjie';
Query OK, rows affected (0.00 sec)
mysql> grant all PRIVILEGES on *.* to root@'s104' identified by 'yinzhengjie';
Query OK, rows affected (0.00 sec)
mysql> grant all PRIVILEGES on *.* to root@'s105' identified by 'yinzhengjie';
Query OK, rows affected (0.00 sec)
mysql> flush privileges;
Query OK, rows affected (0.02 sec)
mysql>
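
To double-check that the grants are in place before running any imports, one illustrative query against the mysql.user table (run on the MySQL host) is:

mysql -u root -p -e "SELECT host, user FROM mysql.user WHERE user = 'root';"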

2>. Import the database data into HDFS

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --target-dir /wc -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/506dbf41a3a9165eebe93e9d2ec30818/word.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/506dbf41a3a9165eebe93e9d2ec30818/word.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of word
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0002
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0002
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0002/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0002
// :: INFO mapreduce.Job: Job job_1528967628934_0002 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Task Id : attempt_1528967628934_0002_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException: null, message from server: "Host 's105' is not allowed to connect to this MySQL server"
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:)
at org.apache.hadoop.mapred.YarnChild$.run(YarnChild.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:)
Caused by: java.lang.RuntimeException: java.sql.SQLException: null, message from server: "Host 's105' is not allowed to connect to this MySQL server"
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:)
... more
Caused by: java.sql.SQLException: null, message from server: "Host 's105' is not allowed to connect to this MySQL server"
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at java.sql.DriverManager.getConnection(DriverManager.java:)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:)
... more Container killed by the ApplicationMaster.
Container killed on request. Exit code is
Container exited with a non-zero exit code // :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0002 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Failed map tasks=
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 29.3085 seconds (2.5249 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
[yinzhengjie@s101 ~]$ hdfs dfs -cat /wc/part-m-00000
hello world
yinzhengjie hadoop
yinzhengjie hive
yinzhengjie hbase
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --target-dir /wc -m 1

[yinzhengjie@s101 ~]$ hdfs dfs -cat /wc/part-m-00000
hello world
yinzhengjie hadoop
yinzhengjie hive
yinzhengjie hbase
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ hdfs dfs -cat /wc/part-m-00000

3>. View the data in the HDFS WebUI
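
The original post shows a screenshot of the WebUI here. If the WebUI is not handy, the same check can be done with the HDFS shell (equivalent to the cat command above):

hdfs dfs -ls /wc                 # list the files Sqoop wrote under the target directory
hdfs dfs -cat /wc/part-m-00000   # print the imported rows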

4>. Other useful parameters

    --table <table-name>             //MySQL table to import
    -m <n>                           //number of mappers
    --target-dir <dir>               //target directory in HDFS
    --fields-terminated-by <char>    //field (column) separator
    --lines-terminated-by <char>     //line (row) separator
    --append                         //append to data already in HDFS
    --as-avrodatafile                //store the data as Avro data files
    --as-parquetfile                 //store the data as Parquet files
    --as-sequencefile                //store the data as SequenceFiles
    --as-textfile                    //store the data as plain text (default)
    --columns <col,col,col...>       //MySQL columns to import
    --compression-codec <codec>      //compression codec to use
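
As a hedged illustration of how several of these flags combine, the sketch below imports only the id and string columns of the word table, appends to the existing /wc directory, and gzip-compresses the output (the column names match the table used above; the codec choice is an assumption for the example):

sqoop import \
  --connect jdbc:mysql://s101/yinzhengjie \
  --username root -P \
  --table word \
  --columns id,string \
  --fields-terminated-by '\t' \
  --target-dir /wc \
  --append \
  --compression-codec org.apache.hadoop.io.compress.GzipCodec \
  -m 1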

IV. Importing MySQL Data into Hive with sqoop (HDFS, YARN, MySQL, and related services must be running; Hive itself does not need to be started manually, because the import starts it as needed)

1>. Modify sqoop-env.sh

[yinzhengjie@s101 ~]$ tail -2 /soft/sqoop/conf/sqoop-env.sh
#ADD BY YINZHENGJIE
export HIVE_CONF_DIR=/soft/hive/conf
[yinzhengjie@s101 ~]$

2>. Edit the environment variables

[yinzhengjie@s101 ~]$ sudo vi /etc/profile
[sudo] password for yinzhengjie:
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ tail -2 /etc/profile
#ADD sqoop import hive
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ source /etc/profile
[yinzhengjie@s101 ~]$

3>. Silence the security-related warning messages (optional; leaving them does not affect the test results)

4>. Import data into Hive

: jdbc:hive2://s101:10000> show tables;
+---------------+--+
| tab_name |
+---------------+--+
| pv |
| user_orc |
| user_parquet |
| user_rc |
| user_seq |
| user_text |
| users |
+---------------+--+
rows selected (0.061 seconds)
: jdbc:hive2://s101:10000>

Before the import (0: jdbc:hive2://s101:10000> show tables;)

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table wc -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/a904d79d3e86841540489a5459400e8b/word.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/a904d79d3e86841540489a5459400e8b/word.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of word
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0005
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0005
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0005/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0005
// :: INFO mapreduce.Job: Job job_1528967628934_0005 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0005 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 58.4206 seconds (1.2667 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
// :: INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table word
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO conf.HiveConf: Found configuration file file:/soft/hive/conf/hive-site.xml Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO SessionState:
Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: Added admin role in metastore
// :: INFO metastore.HiveMetaStore: Added public role in metastore
// :: INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO metastore.HiveMetaStore: : get_all_functions
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_functions
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.ParseJson
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.ParseJson. Ignore and continue.
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO metadata.Hive: Registering function todate cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function todate:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Created local directory: /home/yinzhengjie/yinzhengjie/ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/ab78aeaa-274a-4ed6-bff0-ffa488a2c8df/_tmp_space.db
// :: INFO conf.HiveConf: Using the default value passed in for log id: ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Updating thread name to ab78aeaa-274a-4ed6-bff0-ffa488a2c8df main
// :: INFO conf.HiveConf: Using the default value passed in for log id: ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614030207_d95714d9-84da-405e-97b6-d9f36436e2f2): CREATE TABLE `yinzhengjie`.`wc` ( `id` INT, `string` STRING) COMMENT 'Imported by sqoop on 2018/06/14 03:01:43' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO parse.CalcitePlanner: Starting Semantic Analysis
// :: INFO parse.CalcitePlanner: Creating table yinzhengjie.wc position=
// :: INFO metastore.HiveMetaStore: : get_database: yinzhengjie
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_database: yinzhengjie
// :: INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=ab78aeaa-274a-4ed6-bff0-ffa488a2c8df, clientType=HIVECLI]
// :: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
// :: INFO hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614030207_d95714d9-84da-405e-97b6-d9f36436e2f2); Time taken: 2.447 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614030207_d95714d9-84da-405e-97b6-d9f36436e2f2): CREATE TABLE `yinzhengjie`.`wc` ( `id` INT, `string` STRING) COMMENT 'Imported by sqoop on 2018/06/14 03:01:43' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO exec.DDLTask: creating table yinzhengjie.wc on null
// :: INFO metastore.HiveMetaStore: : create_table: Table(tableName:wc, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:string, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=, numRows=, rawDataSize=, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=, comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=create_table: Table(tableName:wc, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:string, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=, numRows=, rawDataSize=, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=, comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/wc
// :: INFO metadata.Hive: Dumping metastore api call timing information for : execution phase
// :: INFO metadata.Hive: Total time spent in this metastore function was greater than 1000ms : createTable_(Table, )=
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614030207_d95714d9-84da-405e-97b6-d9f36436e2f2); Time taken: 1.668 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 4.152 seconds
// :: INFO CliDriver: Time taken: 4.152 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Updating thread name to ab78aeaa-274a-4ed6-bff0-ffa488a2c8df main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614030211_2b878339-754f-4d7d-985d-fe9b86f5ec88):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/word' INTO TABLE `yinzhengjie`.`wc`
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=wc
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614030211_2b878339-754f-4d7d-985d-fe9b86f5ec88); Time taken: 0.987 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614030211_2b878339-754f-4d7d-985d-fe9b86f5ec88):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/word' INTO TABLE `yinzhengjie`.`wc`
// :: INFO ql.Driver: Starting task [Stage-:MOVE] in serial mode
Loading data to table yinzhengjie.wc
// :: INFO exec.Task: Loading data to table yinzhengjie.wc from hdfs://mycluster/user/yinzhengjie/word
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=wc
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=wc
// :: ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
// :: INFO metastore.HiveMetaStore: : alter_table: db=yinzhengjie tbl=wc newtbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_table: db=yinzhengjie tbl=wc newtbl=wc
// :: INFO ql.Driver: Starting task [Stage-:STATS] in serial mode
// :: INFO exec.StatsTask: Executing stats task
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=wc
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=wc
// :: INFO metastore.HiveMetaStore: : alter_table: db=yinzhengjie tbl=wc newtbl=wc
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_table: db=yinzhengjie tbl=wc newtbl=wc
// :: INFO hive.log: Updating table stats fast for wc
// :: INFO hive.log: Updated size of table wc to
// :: INFO exec.StatsTask: Table yinzhengjie.wc stats: [numFiles=, numRows=, totalSize=, rawDataSize=]
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614030211_2b878339-754f-4d7d-985d-fe9b86f5ec88); Time taken: 0.858 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 1.847 seconds
// :: INFO CliDriver: Time taken: 1.847 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: ab78aeaa-274a-4ed6-bff0-ffa488a2c8df
// :: INFO session.SessionState: Deleted directory: /tmp/hive/yinzhengjie/ab78aeaa-274a-4ed6-bff0-ffa488a2c8df on fs with scheme hdfs
// :: INFO session.SessionState: Deleted directory: /home/yinzhengjie/yinzhengjie/ab78aeaa-274a-4ed6-bff0-ffa488a2c8df on fs with scheme file
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO hive.HiveImport: Hive import complete.
// :: INFO hive.HiveImport: Export directory is contains the _SUCCESS file only, removing the directory.
[yinzhengjie@s101 ~]$

Importing the data into Hive ([yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table wc -m 1)

: jdbc:hive2://s101:10000> show tables;
+---------------+--+
| tab_name |
+---------------+--+
| pv |
| user_orc |
| user_parquet |
| user_rc |
| user_seq |
| user_text |
| users |
| wc |
+---------------+--+
rows selected (0.19 seconds)
: jdbc:hive2://s101:10000> select * from wc;
+--------+---------------------+--+
| wc.id | wc.string |
+--------+---------------------+--+
| | hello world |
| | yinzhengjie hadoop |
| | yinzhengjie hive |
| | yinzhengjie hbase |
+--------+---------------------+--+
rows selected (2.717 seconds)
: jdbc:hive2://s101:10000>

After the import (0: jdbc:hive2://s101:10000> select * from wc;)

  Note: during the import into Hive you have probably noticed one thing: the data is first staged temporarily on HDFS, and only after the MapReduce job finishes is it loaded into Hive; once the load completes, the temporary HDFS files are deleted automatically. If you then re-import the same table into the same Hive database, an exception is thrown saying the table already exists (see the screenshot below). To resolve this, you must not only drop the table in Hive but also delete the temporary files on HDFS; otherwise running the command again will raise the same exception.
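
A hedged sketch of the cleanup this implies before re-running the same import (the table name and staging path follow the example above; adjust them to your own job):

# drop the Hive table created by the previous import
hive -e "DROP TABLE IF EXISTS yinzhengjie.wc;"
# remove the leftover staging directory under the HDFS user home, if it is still there
hdfs dfs -rm -r -f /user/yinzhengjie/word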

5>. Importing MySQL data into Hive with Sqoop does not require the Hive service to be running; the verification is shown below (we need to write the ".hiverc" configuration file)
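
The contents of that .hiverc are not preserved in this capture. As background, ~/.hiverc is simply a list of HiveQL statements that the Hive CLI runs on startup, so a minimal placeholder might look like the following (both statements are illustrative, not the original file):

cat > ~/.hiverc <<'EOF'
-- statements here are executed every time the Hive CLI starts
use yinzhengjie;
set hive.cli.print.header=true;
EOF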

6>. Other commonly used parameters

  --create-hive-table                          //create the Hive table during the import; the job fails if the target table already exists
  --external-table-dir <hdfs path>             //HDFS path of the external table
  --hive-database <database-name>              //Hive database to import into
  --hive-import                                //import into a Hive table
  --hive-partition-key <partition-key>         //partition key to use
  --hive-partition-value <partition-value>     //partition value to use
  --hive-table <table-name>                    //Hive table to import into

7>. Use sqoop to create only the Hive table

[yinzhengjie@s101 ~]$ sqoop create-hive-table --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t'  --hive-database yinzhengjie --hive-table test1 --hive-partition-key province   --hive-partition-value beijing
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO conf.HiveConf: Found configuration file file:/soft/hive/conf/hive-site.xml Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO SessionState:
Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: Added admin role in metastore
// :: INFO metastore.HiveMetaStore: Added public role in metastore
// :: INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO metastore.HiveMetaStore: : get_all_functions
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_functions
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.ParseJson
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.ParseJson. Ignore and continue.
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO metadata.Hive: Registering function todate cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function todate:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Created local directory: /home/yinzhengjie/yinzhengjie/96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/96f23e30-9bca---beb3260c29c0/_tmp_space.db
// :: INFO conf.HiveConf: Using the default value passed in for log id: 96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Updating thread name to 96f23e30-9bca---beb3260c29c0 main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 96f23e30-9bca---beb3260c29c0
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614063244_016a0a4f-e1c4--907f-6990016f3010): show databases
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
// :: INFO exec.ListSinkOperator: Initializing operator LIST_SINK[]
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614063244_016a0a4f-e1c4--907f-6990016f3010); Time taken: 1.491 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614063244_016a0a4f-e1c4--907f-6990016f3010): show databases
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO metastore.HiveMetaStore: : get_all_databases
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_databases
// :: INFO exec.DDLTask: results :
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614063244_016a0a4f-e1c4--907f-6990016f3010); Time taken: 0.036 seconds
// :: INFO ql.Driver: OK
// :: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
// :: INFO mapred.FileInputFormat: Total input paths to process :
default
yinzhengjie
// :: INFO CliDriver: Time taken: 1.534 seconds, Fetched: row(s)
// :: INFO conf.HiveConf: Using the default value passed in for log id: 96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Updating thread name to 96f23e30-9bca---beb3260c29c0 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614063246_b8444d13-d69e-40c4-b717-7917e2bce6af): CREATE TABLE IF NOT EXISTS `yinzhengjie`.`test1` ( `id` INT, `string` STRING) COMMENT 'Imported by sqoop on 2018/06/14 06:32:35' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO parse.CalcitePlanner: Starting Semantic Analysis
// :: INFO parse.CalcitePlanner: Creating table yinzhengjie.test1 position=
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test1
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test1
// :: INFO metastore.HiveMetaStore: : get_database: yinzhengjie
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_database: yinzhengjie
// :: INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=96f23e30-9bca---beb3260c29c0, clientType=HIVECLI]
// :: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
// :: INFO hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614063246_b8444d13-d69e-40c4-b717-7917e2bce6af); Time taken: 0.176 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614063246_b8444d13-d69e-40c4-b717-7917e2bce6af): CREATE TABLE IF NOT EXISTS `yinzhengjie`.`test1` ( `id` INT, `string` STRING) COMMENT 'Imported by sqoop on 2018/06/14 06:32:35' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO exec.DDLTask: creating table yinzhengjie.test1 on null
// :: INFO metastore.HiveMetaStore: : create_table: Table(tableName:test1, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:string, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=create_table: Table(tableName:test1, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:string, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test1
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614063246_b8444d13-d69e-40c4-b717-7917e2bce6af); Time taken: 0.334 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 0.511 seconds
// :: INFO CliDriver: Time taken: 0.511 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 96f23e30-9bca---beb3260c29c0
// :: INFO session.SessionState: Deleted directory: /tmp/hive/yinzhengjie/96f23e30-9bca---beb3260c29c0 on fs with scheme hdfs
// :: INFO session.SessionState: Deleted directory: /home/yinzhengjie/yinzhengjie/96f23e30-9bca---beb3260c29c0 on fs with scheme file
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO hive.HiveImport: Hive import complete.
[yinzhengjie@s101 ~]$ echo $?
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop create-hive-table --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --hive-database yinzhengjie --hive-table test1 --hive-partition-key province --hive-partition-value beijing
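The create-hive-table tool above only defines the (empty) partitioned table test1 in Hive from the schema of the MySQL word table; no data is moved. A minimal way to confirm the table exists, assuming the hive CLI is on PATH and points at the same metastore (output omitted):

[yinzhengjie@s101 ~]$ hive -e "USE yinzhengjie; SHOW TABLES; DESCRIBE test1;"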

8>. Importing into a Hive partitioned table with Sqoop (the Hive table is created automatically during the import; partition import is static only and supports a single partition value, which saves the step of creating the partition directory by hand)
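On the Hive side, --hive-partition-key province together with --hive-partition-value beijing amounts to a static, single-value partition; the statements Sqoop issues (they appear verbatim in the job log below) boil down to roughly:

CREATE TABLE `yinzhengjie`.`test2` (`id` INT, `string` STRING)
PARTITIONED BY (province STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012'
STORED AS TEXTFILE;

LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/word'
INTO TABLE `yinzhengjie`.`test2` PARTITION (province='beijing');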

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie  --username root -P --table word --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test2 --hive-partition-key province   --hive-partition-value beijing  -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/2ff01e5a4aa9de071eea44aba493fc22/word.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/2ff01e5a4aa9de071eea44aba493fc22/word.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of word
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: ERROR tool.ImportTool: Import failed: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://mycluster/user/yinzhengjie/word already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:)
at org.apache.sqoop.Sqoop.run(Sqoop.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
at org.apache.sqoop.Sqoop.main(Sqoop.java:)
[yinzhengjie@s101 ~]$ hdfs dfs -rm -r /user/yinzhengjie/word
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = minutes, Emptier interval = minutes.
Deleted /user/yinzhengjie/word
[yinzhengjie@s101 ~]$
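The first attempt failed with FileAlreadyExistsException because Sqoop first imports into the staging directory /user/yinzhengjie/word on HDFS, and that directory was left behind by an earlier run. Removing it by hand, as above, works; an alternative (a sketch, not taken from the original session) is to let Sqoop clear it with --delete-target-dir:

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --delete-target-dir --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test2 --hive-partition-key province --hive-partition-value beijing -m 1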
[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test2 --hive-partition-key province --hive-partition-value beijing -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/c8d9be59546846cfb07ab171c91ca0ac/word.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/c8d9be59546846cfb07ab171c91ca0ac/word.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of word
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0014
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0014
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0014/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0014
// :: INFO mapreduce.Job: Job job_1528967628934_0014 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0014 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 38.8416 seconds (1.9052 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
// :: INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table word
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO conf.HiveConf: Found configuration file file:/soft/hive/conf/hive-site.xml Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO SessionState:
Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: Added admin role in metastore
// :: INFO metastore.HiveMetaStore: Added public role in metastore
// :: INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO metastore.HiveMetaStore: : get_all_functions
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_functions
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.ParseJson
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.ParseJson. Ignore and continue.
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO metadata.Hive: Registering function todate cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function todate:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Created local directory: /home/yinzhengjie/yinzhengjie/520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/520c334d--4ac4-aa9e-32b6afc099a2/_tmp_space.db
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Updating thread name to 520c334d--4ac4-aa9e-32b6afc099a2 main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614065157_e829671d---a232-16ed5e196e1e): show databases
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
// :: INFO exec.ListSinkOperator: Initializing operator LIST_SINK[]
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614065157_e829671d---a232-16ed5e196e1e); Time taken: 1.945 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614065157_e829671d---a232-16ed5e196e1e): show databases
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO metastore.HiveMetaStore: : get_all_databases
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_databases
// :: INFO exec.DDLTask: results :
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614065157_e829671d---a232-16ed5e196e1e); Time taken: 0.805 seconds
// :: INFO ql.Driver: OK
// :: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
// :: INFO mapred.FileInputFormat: Total input paths to process :
default
yinzhengjie
// :: INFO CliDriver: Time taken: 2.773 seconds, Fetched: row(s)
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Updating thread name to 520c334d--4ac4-aa9e-32b6afc099a2 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614065200_0b542503-b168-4e41-9e3d-2122d1a47de6): CREATE TABLE `yinzhengjie`.`test2` ( `id` INT, `string` STRING) COMMENT 'Imported by sqoop on 2018/06/14 06:51:35' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO parse.CalcitePlanner: Starting Semantic Analysis
// :: INFO parse.CalcitePlanner: Creating table yinzhengjie.test2 position=
// :: INFO metastore.HiveMetaStore: : get_database: yinzhengjie
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_database: yinzhengjie
// :: INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=520c334d--4ac4-aa9e-32b6afc099a2, clientType=HIVECLI]
// :: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
// :: INFO hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614065200_0b542503-b168-4e41-9e3d-2122d1a47de6); Time taken: 0.828 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614065200_0b542503-b168-4e41-9e3d-2122d1a47de6): CREATE TABLE `yinzhengjie`.`test2` ( `id` INT, `string` STRING) COMMENT 'Imported by sqoop on 2018/06/14 06:51:35' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO exec.DDLTask: creating table yinzhengjie.test2 on null
// :: INFO metastore.HiveMetaStore: : create_table: Table(tableName:test2, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:string, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=create_table: Table(tableName:test2, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:string, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test2
// :: INFO metadata.Hive: Dumping metastore api call timing information for : execution phase
// :: INFO metadata.Hive: Total time spent in this metastore function was greater than 1000ms : createTable_(Table, )=
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614065200_0b542503-b168-4e41-9e3d-2122d1a47de6); Time taken: 1.476 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 2.304 seconds
// :: INFO CliDriver: Time taken: 2.304 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Updating thread name to 520c334d--4ac4-aa9e-32b6afc099a2 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614065203_2cb29172-e5bb-43bd-abf6-4d33c3fffb48):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/word' INTO TABLE `yinzhengjie`.`test2` PARTITION (province='beijing')
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614065203_2cb29172-e5bb-43bd-abf6-4d33c3fffb48); Time taken: 0.931 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614065203_2cb29172-e5bb-43bd-abf6-4d33c3fffb48):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/word' INTO TABLE `yinzhengjie`.`test2` PARTITION (province='beijing')
// :: INFO ql.Driver: Starting task [Stage-:MOVE] in serial mode
Loading data to table yinzhengjie.test2 partition (province=beijing)
// :: INFO exec.Task: Loading data to table yinzhengjie.test2 partition (province=beijing) from hdfs://mycluster/user/yinzhengjie/word
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO exec.MoveTask: Partition is: {province=beijing}
// :: INFO metastore.HiveMetaStore: : partition_name_has_valid_characters
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=partition_name_has_valid_characters
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test2/province=beijing
// :: ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
// :: INFO metastore.HiveMetaStore: : add_partition : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=add_partition : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO ql.Driver: Starting task [Stage-:STATS] in serial mode
// :: INFO exec.StatsTask: Executing stats task
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test2[beijing]
// :: INFO exec.StatsTask: Partition yinzhengjie.test2{province=beijing} stats: [numFiles=, numRows=, totalSize=, rawDataSize=]
// :: INFO metastore.HiveMetaStore: : alter_partitions : db=yinzhengjie tbl=test2
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_partitions : db=yinzhengjie tbl=test2
// :: INFO metastore.HiveMetaStore: New partition values:[beijing]
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614065203_2cb29172-e5bb-43bd-abf6-4d33c3fffb48); Time taken: 1.324 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 2.255 seconds
// :: INFO CliDriver: Time taken: 2.255 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 520c334d--4ac4-aa9e-32b6afc099a2
// :: INFO session.SessionState: Deleted directory: /tmp/hive/yinzhengjie/520c334d--4ac4-aa9e-32b6afc099a2 on fs with scheme hdfs
// :: INFO session.SessionState: Deleted directory: /home/yinzhengjie/yinzhengjie/520c334d--4ac4-aa9e-32b6afc099a2 on fs with scheme file
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO hive.HiveImport: Hive import complete.
[yinzhengjie@s101 ~]$ echo $?
[yinzhengjie@s101 ~]$

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test2 --hive-partition-key province --hive-partition-value beijing -m 1
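Once the import finishes, the rows from word should sit under the beijing partition of test2. A quick check, assuming the hive CLI uses the same metastore (output omitted):

[yinzhengjie@s101 ~]$ hive -e "SHOW PARTITIONS yinzhengjie.test2; SELECT * FROM yinzhengjie.test2 WHERE province='beijing' LIMIT 10;"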

9>. Sqoop incremental import

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table user --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test3 --hive-partition-key province --hive-partition-value beijing -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/094e32eb529484850a3218f3ce12dff2/user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/094e32eb529484850a3218f3ce12dff2/user.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of user
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0016
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0016
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0016/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0016
// :: INFO mapreduce.Job: Job job_1528967628934_0016 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0016 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 43.2821 seconds (1.34 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
// :: INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table user
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user` AS t LIMIT
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO conf.HiveConf: Found configuration file file:/soft/hive/conf/hive-site.xml Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO SessionState:
Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: Added admin role in metastore
// :: INFO metastore.HiveMetaStore: Added public role in metastore
// :: INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO metastore.HiveMetaStore: : get_all_functions
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_functions
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.ParseJson
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.ParseJson. Ignore and continue.
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO metadata.Hive: Registering function todate cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function todate:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Created local directory: /home/yinzhengjie/yinzhengjie/b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/b32fc742-42a1-4fe0-b9be-2892ba183f7f/_tmp_space.db
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Updating thread name to b32fc742-42a1-4fe0-b9be-2892ba183f7f main
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614072622_4c4ae3f2-b869--a4dd-d4e8956b508c): show databases
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
// :: INFO exec.ListSinkOperator: Initializing operator LIST_SINK[]
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614072622_4c4ae3f2-b869--a4dd-d4e8956b508c); Time taken: 1.844 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614072622_4c4ae3f2-b869--a4dd-d4e8956b508c): show databases
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO metastore.HiveMetaStore: : get_all_databases
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_databases
// :: INFO exec.DDLTask: results :
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614072622_4c4ae3f2-b869--a4dd-d4e8956b508c); Time taken: 0.148 seconds
// :: INFO ql.Driver: OK
// :: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
// :: INFO mapred.FileInputFormat: Total input paths to process :
default
yinzhengjie
// :: INFO CliDriver: Time taken: 2.028 seconds, Fetched: row(s)
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Updating thread name to b32fc742-42a1-4fe0-b9be-2892ba183f7f main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614072624_7058afb1-ac99--923a-5c47a4671cf1): CREATE TABLE `yinzhengjie`.`test3` ( `id` INT, `name` STRING, `age` INT) COMMENT 'Imported by sqoop on 2018/06/14 07:26:07' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO parse.CalcitePlanner: Starting Semantic Analysis
// :: INFO parse.CalcitePlanner: Creating table yinzhengjie.test3 position=
// :: INFO metastore.HiveMetaStore: : get_database: yinzhengjie
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_database: yinzhengjie
// :: INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=b32fc742-42a1-4fe0-b9be-2892ba183f7f, clientType=HIVECLI]
// :: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
// :: INFO hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614072624_7058afb1-ac99--923a-5c47a4671cf1); Time taken: 0.731 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614072624_7058afb1-ac99--923a-5c47a4671cf1): CREATE TABLE `yinzhengjie`.`test3` ( `id` INT, `name` STRING, `age` INT) COMMENT 'Imported by sqoop on 2018/06/14 07:26:07' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO exec.DDLTask: creating table yinzhengjie.test3 on null
// :: INFO metastore.HiveMetaStore: : create_table: Table(tableName:test3, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:age, type:int, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=create_table: Table(tableName:test3, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:age, type:int, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test3
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614072624_7058afb1-ac99--923a-5c47a4671cf1); Time taken: 0.922 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 1.653 seconds
// :: INFO CliDriver: Time taken: 1.653 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Updating thread name to b32fc742-42a1-4fe0-b9be-2892ba183f7f main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614072626_ed96bdc2-701f-456e-9c4b-d0ce261fed96):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/user' INTO TABLE `yinzhengjie`.`test3` PARTITION (province='beijing')
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614072626_ed96bdc2-701f-456e-9c4b-d0ce261fed96); Time taken: 1.265 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614072626_ed96bdc2-701f-456e-9c4b-d0ce261fed96):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/user' INTO TABLE `yinzhengjie`.`test3` PARTITION (province='beijing')
// :: INFO ql.Driver: Starting task [Stage-:MOVE] in serial mode
Loading data to table yinzhengjie.test3 partition (province=beijing)
// :: INFO exec.Task: Loading data to table yinzhengjie.test3 partition (province=beijing) from hdfs://mycluster/user/yinzhengjie/user
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO exec.MoveTask: Partition is: {province=beijing}
// :: INFO metastore.HiveMetaStore: : partition_name_has_valid_characters
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=partition_name_has_valid_characters
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test3/province=beijing
// :: ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
// :: INFO metastore.HiveMetaStore: : add_partition : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=add_partition : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO ql.Driver: Starting task [Stage-:STATS] in serial mode
// :: INFO exec.StatsTask: Executing stats task
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO exec.StatsTask: Partition yinzhengjie.test3{province=beijing} stats: [numFiles=, numRows=, totalSize=, rawDataSize=]
// :: INFO metastore.HiveMetaStore: : alter_partitions : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_partitions : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: New partition values:[beijing]
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614072626_ed96bdc2-701f-456e-9c4b-d0ce261fed96); Time taken: 0.926 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 2.192 seconds
// :: INFO CliDriver: Time taken: 2.192 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: b32fc742-42a1-4fe0-b9be-2892ba183f7f
// :: INFO session.SessionState: Deleted directory: /tmp/hive/yinzhengjie/b32fc742-42a1-4fe0-b9be-2892ba183f7f on fs with scheme hdfs
// :: INFO session.SessionState: Deleted directory: /home/yinzhengjie/yinzhengjie/b32fc742-42a1-4fe0-b9be-2892ba183f7f on fs with scheme file
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO hive.HiveImport: Hive import complete.
[yinzhengjie@s101 ~]$

Initial (full) import into the Hive partitioned table (command: [yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table user --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test3 --hive-partition-key province --hive-partition-value beijing -m 1)
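The incremental run below builds on that initial load: --incremental append together with --check-column id and --last-value 3 tells Sqoop to fetch only rows whose id exceeds 3 and append them to the existing data (the log prints the bounding query SELECT MAX(`id`) FROM `user`). Conceptually, the rows pulled correspond to something like the following (a sketch of the bounding logic, not the exact SQL Sqoop generates):

SELECT * FROM `user` WHERE `id` > 3 AND `id` <= (SELECT MAX(`id`) FROM `user`);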

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie  --username root  -P --table user --fields-terminated-by '\t' --hive-import  --hive-database yinzhengjie --hive-table test3 --hive-partition-key province   --hive-partition-value beijing --check-column id --last-value  3 --incremental append  -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/18863b9e7c77cfbebe522577912fcd65/user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/18863b9e7c77cfbebe522577912fcd65/user.jar
// :: INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`id`) FROM `user`
// :: INFO tool.ImportTool: Incremental import based on column `id`
// :: INFO tool.ImportTool: Lower bound value:
// :: INFO tool.ImportTool: Upper bound value:
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of user
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0017
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0017
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0017/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0017
// :: INFO mapreduce.Job: Job job_1528967628934_0017 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0017 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 27.3862 seconds (0.7668 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
// :: INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table user
// :: INFO util.AppendUtils: Creating missing output directory - user
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `user` AS t LIMIT
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO conf.HiveConf: Found configuration file file:/soft/hive/conf/hive-site.xml Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO SessionState:
Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: Added admin role in metastore
// :: INFO metastore.HiveMetaStore: Added public role in metastore
// :: INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO metastore.HiveMetaStore: : get_all_functions
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_functions
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.ParseJson
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.ParseJson. Ignore and continue.
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO metadata.Hive: Registering function todate cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function todate:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Created local directory: /home/yinzhengjie/yinzhengjie/8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/8c8d3572-c995--8ab4-8e57b757b141/_tmp_space.db
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Updating thread name to 8c8d3572-c995--8ab4-8e57b757b141 main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614074052_f0cd040e-63ce-4bbf-a10b-6abc6610e295): show databases
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
// :: INFO exec.ListSinkOperator: Initializing operator LIST_SINK[]
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614074052_f0cd040e-63ce-4bbf-a10b-6abc6610e295); Time taken: 1.493 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614074052_f0cd040e-63ce-4bbf-a10b-6abc6610e295): show databases
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO metastore.HiveMetaStore: : get_all_databases
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_databases
// :: INFO exec.DDLTask: results :
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614074052_f0cd040e-63ce-4bbf-a10b-6abc6610e295); Time taken: 0.074 seconds
// :: INFO ql.Driver: OK
// :: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
// :: INFO mapred.FileInputFormat: Total input paths to process :
default
yinzhengjie
// :: INFO CliDriver: Time taken: 1.574 seconds, Fetched: row(s)
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Updating thread name to 8c8d3572-c995--8ab4-8e57b757b141 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614074053_92c70cb8--4bd0-b61b-81e7935456de): CREATE TABLE IF NOT EXISTS `yinzhengjie`.`test3` ( `id` INT, `name` STRING, `age` INT) COMMENT 'Imported by sqoop on 2018/06/14 07:40:39' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO parse.CalcitePlanner: Starting Semantic Analysis
// :: INFO parse.CalcitePlanner: Creating table yinzhengjie.test3 position=
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614074053_92c70cb8--4bd0-b61b-81e7935456de); Time taken: 0.276 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614074053_92c70cb8--4bd0-b61b-81e7935456de): CREATE TABLE IF NOT EXISTS `yinzhengjie`.`test3` ( `id` INT, `name` STRING, `age` INT) COMMENT 'Imported by sqoop on 2018/06/14 07:40:39' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614074053_92c70cb8--4bd0-b61b-81e7935456de); Time taken: 0.014 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 0.292 seconds
// :: INFO CliDriver: Time taken: 0.292 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Updating thread name to 8c8d3572-c995--8ab4-8e57b757b141 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614074054_841a2915-844f-45ae-9ad9-431c5179ee48):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/user' INTO TABLE `yinzhengjie`.`test3` PARTITION (province='beijing')
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=8c8d3572-c995--8ab4-8e57b757b141, clientType=HIVECLI]
// :: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
// :: INFO hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614074054_841a2915-844f-45ae-9ad9-431c5179ee48); Time taken: 0.601 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614074054_841a2915-844f-45ae-9ad9-431c5179ee48):
LOAD DATA INPATH 'hdfs://mycluster/user/yinzhengjie/user' INTO TABLE `yinzhengjie`.`test3` PARTITION (province='beijing')
// :: INFO ql.Driver: Starting task [Stage-:MOVE] in serial mode
Loading data to table yinzhengjie.test3 partition (province=beijing)
// :: INFO exec.Task: Loading data to table yinzhengjie.test3 partition (province=beijing) from hdfs://mycluster/user/yinzhengjie/user
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO exec.MoveTask: Partition is: {province=beijing}
// :: INFO metastore.HiveMetaStore: : partition_name_has_valid_characters
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=partition_name_has_valid_characters
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
// :: INFO metastore.HiveMetaStore: : alter_partition : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_partition : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: New partition values:[beijing]
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO ql.Driver: Starting task [Stage-:STATS] in serial mode
// :: INFO exec.StatsTask: Executing stats task
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test3[beijing]
// :: INFO exec.StatsTask: Partition yinzhengjie.test3{province=beijing} stats: [numFiles=, numRows=, totalSize=, rawDataSize=]
// :: INFO metastore.HiveMetaStore: : alter_partitions : db=yinzhengjie tbl=test3
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_partitions : db=yinzhengjie tbl=test3
// :: INFO metastore.HiveMetaStore: New partition values:[beijing]
// :: WARN hive.log: Updating partition stats fast for: test3
// :: WARN hive.log: Updated size to
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614074054_841a2915-844f-45ae-9ad9-431c5179ee48); Time taken: 0.847 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 1.463 seconds
// :: INFO CliDriver: Time taken: 1.463 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c8d3572-c995--8ab4-8e57b757b141
// :: INFO session.SessionState: Deleted directory: /tmp/hive/yinzhengjie/8c8d3572-c995--8ab4-8e57b757b141 on fs with scheme hdfs
// :: INFO session.SessionState: Deleted directory: /home/yinzhengjie/yinzhengjie/8c8d3572-c995--8ab4-8e57b757b141 on fs with scheme file
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO hive.HiveImport: Hive import complete.
// :: INFO hive.HiveImport: Export directory is empty, removing it.
// :: INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
// :: INFO tool.ImportTool: --incremental append
// :: INFO tool.ImportTool: --check-column id
// :: INFO tool.ImportTool: --last-value
// :: INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[yinzhengjie@s101 ~]$ echo $?
[yinzhengjie@s101 ~]$

Incremental import, with the boundary value for id set to 3 [i.e., rows with id greater than 3 are treated as incremental data; note that the check column should preferably be the primary key, since its uniqueness makes it much easier for the program to decide whether a row is new] ([yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table user --fields-terminated-by '\t' --hive-import --hive-database yinzhengjie --hive-table test3 --hive-partition-key province --hive-partition-value beijing --check-column id --last-value 3 --incremental append -m 1)

Key parameters:
--incremental append    // incremental mode, appending new rows
--check-column id       // the column checked for incremental import; the primary key is usually used here, and it should be unique
--last-value 3          // the boundary value for the incremental import; the 3 is only a comparison value: if id is the primary key, 3 means every row with id greater than 3 is treated as new data
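The ImportTool output above also suggests saving these incremental settings with 'sqoop job --create', so that Sqoop tracks the growing --last-value between runs instead of it being passed by hand. A sketch of what that could look like for this import (the job name user_incr is only an illustrative choice; all other options mirror the command above):

sqoop job --create user_incr -- import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table user --fields-terminated-by '\t' --hive-import --hive-database yinzhengjie --hive-table test3 --hive-partition-key province --hive-partition-value beijing --check-column id --last-value 3 --incremental append -m 1
sqoop job --list             # list the saved jobs
sqoop job --exec user_incr   # re-run the incremental import; the stored last-value is updated after each run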

10>. Sqoop import with a specified query (free-form query import)

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --query 'select a.id, a.name, a.age from user a  where a.id=1 and $CONDITIONS' --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test4 --hive-partition-key province  --hive-partition-value beijing --target-dir /test4 -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: select a.id, a.name, a.age from user a where a.id= and ( = )
// :: INFO manager.SqlManager: Executing SQL statement: select a.id, a.name, a.age from user a where a.id= and ( = )
// :: INFO manager.SqlManager: Executing SQL statement: select a.id, a.name, a.age from user a where a.id= and ( = )
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/a8130a005528b8613c8aebcdf5f8109f/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/a8130a005528b8613c8aebcdf5f8109f/QueryResult.jar
// :: INFO mapreduce.ImportJobBase: Beginning query import.
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0018
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0018
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0018/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0018
// :: INFO mapreduce.Job: Job job_1528967628934_0018 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0018 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 50.7953 seconds (0.3347 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
// :: INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table null
// :: INFO manager.SqlManager: Executing SQL statement: select a.id, a.name, a.age from user a where a.id= and ( = )
// :: INFO manager.SqlManager: Executing SQL statement: select a.id, a.name, a.age from user a where a.id= and ( = )
// :: INFO hive.HiveImport: Loading uploaded data into Hive
// :: INFO conf.HiveConf: Found configuration file file:/soft/hive/conf/hive-site.xml Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO SessionState:
Logging initialized using configuration in jar:file:/soft/apache-hive-2.1.-bin/lib/hive-common-2.1..jar!/hive-log4j2.properties Async: true
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
// :: INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
// :: INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO metastore.HiveMetaStore: Added admin role in metastore
// :: INFO metastore.HiveMetaStore: Added public role in metastore
// :: INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
// :: INFO metastore.HiveMetaStore: : get_all_functions
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_functions
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.ParseJson
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.ParseJson. Ignore and continue.
// :: INFO metadata.Hive: Registering function parsejson cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function parsejson:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO metadata.Hive: Registering function todate cn.org.yinzhengjie.udf.MyUDTF
// :: WARN metadata.Hive: Failed to register persistent function todate:cn.org.yinzhengjie.udf.MyUDTF. Ignore and continue.
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Created local directory: /home/yinzhengjie/yinzhengjie/8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Created HDFS directory: /tmp/hive/yinzhengjie/8c662785-93ad-4dec-af7d-43310f284ec4/_tmp_space.db
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Updating thread name to 8c662785-93ad-4dec-af7d-43310f284ec4 main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614080227_5fcf7963-a968---629da87bb749): show databases
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
// :: INFO exec.ListSinkOperator: Initializing operator LIST_SINK[]
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614080227_5fcf7963-a968---629da87bb749); Time taken: 1.925 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614080227_5fcf7963-a968---629da87bb749): show databases
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO metastore.HiveMetaStore: : get_all_databases
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_all_databases
// :: INFO exec.DDLTask: results :
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614080227_5fcf7963-a968---629da87bb749); Time taken: 0.419 seconds
// :: INFO ql.Driver: OK
// :: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
// :: INFO mapred.FileInputFormat: Total input paths to process :
default
yinzhengjie
// :: INFO CliDriver: Time taken: 2.398 seconds, Fetched: row(s)
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Updating thread name to 8c662785-93ad-4dec-af7d-43310f284ec4 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614080229_34715c5f-cb4d-414c-a173-594b4dd88248): CREATE TABLE `yinzhengjie`.`test4` ( `id` INT, `name` STRING, `age` INT) COMMENT 'Imported by sqoop on 2018/06/14 08:02:11' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO parse.CalcitePlanner: Starting Semantic Analysis
// :: INFO parse.CalcitePlanner: Creating table yinzhengjie.test4 position=
// :: INFO metastore.HiveMetaStore: : get_database: yinzhengjie
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_database: yinzhengjie
// :: INFO sqlstd.SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=8c662785-93ad-4dec-af7d-43310f284ec4, clientType=HIVECLI]
// :: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
// :: INFO hive.metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614080229_34715c5f-cb4d-414c-a173-594b4dd88248); Time taken: 0.774 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614080229_34715c5f-cb4d-414c-a173-594b4dd88248): CREATE TABLE `yinzhengjie`.`test4` ( `id` INT, `name` STRING, `age` INT) COMMENT 'Imported by sqoop on 2018/06/14 08:02:11' PARTITIONED BY (province STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\011' LINES TERMINATED BY '\012' STORED AS TEXTFILE
// :: INFO ql.Driver: Starting task [Stage-:DDL] in serial mode
// :: INFO exec.DDLTask: creating table yinzhengjie.test4 on null
// :: INFO metastore.HiveMetaStore: : create_table: Table(tableName:test4, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:age, type:int, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=create_table: Table(tableName:test4, dbName:yinzhengjie, owner:yinzhengjie, createTime:, lastAccessTime:, retention:, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null), FieldSchema(name:name, type:string, comment:null), FieldSchema(name:age, type:int, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , line.delim=
, field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[FieldSchema(name:province, type:string, comment:null)], parameters:{comment=Imported by sqoop on // ::}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{yinzhengjie=[PrivilegeGrantInfo(privilege:INSERT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:SELECT, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:UPDATE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true), PrivilegeGrantInfo(privilege:DELETE, createTime:-, grantor:yinzhengjie, grantorType:USER, grantOption:true)]}, groupPrivileges:null, rolePrivileges:null), temporary:false)
// :: INFO metastore.HiveMetaStore: : Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
// :: INFO metastore.ObjectStore: ObjectStore, initialize called
// :: INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
// :: INFO metastore.ObjectStore: Initialized ObjectStore
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test4
// :: INFO metadata.Hive: Dumping metastore api call timing information for : execution phase
// :: INFO metadata.Hive: Total time spent in this metastore function was greater than 1000ms : createTable_(Table, )=
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614080229_34715c5f-cb4d-414c-a173-594b4dd88248); Time taken: 1.427 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 2.202 seconds
// :: INFO CliDriver: Time taken: 2.202 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Updating thread name to 8c662785-93ad-4dec-af7d-43310f284ec4 main
// :: INFO ql.Driver: Compiling command(queryId=yinzhengjie_20180614080232_c7148c60-e51c-4a5d-85e0-11727ef0e8bc):
LOAD DATA INPATH 'hdfs://mycluster/test4' INTO TABLE `yinzhengjie`.`test4` PARTITION (province='beijing')
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO ql.Driver: Semantic Analysis Completed
// :: INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
// :: INFO ql.Driver: Completed compiling command(queryId=yinzhengjie_20180614080232_c7148c60-e51c-4a5d-85e0-11727ef0e8bc); Time taken: 0.813 seconds
// :: INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
// :: INFO ql.Driver: Executing command(queryId=yinzhengjie_20180614080232_c7148c60-e51c-4a5d-85e0-11727ef0e8bc):
LOAD DATA INPATH 'hdfs://mycluster/test4' INTO TABLE `yinzhengjie`.`test4` PARTITION (province='beijing')
// :: INFO ql.Driver: Starting task [Stage-:MOVE] in serial mode
Loading data to table yinzhengjie.test4 partition (province=beijing)
// :: INFO exec.Task: Loading data to table yinzhengjie.test4 partition (province=beijing) from hdfs://mycluster/test4
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO exec.MoveTask: Partition is: {province=beijing}
// :: INFO metastore.HiveMetaStore: : partition_name_has_valid_characters
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=partition_name_has_valid_characters
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO common.FileUtils: Creating directory if it doesn't exist: hdfs://mycluster/user/hive/warehouse/yinzhengjie.db/test4/province=beijing
// :: ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
// :: INFO metastore.HiveMetaStore: : add_partition : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=add_partition : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO ql.Driver: Starting task [Stage-:STATS] in serial mode
// :: INFO exec.StatsTask: Executing stats task
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: : get_table : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_table : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: : get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=get_partition_with_auth : db=yinzhengjie tbl=test4[beijing]
// :: INFO exec.StatsTask: Partition yinzhengjie.test4{province=beijing} stats: [numFiles=, numRows=, totalSize=, rawDataSize=]
// :: INFO metastore.HiveMetaStore: : alter_partitions : db=yinzhengjie tbl=test4
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=alter_partitions : db=yinzhengjie tbl=test4
// :: INFO metastore.HiveMetaStore: New partition values:[beijing]
// :: INFO ql.Driver: Completed executing command(queryId=yinzhengjie_20180614080232_c7148c60-e51c-4a5d-85e0-11727ef0e8bc); Time taken: 0.886 seconds
OK
// :: INFO ql.Driver: OK
Time taken: 1.701 seconds
// :: INFO CliDriver: Time taken: 1.701 seconds
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Resetting thread name to main
// :: INFO conf.HiveConf: Using the default value passed in for log id: 8c662785-93ad-4dec-af7d-43310f284ec4
// :: INFO session.SessionState: Deleted directory: /tmp/hive/yinzhengjie/8c662785-93ad-4dec-af7d-43310f284ec4 on fs with scheme hdfs
// :: INFO session.SessionState: Deleted directory: /home/yinzhengjie/yinzhengjie/8c662785-93ad-4dec-af7d-43310f284ec4 on fs with scheme file
// :: INFO metastore.HiveMetaStore: : Cleaning up thread local RawStore...
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Cleaning up thread local RawStore...
// :: INFO metastore.HiveMetaStore: : Done cleaning up thread local RawStore
// :: INFO HiveMetaStore.audit: ugi=yinzhengjie ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
// :: INFO hive.HiveImport: Hive import complete.
[yinzhengjie@s101 ~]$ echo $?
[yinzhengjie@s101 ~]$

Importing the result set of a query into a Hive table ([yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --query 'select a.id, a.name, a.age from user a where a.id=1 and $CONDITIONS' --fields-terminated-by '\t' --hive-import --create-hive-table --hive-database yinzhengjie --hive-table test4 --hive-partition-key province --hive-partition-value beijing --target-dir /test4 -m 1)

Note:
1>. The query must be wrapped in single quotes and must include $CONDITIONS in its WHERE clause (append "and $CONDITIONS", or use "where $CONDITIONS" if there is no other condition).
2>. --target-dir    // specifies the directory for the intermediate data produced by the MapReduce job; this data is then loaded into the Hive table
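When a free-form query import is run with more than one mapper, Sqoop additionally requires --split-by so that it can replace $CONDITIONS with a different id range in each mapper's copy of the query. A sketch under that assumption (the 4-mapper setting and the /test4_parallel target directory are illustrative only):

sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --query 'select a.id, a.name, a.age from user a where $CONDITIONS' --split-by a.id --fields-terminated-by '\t' --target-dir /test4_parallel -m 4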

五. Importing MySQL data into HBase with Sqoop (HDFS, YARN, MySQL, HBase and other related services must be running)

1>. Importing data into HBase

[yinzhengjie@s101 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2., rUnknown, Mon May :: CDT
hbase(main)::> list
TABLE
SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.MUTEX
SYSTEM.SEQUENCE
SYSTEM.STATS
YINZHENGJIE.T1
ns1:calllog
ns1:observer
ns1:t1
yinzhengjie:WordCount
yinzhengjie:WordCount2
yinzhengjie:WordCount3
yinzhengjie:t1
yinzhengjie:test
row(s) in 0.3720 seconds
=> ["SYSTEM.CATALOG", "SYSTEM.FUNCTION", "SYSTEM.MUTEX", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "YINZHENGJIE.T1", "ns1:calllog", "ns1:observer", "ns1:t1", "yinzhengjie:WordCount", "yinzhengjie:WordCount2", "yinzhengjie:WordCount3", "yinzhengjie:t1", "yinzhengjie:test"]
hbase(main)::>

Before importing the data (hbase(main):001:0> list)
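Once the import command below completes, the rows can be checked from the HBase shell. A minimal verification sketch, assuming the yinzhengjie:wc table and the f1 column family created by that command:

hbase shell
scan 'yinzhengjie:wc', {LIMIT => 5}   # show a few imported rows
count 'yinzhengjie:wc'                # total row count in the target table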

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --hbase-create-table --hbase-table yinzhengjie:wc --hbase-row-key  id  --column-family f1  -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `word` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/b502c5c084cf744b05c1dfec13590b2c/word.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/b502c5c084cf744b05c1dfec13590b2c/word.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of word
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3c782d8e connecting to ZooKeeper ensemble=s102:,s103:,s104:
// :: INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.-, built on // : GMT
// :: INFO zookeeper.ZooKeeper: Client environment:host.name=s101
// :: INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_131
// :: INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
// :: INFO zookeeper.ZooKeeper: Client environment:java.home=/soft/jdk1..0_131/jre
// :: INFO zookeeper.ZooKeeper: Client environment:java.class.path=/soft/hadoop-2.7./etc/hadoop:/soft/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.-.jar:/soft/hadoop/share/hadoop/common/lib/jaxb-api-2.2..jar:/soft/hadoop/share/hadoop/common/lib/stax-api-1.0-.jar:/soft/hadoop/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9..jar:/soft/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9..jar:/soft/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9..jar:/soft/hadoop/share/hadoop/common/lib/jackson-xc-1.9..jar:/soft/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop/share/hadoop/common/lib/log4j-1.2..jar:/soft/hadoop/share/hadoop/common/lib/jets3t-0.9..jar:/soft/hadoop/share/hadoop/common/lib/httpclient-4.2..jar:/soft/hadoop/share/hadoop/common/lib/httpcore-4.2..jar:/soft/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop/share/hadoop/common/lib/commons-beanutils-1.7..jar:/soft/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8..jar:/soft/hadoop/share/hadoop/common/lib/slf4j-api-1.7..jar:/soft/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7..jar:/soft/hadoop/share/hadoop/common/lib/avro-1.7..jar:/soft/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop/share/hadoop/common/lib/commons-compress-1.4..jar:/soft/hadoop/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop/share/hadoop/common/lib/protobuf-java-2.5..jar:/soft/hadoop/share/hadoop/common/lib/gson-2.2..jar:/soft/hadoop/share/hadoop/common/lib/hadoop-auth-2.7..jar:/soft/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.-M15.jar:/soft/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.-M15.jar:/soft/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.-M20.jar:/soft/hadoop/share/hadoop/common/lib/api-util-1.0.-M20.jar:/soft/hadoop/share/hadoop/common/lib/zookeeper-3.4..jar:/soft/hadoop/share/hadoop/common/lib/netty-3.6..Final.jar:/soft/hadoop/share/hadoop/common/lib/curator-framework-2.7..jar:/soft/hadoop/share/hadoop/common/lib/curator-client-2.7..jar:/soft/hadoop/share/hadoop/common/lib/jsch-0.1..jar:/soft/hadoop/share/hadoop/common/lib/curator-recipes-2.7..jar:/soft/hadoop/share/hadoop/common/lib/htrace-core-3.1.-incubating.jar:/soft/hadoop/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop/share/hadoop/common/lib/mockito-all-1.8..jar:/soft/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7..jar:/soft/hadoop/share/hadoop/common/lib/guava-11.0..jar:/soft/hadoop/share/hadoop/common/lib/jsr305-3.0..jar:/soft/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop/share/hadoop/common/lib/commons-math3-3.1..jar:/soft/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop/share/hadoop/common/lib/commons-logging-1.1..jar:/soft/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop/share/hadoop/common/lib/commons-collections-3.2..jar:/soft/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop/share/hadoop/common/lib/jetty-6.1..jar:/soft/hadoop/share/hadoop/common/lib/jetty-util-6.1..jar:/
soft/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop/share/hadoop/common/lib/lzo-core-1.0..jar:/soft/hadoop/share/hadoop/common/lib/lzo-hadoop-1.0..jar:/soft/hadoop/share/hadoop/common/lib/fastjson-1.2..jar:/soft/hadoop/share/hadoop/common/lib/MyHbase-1.0-SNAPSHOT.jar:/soft/hadoop/share/hadoop/common/hadoop-common-2.7..jar:/soft/hadoop/share/hadoop/common/hadoop-common-2.7.-tests.jar:/soft/hadoop/share/hadoop/common/hadoop-nfs-2.7..jar:/soft/hadoop-2.7./share/hadoop/hdfs:/soft/hadoop-2.7./share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/log4j-1.2..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/commons-logging-1.1..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/netty-3.6..Final.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/guava-11.0..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jsr305-3.0..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jetty-6.1..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jetty-util-6.1..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jackson-core-asl-1.9..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jackson-mapper-asl-1.9..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/protobuf-java-2.5..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/htrace-core-3.1.-incubating.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/commons-daemon-1.0..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/netty-all-4.0..Final.jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/xercesImpl-2.9..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/xml-apis-1.3..jar:/soft/hadoop-2.7./share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7./share/hadoop/hdfs/hadoop-hdfs-2.7..jar:/soft/hadoop-2.7./share/hadoop/hdfs/hadoop-hdfs-2.7.-tests.jar:/soft/hadoop-2.7./share/hadoop/hdfs/hadoop-hdfs-nfs-2.7..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/zookeeper-3.4.-tests.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/guava-11.0..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jsr305-3.0..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/commons-logging-1.1..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/protobuf-java-2.5..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/log4j-1.2..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jaxb-api-2.2..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/stax-api-1.0-.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/commons-compress-1.4..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jetty-util-6.1..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jackson-core-asl-1.9..jar:/soft/hadoop-2.7./share/hadoop/yarn/lib/jackson-mapper-a
...(以下省略剩余的 classpath 内容,篇幅过长)
// :: INFO zookeeper.ZooKeeper: Client environment:java.library.path=/soft/hadoop-2.7./lib/native
// :: INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
// :: INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
// :: INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
// :: INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
// :: INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.-.el7.x86_64
// :: INFO zookeeper.ZooKeeper: Client environment:user.name=yinzhengjie
// :: INFO zookeeper.ZooKeeper: Client environment:user.home=/home/yinzhengjie
// :: INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/yinzhengjie
// :: INFO zookeeper.ZooKeeper: Initiating client connection, connectString=s102:,s103:,s104: sessionTimeout= watcher=hconnection-0x3c782d8e0x0, quorum=s102:,s103:,s104:, baseZNode=/hbase
// :: INFO zookeeper.ClientCnxn: Opening socket connection to server s102/172.30.100.102:. Will not attempt to authenticate using SASL (unknown error)
// :: INFO zookeeper.ClientCnxn: Socket connection established to s102/172.30.100.102:, initiating session
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server s102/172.30.100.102:, sessionid = 0x6600000ebb860010, negotiated timeout =
// :: INFO mapreduce.HBaseImportJob: Creating missing HBase table yinzhengjie:wc
// :: INFO client.HBaseAdmin: Created yinzhengjie:wc
// :: INFO client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
// :: INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x6600000ebb860010
// :: INFO zookeeper.ZooKeeper: Session: 0x6600000ebb860010 closed
// :: INFO zookeeper.ClientCnxn: EventThread shut down
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0012
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0012
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0012/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0012
// :: INFO mapreduce.Job: Job job_1528967628934_0012 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0012 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 26.3319 seconds ( bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
[yinzhengjie@s101 ~]$

导入数据([yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P --table word --hbase-create-table --hbase-table yinzhengjie:wc --hbase-row-key id --column-family f1 -m 1)
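
补充说明:上面的命令使用 -P 参数,需要在终端交互式地输入密码,不便于放到脚本或定时任务中。Sqoop 还支持 --password-file 参数,从指定文件读取密码。下面是一个写法示意(密码文件的路径和内容均为假设,请按实际环境调整):

[yinzhengjie@s101 ~]$ echo -n "此处填写mysql密码" > /home/yinzhengjie/.sqoop.pwd        #用 -n 去掉换行符,否则密码会带上换行
[yinzhengjie@s101 ~]$ chmod 400 /home/yinzhengjie/.sqoop.pwd                           #收紧文件权限,避免密码泄露
[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root --password-file file:///home/yinzhengjie/.sqoop.pwd --table word --hbase-create-table --hbase-table yinzhengjie:wc --hbase-row-key id --column-family f1 -m 1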

[yinzhengjie@s101 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2., rUnknown, Mon May :: CDT

hbase(main)::> list
TABLE
SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.MUTEX
SYSTEM.SEQUENCE
SYSTEM.STATS
YINZHENGJIE.T1
ns1:calllog
ns1:observer
ns1:t1
yinzhengjie:WordCount
yinzhengjie:WordCount2
yinzhengjie:WordCount3
yinzhengjie:t1
yinzhengjie:test
yinzhengjie:wc
row(s) in 0.1960 seconds

=> ["SYSTEM.CATALOG", "SYSTEM.FUNCTION", "SYSTEM.MUTEX", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "YINZHENGJIE.T1", "ns1:calllog", "ns1:observer", "ns1:t1", "yinzhengjie:WordCount", "yinzhengjie:WordCount2", "yinzhengjie:WordCount3", "yinzhengjie:t1", "yinzhengjie:test", "yinzhengjie:wc"]
hbase(main)::> scan 'yinzhengjie:wc'
ROW COLUMN+CELL
column=f1:string, timestamp=, value=hello world
column=f1:string, timestamp=, value=yinzhengjie hbase
row(s) in 0.1190 seconds

hbase(main)::>

导入数据之后(hbase(main):002:0> scan 'yinzhengjie:wc')

2>.其他常用参数介绍

--column-family <family>                //指定列族
--hbase-bulkload                        //使用bulkload方式批量加载数据,而不是逐条写入
--hbase-create-table                    //该参数表示如果表不存在就创建,若已存在则忽略该参数
--hbase-row-key <col>                   //指定作为hbase rowkey的列
--hbase-table <table>                   //指定hbase的表名
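
下面把上述参数组合成一条完整的导入命令作为示意(表名、列族等沿用前文的 word 表和 yinzhengjie:wc 表,是否加 --hbase-bulkload 可按需选择):

[yinzhengjie@s101 ~]$ sqoop import --connect jdbc:mysql://s101/yinzhengjie --username root -P \
--table word \
--hbase-table yinzhengjie:wc \
--hbase-create-table \
--hbase-row-key id \
--column-family f1 \
--hbase-bulkload \
-m 1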

六.sqoop的导出 

1>.关键参数说明

--columns <col,col,col...>              //指定要导出到mysql的列
--direct                                //使用直连模式导出(借助mysqlimport等数据库原生工具),速度较快
--export-dir <dir>                      //指定HDFS上导出数据的源目录
-m <n>                                  //指定mapper数量
--table <table-name>                    //指定mysql目标表
--input-fields-terminated-by <char>     //指定输入文件的字段分隔符
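
以 --direct 为例:直连模式会借助 mysqlimport 等数据库原生工具来加速导出,因此要求相应节点上能找到该工具。下面是一个写法示意(表名和路径沿用下文的导出示例,参数组合仅供参考,请以实际环境验证为准):

[yinzhengjie@s101 ~]$ sqoop export --connect jdbc:mysql://s101/yinzhengjie --username root -P \
--table yinzhengjie_export \
--export-dir /user/hive/warehouse/yinzhengjie.db/test4/province=beijing/part-m-00000 \
--input-fields-terminated-by "\t" \
--direct \
-m 1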

2>.建立mysql表,指明字段

[yinzhengjie@s101 ~]$ mysql -uroot -pyinzhengjie
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is
Server version: 5.6. MySQL Community Server (GPL)

Copyright (c) , , Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use yinzhengjie
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table yinzhengjie_export(id int primary key AUTO_INCREMENT, name varchar(), age int);
Query OK, rows affected (0.14 sec)

mysql> desc yinzhengjie_export;
+-------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+----------------+
| id | int() | NO | PRI | NULL | auto_increment |
| name | varchar() | YES | | NULL | |
| age | int() | YES | | NULL | |
+-------+-------------+------+-----+---------+----------------+
rows in set (0.04 sec)

mysql>
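
正式导出之前,建议先查看一下 HDFS 上源文件的内容,确认字段顺序和分隔符与 --columns、--input-fields-terminated-by 的设置一致(下面的路径就是下文导出时用到的路径,仅作示意):

[yinzhengjie@s101 ~]$ hdfs dfs -cat /user/hive/warehouse/yinzhengjie.db/test4/province=beijing/part-m-00000 | head -n 5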

3>.开始导出

[yinzhengjie@s101 ~]$ sqoop export --connect jdbc:mysql://s101/yinzhengjie --username root -P --table yinzhengjie_export --export-dir /user/hive/warehouse/yinzhengjie.db/test4/province=beijing/part-m-00000 --columns id,name,age --input-fields-terminated-by "\t" -m 1
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /soft/sqoop-1.4..bin__hadoop-2.6./bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
Enter password:
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/soft/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/phoenix-4.10.-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/soft/apache-hive-2.1.-bin/lib/log4j-slf4j-impl-2.4..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `yinzhengjie_export` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `yinzhengjie_export` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /soft/hadoop
Note: /tmp/sqoop-yinzhengjie/compile/6b92104c3d95bb8deacbe1af30022e16/yinzhengjie_export.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-yinzhengjie/compile/6b92104c3d95bb8deacbe1af30022e16/yinzhengjie_export.jar
// :: INFO mapreduce.ExportJobBase: Beginning export of yinzhengjie_export
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
// :: INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528967628934_0020
// :: INFO impl.YarnClientImpl: Submitted application application_1528967628934_0020
// :: INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528967628934_0020/
// :: INFO mapreduce.Job: Running job: job_1528967628934_0020
// :: INFO mapreduce.Job: Job job_1528967628934_0020 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1528967628934_0020 completed successfully
// :: INFO mapreduce.Job: Counters:
File System Counters
FILE: Number of bytes read=
FILE: Number of bytes written=
FILE: Number of read operations=
FILE: Number of large read operations=
FILE: Number of write operations=
HDFS: Number of bytes read=
HDFS: Number of bytes written=
HDFS: Number of read operations=
HDFS: Number of large read operations=
HDFS: Number of write operations=
Job Counters
Launched map tasks=
Other local map tasks=
Total time spent by all maps in occupied slots (ms)=
Total time spent by all reduces in occupied slots (ms)=
Total time spent by all map tasks (ms)=
Total vcore-milliseconds taken by all map tasks=
Total megabyte-milliseconds taken by all map tasks=
Map-Reduce Framework
Map input records=
Map output records=
Input split bytes=
Spilled Records=
Failed Shuffles=
Merged Map outputs=
GC time elapsed (ms)=
CPU time spent (ms)=
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
File Input Format Counters
Bytes Read=
File Output Format Counters
Bytes Written=
// :: INFO mapreduce.ExportJobBase: Transferred bytes in 20.9128 seconds (8.9897 bytes/sec)
// :: INFO mapreduce.ExportJobBase: Exported records.
[yinzhengjie@s101 ~]$ echo $?

[yinzhengjie@s101 ~]$

指定导出代码【将hive的数据导入到我们之前新建的MySQL表中】([yinzhengjie@s101 ~]$ sqoop export --connect jdbc:mysql://s101/yinzhengjie --username root -P --table yinzhengjie_export --export-dir /user/hive/warehouse/yinzhengjie.db/test4/province=beijing/part-m-00000 --columns id,name,age --input-fields-terminated-by "\t" -m 1)
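
导出完成后,可以回到 MySQL 中确认数据是否已经写入目标表,例如(以下查询仅为验证思路示意):

[yinzhengjie@s101 ~]$ mysql -uroot -pyinzhengjie -e "select * from yinzhengjie.yinzhengjie_export;"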
