Source: https://www.cnblogs.com/qingyunzong/p/8807252.html

I. Overview

Sqoop is an Apache tool for transferring data between Hadoop and relational database servers.

It has two core functions:

Import (moving data in)

Export (moving data out)

Importing data: load data from MySQL, Oracle, etc. into Hadoop data stores such as HDFS, Hive, and HBase.

Exporting data: export data from the Hadoop file system to a relational database such as MySQL. At its core, Sqoop is still just a command-line tool; compared with HDFS and Hive, there is no deep theory behind it.

sqoop:

A tool whose essence is migrating data; the way it migrates is to translate the Sqoop command into a MapReduce program.

hive:

A tool whose essence is running computation; it relies on HDFS to store the data and translates SQL into MapReduce programs.

How Sqoop is used in a production environment

II. How It Works

Sqoop works by translating the import or export command into a MapReduce program. In the generated MapReduce job, the customization is mainly in the InputFormat and OutputFormat.
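As a rough illustration of this (the JDBC URL, table, and target directory below are placeholders, not taken from the original post), every import command is compiled into a map-only MapReduce job, so generic Hadoop options can be passed through just as for any other MR job:

  # A minimal sketch: this import becomes a map-only MR job (4 map tasks,
  # 0 reduce tasks); the -D generic option is forwarded to that job.
  # Assumes demo_table has a primary key Sqoop can split on.
  sqoop import \
  -D mapreduce.job.name=sqoop_demo_import \
  --connect jdbc:mysql://dbhost:3306/demo \
  --username demo_user -P \
  --table demo_table \
  --target-dir /tmp/sqoop_demo_table \
  -m 4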

III. Installation

1. Prerequisites

Which systems and components might Sqoop interact with when you use it?

HDFS, MapReduce, YARN, ZooKeeper, Hive, HBase, MySQL

Sqoop is just a tool; it only needs to be installed on a single node.

One more note: if Sqoop is going to move data between MySQL and systems such as Hive or HBase, the node where Sqoop is installed must also have the client installations of every cluster or system you intend to use.

Another note: Azkaban, which will be used later, schedules not only Hadoop, HBase, and Hive jobs but also Sqoop jobs, so the node where Azkaban is installed must also have the clients of all of these systems.

2. Download

Download address: http://mirrors.hust.edu.cn/apache/

A note on Sqoop versions:

The vast majority of companies use Sqoop 1.

sqoop-1.4.6 and sqoop-1.4.7 are Sqoop 1;

sqoop-1.99.x releases (e.g. sqoop-1.99.4) are Sqoop 2.

Here we use sqoop-1.4.6, packaged as sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz.

(1) Upload and extract the package to the target directory

Since Hive was previously installed only on the hadoop3 machine, Sqoop is also installed on hadoop3.

  [hadoop@hadoop3 ~]$ tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C apps/

(2) Go into the conf directory, find sqoop-env-template.sh, and rename it to sqoop-env.sh

  [hadoop@hadoop3 ~]$ cd apps/
  [hadoop@hadoop3 apps]$ ls
  apache-hive-2.3.-bin  hadoop-2.7.5  hbase-1.2.6  sqoop-1.4.6.bin__hadoop-2.0.4-alpha  zookeeper-3.4.
  [hadoop@hadoop3 apps]$ mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha/ sqoop-1.4.6
  [hadoop@hadoop3 apps]$ cd sqoop-1.4.6/conf/
  [hadoop@hadoop3 conf]$ ls
  oraoop-site-template.xml  sqoop-env-template.sh  sqoop-site.xml
  sqoop-env-template.cmd    sqoop-site-template.xml
  [hadoop@hadoop3 conf]$ mv sqoop-env-template.sh sqoop-env.sh

(3) Edit sqoop-env.sh

  [hadoop@hadoop3 conf]$ vi sqoop-env.sh

  #Set path to where bin/hadoop is available
  export HADOOP_COMMON_HOME=/home/hadoop/apps/hadoop-2.7.5

  #Set path to where hadoop-*-core.jar is available
  export HADOOP_MAPRED_HOME=/home/hadoop/apps/hadoop-2.7.5

  #set the path to where bin/hbase is available
  export HBASE_HOME=/home/hadoop/apps/hbase-1.2.6

  #Set the path to where bin/hive is available
  export HIVE_HOME=/home/hadoop/apps/apache-hive-2.3.-bin

  #Set the path for where zookeper config dir is
  export ZOOCFGDIR=/home/hadoop/apps/zookeeper-3.4./conf

Why does sqoop-env.sh ask you to configure common and mapreduce separately?

In an Apache Hadoop installation, all four components live under the same HADOOP_HOME.

In CDH and HDP, however, these components are optional: when installing Hadoop you can choose to install only HDFS or only YARN.

As a result, CDH and HDP may install HDFS and MapReduce in different locations.
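For example, on a CDH parcel layout the two variables can legitimately point to different directories. The fragment below is only an illustrative sketch of such a setup; the paths are placeholders and are not taken from this cluster:

  # Hypothetical sqoop-env.sh fragment for a CDH-style layout where the
  # components do not live under a single HADOOP_HOME (paths are placeholders).
  export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
  export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
  export HBASE_HOME=/opt/cloudera/parcels/CDH/lib/hbase
  export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive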

(4) Copy the MySQL driver jar into sqoop-1.4.6/lib

  [hadoop@hadoop3 ~]$ cp mysql-connector-java-5.1.40-bin.jar apps/sqoop-1.4.6/lib/

(5) Configure the environment variables

  [hadoop@hadoop3 ~]$ vi .bashrc

  #Sqoop
  export SQOOP_HOME=/home/hadoop/apps/sqoop-1.4.6
  export PATH=$PATH:$SQOOP_HOME/bin

Save, exit, and make it take effect immediately:

  [hadoop@hadoop3 ~]$ source .bashrc

(6) Verify the installation

  sqoop-version    (or: sqoop version)

Basic Sqoop Commands

Basic operations

First, run sqoop help to see which commands Sqoop supports:

  1. [hadoop@hadoop3 ~]$ sqoop help
  2. Warning: /home/hadoop/apps/sqoop-1.4./../hcatalog does not exist! HCatalog jobs will fail.
  3. Please set $HCAT_HOME to the root of your HCatalog installation.
  4. Warning: /home/hadoop/apps/sqoop-1.4./../accumulo does not exist! Accumulo imports will fail.
  5. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  6. // :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
  7. usage: sqoop COMMAND [ARGS]
  8.  
  9. Available commands:
  10. codegen Generate code to interact with database records
  11. create-hive-table Import a table definition into Hive
  12. eval Evaluate a SQL statement and display the results
  13. export Export an HDFS directory to a database table
  14. help List available commands
  15. import Import a table from a database to HDFS
  16. import-all-tables Import tables from a database to HDFS
  17. import-mainframe Import datasets from a mainframe server to HDFS
  18. job Work with saved jobs
  19. list-databases List available databases on a server
  20. list-tables List available tables in a database
  21. merge Merge results of incremental imports
  22. metastore Run a standalone Sqoop metastore
  23. version Display version information
  24.  
  25. See 'sqoop help COMMAND' for information on a specific command.
  26. [hadoop@hadoop3 ~]$

Once you have the list of supported commands, if you are not sure how to use one of them, run sqoop help COMMAND to see the usage of that specific command, for example:

  1. [hadoop@hadoop3 ~]$ sqoop help import
  2. Warning: /home/hadoop/apps/sqoop-1.4./../hcatalog does not exist! HCatalog jobs will fail.
  3. Please set $HCAT_HOME to the root of your HCatalog installation.
  4. Warning: /home/hadoop/apps/sqoop-1.4./../accumulo does not exist! Accumulo imports will fail.
  5. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  6. // :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
  7. usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]
  8.  
  9. Common arguments:
  10. --connect <jdbc-uri> Specify JDBC connect
  11. string
  12. --connection-manager <class-name> Specify connection manager
  13. class name
  14. --connection-param-file <properties-file> Specify connection
  15. parameters file
  16. --driver <class-name> Manually specify JDBC
  17. driver class to use
  18. --hadoop-home <hdir> Override
  19. $HADOOP_MAPRED_HOME_ARG
  20. --hadoop-mapred-home <dir> Override
  21. $HADOOP_MAPRED_HOME_ARG
  22. --help Print usage instructions
  23. -P Read password from console
  24. --password <password> Set authentication
  25. password
  26. --password-alias <password-alias> Credential provider
  27. password alias
  28. --password-file <password-file> Set authentication
  29. password file path
  30. --relaxed-isolation Use read-uncommitted
  31. isolation for imports
  32. --skip-dist-cache Skip copying jars to
  33. distributed cache
  34. --username <username> Set authentication
  35. username
  36. --verbose Print more information
  37. while working
  38.  
  39. Import control arguments:
  40. --append Imports data
  41. in append
  42. mode
  43. --as-avrodatafile Imports data
  44. to Avro data
  45. files
  46. --as-parquetfile Imports data
  47. to Parquet
  48. files
  49. --as-sequencefile Imports data
  50. to
  51. SequenceFile
  52. s
  53. --as-textfile Imports data
  54. as plain
  55. text
  56. (default)
  57. --autoreset-to-one-mapper Reset the
  58. number of
  59. mappers to
  60. one mapper
  61. if no split
  62. key
  63. available
  64. --boundary-query <statement> Set boundary
  65. query for
  66. retrieving
  67. max and min
  68. value of the
  69. primary key
  70. --columns <col,col,col...> Columns to
  71. import from
  72. table
  73. --compression-codec <codec> Compression
  74. codec to use
  75. for import
  76. --delete-target-dir Imports data
  77. in delete
  78. mode
  79. --direct Use direct
  80. import fast
  81. path
  82. --direct-split-size <n> Split the
  83. input stream
  84. every 'n'
  85. bytes when
  86. importing in
  87. direct mode
  88. -e,--query <statement> Import
  89. results of
  90. SQL
  91. 'statement'
  92. --fetch-size <n> Set number
  93. 'n' of rows
  94. to fetch
  95. from the
  96. database
  97. when more
  98. rows are
  99. needed
  100. --inline-lob-limit <n> Set the
  101. maximum size
  102. for an
  103. inline LOB
  104. -m,--num-mappers <n> Use 'n' map
  105. tasks to
  106. import in
  107. parallel
  108. --mapreduce-job-name <name> Set name for
  109. generated
  110. mapreduce
  111. job
  112. --merge-key <column> Key column
  113. to use to
  114. join results
  115. --split-by <column-name> Column of
  116. the table
  117. used to
  118. split work
  119. units
  120. --table <table-name> Table to
  121. read
  122. --target-dir <dir> HDFS plain
  123. table
  124. destination
  125. --validate Validate the
  126. copy using
  127. the
  128. configured
  129. validator
  130. --validation-failurehandler <validation-failurehandler> Fully
  131. qualified
  132. class name
  133. for
  134. ValidationFa
  135. ilureHandler
  136. --validation-threshold <validation-threshold> Fully
  137. qualified
  138. class name
  139. for
  140. ValidationTh
  141. reshold
  142. --validator <validator> Fully
  143. qualified
  144. class name
  145. for the
  146. Validator
  147. --warehouse-dir <dir> HDFS parent
  148. for table
  149. destination
  150. --where <where clause> WHERE clause
  151. to use
  152. during
  153. import
  154. -z,--compress Enable
  155. compression
  156.  
  157. Incremental import arguments:
  158. --check-column <column> Source column to check for incremental
  159. change
  160. --incremental <import-type> Define an incremental import of type
  161. 'append' or 'lastmodified'
  162. --last-value <value> Last imported value in the incremental
  163. check column
  164.  
  165. Output line formatting arguments:
  166. --enclosed-by <char> Sets a required field enclosing
  167. character
  168. --escaped-by <char> Sets the escape character
  169. --fields-terminated-by <char> Sets the field separator character
  170. --lines-terminated-by <char> Sets the end-of-line character
  171. --mysql-delimiters Uses MySQL's default delimiter set:
  172. fields: , lines: \n escaped-by: \
  173. optionally-enclosed-by: '
  174. --optionally-enclosed-by <char> Sets a field enclosing character
  175.  
  176. Input parsing arguments:
  177. --input-enclosed-by <char> Sets a required field encloser
  178. --input-escaped-by <char> Sets the input escape
  179. character
  180. --input-fields-terminated-by <char> Sets the input field separator
  181. --input-lines-terminated-by <char> Sets the input end-of-line
  182. char
  183. --input-optionally-enclosed-by <char> Sets a field enclosing
  184. character
  185.  
  186. Hive arguments:
  187. --create-hive-table Fail if the target hive
  188. table exists
  189. --hive-database <database-name> Sets the database name to
  190. use when importing to hive
  191. --hive-delims-replacement <arg> Replace Hive record \0x01
  192. and row delimiters (\n\r)
  193. from imported string fields
  194. with user-defined string
  195. --hive-drop-import-delims Drop Hive record \0x01 and
  196. row delimiters (\n\r) from
  197. imported string fields
  198. --hive-home <dir> Override $HIVE_HOME
  199. --hive-import Import tables into Hive
  200. (Uses Hive's default
  201. delimiters if none are
  202. set.)
  203. --hive-overwrite Overwrite existing data in
  204. the Hive table
  205. --hive-partition-key <partition-key> Sets the partition key to
  206. use when importing to hive
  207. --hive-partition-value <partition-value> Sets the partition value to
  208. use when importing to hive
  209. --hive-table <table-name> Sets the table name to use
  210. when importing to hive
  211. --map-column-hive <arg> Override mapping for
  212. specific column to hive
  213. types.
  214.  
  215. HBase arguments:
  216. --column-family <family> Sets the target column family for the
  217. import
  218. --hbase-bulkload Enables HBase bulk loading
  219. --hbase-create-table If specified, create missing HBase tables
  220. --hbase-row-key <col> Specifies which input column to use as the
  221. row key
  222. --hbase-table <table> Import to <table> in HBase
  223.  
  224. HCatalog arguments:
  225. --hcatalog-database <arg> HCatalog database name
  226. --hcatalog-home <hdir> Override $HCAT_HOME
  227. --hcatalog-partition-keys <partition-key> Sets the partition
  228. keys to use when
  229. importing to hive
  230. --hcatalog-partition-values <partition-value> Sets the partition
  231. values to use when
  232. importing to hive
  233. --hcatalog-table <arg> HCatalog table name
  234. --hive-home <dir> Override $HIVE_HOME
  235. --hive-partition-key <partition-key> Sets the partition key
  236. to use when importing
  237. to hive
  238. --hive-partition-value <partition-value> Sets the partition
  239. value to use when
  240. importing to hive
  241. --map-column-hive <arg> Override mapping for
  242. specific column to
  243. hive types.
  244.  
  245. HCatalog import specific options:
  246. --create-hcatalog-table Create HCatalog before import
  247. --hcatalog-storage-stanza <arg> HCatalog storage stanza for table
  248. creation
  249.  
  250. Accumulo arguments:
  251. --accumulo-batch-size <size> Batch size in bytes
  252. --accumulo-column-family <family> Sets the target column family for
  253. the import
  254. --accumulo-create-table If specified, create missing
  255. Accumulo tables
  256. --accumulo-instance <instance> Accumulo instance name.
  257. --accumulo-max-latency <latency> Max write latency in milliseconds
  258. --accumulo-password <password> Accumulo password.
  259. --accumulo-row-key <col> Specifies which input column to
  260. use as the row key
  261. --accumulo-table <table> Import to <table> in Accumulo
  262. --accumulo-user <user> Accumulo user name.
  263. --accumulo-visibility <vis> Visibility token to be applied to
  264. all rows imported
  265. --accumulo-zookeepers <zookeepers> Comma-separated list of
  266. zookeepers (host:port)
  267.  
  268. Code generation arguments:
  269. --bindir <dir> Output directory for compiled
  270. objects
  271. --class-name <name> Sets the generated class name.
  272. This overrides --package-name.
  273. When combined with --jar-file,
  274. sets the input class.
  275. --input-null-non-string <null-str> Input null non-string
  276. representation
  277. --input-null-string <null-str> Input null string representation
  278. --jar-file <file> Disable code generation; use
  279. specified jar
  280. --map-column-java <arg> Override mapping for specific
  281. columns to java types
  282. --null-non-string <null-str> Null non-string representation
  283. --null-string <null-str> Null string representation
  284. --outdir <dir> Output directory for generated
  285. code
  286. --package-name <name> Put auto-generated classes in
  287. this package
  288.  
  289. Generic Hadoop command-line arguments:
  290. (must preceed any tool-specific arguments)
  291. Generic options supported are
  292. -conf <configuration file> specify an application configuration file
  293. -D <property=value> use value for given property
  294. -fs <local|namenode:port> specify a namenode
  295. -jt <local|resourcemanager:port> specify a ResourceManager
  296. -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
  297. -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
  298. -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
  299.  
  300. The general command line syntax is
  301. bin/hadoop command [genericOptions] [commandOptions]
  302.  
  303. At minimum, you must specify --connect and --table
  304. Arguments to mysqldump and other subprograms may be supplied
  305. after a '--' on the command line.
  306. [hadoop@hadoop3 ~]$

Examples

List the databases on a MySQL server:

  1. [hadoop@hadoop3 ~]$ sqoop list-databases \
  2. > --connect jdbc:mysql://hadoop1:3306/ \
  3. > --username root \
  4. > --password root
  5. Warning: /home/hadoop/apps/sqoop-1.4./../hcatalog does not exist! HCatalog jobs will fail.
  6. Please set $HCAT_HOME to the root of your HCatalog installation.
  7. Warning: /home/hadoop/apps/sqoop-1.4./../accumulo does not exist! Accumulo imports will fail.
  8. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  9. // :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
  10. // :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
  11. // :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
  12. information_schema
  13. hivedb
  14. mysql
  15. performance_schema
  16. test
  17. [hadoop@hadoop3 ~]$

List the tables in a specific MySQL database:

  1. [hadoop@hadoop3 ~]$ sqoop list-tables \
  2. > --connect jdbc:mysql://hadoop1:3306/mysql \
  3. > --username root \
  4. > --password root
  1. Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
  2. Please set $HCAT_HOME to the root of your HCatalog installation.
  3. Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
  4. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  5. 18/04/12 13:46:21 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
  6. 18/04/12 13:46:21 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
  7. 18/04/12 13:46:21 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
  8. columns_priv
  9. db
  10. event
  11. func
  12. general_log
  13. help_category
  14. help_keyword
  15. help_relation
  16. help_topic
  17. innodb_index_stats
  18. innodb_table_stats
  19. ndb_binlog_index
  20. plugin
  21. proc
  22. procs_priv
  23. proxies_priv
  24. servers
  25. slave_master_info
  26. slave_relay_log_info
  27. slave_worker_info
  28. slow_log
  29. tables_priv
  30. time_zone
  31. time_zone_leap_second
  32. time_zone_name
  33. time_zone_transition
  34. time_zone_transition_type
  35. user
  36. [hadoop@hadoop3 ~]$

Create a Hive table named hk with the same schema as the help_keyword table in MySQL:

  1. [hadoop@hadoop3 ~]$ sqoop create-hive-table \
  2. > --connect jdbc:mysql://hadoop1:3306/mysql \
  3. > --username root \
  4. > --password root \
  5. > --table help_keyword \
  6. > --hive-table hk
  7. Warning: /home/hadoop/apps/sqoop-1.4./../hcatalog does not exist! HCatalog jobs will fail.
  8. Please set $HCAT_HOME to the root of your HCatalog installation.
  9. Warning: /home/hadoop/apps/sqoop-1.4./../accumulo does not exist! Accumulo imports will fail.
  10. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  11. // :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.
  12. // :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
  13. // :: INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
  14. // :: INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
  15. // :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
  16. // :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT
  17. // :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT
  18. SLF4J: Class path contains multiple SLF4J bindings.
  19. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
  20. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
  21. SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  22. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
  23. // :: INFO hive.HiveImport: Loading uploaded data into Hive
  24. // :: INFO hive.HiveImport: SLF4J: Class path contains multiple SLF4J bindings.
  25. // :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/apache-hive-2.3.-bin/lib/log4j-slf4j-impl-2.6..jar!/org/slf4j/impl/StaticLoggerBinder.class]
  26. // :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2./lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
  27. // :: INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
  28. // :: INFO hive.HiveImport: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  29. // :: INFO hive.HiveImport: SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
  30. // :: INFO hive.HiveImport:
  31. // :: INFO hive.HiveImport: Logging initialized using configuration in jar:file:/home/hadoop/apps/apache-hive-2.3.-bin/lib/hive-common-2.3..jar!/hive-log4j2.properties Async: true
  32. // :: INFO hive.HiveImport: OK
  33. // :: INFO hive.HiveImport: Time taken: 11.651 seconds
  34. // :: INFO hive.HiveImport: Hive import complete.
  35. [hadoop@hadoop3 ~]$

Sqoop Data Import

1. Import from an RDBMS into HDFS

2. Import MySQL table data into Hive

3. Import MySQL table data into HBase

The import tool imports a single table from an RDBMS into HDFS. Each row of the table becomes one record in HDFS. All records are stored as text in text files, or as binary data in Avro or SequenceFiles.
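For example, the on-disk format is chosen with the --as-* options; the sketch below uses placeholder connection details and target directory, and exactly one --as-* flag is given per run (plain text is the default):

  # Store the imported records as SequenceFiles instead of plain text.
  # Other choices: --as-textfile (default), --as-avrodatafile, --as-parquetfile.
  sqoop import \
  --connect jdbc:mysql://dbhost:3306/demo \
  --username demo_user -P \
  --table demo_table \
  --target-dir /user/hadoop/demo_table_seq \
  --as-sequencefile \
  -m 1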

See https://www.jianshu.com/p/be33f4b5c62e for a detailed explanation and parameter reference.

Sqoop parallelizes by launching multiple map tasks; the -m (or --num-mappers) option sets the number of map tasks, and the default is four. A higher degree of parallelism is not always better: starting and tearing down map tasks consumes resources, and too many database connections also puts pressure on the database itself. For a parallel run, the first question is how to load-balance the input across the map tasks, i.e. how to guarantee that every map processes roughly the same amount of data and that no data is duplicated. --split-by names the split column: when running with multiple map tasks, Sqoop needs to know which column to split the data on. The idea is:

1. First query the minimum and maximum values of the split column.

2. Then divide the range between min and max evenly according to the number of map tasks.

For example, with id as the split column, a minimum value of 0, a maximum of 1000, and 4 map tasks configured, each map task runs a query of the form SELECT * FROM sometable WHERE id >= lo AND id < hi, where the (lo, hi) pairs for the four tasks are (0, 250), (250, 500), (500, 750), and (750, 1001). A sketch of this follows below.
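The following is a rough illustration of the split logic just described; the connection string and credentials are placeholders, and the per-mapper queries are shown as comments rather than captured output:

  # Sqoop first issues a boundary query on the split column, roughly:
  #   SELECT MIN(id), MAX(id) FROM sometable
  # and then gives each of the 4 map tasks its own slice of the range.
  sqoop import \
  --connect jdbc:mysql://dbhost:3306/demo \
  --username demo_user -P \
  --table sometable \
  --split-by id \
  -m 4
  # map 1: SELECT * FROM sometable WHERE id >= 0   AND id < 250
  # map 2: SELECT * FROM sometable WHERE id >= 250 AND id < 500
  # map 3: SELECT * FROM sometable WHERE id >= 500 AND id < 750
  # map 4: SELECT * FROM sometable WHERE id >= 750 AND id < 1001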

  

1. Import from an RDBMS into HDFS

Syntax

  sqoop import (generic-args) (import-args)

Common arguments

  --connect <jdbc-uri>                 JDBC connect string
  --connection-manager <class-name>    connection manager class
  --driver <class-name>                JDBC driver class
  --hadoop-mapred-home <dir>           override $HADOOP_MAPRED_HOME
  --help                               print usage information
  -P                                   read the password from the console
  --password <password>                authentication password
  --username <username>                authentication username
  --verbose                            print verbose progress information
  --connection-param-file <filename>   optional connection parameters file

  

Examples

Basic import: import the data of the help_keyword table in the mysql database into HDFS.

Default import path: /user/hadoop/help_keyword

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  -m 1
  1. [hadoop@hadoop3 ~]$ sqoop import \
  2. > --connect jdbc:mysql://hadoop1:3306/mysql \
  3. > --username root \
  4. > --password root \
  5. > --table help_keyword \
  6. > -m 1
  7. Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
  8. Please set $HCAT_HOME to the root of your HCatalog installation.
  9. Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
  10. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  11. 18/04/12 13:53:48 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
  12. 18/04/12 13:53:48 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
  13. 18/04/12 13:53:48 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
  14. 18/04/12 13:53:48 INFO tool.CodeGenTool: Beginning code generation
  15. 18/04/12 13:53:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
  16. 18/04/12 13:53:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
  17. 18/04/12 13:53:49 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/apps/hadoop-2.7.5
  18. 注: /tmp/sqoop-hadoop/compile/979d87b9521d0a09ee6620060a112d60/help_keyword.java使用或覆盖了已过时的 API
  19. 注: 有关详细信息, 请使用 -Xlint:deprecation 重新编译。
  20. 18/04/12 13:53:51 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/979d87b9521d0a09ee6620060a112d60/help_keyword.jar
  21. 18/04/12 13:53:51 WARN manager.MySQLManager: It looks like you are importing from mysql.
  22. 18/04/12 13:53:51 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
  23. 18/04/12 13:53:51 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
  24. 18/04/12 13:53:51 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
  25. 18/04/12 13:53:51 INFO mapreduce.ImportJobBase: Beginning import of help_keyword
  26. SLF4J: Class path contains multiple SLF4J bindings.
  27. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  28. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  29. SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  30. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
  31. 18/04/12 13:53:52 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
  32. 18/04/12 13:53:53 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
  33. 18/04/12 13:53:58 INFO db.DBInputFormat: Using read commited transaction isolation
  34. 18/04/12 13:53:58 INFO mapreduce.JobSubmitter: number of splits:1
  35. 18/04/12 13:53:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523510178850_0001
  36. 18/04/12 13:54:00 INFO impl.YarnClientImpl: Submitted application application_1523510178850_0001
  37. 18/04/12 13:54:00 INFO mapreduce.Job: The url to track the job: http://hadoop3:8088/proxy/application_1523510178850_0001/
  38. 18/04/12 13:54:00 INFO mapreduce.Job: Running job: job_1523510178850_0001
  39. 18/04/12 13:54:17 INFO mapreduce.Job: Job job_1523510178850_0001 running in uber mode : false
  40. 18/04/12 13:54:17 INFO mapreduce.Job: map 0% reduce 0%
  41. 18/04/12 13:54:33 INFO mapreduce.Job: map 100% reduce 0%
  42. 18/04/12 13:54:34 INFO mapreduce.Job: Job job_1523510178850_0001 completed successfully
  43. 18/04/12 13:54:35 INFO mapreduce.Job: Counters: 30
  44. File System Counters
  45. FILE: Number of bytes read=0
  46. FILE: Number of bytes written=142965
  47. FILE: Number of read operations=0
  48. FILE: Number of large read operations=0
  49. FILE: Number of write operations=0
  50. HDFS: Number of bytes read=87
  51. HDFS: Number of bytes written=8264
  52. HDFS: Number of read operations=4
  53. HDFS: Number of large read operations=0
  54. HDFS: Number of write operations=2
  55. Job Counters
  56. Launched map tasks=1
  57. Other local map tasks=1
  58. Total time spent by all maps in occupied slots (ms)=12142
  59. Total time spent by all reduces in occupied slots (ms)=0
  60. Total time spent by all map tasks (ms)=12142
  61. Total vcore-milliseconds taken by all map tasks=12142
  62. Total megabyte-milliseconds taken by all map tasks=12433408
  63. Map-Reduce Framework
  64. Map input records=619
  65. Map output records=619
  66. Input split bytes=87
  67. Spilled Records=0
  68. Failed Shuffles=0
  69. Merged Map outputs=0
  70. GC time elapsed (ms)=123
  71. CPU time spent (ms)=1310
  72. Physical memory (bytes) snapshot=93212672
  73. Virtual memory (bytes) snapshot=2068234240
  74. Total committed heap usage (bytes)=17567744
  75. File Input Format Counters
  76. Bytes Read=0
  77. File Output Format Counters
  78. Bytes Written=8264
  79. 18/04/12 13:54:35 INFO mapreduce.ImportJobBase: Transferred 8.0703 KB in 41.8111 seconds (197.6507 bytes/sec)
  80. 18/04/12 13:54:35 INFO mapreduce.ImportJobBase: Retrieved 619 records.
  81. [hadoop@hadoop3 ~]$

View the imported file:

  [hadoop@hadoop4 ~]$ hadoop fs -cat /user/hadoop/help_keyword/part-m-00000

Import: specify the field separator and the import path

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --target-dir /user/hadoop11/my_help_keyword1 \
  --fields-terminated-by '\t' \
  -m 1

Import data with a WHERE condition

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --where "name='STRING' " \
  --table help_keyword \
  --target-dir /sqoop/hadoop11/myoutport1 \
  -m 1

Import only specified columns

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --columns "name" \
  --where "name='STRING' " \
  --table help_keyword \
  --target-dir /sqoop/hadoop11/myoutport22 \
  -m 1
  # equivalent SQL: SELECT name FROM help_keyword WHERE name = 'STRING'

Import: use a custom SQL query

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/ \
  --username root \
  --password root \
  --target-dir /user/hadoop/myimport33_1 \
  --query 'select help_keyword_id,name from mysql.help_keyword where $CONDITIONS and name = "STRING"' \
  --split-by help_keyword_id \
  --fields-terminated-by '\t' \
  -m 1

When importing data into HDFS with a custom SQL statement, as above:
1. Quoting: either wrap the query in single quotes and use double quotes inside, in which case the $ of $CONDITIONS does not need to be escaped; or wrap the query in double quotes and use single quotes inside, in which case the $ of $CONDITIONS must be escaped as \$CONDITIONS.
2. The custom SQL statement must contain WHERE $CONDITIONS.
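For instance, the same query written with outer double quotes would look roughly like the sketch below; only the quoting changes, and the target directory here is a placeholder:

  # Outer double quotes: the inner string literal uses single quotes and
  # the $ in $CONDITIONS is escaped so the shell does not expand it.
  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/ \
  --username root \
  --password root \
  --target-dir /user/hadoop/myimport33_2 \
  --query "select help_keyword_id,name from mysql.help_keyword where \$CONDITIONS and name = 'STRING'" \
  --split-by help_keyword_id \
  --fields-terminated-by '\t' \
  -m 1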

2. Import MySQL table data into Hive

When Sqoop imports relational data into Hive, it first imports the data into HDFS and then loads it into Hive.
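Conceptually, the two steps look roughly like the sketch below. This is a manual approximation of what --hive-import automates, not Sqoop's literal internals, and the staging directory is a placeholder rather than Sqoop's actual temporary path:

  # Step 1: land the MySQL table in a staging directory on HDFS, using
  # Hive's default field delimiter (ctrl-A, octal \001) so that a
  # default-created Hive table can read the files.
  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root -P \
  --table help_keyword \
  --target-dir /user/hadoop/help_keyword_staging \
  --fields-terminated-by '\001' \
  -m 1
  # Step 2: move the staged files into an existing Hive table.
  hive -e "LOAD DATA INPATH '/user/hadoop/help_keyword_staging' INTO TABLE help_keyword;"

With --hive-import, Sqoop performs both steps itself and creates the Hive table if it does not exist.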

Basic import: the data is stored in Hive's default database, and the table name is the same as the MySQL table name:

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --hive-import \
  -m 1

Import process

Step 1: import the data of mysql.help_keyword into the default HDFS path.
Step 2: automatically create a Hive table modeled on mysql.help_keyword, in the default database.
Step 3: load the data from the temporary directory into the Hive table.

View the data:

  [hadoop@hadoop3 ~]$ hadoop fs -cat /user/hive/warehouse/help_keyword/part-m-00000

Specify the line and field separators, hive-import, overwrite, automatic Hive table creation, the table name, and deletion of the intermediate data directory:

Manually create the mydb_test database:

  create database mydb_test;

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --fields-terminated-by "\t" \
  --lines-terminated-by "\n" \
  --hive-import \
  --hive-overwrite \
  --create-hive-table \
  --delete-target-dir \
  --hive-database mydb_test \
  --hive-table new_help_keyword

Verify with a query:

  select * from new_help_keyword limit 10;

The import statement above is equivalent to:

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --fields-terminated-by "\t" \
  --lines-terminated-by "\n" \
  --hive-import \
  --hive-overwrite \
  --create-hive-table \
  --hive-table mydb_test.new_help_keyword \
  --delete-target-dir

Incremental import

Before running the incremental import, first clear out the data in the help_keyword table in the Hive database:

  truncate table help_keyword;

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --target-dir /user/hadoop/myimport_add \
  --incremental append \
  --check-column help_keyword_id \
  --last-value 500 \
  -m 1

The command completes successfully:

  1. [hadoop@hadoop3 ~]$ sqoop import \
  2. > --connect jdbc:mysql://hadoop1:3306/mysql \
  3. > --username root \
  4. > --password root \
  5. > --table help_keyword \
  6. > --target-dir /user/hadoop/myimport_add \
  7. > --incremental append \
  8. > --check-column help_keyword_id \
  9. > --last-value 500 \
  10. > -m 1
  11. Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
  12. Please set $HCAT_HOME to the root of your HCatalog installation.
  13. Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
  14. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
  15. 18/04/12 22:01:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
  16. 18/04/12 22:01:08 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
  17. 18/04/12 22:01:08 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
  18. 18/04/12 22:01:08 INFO tool.CodeGenTool: Beginning code generation
  19. 18/04/12 22:01:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
  20. 18/04/12 22:01:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
  21. 18/04/12 22:01:08 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/apps/hadoop-2.7.5
  22. 注: /tmp/sqoop-hadoop/compile/a51619d1ef8c6e4b112a209326ed9e0f/help_keyword.java使用或覆盖了已过时的 API
  23. 注: 有关详细信息, 请使用 -Xlint:deprecation 重新编译。
  24. 18/04/12 22:01:11 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/a51619d1ef8c6e4b112a209326ed9e0f/help_keyword.jar
  25. SLF4J: Class path contains multiple SLF4J bindings.
  26. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  27. SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  28. SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  29. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
  30. 18/04/12 22:01:12 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`help_keyword_id`) FROM `help_keyword`
  31. 18/04/12 22:01:12 INFO tool.ImportTool: Incremental import based on column `help_keyword_id`
  32. 18/04/12 22:01:12 INFO tool.ImportTool: Lower bound value: 500
  33. 18/04/12 22:01:12 INFO tool.ImportTool: Upper bound value: 618
  34. 18/04/12 22:01:12 WARN manager.MySQLManager: It looks like you are importing from mysql.
  35. 18/04/12 22:01:12 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
  36. 18/04/12 22:01:12 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
  37. 18/04/12 22:01:12 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
  38. 18/04/12 22:01:12 INFO mapreduce.ImportJobBase: Beginning import of help_keyword
  39. 18/04/12 22:01:12 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
  40. 18/04/12 22:01:12 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
  41. 18/04/12 22:01:17 INFO db.DBInputFormat: Using read commited transaction isolation
  42. 18/04/12 22:01:17 INFO mapreduce.JobSubmitter: number of splits:1
  43. 18/04/12 22:01:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523510178850_0010
  44. 18/04/12 22:01:19 INFO impl.YarnClientImpl: Submitted application application_1523510178850_0010
  45. 18/04/12 22:01:19 INFO mapreduce.Job: The url to track the job: http://hadoop3:8088/proxy/application_1523510178850_0010/
  46. 18/04/12 22:01:19 INFO mapreduce.Job: Running job: job_1523510178850_0010
  47. 18/04/12 22:01:30 INFO mapreduce.Job: Job job_1523510178850_0010 running in uber mode : false
  48. 18/04/12 22:01:30 INFO mapreduce.Job: map 0% reduce 0%
  49. 18/04/12 22:01:40 INFO mapreduce.Job: map 100% reduce 0%
  50. 18/04/12 22:01:40 INFO mapreduce.Job: Job job_1523510178850_0010 completed successfully
  51. 18/04/12 22:01:41 INFO mapreduce.Job: Counters: 30
  52. File System Counters
  53. FILE: Number of bytes read=0
  54. FILE: Number of bytes written=143200
  55. FILE: Number of read operations=0
  56. FILE: Number of large read operations=0
  57. FILE: Number of write operations=0
  58. HDFS: Number of bytes read=87
  59. HDFS: Number of bytes written=1576
  60. HDFS: Number of read operations=4
  61. HDFS: Number of large read operations=0
  62. HDFS: Number of write operations=2
  63. Job Counters
  64. Launched map tasks=1
  65. Other local map tasks=1
  66. Total time spent by all maps in occupied slots (ms)=7188
  67. Total time spent by all reduces in occupied slots (ms)=0
  68. Total time spent by all map tasks (ms)=7188
  69. Total vcore-milliseconds taken by all map tasks=7188
  70. Total megabyte-milliseconds taken by all map tasks=7360512
  71. Map-Reduce Framework
  72. Map input records=118
  73. Map output records=118
  74. Input split bytes=87
  75. Spilled Records=0
  76. Failed Shuffles=0
  77. Merged Map outputs=0
  78. GC time elapsed (ms)=86
  79. CPU time spent (ms)=870
  80. Physical memory (bytes) snapshot=95576064
  81. Virtual memory (bytes) snapshot=2068234240
  82. Total committed heap usage (bytes)=18608128
  83. File Input Format Counters
  84. Bytes Read=0
  85. File Output Format Counters
  86. Bytes Written=1576
  87. 18/04/12 22:01:41 INFO mapreduce.ImportJobBase: Transferred 1.5391 KB in 28.3008 seconds (55.6875 bytes/sec)
  88. 18/04/12 22:01:41 INFO mapreduce.ImportJobBase: Retrieved 118 records.
  89. 18/04/12 22:01:41 INFO util.AppendUtils: Creating missing output directory - myimport_add
  90. 18/04/12 22:01:41 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
  91. 18/04/12 22:01:41 INFO tool.ImportTool: --incremental append
  92. 18/04/12 22:01:41 INFO tool.ImportTool: --check-column help_keyword_id
  93. 18/04/12 22:01:41 INFO tool.ImportTool: --last-value 618
  94. 18/04/12 22:01:41 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
  95. [hadoop@hadoop3 ~]$

3. Import MySQL table data into HBase

Basic import: first create the table in HBase, then run the import command.

  hbase(main):001:0> create 'new_help_keyword', 'base_info'

  sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --hbase-table new_help_keyword \
  --column-family base_info \
  --hbase-row-key help_keyword_id
  # the --column-family value must match the family created above
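To spot-check the result, you can scan the new table from the HBase shell; a minimal sketch (output omitted, row keys are the help_keyword_id values):

  # Print the first few imported rows from HBase.
  echo "scan 'new_help_keyword', {LIMIT => 10}" | hbase shell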
