1. When I was first learning Sqoop, I ran the MySQL-to-HDFS import command, but afterwards I could not find the corresponding data on HDFS. While dealing with this bug today, I will also solve that earlier problem, which I wrote about before at http://www.cnblogs.com/biehongli/p/8039128.html

 [hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$ bin/sqoop import \
> --connect jdbc:mysql://slaver1:3306/test \
> --username root \
> --password \
> --table tb_user \
> --m
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.-cdh5.3.6
// :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/soft/hadoop-2.5.-cdh5.3.6
Note: /tmp/sqoop-hadoop/compile/cb147e9deb144db0034d6f38cb47ad68/tb_user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/cb147e9deb144db0034d6f38cb47ad68/tb_user.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of tb_user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at slaver1/192.168.19.131:
// :: WARN security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://slaver1:9000/user/hadoop/tb_user already exists
// :: ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://slaver1:9000/user/hadoop/tb_user already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:)
    at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
    at org.apache.hadoop.mapreduce.Job$.run(Job.java:)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:)
[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$

2. The error says the output directory already exists: hdfs://slaver1:9000/user/hadoop/tb_user already exists. Going by that path, the problem was easy to locate: first delete what is at that path, and when the import is run again, the data from the MySQL table is imported into the HDFS distributed file system.

 [hadoop@slaver1 ~]$ hdfs dfs -rm -r /user/hadoop/tb_user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = minutes, Emptier interval = minutes.
Deleted /user/hadoop/tb_user
[hadoop@slaver1 ~]$
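Deleting the target directory by hand with hdfs dfs -rm -r works, but Sqoop can also be asked to handle it. The following is only a sketch, not part of the original session; it assumes the Sqoop 1.4.x build shipped with CDH 5.3.6 supports the --delete-target-dir and --target-dir import options, and it uses -P (prompt for the password) as the warning in the log above suggests.

# Hypothetical variant of the same import: --delete-target-dir removes the
# existing HDFS output directory before the job runs, so it no longer fails
# with FileAlreadyExistsException; --target-dir makes the output path explicit.
bin/sqoop import \
  --connect jdbc:mysql://slaver1:3306/test \
  --username root \
  -P \
  --table tb_user \
  --target-dir /user/hadoop/tb_user \
  --delete-target-dir \
  -m 1

If the old data should be kept instead, pointing --target-dir at a new, not-yet-existing path avoids both the manual deletion and the error.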

3. Run the import again, as shown below:

[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$ bin/sqoop import \
> --connect jdbc:mysql://slaver1:3306/test \
> --username root \
> --password \
> --table tb_user \
> --m
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/soft/sqoop-1.4.-cdh5.3.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.-cdh5.3.6
// :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `tb_user` AS t LIMIT
// :: INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/soft/hadoop-2.5.-cdh5.3.6
Note: /tmp/sqoop-hadoop/compile/464d9aff412a6285fd9f6f4c6d16b4e6/tb_user.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
// :: INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/464d9aff412a6285fd9f6f4c6d16b4e6/tb_user.jar
// :: WARN manager.MySQLManager: It looks like you are importing from mysql.
// :: WARN manager.MySQLManager: This transfer can be faster! Use the --direct
// :: WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
// :: INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
// :: INFO mapreduce.ImportJobBase: Beginning import of tb_user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO client.RMProxy: Connecting to ResourceManager at slaver1/192.168.19.131:
// :: INFO db.DBInputFormat: Using read commited transaction isolation
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1526642793183_0001
// :: INFO impl.YarnClientImpl: Submitted application application_1526642793183_0001
// :: INFO mapreduce.Job: The url to track the job: http://slaver1:8088/proxy/application_1526642793183_0001/
// :: INFO mapreduce.Job: Running job: job_1526642793183_0001
// :: INFO mapreduce.Job: Job job_1526642793183_0001 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1526642793183_0001 completed successfully
// :: INFO mapreduce.Job: Counters:
    File System Counters
        FILE: Number of bytes read=
        FILE: Number of bytes written=
        FILE: Number of read operations=
        FILE: Number of large read operations=
        FILE: Number of write operations=
        HDFS: Number of bytes read=
        HDFS: Number of bytes written=
        HDFS: Number of read operations=
        HDFS: Number of large read operations=
        HDFS: Number of write operations=
    Job Counters
        Launched map tasks=
        Other local map tasks=
        Total time spent by all maps in occupied slots (ms)=
        Total time spent by all reduces in occupied slots (ms)=
        Total time spent by all map tasks (ms)=
        Total vcore-seconds taken by all map tasks=
        Total megabyte-seconds taken by all map tasks=
    Map-Reduce Framework
        Map input records=
        Map output records=
        Input split bytes=
        Spilled Records=
        Failed Shuffles=
        Merged Map outputs=
        GC time elapsed (ms)=
        CPU time spent (ms)=
        Physical memory (bytes) snapshot=
        Virtual memory (bytes) snapshot=
        Total committed heap usage (bytes)=
    File Input Format Counters
        Bytes Read=
    File Output Format Counters
        Bytes Written=
// :: INFO mapreduce.ImportJobBase: Transferred bytes in 38.0904 seconds (4.0168 bytes/sec)
// :: INFO mapreduce.ImportJobBase: Retrieved records.
[hadoop@slaver1 sqoop-1.4.-cdh5.3.6]$

4. The imported data looks like this:

[hadoop@slaver1 ~]$ hdfs dfs -cat /user/hadoop/tb_user/part-m-
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
,张三,
,李四,
,王五,
,小明,
,小红,
,小别,
,,
,,
,,
,,
[hadoop@slaver1 ~]$
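To check the result without opening each file, the part files written by the map tasks can be listed and the rows counted. This is a generic verification sketch rather than output from the original session; it assumes the import wrote its part-m-* files under the default /user/hadoop/tb_user directory shown above.

# List the files the map tasks wrote for this import
hdfs dfs -ls /user/hadoop/tb_user
# Count the imported rows across all part files; fields are comma-separated,
# which matches the output shown above
hdfs dfs -cat /user/hadoop/tb_user/part-m-* | wc -l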
