1. Permission problem

// :: INFO mapreduce.Job: Job job_1555064328415_0328 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/user/yarn":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:)

This error appears when Oozie invokes Sqoop: the job setup tries to write under /user/yarn as user yarn, but the directory is owned by hdfs:supergroup with mode drwxr-xr-x. Fix: change the owner of the directory:

sudo -u hdfs hadoop fs -chown -R yarn /user/yarn
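To confirm the change took effect, list the parent directory (a quick check, assuming the hadoop CLI is available on the host you run it from):

sudo -u hdfs hadoop fs -ls /user

The /user/yarn entry should now show yarn as its owner.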

2. Missing MySQL driver JAR

// :: INFO hive.metastore: Closed a connection to metastore, current connections:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.0.-.cdh6.0.0.p0./jars/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.0.-.cdh6.0.0.p0./jars/log4j-slf4j-impl-2.8..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
// :: INFO sqoop.Sqoop: Running Sqoop version: 1.4.-cdh6.0.0
// :: WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
// :: INFO conf.HiveConf: Found configuration file file:/etc/hive/conf.cloudera.hive/hive-site.xml
// :: INFO hive.metastore: Trying to connect to metastore with URI thrift://hadoop1:9083
// :: INFO hive.metastore: Opened a connection to metastore, current connections:
// :: INFO hive.metastore: Connected to metastore.
// :: INFO hive.metastore: Trying to connect to metastore with URI thrift://hadoop1:9083
// :: INFO hive.metastore: Opened a connection to metastore, current connections:
// :: INFO hive.metastore: Connected to metastore.
// :: WARN tool.BaseSqoopTool: Output field/record delimiter options are not useful in HCatalog jobs for most of the output types except text based formats is text. It is better to use --hive-import in those cases. For non text formats,
// :: INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
// :: INFO tool.CodeGenTool: Beginning code generation
// :: ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:)
at org.apache.sqoop.Sqoop.run(Sqoop.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:)
at org.apache.sqoop.Sqoop.main(Sqoop.java:)
// :: INFO hive.metastore: Closed a connection to metastore, current connections:

When Sqoop fails like this, some node is most likely missing the MySQL JDBC driver JAR. Put the driver (mysql-connector-java.jar) into /usr/share/java/ on every node, as sketched below.
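A minimal sketch for distributing the driver to all nodes, assuming passwordless SSH from the current host; hadoop2 and hadoop3 are placeholder host names (only hadoop1 appears in the logs above), so substitute your actual node list:

for host in hadoop1 hadoop2 hadoop3; do
  # copy the connector JAR into the location CDH services look in for the MySQL driver
  scp /usr/share/java/mysql-connector-java.jar ${host}:/usr/share/java/
done

After copying, re-run the Sqoop job; the "Could not load db driver class: com.mysql.jdbc.Driver" error should be gone.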
