insert overwrite table canal_amt1......
2014-10-09 10:40:27,368 Stage-1 map = 100%,  reduce = 32%, Cumulative CPU 2772.48 sec
2014-10-09 10:40:28,426 Stage-1 map = 100%, reduce = 32%, Cumulative CPU 2772.48 sec
2014-10-09 10:40:29,481 Stage-1 map = 100%, reduce = 32%, Cumulative CPU 2774.12 sec
2014-10-09 10:40:30,885 Stage-1 map = 100%, reduce = 32%, Cumulative CPU 2774.36 sec
2014-10-09 10:40:31,963 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2693.96 sec
2014-10-09 10:40:33,071 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2693.96 sec
2014-10-09 10:40:34,126 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2693.96 sec
2014-10-09 10:40:35,182 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2693.96 sec
MapReduce Total cumulative CPU time: 44 minutes 53 seconds 960 msec
Ended Job = job_1409124602974_0745 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1409124602974_0745_m_000003 (and more) from job job_1409124602974_0745
Examining task ID: task_1409124602974_0745_m_000002 (and more) from job job_1409124602974_0745
Examining task ID: task_1409124602974_0745_r_000000 (and more) from job job_1409124602974_0745
Examining task ID: task_1409124602974_0745_r_000006 (and more) from job job_1409124602974_0745
Task with the most failures(4):
-----
Task ID:
task_1409124602974_0745_r_000003
URL:
http://HADOOP2:8088/taskdetails.jsp?jobid=job_1409124602974_0745&tipid=task_1409124602974_0745_r_000003
-----
Diagnostic Messages for this Task:
Container [pid=22068,containerID=container_1409124602974_0745_01_000047] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1409124602974_0745_01_000047 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22087 22068 22068 22068 (java) 2536 833 2730713088 265378 /usr/jdk64/jdk1.6.0_31/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/hadoop/yarn/local/usercache/root/appcache/application_1409124602974_0745/container_1409124602974_0745_01_000047/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1409124602974_0745/container_1409124602974_0745_01_000047 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 54.0.88.58 41150 attempt_1409124602974_0745_r_000003_3 47
|- 22068 2381 22068 22068 (bash) 1 1 110755840 302 /bin/bash -c /usr/jdk64/jdk1.6.0_31/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx2048m -Djava.io.tmpdir=/hadoop/yarn/local/usercache/root/appcache/application_1409124602974_0745/container_1409124602974_0745_01_000047/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1409124602974_0745/container_1409124602974_0745_01_000047 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 54.0.88.58 41150 attempt_1409124602974_0745_r_000003_3 47 1>/hadoop/yarn/log/application_1409124602974_0745/container_1409124602974_0745_01_000047/stdout 2>/hadoop/yarn/log/application_1409124602974_0745/container_1409124602974_0745_01_000047/stderr

Container killed on request. Exit code is 143
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 23 Reduce: 7 Cumulative CPU: 2693.96 sec HDFS Read: 6278784712 HDFS Write: 590228229 FAIL
Total MapReduce CPU Time Spent: 44 minutes 53 seconds 960 msec

Cause: the container ran out of memory.

Solution:

Add the following before executing the Hive statement:

set mapreduce.map.memory.mb=1025;    -- any value above 1024 makes YARN allocate double Hive's default, i.e. 2048 MB
set mapreduce.reduce.memory.mb=1025;
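The reason 1025 works is YARN's request normalization: each container request is rounded up to the next multiple of yarn.scheduler.minimum-allocation-mb (1024 MB by default), so asking for 1025 MB actually yields a 2048 MB container. A minimal sketch of that rounding, assuming the default minimum allocation:

```python
def yarn_container_mb(requested_mb, min_alloc_mb=1024):
    """Round a container request up to the next multiple of
    yarn.scheduler.minimum-allocation-mb, as the YARN scheduler does."""
    # integer ceiling division, then scale back up to whole allocation units
    return ((requested_mb + min_alloc_mb - 1) // min_alloc_mb) * min_alloc_mb

print(yarn_container_mb(1024))  # 1024 -> container stays at 1 GB
print(yarn_container_mb(1025))  # 2048 -> rounded up to 2 GB
```

Note that the process dump above also shows the reducer JVM was launched with -Xmx2048m inside a 1 GB container, so the heap alone could grow past the container limit; raising the container to 2048 MB makes the two consistent.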

Result:

MapReduce Total cumulative CPU time: 0 days 1 hours 10 minutes 14 seconds 590 msec
Ended Job = job_1409124602974_0746
Loading data to table default.canal_amt1
Table default.canal_amt1 stats: [num_partitions: 0, num_files: 7, num_rows: 0, total_size: 4131948868, raw_data_size: 0]
MapReduce Jobs Launched:
Job 0: Map: 23 Reduce: 7 Cumulative CPU: 4214.59 sec HDFS Read: 6278784712 HDFS Write: 4131948868 SUCCESS
Total MapReduce CPU Time Spent: 0 days 1 hours 10 minutes 14 seconds 590 msec
OK
Time taken: 673.851 seconds

Other possible causes found online:

1. NullPointerException in the map phase

Cause: NULL values were inserted into data fields
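If the map-phase NullPointerException comes from NULLs in the inserted data, one common guard is to replace them at insert time. A hedged sketch (canal_amt1 is the table from the log above, but the column names and source table here are made up for illustration):

```sql
-- Hypothetical columns and source table: substitute your real schema.
INSERT OVERWRITE TABLE canal_amt1
SELECT
  COALESCE(amt, 0)       AS amt,       -- NULL numerics become 0
  COALESCE(canal_id, '') AS canal_id   -- NULL strings become ''
FROM canal_amt_src;
```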

2. Exception in thread "Thread-19" java.lang.IllegalArgumentException:
Does not contain a valid host:port authority: local

See http://grokbase.com/p/cloudera/cdh-user/126wqvfwyt/hive-refuses-to-work-with-yarn

Solution:

Add the corresponding settings in hive-site.xml:

In the meantime I recommend doing the following if you need to run Hive on
MR2:
* Keep Hive happy by setting mapred.job.tracker to a bogus value.
* Disable task log retrieval by setting
hive.exec.show.job.failure.debug.info=false
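In hive-site.xml form, the workaround above might look like the following (the jobtracker value is deliberately bogus per the quoted advice; any placeholder other than "local" should do):

```xml
<property>
  <name>mapred.job.tracker</name>
  <value>ignored:9999</value> <!-- bogus value to keep Hive happy on MR2 -->
</property>
<property>
  <name>hive.exec.show.job.failure.debug.info</name>
  <value>false</value> <!-- disable task-log retrieval -->
</property>
```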

3. Protobuf version mismatch.
