java.net.ConnectException: Call From slaver1/192.168.19.128 to slaver1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org
1: While practicing Spark, I read a file from HDFS; because Spark evaluates lazily, the error only showed up when I tried to view the file's contents. The error itself is minor, but it seems worth recording, since it was caused by unfamiliarity with the command. The error is as follows:
scala> text.collect
java.net.ConnectException: Call From slaver1/192.168.19.128 to slaver1: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy36.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy37.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:)
at org.apache.hadoop.fs.Globber.glob(Globber.java:)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.rdd.RDD$$anonfun$collect$.apply(RDD.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.RDD.collect(RDD.scala:)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC$$iwC.<init>(<console>:)
at $iwC$$iwC.<init>(<console>:)
at $iwC.<init>(<console>:)
at <init>(<console>:)
at .<init>(<console>:)
at .<clinit>(<console>)
at .<init>(<console>:)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:)
at org.apache.spark.repl.SparkILoop.reallyInterpret$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.processLine$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.innerLoop$(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply$mcZ$sp(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$.apply(SparkILoop.scala:)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:)
at org.apache.spark.repl.Main$.main(Main.scala:)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.access$(Client.java:)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
... more
2: The cause of the error is as follows:
I read the HDFS file with the command scala> var text = sc.textFile("hdfs://slaver1:/input.txt"); and then ran text.collect to view its contents; that is when the error above was thrown. The cause is that the port number was missing from the HDFS URI: with no explicit port, the Hadoop client falls back to the NameNode's default RPC port 8020, while this cluster's NameNode listens on 9000, so the connection is refused.
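If you are not sure which address and port the NameNode actually uses, a quick check is to print the configured default filesystem from the same spark-shell session. This is a minimal sketch, assuming the cluster's core-site.xml is on the driver's classpath so Spark has picked up fs.defaultFS:
scala> sc.hadoopConfiguration.get("fs.defaultFS")   // on this cluster it should print something like hdfs://slaver1:9000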
Change the command to the following and it works:
scala> var text = sc.textFile("hdfs://slaver1:9000/input.txt");
scala> text.collect
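Alternatively, assuming fs.defaultFS in core-site.xml is already set to hdfs://slaver1:9000, the scheme and authority can be dropped from the path and it will be resolved against the default filesystem; a sketch:
scala> var text = sc.textFile("/input.txt");
scala> text.collect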