exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
1. Although this is not a serious error, it is worth posting. It came up when I ran the test case run-example streaming.NetworkWordCount localhost 9999, and my first thought was that it was caused by Spark not having been started:
// :: ERROR SparkContext: Error initializing SparkContext.
java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem$.doCall(DistributedFileSystem.java:)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:)
at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:)
at org.apache.spark.examples.streaming.NetworkWordCount$.main(NetworkWordCount.scala:)
at org.apache.spark.examples.streaming.NetworkWordCount.main(NetworkWordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:)
at org.apache.hadoop.ipc.Client$Connection.access$(Client.java:)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:)
at org.apache.hadoop.ipc.Client.call(Client.java:)
... more
// :: INFO SparkUI: Stopped Spark web UI at http://192.168.19.131:4040
// :: INFO DAGScheduler: Stopping DAGScheduler
// :: INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
// :: INFO MemoryStore: MemoryStore cleared
// :: INFO BlockManager: BlockManager stopped
// :: INFO BlockManagerMaster: BlockManagerMaster stopped
// :: INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1: failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
... (same stack trace as in the ERROR SparkContext entry above)
// :: INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
// :: INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
// :: INFO ShutdownHookManager: Shutdown hook called
// :: INFO ShutdownHookManager: Deleting directory /tmp/spark-7ef5c2da-0b57--a9f9-6e215885c7ba
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
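Before the fix, it is worth reading the trace closely: the exception is thrown from org.apache.spark.scheduler.EventLoggingListener.start, which goes through the Hadoop IPC client into HDFS (DFSClient.getFileInfo). So the refused connection is to the HDFS NameNode on slaver1, which means HDFS must be running too, not only Spark (event logging is controlled by spark.eventLog.enabled and spark.eventLog.dir in spark-defaults.conf). A minimal sketch of the checks, assuming a typical Hadoop 2.x layout; the install directory and the ports 8020/9000 below are assumptions, since the actual port is elided in the log:
[hadoop@slaver1 ~]$ jps    # should list NameNode/DataNode (HDFS) plus Master/Worker (Spark)
[hadoop@slaver1 hadoop-2.4.0]$ sbin/start-dfs.sh    # start HDFS first if NameNode is missing
[hadoop@slaver1 ~]$ netstat -tlnp | grep -E ':8020|:9000'    # NameNode RPC port, per fs.defaultFS in core-site.xml
Once the NameNode is up and listening, the EventLoggingListener can create its log directory and the SparkContext initializes normally.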
2. The script command to start Spark; after starting it, re-run the example:
[hadoop@slaver1 spark-1.5.1-bin-hadoop2.4]$ sbin/start-all.sh
[hadoop@slaver2 ~]$ run-example streaming.NetworkWordCount localhost 9999
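Also note that NetworkWordCount only reads from an existing TCP socket; it does not create one. Start a data source on port 9999 in another terminal before the job begins receiving, for example with netcat, and type some words into it:
[hadoop@slaver2 ~]$ nc -lk 9999
Without a listener on localhost:9999 the receiver will log its own Connection refused, which is easy to confuse with the HDFS error above.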