Run

package zx

import org.apache.log4j.{Level, Logger}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by Lee_Rz on 2017/8/30.
 */
object SparkDemo {
  def main(args: Array[String]) {
    Logger.getLogger("org.apache.spark").setLevel(Level.OFF)
    val sc: SparkContext = new SparkContext(new SparkConf().setAppName(this.getClass.getName).setMaster("local[2]"))
    val rdd1: RDD[String] = sc.textFile("C:\\Users\\166\\Desktop\\text.txt") // reads the file line by line; a lazy transformation
    val key: RDD[(String, Int)] = rdd1.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    println(key.collect().toBuffer) // collect the results back to the Driver
  }
}

Error

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.0.166:51388]
// :: ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$$$anonfun$.apply(SparkContext.scala:)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$$$anonfun$.apply(SparkContext.scala:)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$.apply(HadoopRDD.scala:)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$.apply(HadoopRDD.scala:)
at scala.Option.map(Option.scala:)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$.apply(PairRDDFunctions.scala:)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$.apply(PairRDDFunctions.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:)
at zx.SparkDemo$.main(SparkDemo.scala:)
at zx.SparkDemo.main(SparkDemo.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
// :: INFO FileInputFormat: Total input paths to process :
// :: INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
// :: INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
// :: INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
// :: INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
// :: INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
ArrayBuffer((are,), (hello,), (any,), (ok,), (world,), (me,), (alone,), (you,), (no,), (believie,), (more,))
// :: INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.

Process finished with exit code

Inspection showed that winutils.exe already exists under Hadoop's bin directory. Checking the Hadoop path revealed that the environment variable had not been created in strictly the required format; the correct format is HADOOP_HOME=....... Many frameworks in the Hadoop ecosystem depend on Hadoop, so by default their configuration files export the Hadoop path under the variable name HADOOP_HOME.
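If restarting the machine (or IDE) after fixing the environment variable is inconvenient, the same location can also be supplied programmatically before the SparkContext is created, because Hadoop's Shell class reads the hadoop.home.dir system property before falling back to the HADOOP_HOME environment variable. A minimal sketch, assuming Hadoop is unpacked at C:\hadoop (a placeholder path; substitute the directory whose bin folder actually contains winutils.exe):

object WinutilsFix {
  def main(args: Array[String]) {
    // Placeholder path: the bin folder under it must contain winutils.exe.
    // Hadoop's Shell consults the hadoop.home.dir system property before the
    // HADOOP_HOME environment variable, so no restart is required.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")
    // ... then build the SparkConf / SparkContext exactly as in the code above ...
  }
}

Either way, winutils.exe must actually be present under %HADOOP_HOME%\bin, and after changing an environment variable the IDE must be restarted so the new value is visible to the JVM.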

