The WordCount.scala code is as follows:

package com.husor.Spark

/**
 * Created by huxiu on 2014/11/26.
 */

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.SparkContext._

object SparkWordCount {
  def main(args: Array[String]) {
    println("Test is starting......")

    // Needed on Windows so Hadoop can locate bin\winutils.exe (see Exception 1 below)
    System.setProperty("hadoop.home.dir", "d:\\winutil\\")

    //val conf = new SparkConf().setAppName("WordCount")
    //  .setMaster("spark://Master:7077")
    //  .setSparkHome("SPARK_HOME")
    //  .set("spark.cores.max", "2")
    //val spark = new SparkContext(conf)

    // Run the WordCount program in local mode
    val spark = new SparkContext("local", "WordCount", System.getenv("SPARK_HOME"))
    val file = spark.textFile("hdfs://Master:9000/data/test1")

    // Print the result directly to the console
    //file.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collect().foreach(println)
    val wordCounts = file.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)

    // Write the result to HDFS
    wordCounts.saveAsTextFile("hdfs://Master:9000/user/huxiu/WordCountOutput")

    spark.stop()

    println("Test is Succeed!!!")
  }
}
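For comparison, the commented-out lines show roughly how the same job would be submitted to the standalone cluster instead of being run in local mode. Below is a minimal sketch of that variant; the master URL spark://Master:7077 and the cores setting come from the comments above, while the jar path passed to setJars is a hypothetical placeholder that would have to point at your built application jar.

package com.husor.Spark

import org.apache.spark.{SparkContext, SparkConf}

// Sketch only: cluster-mode variant of the same WordCount job.
// Assumes a standalone master at spark://Master:7077 and that the
// application has been packaged into a jar reachable by the driver.
object SparkWordCountCluster {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("WordCount")
      .setMaster("spark://Master:7077")          // assumed master URL (from the comments above)
      .setSparkHome(System.getenv("SPARK_HOME"))
      .set("spark.cores.max", "2")
      .setJars(Seq("D:\\IntelliJ_IDE\\WorkSpace\\out\\wordcount.jar")) // hypothetical jar path

    val spark = new SparkContext(conf)
    val file = spark.textFile("hdfs://Master:9000/data/test1")
    val wordCounts = file.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    wordCounts.saveAsTextFile("hdfs://Master:9000/user/huxiu/WordCountOutput")
    spark.stop()
  }
}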

The exceptions encountered while running the WordCount program above were as follows:

Exception 1:

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries ...............

Reason: a known Hadoop issue on Windows. The "null" in the path shows that hadoop.home.dir is not set, so Hadoop cannot locate winutils.exe.

Solution: http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7

Namely,

1) download compiled winutils.exe from
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
2) put this file into d:\winutil\bin
3) add the following to the test code, as sketched below: System.setProperty("hadoop.home.dir", "d:\\winutil\\")
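The key point of step 3 is ordering: hadoop.home.dir has to be set before any Spark or Hadoop code touches the local filesystem, i.e. before the SparkContext is created. A minimal sketch of that placement (the d:\winutil\ path is simply the directory that contains bin\winutils.exe from step 2):

import org.apache.spark.SparkContext

object WinutilsPlacement {
  def main(args: Array[String]) {
    // Set first, before the SparkContext exists; otherwise Hadoop's shell
    // utilities still resolve winutils.exe against a null hadoop.home.dir.
    System.setProperty("hadoop.home.dir", "d:\\winutil\\")

    val spark = new SparkContext("local", "WordCount", System.getenv("SPARK_HOME"))
    // ... job body as in SparkWordCount above ...
    spark.stop()
  }
}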

Exception 2:

Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=huxiu, access=WRITE, inode="/":Spark:supergroup:drwxr-xr-x

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)

Reason:

From the exception above it is easy to see that the job, running as user huxiu, is trying to create a directory under "/", which is owned by user Spark in the group supergroup. Since the permissions on "/" (rwxr-xr-x) give other users no write access, writes under "/" fail for huxiu.
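A quick way to confirm this is to ask the NameNode directly for the owner and mode bits of "/". The sketch below is illustrative only (the object name and the use of the HDFS FileSystem API are assumptions, not part of the original post); the NameNode URI hdfs://Master:9000 is taken from the job above.

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Sketch: print the owner, group, and permissions of the HDFS root directory.
object CheckRootPermissions {
  def main(args: Array[String]) {
    val fs = FileSystem.get(new URI("hdfs://Master:9000"), new Configuration())
    val status = fs.getFileStatus(new Path("/"))
    // Expected to print something like "Spark supergroup rwxr-xr-x":
    // only the owner (Spark) may write under "/".
    println(s"${status.getOwner} ${status.getGroup} ${status.getPermission}")
    fs.close()
  }
}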

Solution: http://www.hadoopinrealworld.com/fixing-org-apache-hadoop-security-accesscontrolexception-permission-denied/

Namely,

1> To keep things clean and for better control, let's specify the location of the staging directory by setting the mapreduce.jobtracker.staging.root.dir property in mapred-site.xml. After the property is set, restart the mapred service for it to take effect.

<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>

2> Several suggestions online recommend doing a chmod 777 on /user. This is not advisable, since it would let any user delete or modify other users' files in HDFS. Instead, create a folder named huxiu under /user as the HDFS superuser (in our case, Spark), and then change the folder's owner to huxiu.

 [Spark@Master hadoop]$ hadoop fs -mkdir /user
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[Spark@Master hadoop]$ hadoop fs -mkdir /user/huxiu
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[Spark@Master hadoop]$ hadoop fs -chown huxiu:huxiu /user/huxiu
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

After applying these fixes, the program runs successfully and produces the following output:

"C:\Program Files\Java\jdk1.7.0_67\bin\java" -Didea.launcher.port= "-Didea.launcher.bin.path=D:\ScalaIDE\IntelliJ IDEA Community Edition 14.0.1\bin" -Dfile.encoding=UTF- -classpath "C:\Program Files\Java\jdk1.7.0_67\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\jce.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\jfxrt.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\resources.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\rt.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.7.0_67\jre\lib\ext\zipfs.jar;D:\IntelliJ_IDE\WorkSpace\out\production\Test;D:\vagrant\data\Scala2.10.4\lib\scala-actors-migration.jar;D:\vagrant\data\Scala2.10.4\lib\scala-actors.jar;D:\vagrant\data\Scala2.10.4\lib\scala-library.jar;D:\vagrant\data\Scala2.10.4\lib\scala-reflect.jar;D:\vagrant\data\Scala2.10.4\lib\scala-swing.jar;D:\SparkSrc\spark-assembly-1.1.0-hadoop2.4.0.jar;D:\ScalaIDE\IntelliJ IDEA Community Edition 14.0.1\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain com.husor.Spark.SparkWordCount
Test is starting......
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
// :: INFO SecurityManager: Changing view acls to: huxiu,
// :: INFO SecurityManager: Changing modify acls to: huxiu,
// :: INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(huxiu, ); users with modify permissions: Set(huxiu, )
// :: INFO Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@huxiu-PC:54972]
// :: INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@huxiu-PC:54972]
// :: INFO Utils: Successfully started service 'sparkDriver' on port .
// :: INFO SparkEnv: Registering MapOutputTracker
// :: INFO SparkEnv: Registering BlockManagerMaster
// :: INFO DiskBlockManager: Created local directory at C:\Users\huxiu\AppData\Local\Temp\spark-local--9dad
// :: INFO Utils: Successfully started service 'Connection manager for block manager' on port .
// :: INFO ConnectionManager: Bound socket to port with id = ConnectionManagerId(huxiu-PC,)
// :: INFO MemoryStore: MemoryStore started with capacity 969.6 MB
// :: INFO BlockManagerMaster: Trying to register BlockManager
// :: INFO BlockManagerMasterActor: Registering block manager huxiu-PC: with 969.6 MB RAM
// :: INFO BlockManagerMaster: Registered BlockManager
// :: INFO HttpFileServer: HTTP File server directory is C:\Users\huxiu\AppData\Local\Temp\spark-423dcd83-624e-404a-bbf6-a1190f77290f
// :: INFO HttpServer: Starting HTTP Server
// :: INFO Utils: Successfully started service 'HTTP file server' on port .
// :: INFO Utils: Successfully started service 'SparkUI' on port .
// :: INFO SparkUI: Started SparkUI at http://huxiu-PC:4040
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@huxiu-PC:54972/user/HeartbeatReceiver
// :: INFO MemoryStore: ensureFreeSpace() called with curMem=, maxMem=
// :: INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 159.9 KB, free 969.4 MB)
// :: INFO FileInputFormat: Total input paths to process :
// :: INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
// :: INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
// :: INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
// :: INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
// :: INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
// :: INFO SparkContext: Starting job: saveAsTextFile at SparkWordCount.scala:
// :: INFO DAGScheduler: Registering RDD (map at SparkWordCount.scala:)
// :: INFO DAGScheduler: Got job (saveAsTextFile at SparkWordCount.scala:) with output partitions (allowLocal=false)
// :: INFO DAGScheduler: Final stage: Stage (saveAsTextFile at SparkWordCount.scala:)
// :: INFO DAGScheduler: Parents of final stage: List(Stage )
// :: INFO DAGScheduler: Missing parents: List(Stage )
// :: INFO DAGScheduler: Submitting Stage (MappedRDD[] at map at SparkWordCount.scala:), which has no missing parents
// :: INFO MemoryStore: ensureFreeSpace() called with curMem=, maxMem=
// :: INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.3 KB, free 969.4 MB)
// :: INFO DAGScheduler: Submitting missing tasks from Stage (MappedRDD[] at map at SparkWordCount.scala:)
// :: INFO TaskSchedulerImpl: Adding task set 1.0 with tasks
// :: INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID , localhost, ANY, bytes)
// :: INFO Executor: Running task 0.0 in stage 1.0 (TID )
// :: INFO HadoopRDD: Input split: hdfs://Master:9000/data/test1:0+27
// :: INFO Executor: Finished task 0.0 in stage 1.0 (TID ). bytes result sent to driver
// :: INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID ) in ms on localhost (/)
// :: INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
// :: INFO DAGScheduler: Stage (map at SparkWordCount.scala:) finished in 0.500 s
// :: INFO DAGScheduler: looking for newly runnable stages
// :: INFO DAGScheduler: running: Set()
// :: INFO DAGScheduler: waiting: Set(Stage )
// :: INFO DAGScheduler: failed: Set()
// :: INFO DAGScheduler: Missing parents for Stage : List()
// :: INFO DAGScheduler: Submitting Stage (MappedRDD[] at saveAsTextFile at SparkWordCount.scala:), which is now runnable
// :: INFO MemoryStore: ensureFreeSpace() called with curMem=, maxMem=
// :: INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 56.2 KB, free 969.4 MB)
// :: INFO DAGScheduler: Submitting missing tasks from Stage (MappedRDD[] at saveAsTextFile at SparkWordCount.scala:)
// :: INFO TaskSchedulerImpl: Adding task set 0.0 with tasks
// :: INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID , localhost, PROCESS_LOCAL, bytes)
// :: INFO Executor: Running task 0.0 in stage 0.0 (TID )
// :: INFO BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: , targetRequestSize:
// :: INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting non-empty blocks out of blocks
// :: INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started remote fetches in ms
// :: INFO FileOutputCommitter: Saved output of task 'attempt_201411261415_0000_m_000000_1' to hdfs://Master:9000/user/huxiu/WordCountOutput/_temporary/0/task_201411261415_0000_m_000000
// :: INFO SparkHadoopWriter: attempt_201411261415_0000_m_000000_1: Committed
// :: INFO Executor: Finished task 0.0 in stage 0.0 (TID ). bytes result sent to driver
// :: INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID ) in ms on localhost (/)
// :: INFO DAGScheduler: Stage (saveAsTextFile at SparkWordCount.scala:) finished in 0.847 s
// :: INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
// :: INFO SparkContext: Job finished: saveAsTextFile at SparkWordCount.scala:, took 1.469630513 s
// :: INFO SparkUI: Stopped Spark web UI at http://huxiu-PC:4040
// :: INFO DAGScheduler: Stopping DAGScheduler
// :: INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
// :: INFO ConnectionManager: Selector thread was interrupted!
// :: INFO ConnectionManager: ConnectionManager stopped
// :: INFO MemoryStore: MemoryStore cleared
// :: INFO BlockManager: BlockManager stopped
// :: INFO BlockManagerMaster: BlockManagerMaster stopped
Test is Succeed!!!
// :: INFO SparkContext: Successfully stopped SparkContext
// :: INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
// :: INFO Remoting: Remoting shut down
// :: INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.

Process finished with exit code
