For Hadoop installation and configuration, see my earlier article: Setting up a Hadoop 2.6.0 pseudo-distributed environment in a Win7 virtual machine.

This article describes how to set up a single-machine Spark 1.4.0 environment on top of Hadoop 2.6.0.

1. Software preparation

scala-2.11.7.tgz

spark-1.4.0-bin-hadoop2.6.tgz

Both can be downloaded from their official websites.
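
If you prefer the command line, the archives can be fetched with wget. The URLs below are the usual archive locations and are an assumption on my part, so verify them against the official download pages:

wget http://www.scala-lang.org/files/archive/scala-2.11.7.tgz
wget http://archive.apache.org/dist/spark/spark-1.4.0/spark-1.4.0-bin-hadoop2.6.tgz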

2. Installing and configuring Scala

scala-2.11.7.tgz only needs to be extracted. I extracted it to /home/vm/tools/scala (one way to do this is shown below) and then configured the environment variables in ~/.bash_profile.
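
A possible layout, as a sketch; adjust the paths if yours differ:

tar -zxf scala-2.11.7.tgz -C /home/vm/tools/
mv /home/vm/tools/scala-2.11.7 /home/vm/tools/scala

The corresponding ~/.bash_profile entries: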

#scala

export SCALA_HOME=/home/vm/tools/scala

export PATH=$SCALA_HOME/bin:$PATH

Run source ~/.bash_profile to make the changes take effect.

Verify that Scala was installed successfully:
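
For example, check the version from the shell (the exact copyright line may differ between builds):

 vm@ubuntu:~$ scala -version
 Scala code runner version 2.11.7 -- Copyright 2002-2013, LAMP/EPFL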

Use Scala interactively:
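
Running scala with no arguments starts the REPL; evaluating a trivial expression confirms everything is wired up (a minimal illustration):

 vm@ubuntu:~$ scala

 scala> 1 + 1
 res0: Int = 2

 scala> :quit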

3. Installing and configuring Spark

Extract spark-1.4.0-bin-hadoop2.6.tgz to /home/vm/tools/spark (one way is shown below), then configure the environment variables in ~/.bash_profile.
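
A sketch of the extraction, matching the paths used in this article:

tar -zxf spark-1.4.0-bin-hadoop2.6.tgz -C /home/vm/tools/
mv /home/vm/tools/spark-1.4.0-bin-hadoop2.6 /home/vm/tools/spark

Then add to ~/.bash_profile: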

#spark

export SPARK_HOME=/home/vm/tools/spark

export PATH=$SPARK_HOME/bin:$PATH
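
The files under $SPARK_HOME/conf ship as *.template examples; if spark-env.sh, spark-defaults.conf, or slaves do not exist yet, create them from the templates first:

cd /home/vm/tools/spark/conf
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
cp slaves.template slaves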

Edit $SPARK_HOME/conf/spark-env.sh:

export SPARK_HOME=/home/vm/tools/spark

export SCALA_HOME=/home/vm/tools/scala

export JAVA_HOME=/home/vm/tools/jdk

export SPARK_MASTER_IP=192.168.62.129

export SPARK_WORKER_MEMORY=512m

Edit $SPARK_HOME/conf/spark-defaults.conf:

spark.master spark://192.168.62.129:7077

spark.serializer org.apache.spark.serializer.KryoSerializer

Edit $SPARK_HOME/conf/slaves and list the worker host:

192.168.62.129

(192.168.62.129 is the IP address of my test machine.)

Start Spark:

cd /home/vm/tools/spark/sbin

sh start-all.sh
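
If startup succeeded, jps should list a Master and a Worker process alongside the Hadoop daemons such as NameNode and DataNode (the PIDs will of course differ):

 vm@ubuntu:~$ jps
 xxxx Master
 xxxx Worker
 ...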

Test whether the Spark installation succeeded:

cd $SPARK_HOME/bin/

./run-example SparkPi

SparkPi execution log (abridged; timestamps omitted):

 vm@ubuntu:~/tools/spark/bin$ ./run-example SparkPi
 Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
 INFO SparkContext: Running Spark version 1.4.0
 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 INFO Slf4jLogger: Slf4jLogger started
 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.62.129:34337]
 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
 INFO SparkUI: Started SparkUI at http://192.168.62.129:4040
 INFO SparkContext: Added JAR file:/home/vm/tools/spark/lib/spark-examples-1.4.0-hadoop2.6.0.jar at http://192.168.62.129:56880/jars/spark-examples-1.4.0-hadoop2.6.0.jar with timestamp 1438099360726
 INFO Executor: Starting executor ID driver on host localhost
 INFO BlockManagerMaster: Registered BlockManager
 INFO SparkContext: Starting job: reduce at SparkPi.scala
 ...
 INFO DAGScheduler: ResultStage (reduce at SparkPi.scala) finished in 2.817 s
 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
 INFO DAGScheduler: Job finished: reduce at SparkPi.scala, took 4.244145 s
 Pi is roughly 3.14622
 INFO SparkUI: Stopped Spark web UI at http://192.168.62.129:4040
 INFO SparkContext: Successfully stopped SparkContext
 INFO Utils: Shutdown hook called

Open http://192.168.62.129:8080 in a browser to view the basic status of the Spark cluster and its jobs.

4. The spark-shell tool

Run ./spark-shell under /home/vm/tools/spark/bin to enter the interactive spark-shell, which is handy for trying things out and debugging.

 vm@ubuntu:~/tools/spark/bin$ ./spark-shell
 log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
 Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
 ...
 Welcome to
       ____              __
      / __/__  ___ _____/ /__
     _\ \/ _ \/ _ `/ __/  '_/
    /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
       /_/

 Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
 Type in expressions to have them evaluated.
 Type :help for more information.
 ...
 INFO SparkUI: Started SparkUI at http://192.168.62.129:4040
 INFO Executor: Starting executor ID driver on host localhost
 INFO BlockManagerMaster: Registered BlockManager
 INFO SparkILoop: Created spark context..
 Spark context available as sc.
 ...
 INFO SparkILoop: Created sql context (with Hive support)..
 SQL context available as sqlContext.

 scala>
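
As a quick sanity check you can run a small job straight from the prompt; a minimal example (any simple RDD operation will do):

 scala> sc.parallelize(1 to 100).reduce(_ + _)
 res0: Int = 5050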

The next article will cover setting up a Spark development environment with Eclipse and with IntelliJ IDEA.
