Spark Single-Node Cluster
1. Create a user
# useradd spark
# passwd spark
2. Download the software
Download JDK, Scala, SBT, and Maven. The versions used here are:
JDK jdk-7u79-linux-x64.gz
Scala scala-2.10.5.tgz
SBT sbt-0.13.7.zip
Maven apache-maven-3.2.5-bin.tar.gz
Note: if you only want a Spark runtime environment, the JDK and Scala are enough; SBT and Maven are only needed for building Spark from source later.
3. Extract the files and configure environment variables
# cd /usr/local/
# tar xvf /root/jdk-7u79-linux-x64.gz
# tar xvf /root/scala-2.10.5.tgz
# tar xvf /root/apache-maven-3.2.5-bin.tar.gz
# unzip /root/sbt-0.13.7.zip
Edit the environment variable configuration file:
# vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/usr/local/scala-2.10.5
export MAVEN_HOME=/usr/local/apache-maven-3.2.5
export SBT_HOME=/usr/local/sbt
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$MAVEN_HOME/bin:$SBT_HOME/bin
Apply the configuration:
# source /etc/profile
Verify that the environment variables took effect:
# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
# scala -version
Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL
# mvn -version
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-15T01:29:23+08:00)
Maven home: /usr/local/apache-maven-3.2.5
Java version: 1.7.0_79, vendor: Oracle Corporation
Java home: /usr/local/jdk1.7.0_79/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-229.el7.x86_64", arch: "amd64", family: "unix"
# sbt --version
sbt launcher version 0.13.7
4. Bind the hostname
[root@spark01 ~]# vim /etc/hosts
192.168.244.147 spark01
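A quick lookup confirms the binding took effect (ping is just one option; the reply should come from 192.168.244.147):
[root@spark01 ~]# ping -c 1 spark01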
5. Configure Spark
Switch to the spark user.
Download Hadoop and Spark; wget works fine for this (see the example after the links below):
spark-1.4.0 http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz
Hadoop http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
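As the spark user, the downloads might look like this, using the URLs listed above:
[spark@spark01 ~]$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz
[spark@spark01 ~]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz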
Extract the files and configure environment variables.
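Assuming both archives sit in the spark user's home directory (consistent with the $HOME-based paths used in .bash_profile below), extraction might look like:
[spark@spark01 ~]$ tar xvf spark-1.4.0-bin-hadoop2.6.tgz
[spark@spark01 ~]$ tar xvf hadoop-2.6.0.tar.gz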
Edit the spark user's environment variable file:
[spark@spark01 ~]$ vim .bash_profile
export SPARK_HOME=$HOME/spark-1.4.0-bin-hadoop2.6
export HADOOP_HOME=$HOME/hadoop-2.6.0
export HADOOP_CONF_DIR=$HOME/hadoop-2.6.0/etc/hadoop
export PATH=$PATH:$SPARK_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the configuration:
[spark@spark01 ~]$ source .bash_profile
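A quick sanity check that the variables took effect (the expected value follows from the spark user's home being /home/spark):
[spark@spark01 ~]$ echo $SPARK_HOME
/home/spark/spark-1.4.0-bin-hadoop2.6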
Edit the Spark configuration file:
[spark@spark01 ~]$ cd spark-1.4.0-bin-hadoop2.6/conf/
[spark@spark01 conf]$ cp spark-env.sh.template spark-env.sh
[spark@spark01 conf]$ vim spark-env.sh
Append the following at the end:
export SCALA_HOME=/usr/local/scala-2.10.5
export SPARK_MASTER_IP=spark01
export SPARK_WORKER_MEMORY=1500m
export JAVA_HOME=/usr/local/jdk1.7.0_79
If you have the resources, set SPARK_WORKER_MEMORY a bit higher; my VM only has 2 GB of RAM, so I gave it 1500m.
Configure the slaves file
[spark@spark01 conf]$ cp slaves.template slaves
[spark@spark01 conf]$ vim slaves
Change localhost to spark01.
Start the master
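After the edit, the file (aside from any comment lines) should list just the one hostname:
[spark@spark01 conf]$ cat slaves
spark01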
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out
Check the contents of the log file mentioned above:
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/
[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out
Spark Command: /usr/local/jdk1.7.0_79/bin/java -cp /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../conf/:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/home/spark/hadoop-2.6.0/etc/hadoop/ -Xms512m -Xmx512m -XX:MaxPermSize=128m org.apache.spark.deploy.master.Master --ip spark01 --port 7077 --webui-port 8080
========================================
16/01/16 15:12:30 INFO master.Master: Registered signal handlers for [TERM, HUP, INT]
16/01/16 15:12:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:12:32 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:12:33 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:12:33 INFO Remoting: Starting remoting
16/01/16 15:12:33 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark01:7077]
16/01/16 15:12:33 INFO util.Utils: Successfully started service 'sparkMaster' on port 7077.
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@spark01:6066
16/01/16 15:12:34 INFO util.Utils: Successfully started service on port 6066.
16/01/16 15:12:34 INFO rest.StandaloneRestServer: Started REST server for submitting applications on port 6066
16/01/16 15:12:34 INFO master.Master: Starting Spark master at spark://spark01:7077
16/01/16 15:12:34 INFO master.Master: Running Spark version 1.4.0
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:8080
16/01/16 15:12:34 INFO util.Utils: Successfully started service 'MasterUI' on port 8080.
16/01/16 15:12:34 INFO ui.MasterWebUI: Started MasterWebUI at http://192.168.244.147:8080
16/01/16 15:12:34 INFO master.Master: I have been elected leader! New state: ALIVE
The log confirms that the master started normally.
Now take a look at the master's web UI, which listens on port 8080 by default.
Start the worker
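If no desktop browser is handy, a command-line check (assuming curl is installed) also verifies the UI is serving:
[spark@spark01 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://spark01:8080
200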
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-slaves.sh spark://spark01:7077
spark01: Warning: Permanently added 'spark01,192.168.244.147' (ECDSA) to the list of known hosts.
spark@spark01's password:
spark01: starting org.apache.spark.deploy.worker.Worker, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out
Enter the spark user's password on spark01 when prompted.
You can confirm from the log that the worker started normally; the full output is too long to paste here.
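The password prompt appears because start-slaves.sh connects over SSH. Setting up passwordless SSH for the spark user (optional; a standard sketch) avoids the prompt on future restarts:
[spark@spark01 ~]$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
[spark@spark01 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[spark@spark01 ~]$ chmod 600 ~/.ssh/authorized_keys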
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/
[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out
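Rather than reading the whole file, grepping for the registration message is enough; a line like "Successfully registered with master spark://spark01:7077" (exact wording may vary by version) means the worker joined the cluster:
[spark@spark01 logs]$ grep -i "registered with master" spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out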
Start the Spark shell
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ bin/spark-shell --master spark://spark01:7077
16/01/16 15:33:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:33:18 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:18 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:18 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42300
16/01/16 15:33:18 INFO util.Utils: Successfully started service 'HTTP class server' on port 42300.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
16/01/16 15:33:30 INFO spark.SparkContext: Running Spark version 1.4.0
16/01/16 15:33:30 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:31 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:33:31 INFO Remoting: Starting remoting
16/01/16 15:33:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.244.147:43850]
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'sparkDriver' on port 43850.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering MapOutputTracker
16/01/16 15:33:31 INFO spark.SparkEnv: Registering BlockManagerMaster
16/01/16 15:33:31 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/blockmgr-0e855210-3609-4204-b5e3-151e0c096c15
16/01/16 15:33:31 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
16/01/16 15:33:31 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/httpd-56ac16d2-dd82-41cb-99d7-4d11ef36b42e
16/01/16 15:33:31 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47633
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'HTTP file server' on port 47633.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/01/16 15:33:31 INFO ui.SparkUI: Started SparkUI at http://192.168.244.147:4040
16/01/16 15:33:32 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark01:7077/user/Master...
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160116153332-0000
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor added: app-20160116153332-0000/0 on worker-20160116152314-192.168.244.147-58914 (192.168.244.147:58914) with 2 cores
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160116153332-0000/0 on hostPort 192.168.244.147:58914 with 2 cores, 512.0 MB RAM
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-20160116153332-0000/0 is now LOADING
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-20160116153332-0000/0 is now RUNNING
16/01/16 15:33:34 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33146.
16/01/16 15:33:34 INFO netty.NettyBlockTransferService: Server created on 33146
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/01/16 15:33:34 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33146 with 265.4 MB RAM, BlockManagerId(driver, 192.168.244.147, 33146)
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Registered BlockManager
16/01/16 15:33:34 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/01/16 15:33:34 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/01/16 15:33:38 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
16/01/16 15:33:43 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/01/16 15:33:43 INFO metastore.ObjectStore: ObjectStore, initialize called
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/01/16 15:33:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.244.147:46741/user/Executor#-2043358626]) with ID 0
16/01/16 15:33:44 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:45 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33017 with 265.4 MB RAM, BlockManagerId(0, 192.168.244.147, 33017)
16/01/16 15:33:46 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:48 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/01/16 15:33:48 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO metastore.ObjectStore: Initialized ObjectStore
16/01/16 15:33:54 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added admin role in metastore
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added public role in metastore
16/01/16 15:33:56 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/01/16 15:33:56 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/01/16 15:33:56 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>
With the Spark shell open, you can run a simple program to say hello to the world:
scala> println("helloworld")
helloworld
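For a slightly more substantial smoke test than println, the bundled SparkPi example can be submitted against the standalone master from another shell (a sketch; the examples jar name assumes the prebuilt 1.4.0 distribution):
[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ bin/spark-submit --master spark://spark01:7077 --class org.apache.spark.examples.SparkPi lib/spark-examples-1.4.0-hadoop2.6.0.jar 10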
Look at the Spark web UI again: entries now appear under Workers and Running Applications.
At this point, the Spark pseudo-distributed environment is set up.
A few points worth noting:
1. Maven and SBT above are optional: they are only needed for building Spark from source later, so if you just want a working Spark environment you can skip downloading them.
2. This pseudo-distributed setup is the foundation of a full cluster; only a few changes are needed before copying it out to the slave nodes. Space is limited, so that will be covered in a later post.