Learning Spark (Part 2): Distributed Installation of a Spark 2.3 HA Cluster
I. Download the Spark Installation Package
1. From the official site
http://spark.apache.org/downloads.html
2. From the HUST mirror (Huazhong University of Science and Technology)
http://mirrors.hust.edu.cn/apache/
3. From the Tsinghua mirror
https://mirrors.tuna.tsinghua.edu.cn/apache/
II. Prerequisites
1. Java 8 installed and working
2. ZooKeeper installed and working
3. Hadoop 2.7.5 HA installed and working
4. Scala installed (the Spark processes can be started even without it)
III. Spark Installation
1. Upload and extract the package
[hadoop@hadoop1 ~]$ ls
apps data exam inithive.conf movie spark-2.3.0-bin-hadoop2.7.tgz udf.jar
cookies data.txt executions json.txt projects student zookeeper.out
course emp hive.sql log sougou temp
[hadoop@hadoop1 ~]$ tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz -C apps/
2. Create a symbolic link to the installation directory
[hadoop@hadoop1 ~]$ cd apps/
[hadoop@hadoop1 apps]$ ls
hadoop-2.7. hbase-1.2. spark-2.3.0-bin-hadoop2.7 zookeeper-3.4. zookeeper.out
[hadoop@hadoop1 apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[hadoop@hadoop1 apps]$ ll
total
drwxr-xr-x. hadoop hadoop Mar  hadoop-2.7.
drwxrwxr-x. hadoop hadoop Mar  hbase-1.2.
lrwxrwxrwx. hadoop hadoop Apr  spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x. hadoop hadoop Feb  spark-2.3.0-bin-hadoop2.7
drwxr-xr-x. hadoop hadoop Mar  zookeeper-3.4.
-rw-rw-r--. hadoop hadoop Mar  zookeeper.out
[hadoop@hadoop1 apps]$
3. Edit the configuration files under spark/conf
(1) Go to the configuration directory
[hadoop@hadoop1 ~]$ cd apps/spark/conf/
[hadoop@hadoop1 conf]$ ll
total
-rw-r--r--. hadoop hadoop Feb  docker.properties.template
-rw-r--r--. hadoop hadoop Feb  fairscheduler.xml.template
-rw-r--r--. hadoop hadoop Feb  log4j.properties.template
-rw-r--r--. hadoop hadoop Feb  metrics.properties.template
-rw-r--r--. hadoop hadoop Feb  slaves.template
-rw-r--r--. hadoop hadoop Feb  spark-defaults.conf.template
-rwxr-xr-x. hadoop hadoop Feb  spark-env.sh.template
[hadoop@hadoop1 conf]$
(2) Copy spark-env.sh.template to spark-env.sh and append the following settings at the end of the file
[hadoop@hadoop1 conf]$ cp spark-env.sh.template spark-env.sh
[hadoop@hadoop1 conf]$ vi spark-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_73
#export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.5
export HADOOP_CONF_DIR=/home/hadoop/apps/hadoop-2.7.5/etc/hadoop
export SPARK_WORKER_MEMORY=500m
export SPARK_WORKER_CORES=1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 -Dspark.deploy.zookeeper.dir=/spark"
Note:
#export SPARK_MASTER_IP=hadoop1  (this setting must be commented out)
The Spark parameters used here may differ from what you would use on a real cluster; they are deliberately small for a personal machine, because overly large memory settings will make it run very slowly.
Explanation:
-Dspark.deploy.recoveryMode=ZOOKEEPER means that the state of the whole cluster is maintained, and recovered, through ZooKeeper. In other words, ZooKeeper provides Spark's HA: when the active Master dies, a standby Master must read the full cluster state from ZooKeeper and restore the state of all Workers, Drivers, and Applications before it can become the new active Master.
-Dspark.deploy.zookeeper.url=hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181 lists every machine that runs ZooKeeper and could become the active Master (I used four machines, so four are listed here).
-Dspark.deploy.zookeeper.dir=/spark
How does this dir differ from dataDir in ZooKeeper's zoo.cfg?
-Dspark.deploy.zookeeper.dir is the ZooKeeper path under which Spark stores its recovery metadata, i.e. the running state of Spark jobs.
ZooKeeper keeps all of the Spark cluster's state, including every Worker, every Application, and every Driver, so that the cluster can be restored if the active Master goes down.
(3) Copy slaves.template to slaves
[hadoop@hadoop1 conf]$ cp slaves.template slaves
[hadoop@hadoop1 conf]$ vi slaves
and add the following entries:
hadoop1
hadoop2
hadoop3
hadoop4
(4) Distribute the installation directory to the other nodes
[hadoop@hadoop1 ~]$ cd apps/
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop2:$PWD
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop3:$PWD
[hadoop@hadoop1 apps]$ scp -r spark-2.3.0-bin-hadoop2.7/ hadoop4:$PWD
Create the symbolic link on each of the other nodes as well:
[hadoop@hadoop2 ~]$ cd apps/
[hadoop@hadoop2 apps]$ ls
hadoop-2.7. hbase-1.2. spark-2.3.0-bin-hadoop2.7 zookeeper-3.4.
[hadoop@hadoop2 apps]$ ln -s spark-2.3.0-bin-hadoop2.7/ spark
[hadoop@hadoop2 apps]$ ll
total
drwxr-xr-x hadoop hadoop Mar  hadoop-2.7.
drwxrwxr-x hadoop hadoop Mar  hbase-1.2.
lrwxrwxrwx hadoop hadoop Apr  spark -> spark-2.3.0-bin-hadoop2.7/
drwxr-xr-x hadoop hadoop Apr  spark-2.3.0-bin-hadoop2.7
drwxr-xr-x hadoop hadoop Mar  zookeeper-3.4.
[hadoop@hadoop2 apps]$
4. Configure environment variables
This must be done on every node.
[hadoop@hadoop1 spark]$ vi ~/.bashrc
#Spark
export SPARK_HOME=/home/hadoop/apps/spark
export PATH=$PATH:$SPARK_HOME/bin
Save the file and make it take effect immediately:
[hadoop@hadoop1 spark]$ source ~/.bashrc
IV. Startup
1. Start the ZooKeeper cluster first
Run this on every node:
[hadoop@hadoop1 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4./bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4./bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop1 ~]$
2. Then start the HDFS cluster
Running it on any one node is enough:
[hadoop@hadoop1 ~]$ start-dfs.sh
3. Then start the Spark cluster
Run this on one node:
[hadoop@hadoop1 ~]$ cd apps/spark/sbin/
[hadoop@hadoop1 sbin]$ start-all.sh
4. Check the processes
5. Problem
Checking the processes shows that only hadoop1 started the Master process successfully; the other three nodes did not. The Master has to be started on them manually: go to /home/hadoop/apps/spark/sbin and run the following command on each of the three nodes.
[hadoop@hadoop2 ~]$ cd ~/apps/spark/sbin/
[hadoop@hadoop2 sbin]$ start-master.sh
6. Check the processes again
Both the Master and Worker processes have now started successfully.
V. Verification
1. Check the Master state in the web UI
hadoop1 is in the ALIVE state, while hadoop2, hadoop3, and hadoop4 are all in the STANDBY state.
hadoop1 node
hadoop2 node
hadoop3 node
hadoop4 node
2. Verify HA failover
Manually kill the Master process on hadoop1 and watch whether the cluster fails over automatically.
After killing the Master on hadoop1, check the web UIs again.
hadoop1 node: since its Master process was killed, its web UI can no longer be reached.
hadoop2 node: after the old Master died, the Master on hadoop2 successfully took over and is now in the ALIVE state.
hadoop3 node
hadoop4 node
VI. Running Spark Programs on Standalone
1. Run your first Spark program
[hadoop@hadoop3 ~]$ /home/hadoop/apps/spark/bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master spark://hadoop1:7077 \
> --executor-memory 500m \
> --total-executor-cores \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
>
The spark://hadoop1:7077 above is the Master URL shown in the web UI.
Run result:
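SparkPi estimates π by Monte Carlo sampling: it throws random points into a square and counts how many land inside the inscribed circle. As a rough sketch of the same idea (not the actual SparkPi source), the following can be pasted into a spark-shell attached to the cluster; the partition and sample counts are assumptions:

```scala
// Monte Carlo estimate of Pi, in the spirit of the SparkPi example.
// Assumes it runs inside spark-shell, where `sc` already exists.
val slices  = 2                       // assumed number of partitions
val samples = 100000 * slices         // assumed number of random points
val inside = sc.parallelize(1 to samples, slices).map { _ =>
  val x = math.random * 2 - 1         // random point in the square [-1, 1] x [-1, 1]
  val y = math.random * 2 - 1
  if (x * x + y * y <= 1) 1 else 0    // 1 if the point falls inside the unit circle
}.reduce(_ + _)
println(s"Pi is roughly ${4.0 * inside / samples}")
```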
2. Start the spark shell
[hadoop@hadoop1 ~]$ /home/hadoop/apps/spark/bin/spark-shell \
> --master spark://hadoop1:7077 \
> --executor-memory 500m \
> --total-executor-cores 1
Parameter notes:
--master spark://hadoop1:7077: specifies the Master address
--executor-memory 500m: sets the memory available to each executor (worker) to 500 MB
--total-executor-cores 1: limits the whole application to 1 CPU core across the cluster
Note:
If you start the spark shell without specifying a master address, it still starts and runs programs normally, but it is actually running in Spark's local mode: only a single process is started on the local machine, with no connection to the cluster.
The spark shell has already initialized a SparkContext as the object sc; user code that needs it can use sc directly.
The spark shell has also initialized a SparkSession (the Spark SQL entry point) as the object spark; user code that needs it can use spark directly.
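For example, the following throwaway lines (a trivial sketch, nothing here depends on the cluster layout) can be typed into the shell to confirm that both objects are ready to use:

```scala
// `sc` and `spark` are created by spark-shell itself; nothing needs to be constructed.
val nums = sc.parallelize(1 to 10)     // RDD API through the SparkContext
println(nums.sum())                    // prints 55.0

val df = spark.range(5).toDF("id")     // DataFrame API through the SparkSession
df.show()                              // prints the ids 0 through 4
```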
3. Write a WordCount program in the spark shell
(1) Create a hello.txt file and upload it to the /spark directory on HDFS
[hadoop@hadoop1 ~]$ vi hello.txt
[hadoop@hadoop1 ~]$ hadoop fs -mkdir -p /spark
[hadoop@hadoop1 ~]$ hadoop fs -put hello.txt /spark
hello.txt contains:
you,jump
i,jump
you,jump
i,jump
jump
(2) Write the Spark program in Scala in the spark shell
scala> sc.textFile("/spark/hello.txt").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).saveAsTextFile("/spark/out")
Explanation (a step-by-step version follows below):
sc is the SparkContext object, the entry point for submitting Spark programs
textFile("/spark/hello.txt") reads the data from HDFS
flatMap(_.split(",")) splits each line on commas and flattens the result
map((_,1)) turns each word into a (word, 1) tuple
reduceByKey(_+_) reduces by key, adding up the values
saveAsTextFile("/spark/out") writes the result back to HDFS
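The same job, unrolled into named intermediate RDDs (functionally identical to the one-liner above; only the naming is added for readability):

```scala
// The WordCount one-liner above, written out step by step.
val lines  = sc.textFile("/spark/hello.txt")    // one element per line of the file
val words  = lines.flatMap(_.split(","))        // split on commas and flatten
val pairs  = words.map(word => (word, 1))       // pair each word with a count of 1
val counts = pairs.reduceByKey(_ + _)           // add up the counts per word
counts.saveAsTextFile("/spark/out")             // write part files under /spark/out
```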
(3) Check the result with an HDFS command
[hadoop@hadoop2 ~]$ hadoop fs -cat /spark/out/p*
(jump,5)
(you,2)
(i,2)
[hadoop@hadoop2 ~]$
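The output can also be read back from inside the spark shell instead of with the hdfs command; a one-line check:

```scala
// Read the part files back as text and print every (word,count) line.
sc.textFile("/spark/out").collect().foreach(println)
```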
VII. Running Spark Programs on YARN
1. Prerequisites
The ZooKeeper cluster, the HDFS cluster, and the YARN cluster have all been started successfully.
2. Start Spark on YARN
[hadoop@hadoop1 bin]$ spark-shell --master yarn --deploy-mode client
The following error is reported:
Cause: the memory given to the containers is too small, so YARN kills the process directly, which then shows up as RPC connection failures, ClosedChannelException, and similar errors.
Fix:
Stop the YARN service first, then edit yarn-site.xml and add the following:
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
<description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>4</value>
<description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
Distribute the new yarn-site.xml to the corresponding directory on the other Hadoop nodes, and finally restart YARN.
Run the following command again to start Spark on YARN:
[hadoop@hadoop1 hadoop]$ spark-shell --master yarn --deploy-mode client
It starts successfully.
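Once the shell is up, a quick sanity check from inside it confirms that it is attached to YARN rather than running in local mode (the printed values will differ per cluster):

```scala
// Quick checks inside the YARN-backed spark-shell.
sc.master           // should print "yarn"
sc.applicationId    // the YARN application id for this shell session
```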
3. Open the YARN web UI
Open the YARN web page at http://hadoop4:8088
You can see the Spark shell application running.
Click the application ID link to see the application's details.
Click the "ApplicationMaster" link.
4. Run a program
scala> val array = Array(,,,,)
array: Array[Int] = Array(, , , , )

scala> val rdd = sc.makeRDD(array)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[] at makeRDD at <console>:

scala> rdd.count
res0: Long =

scala>
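For reference, a complete version of the same three steps, with assumed sample data (the actual values from the original session were lost):

```scala
// The same session with assumed sample data: any small Int array behaves the same way.
val array = Array(1, 2, 3, 4, 5)   // assumption: five arbitrary integers
val rdd   = sc.makeRDD(array)      // distribute the local collection as an RDD
rdd.count                          // returns 5L, the number of elements
```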
Check the YARN web UI again.
Check the executors.
5. Run Spark's bundled SparkPi example
[hadoop@hadoop1 ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
> --master yarn \
> --deploy-mode cluster \
> --driver-memory 500m \
> --executor-memory 500m \
> --executor-cores \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
>
Execution log:
[hadoop@hadoop1 ~]$ spark-submit --class org.apache.spark.examples.SparkPi \
> --master yarn \
> --deploy-mode cluster \
> --driver-memory 500m \
> --executor-memory 500m \
> --executor-cores \
> /home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar \
>
WARN NativeCodeLoader: - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO ConfiguredRMFailoverProxyProvider: - Failing over to rm2
INFO Client: - Requesting a new application from cluster with NodeManagers
INFO Client: - Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
INFO Client: - Will allocate AM container, with MB memory including MB overhead
INFO Client: - Setting up container launch context for our AM
INFO Client: - Setting up the launch environment for our AM container
INFO Client: - Preparing resources for our AM container
WARN Client: - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
INFO Client: - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_libs__8262081479435245591.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_libs__8262081479435245591.zip
INFO Client: - Uploading resource file:/home/hadoop/apps/spark/examples/jars/spark-examples_2.11-2.3.0.jar -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/spark-examples_2.11-2.3.0.jar
INFO Client: - Uploading resource file:/tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720/__spark_conf__2498510663663992254.zip -> hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005/__spark_conf__.zip
INFO SecurityManager: - Changing view acls to: hadoop
INFO SecurityManager: - Changing modify acls to: hadoop
INFO SecurityManager: - Changing view acls groups to:
INFO SecurityManager: - Changing modify acls groups to:
INFO SecurityManager: - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
INFO Client: - Submitting application application_1524303370510_0005 to ResourceManager
INFO YarnClientImpl: - Submitted application application_1524303370510_0005
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: -
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -
queue: default
start time:
final status: UNDEFINED
tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
user: hadoop
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: ACCEPTED)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: -
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.123.104
ApplicationMaster RPC port:
queue: default
start time:
final status: UNDEFINED
tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
user: hadoop
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: RUNNING)
INFO Client: - Application report for application_1524303370510_0005 (state: FINISHED)
INFO Client: -
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.123.104
ApplicationMaster RPC port:
queue: default
start time:
final status: SUCCEEDED
tracking URL: http://hadoop4:8088/proxy/application_1524303370510_0005/
user: hadoop
INFO Client: - Deleted staging directory hdfs://myha01/user/hadoop/.sparkStaging/application_1524303370510_0005
INFO ShutdownHookManager: - Shutdown hook called
INFO ShutdownHookManager: - Deleting directory /tmp/spark-93bd68c9-85de-482e-bbd7-cd2cee60e720
INFO ShutdownHookManager: - Deleting directory /tmp/spark-06de6905--4f1e-a0a0-bc8a51daf535
[hadoop@hadoop1 ~]$