Spark Cluster Setup

Video tutorials

1. Youku

2. YouTube

Installing the Scala environment

Download: http://www.scala-lang.org/download/

Upload scala-2.10.5.tgz to the hadoop user's installer directory on both the master and slave machines.

Perform these steps on both machines. (Note: the prebuilt spark-2.0.0-bin-hadoop2.6 package installed later bundles its own Scala 2.11.8, so this standalone Scala 2.10.5 install mainly provides local Scala tooling.)

[hadoop@master installer]$ ls

hadoop2  hadoop-2.6.0.tar.gz  scala-2.10.5.tgz

Extract:

[hadoop@master installer]$ tar -zxvf scala-2.10.5.tgz

[hadoop@master installer]$ mv scala-2.10.5 scala

[hadoop@master installer]$ cd scala

[hadoop@master scala]$ pwd

/home/hadoop/installer/scala

Configure environment variables:

[hadoop@master ~]$ vim .bashrc

# .bashrc

# Source global definitions

if [ -f /etc/bashrc ]; then

. /etc/bashrc

fi

# User specific aliases and functions

export JAVA_HOME=/usr/java/jdk1.7.0_79

export HADOOP_HOME=/home/hadoop/installer/hadoop2

export SCALA_HOME=/home/hadoop/installer/scala

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib:$JAVA_HOME/lib:$SCALA_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin

[hadoop@master ~]$ . .bashrc
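After sourcing, a quick way to confirm the shell picked up the new variables is to check that `$SCALA_HOME/bin` landed on the PATH (paths as in the `.bashrc` above); a minimal POSIX sketch:

```shell
#!/bin/sh
# Re-create the relevant exports and verify scala's bin directory is on PATH
export SCALA_HOME=/home/hadoop/installer/scala
export PATH="$PATH:$SCALA_HOME/bin"
case ":$PATH:" in
  *":$SCALA_HOME/bin:"*) echo "scala on PATH" ;;
  *)                     echo "scala missing from PATH" ;;
esac
```

On the real machines, `scala -version` should then report Scala 2.10.5.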

Installing Python

Install gcc:

[root@master ~]# mkdir /RHEL5U4

[root@master ~]# mount /dev/cdrom /media/

[root@master media]# cp -r * /RHEL5U4/

[root@master ~]# vim /etc/yum.repos.d/iso.repo

[rhel-Server]

name=5u4_Server

baseurl=file:///RHEL5U4/Server

enabled=1

gpgcheck=0

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[root@master ~]# yum clean all

[root@master ~]# yum install gcc

Build and install Python:

[root@master installer]# tar -zxvf Python-2.7.12.tgz

Upload zlib-1.2.8.tar.gz.

Replace the zlib under /root/installer/Python-2.7.12/Modules with it.

[root@master Python-2.7.12]# ./configure --prefix=/usr/local/python27

[root@master Python-2.7.12]# make

[root@master Python-2.7.12]# make install

[root@master Python-2.7.12]# mv /usr/bin/python /usr/bin/python_old

[root@master Python-2.7.12]# ln -s /usr/local/python27/bin/python /usr/bin/

[root@master Python-2.7.12]# python

Python 2.7.12 (default, Nov  7 2016, 21:42:16)

[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2

Type "help", "copyright", "credits" or "license" for more information.

>>>
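A caution about the symlink swap above: on RHEL 5, yum itself runs on the system Python, so repointing /usr/bin/python to 2.7 can break yum. A common workaround is to pin yum's shebang to the renamed old interpreter; the sketch below performs the edit on a scratch copy (the real target would be /usr/bin/yum, and python_old matches the rename above):

```shell
#!/bin/sh
# Rewrite the shebang from the new interpreter back to the old one.
# Demonstrated on a scratch file; the real target would be /usr/bin/yum.
printf '#!/usr/bin/python\n' > /tmp/yum_shebang_demo
sed -i '1s|#!/usr/bin/python|#!/usr/bin/python_old|' /tmp/yum_shebang_demo
head -1 /tmp/yum_shebang_demo
```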

Installing Spark

Download: http://spark.apache.org/downloads.html

Upload spark-2.0.0-bin-hadoop2.6.tgz to the hadoop user's installer directory on master.

Extract:

[hadoop@master installer]$ tar -zxvf spark-2.0.0-bin-hadoop2.6.tgz

[hadoop@master installer]$ mv spark-2.0.0-bin-hadoop2.6 spark2

[hadoop@master installer]$ cd spark2/

[hadoop@master spark2]$ ls

bin  conf  data  examples  jars  LICENSE  licenses  NOTICE  python  R  README.md  RELEASE  sbin  yarn

[hadoop@master spark2]$ pwd

/home/hadoop/installer/spark2

[hadoop@master ~]$ vim .bashrc

# .bashrc

# Source global definitions

if [ -f /etc/bashrc ]; then

. /etc/bashrc

fi

# User specific aliases and functions

export JAVA_HOME=/usr/java/jdk1.7.0_79

export HADOOP_HOME=/home/hadoop/installer/hadoop2

export SCALA_HOME=/home/hadoop/installer/scala

export SPARK_HOME=/home/hadoop/installer/spark2

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib:$JAVA_HOME/lib:$SCALA_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin

[hadoop@master ~]$ . .bashrc

[hadoop@master ~]$ scp .bashrc slave:~

.bashrc                                                                                            100%  621     0.6KB/s   00:00

On the slave machine, run:

[hadoop@slave ~]$ . .bashrc

Configure Spark

[hadoop@master conf]$ cp spark-env.sh.template spark-env.sh

[hadoop@master conf]$ vim spark-env.sh

#!/usr/bin/env bash

#

# Licensed to the Apache Software Foundation (ASF) under one or more

# contributor license agreements.  See the NOTICE file distributed with

# this work for additional information regarding copyright ownership.

# The ASF licenses this file to You under the Apache License, Version 2.0

# (the "License"); you may not use this file except in compliance with

# the License.  You may obtain a copy of the License at

#

#    http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

#

export JAVA_HOME=/usr/java/jdk1.7.0_79

export SCALA_HOME=/home/hadoop/installer/scala

export SPARK_MASTER_HOST=master

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export SPARK_EXECUTOR_MEMORY=600M

export SPARK_DRIVER_MEMORY=600M
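Besides the memory settings above, a few other standalone-mode knobs are commonly set in spark-env.sh. These are optional; the values below are only illustrative (the ports shown are Spark's defaults):

```shell
# Optional extras for spark-env.sh (illustrative values; ports are defaults)
export SPARK_MASTER_PORT=7077          # master RPC port that workers connect to
export SPARK_MASTER_WEBUI_PORT=8080    # master web UI port
export SPARK_WORKER_MEMORY=1g          # total memory a worker may give to executors
```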

[hadoop@master conf]$ vim slaves

master

slave

[hadoop@master installer]$ scp -r spark2 slave:~/installer/

Start the Spark cluster

[hadoop@master ~]$ start-master.sh

[hadoop@master ~]$ start-slaves.sh

[hadoop@master ~]$ jps

17769 ResourceManager

20192 Master

20275 Worker

17443 NameNode

20521 Jps

17631 SecondaryNameNode

[hadoop@slave ~]$ jps

13297 DataNode

15367 Worker

13408 NodeManager

16245 Jps
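`start-slaves.sh` essentially reads `conf/slaves` and launches a Worker on each listed host over SSH, which is why both master and slave show a Worker in jps above. A simplified sketch of that fan-out loop (hostnames taken from the slaves file configured earlier; the ssh call is only echoed here):

```shell
#!/bin/sh
# Simplified sketch of start-slaves.sh: one Worker per host in conf/slaves
printf 'master\nslave\n' > /tmp/slaves_demo
while read -r host; do
  # the real script roughly does:
  #   ssh "$host" "$SPARK_HOME/sbin/start-slave.sh spark://master:7077"
  echo "would start Worker on $host"
done < /tmp/slaves_demo
```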

Spark wordcount

[hadoop@master ~]$ spark-shell

Setting default log level to "WARN".

To adjust logging level use sc.setLogLevel(newLevel).

16/11/04 11:05:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/11/04 11:05:09 WARN spark.SparkContext: Use an existing SparkContext, some configuration may not take effect.

Spark context Web UI available at http://192.168.3.100:4040

Spark context available as 'sc' (master = local[*], app id = local-1478228709028).

Spark session available as 'spark'.

Welcome to

      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) Client VM, Java 1.7.0_79)

Type in expressions to have them evaluated.

Type :help for more information.

Note that this session started with master = local[*], i.e. it is running Spark locally rather than on the standalone cluster just started; to attach to the cluster, launch the shell with spark-shell --master spark://master:7077 (7077 is the standalone master's default port).

scala> val file = sc.textFile("hdfs://master:9000/data/wordcount")

16/11/04 11:05:14 WARN util.SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes

file: org.apache.spark.rdd.RDD[String] = hdfs://master:9000/data/wordcount MapPartitionsRDD[1] at textFile at <console>:24

scala> val count=file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)

count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:26

scala> count.collect()

res0: Array[(String, Int)] = Array((package,1), (this,1), (Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version),1), (Because,1), (Python,2), (cluster.,1), (its,1), ([run,1), (general,2), (have,1), (pre-built,1), (YARN,,1), (locally,2), (changed,1), (locally.,1), (sc.parallelize(1,1), (only,1), (Configuration,1), (This,2), (basic,1), (first,1), (learning,,1), ([Eclipse](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-Eclipse),1), (documentation,3), (graph,1), (Hive,2), (several,1), (["Specifying,1), ("yarn",1), (page](http://spark.apache.org/documentation.html),1), ([params]`.,1), ([project,2), (prefer,1), (SparkPi,2), (<http://spark.apache.org/>,1), (engine,1), (version,1), (file,1), (documentation...

scala>
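The same split → map → reduceByKey pipeline can be mimicked locally with classic Unix tools, which is a handy sanity check before running against HDFS (the sample text here is made up):

```shell
#!/bin/sh
# Local word count: split on spaces (flatMap), then count per word (reduceByKey)
printf 'spark is fast\nspark is fun\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c
# prints counts such as "2 spark", "2 is", "1 fun", "1 fast"
```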
