1. Scenario

  With the Hadoop+Spark environment in place, the next step is to submit a simple job to Spark, run the computation, and output the result. The setup process is described here: http://www.cnblogs.com/zengxiaoliang/p/6478859.html

  Since Java is the language I know best, I will walk through the whole process using a Java WordCount as the example: the goal is to count how many times each word occurs in a given text.

2. Environment Test

  Before walking through the example, let's verify the environment we just set up.

  2.1 Testing the Hadoop Environment

  First, create a file named wordcount.txt with the following content:

Hello hadoop
hello spark
hello bigdata
yellow banana
red apple

  Then run the following commands:

  hadoop fs -mkdir -p /Hadoop/Input    (create the directory on HDFS)

  hadoop fs -put wordcount.txt /Hadoop/Input    (upload wordcount.txt to HDFS)

  hadoop fs -ls /Hadoop/Input    (list the uploaded file)

  hadoop fs -text /Hadoop/Input/wordcount.txt    (view the file content)

  2.2 Testing the Spark Environment

  I will run a simple WordCount test in spark-shell, reusing the wordcount.txt file uploaded to HDFS during the Hadoop test above.

  First, start spark-shell:

  spark-shell

  

  Then enter the following Scala statements directly:

  val file = sc.textFile("hdfs://Master:9000/Hadoop/Input/wordcount.txt")  // load the file from HDFS

  val rdd = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)  // split into words, pair each with 1, sum per word

  rdd.collect()  // gather the (word, count) pairs to the driver

  rdd.foreach(println)  // print each pair
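  For the sample file above, rdd.collect() should return an array along these lines (the ordering may differ between runs; note that "Hello" and "hello" are counted separately because matching is case-sensitive):

  Array((hello,2), (Hello,1), (hadoop,1), (spark,1), (bigdata,1), (yellow,1), (banana,1), (red,1), (apple,1))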

  

  To exit, use the following command:

  :quit

  That completes the environment test.

 3. Implementing Word Count in Java

package com.example.spark;

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public final class WordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("kevin's first spark app");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the input file (args[0] is the HDFS path) and cache it.
        JavaRDD<String> lines = sc.textFile(args[0]).cache();

        // Split each line into words.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Iterator<String> call(String s) {
                return Arrays.asList(SPACE.split(s)).iterator();
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        // Collect the results to the driver and print them.
        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        sc.close();
    }
}
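  Since the build targets Java 8 (java.version is set to 1.8 in the pom below), the same pipeline can be written much more compactly with lambdas, which Spark's Java API accepts for the functional interfaces used above. The class below is my own sketch, not part of the original article; it is named WordCountLambda to avoid clashing with the class above:

package com.example.spark;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public final class WordCountLambda {

    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("wordcount with lambdas");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Same pipeline as above: split lines, pair each word with 1, sum per key.
        JavaPairRDD<String, Integer> counts = sc.textFile(args[0])
                .flatMap(s -> Arrays.asList(s.split(" ")).iterator())
                .mapToPair(w -> new Tuple2<>(w, 1))
                .reduceByKey((a, b) -> a + b);

        counts.collect().forEach(t -> System.out.println(t._1() + ": " + t._2()));
        sc.close();
    }
}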

4. Submitting the Job

  Package the Java word-count program above into a jar named spark-example-0.0.1-SNAPSHOT.jar and upload it to the Master node (I put it under /opt). This article submits the job to Spark in two ways: first with the spark-submit command, and second from a Java web application.
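  The jar name follows Maven's default artifact naming, so assuming the WordCount class lives in a standard Maven project, a plain build produces it:

  mvn clean package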

  4.1 Submitting with the spark-submit Command

  spark-submit --master spark://114.55.246.88:7077 --class com.example.spark.WordCount /opt/spark-example-0.0.1-SNAPSHOT.jar hdfs://Master:9000/Hadoop/Input/wordcount.txt
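  Resource limits can be attached with spark-submit's standard options when the defaults do not fit; the values below are illustrative, not from the original setup:

  spark-submit --master spark://114.55.246.88:7077 --class com.example.spark.WordCount --executor-memory 1g --total-executor-cores 2 /opt/spark-example-0.0.1-SNAPSHOT.jar hdfs://Master:9000/Hadoop/Input/wordcount.txt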

  4.2 Submitting from a Java Web Application

  I built the web application with Spring Boot; the implementation is as follows:

  1) Create a new Maven project named spark-submit

  2) The pom.xml content is below. Note that the Spark dependency artifacts must match your Scala version: in spark-core_2.11, the trailing 2.11 is the version of Scala you installed.

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.4.1.RELEASE</version>
</parent>
<artifactId>spark-submit</artifactId>
<description>spark-submit</description>
<properties>
<start-class>com.example.spark.SparkSubmitApplication</start-class>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<java.version>1.8</java.version>
<commons.version>3.4</commons.version>
<org.apache.spark-version>2.1.0</org.apache.spark-version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>${commons.version}</version>
</dependency>
<dependency>
<groupId>org.apache.tomcat.embed</groupId>
<artifactId>tomcat-embed-jasper</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.jayway.jsonpath</groupId>
<artifactId>json-path</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<exclusion>
<artifactId>spring-boot-starter-tomcat</artifactId>
<groupId>org.springframework.boot</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jetty</artifactId>
<exclusions>
<exclusion>
<groupId>org.eclipse.jetty.websocket</groupId>
<artifactId>*</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jetty</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>apache-jsp</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-solr</artifactId>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${org.apache.spark-version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>${org.apache.spark-version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.11</artifactId>
<version>${org.apache.spark-version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>${org.apache.spark-version}</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.7.3</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka_2.11</artifactId>
<version>1.6.3</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-graphx_2.11</artifactId>
<version>${org.apache.spark-version}</version>
</dependency>
<dependency>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.6.5</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.6.5</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.6.5</version>
</dependency>
</dependencies>
<packaging>war</packaging>
<repositories>
<repository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>maven2</id>
<url>http://repo1.maven.org/maven2/</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-snapshots</id>
<name>Spring Snapshots</name>
<url>https://repo.spring.io/snapshot</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
<build>
<plugins>
<plugin>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<warSourceDirectory>src/main/webapp</warSourceDirectory>
</configuration>
</plugin>
<plugin>
<groupId>org.mortbay.jetty</groupId>
<artifactId>jetty-maven-plugin</artifactId>
<configuration>
<systemProperties>
<systemProperty>
<name>spring.profiles.active</name>
<value>development</value>
</systemProperty>
<systemProperty>
<name>org.eclipse.jetty.server.Request.maxFormContentSize</name>
<!-- -1 means no limit -->
<value>600000</value>
</systemProperty>
</systemProperties>
<useTestClasspath>true</useTestClasspath>
<webAppConfig>
<contextPath>/</contextPath>
</webAppConfig>
<connectors>
<connector implementation="org.eclipse.jetty.server.nio.SelectChannelConnector">
<port>7080</port>
</connector>
</connectors>
</configuration>
</plugin>
</plugins>
</build>
</project>

  3) SubmitJobToSpark.java

package com.example.spark;

import org.apache.spark.deploy.SparkSubmit;

/**
* @author kevin
*
*/
public class SubmitJobToSpark {

    public static void submitJob() {
        // The same arguments you would pass to spark-submit on the command line.
        String[] args = new String[] {
                "--master", "spark://114.55.246.88:7077",
                "--name", "test java submit job to spark",
                "--class", "com.example.spark.WordCount",
                "/opt/spark-example-0.0.1-SNAPSHOT.jar",
                "hdfs://Master:9000/Hadoop/Input/wordcount.txt" };
        SparkSubmit.main(args);
    }
}
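  Note that SparkSubmit.main performs the submission inside the web container's own JVM. Spark also ships org.apache.spark.launcher.SparkLauncher (in the spark-launcher module), which spawns spark-submit as a child process instead. The class below is my own sketch of that alternative, not part of the original project; it assumes SPARK_HOME is set in the server's environment:

package com.example.spark;

import org.apache.spark.launcher.SparkLauncher;

public class SubmitJobWithLauncher {

    public static void submitJob() throws Exception {
        // Launch spark-submit as a separate process with the same arguments.
        Process spark = new SparkLauncher()
                .setMaster("spark://114.55.246.88:7077")
                .setAppName("test java submit job to spark")
                .setMainClass("com.example.spark.WordCount")
                .setAppResource("/opt/spark-example-0.0.1-SNAPSHOT.jar")
                .addAppArgs("hdfs://Master:9000/Hadoop/Input/wordcount.txt")
                .launch();
        // Wait for spark-submit to finish; exit code 0 means the submission succeeded.
        int exitCode = spark.waitFor();
        System.out.println("spark-submit exited with code " + exitCode);
    }
}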

  4) SparkController.java

package com.example.spark.web.controller;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

import com.example.spark.SubmitJobToSpark;

@Controller
@RequestMapping("spark")
public class SparkController {

    private Logger logger = LoggerFactory.getLogger(SparkController.class);

    // Accepts GET or POST on /spark/sparkSubmit and triggers the Spark job.
    @RequestMapping(value = "sparkSubmit", method = { RequestMethod.GET, RequestMethod.POST })
    @ResponseBody
    public String sparkSubmit(HttpServletRequest request, HttpServletResponse response) {
        logger.info("start submitting spark task...");
        SubmitJobToSpark.submitJob();
        return "hello";
    }
}

  5) Package the spark-submit project as a war, deploy it to Tomcat on the Master node, and access the following URL:

  http://114.55.246.88:9090/spark-submit/spark/sparkSubmit

  The computed results can be seen in the Tomcat log.
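  For a quick check, the same request can be triggered from a shell on any machine that can reach the server:

  curl http://114.55.246.88:9090/spark-submit/spark/sparkSubmit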
