Spark 2.0 Configuration

The complete Maven pom.xml below builds a Scala 2.11 project against Spark 2.0.1, pulling in Spark core, SQL, Hive, Streaming, and the Kafka 0.8 streaming connector:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.jd.ads</groupId>
    <artifactId>fun2spark</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>

    <properties>
        <spark.version>2.0.1</spark.version>
        <proto.path>${project.basedir}/src/main/proto</proto.path>
        <proto.file>*.proto</proto.file>
        <proto.gene>${project.basedir}/src/main/java/gen-java</proto.gene>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
            <version>1.9</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang.modules</groupId>
            <artifactId>scala-xml_2.11</artifactId>
            <version>1.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>com.github.scopt</groupId>
            <!-- Built for Scala 2.10; scopt_2.11 would be the binary-compatible
                 match for the Scala 2.11 artifacts above. -->
            <artifactId>scopt_2.10</artifactId>
            <version>3.2.0</version>
        </dependency>
    </dependencies>

    <build>
        <resources>
            <resource>
                <directory>src/main/java</directory>
            </resource>
            <resource>
                <directory>src/main/scala</directory>
            </resource>
            <resource>
                <directory>src/main/resources</directory>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-clean-plugin</artifactId>
                <version>2.4.1</version>
                <configuration>
                    <verbose>true</verbose>
                    <filesets>
                        <!-- Also sweep the generated protobuf sources on mvn clean. -->
                        <fileset>
                            <directory>${proto.gene}</directory>
                        </fileset>
                    </filesets>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <!-- Spark 2.0 requires Java 7+, so 1.7 or later would be a
                         safer source/target level than 1.6. -->
                    <source>1.6</source>
                    <target>1.6</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-resources-plugin</artifactId>
                <version>2.4.3</version>
                <configuration>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.3.1</version>
            </plugin>
        </plugins>
    </build>

    <pluginRepositories>
        <pluginRepository>
            <id>scala</id>
            <name>Scala Tools</name>
            <url>http://scala-tools.org/repo-releases/</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </pluginRepository>
    </pluginRepositories>
</project>
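For reference, here is a minimal Scala application that this POM can compile and run. It is a sketch only: the package name, object name, and input path are hypothetical placeholders, not part of the original configuration. It uses the SparkSession entry point introduced in Spark 2.0, with Hive support enabled since spark-hive_2.11 is on the classpath; the file belongs under src/main/scala, which the maven-scala-plugin above compiles.

package com.jd.ads.fun2spark  // hypothetical package; adjust to your layout

import org.apache.spark.sql.SparkSession

// Minimal Spark 2.0 entry point. SparkSession (new in 2.0) replaces
// SQLContext/HiveContext; enableHiveSupport() works here because
// spark-hive_2.11 is a declared dependency.
object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("fun2spark")
      .enableHiveSupport()
      .getOrCreate()

    // Classic word count over a text file; the input path is a placeholder.
    val counts = spark.sparkContext
      .textFile("hdfs:///tmp/fun2spark/input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    spark.stop()
  }
}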
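With Maven's default final name, mvn clean package yields target/fun2spark-1.0.0.jar, which could then be launched with something like spark-submit --class com.jd.ads.fun2spark.WordCount --master yarn target/fun2spark-1.0.0.jar (the class name being the hypothetical one from the sketch above). Since only maven-jar-plugin is configured, the jar contains just the application classes: Spark, Hadoop, and Scala come from the cluster at runtime, while extra libraries such as scopt have to be shipped separately, for example via the --jars option of spark-submit.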