Operating Spark 1.2.0 from Java
Although Scala is the recommended language, let's give Java a try anyway.
package org.admln.java7OperateSpark;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class OperateSpark {
    // Delimiter used to split a line into words
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) {
        // Initialization: point the context at the standalone master
        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount").setMaster("spark://hadoop:7077");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);

        // textFile's optional second argument is the minimum number of splits (not used here)
        JavaRDD<String> lines = ctx.textFile("hdfs://hadoop:8020/in/spark/javaOperateSpark/wordcount.txt");
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // Turn each word into a (word, 1) key-value pair
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String t) {
                return new Tuple2<String, Integer>(t, 1);
            }
        });

        // Sum the counts for each word
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer v1, Integer v2) {
                return v1 + v2;
            }
        });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ":" + tuple._2());
        }
        counts.saveAsTextFile("hdfs://hadoop:8020/out/spark/javaOperateSpark2/");
        ctx.stop();
    }
}
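For the shell-side submissions mentioned below, a job like this would be packaged as a jar and launched with spark-submit. A minimal sketch (the jar name is hypothetical):

    bin/spark-submit \
      --class org.admln.java7OperateSpark.OperateSpark \
      --master spark://hadoop:7077 \
      java7OperateSpark.jar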
An error occurred at run time.
In Eclipse it was:
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:249)
at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:136)
at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:114)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:787)
at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:638)
at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:992)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:98)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:84)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:29)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:945)
at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:695)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:540)
at org.apache.spark.api.java.JavaSparkContext.textFile(JavaSparkContext.scala:184)
at org.admln.java7OperateSpark.OperateSpark.main(OperateSpark.java:27)
In the shell it was:
Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$AddBlockRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
...
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
From this you can see it is a protobuf version conflict with Hadoop.
Spark 1.2.0's default protobuf version is 2.4.1, while Hadoop 2.2.0 uses protobuf 2.5.0.
So I modified Spark's pom.xml and recompiled to produce a new deployment package (which took over an hour).
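A sketch of that change, assuming the fix is to align Spark's protobuf property with Hadoop's (Spark 1.2.0's hadoop-2.x build profiles set the same property) and then rebuild the deployment tarball with make-distribution.sh:

    <!-- in Spark's top-level pom.xml: align protobuf with Hadoop 2.2.0 -->
    <protobuf.version>2.5.0</protobuf.version>

    # then rebuild the deployment package (this is the hour-long step)
    ./make-distribution.sh --tgz -Phadoop-2.2 -Dhadoop.version=2.2.0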
After rerunning, the shell submission succeeded, but Eclipse still reported the same error.
This is because the Spark artifact I reference through Maven has a Guava version conflict: by default the build resolves to an older Guava, pulled in transitively, which predates HashFunction.hashInt.
Add an explicit dependency to force the newer version:

    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>14.0.1</version>
    </dependency>
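This works because Maven mediates version conflicts with a nearest-wins rule, so a version declared directly in the project overrides the older one that arrives transitively. To confirm which Guava actually lands on the classpath, the standard dependency-tree goal can be used:

    mvn dependency:tree -Dincludes=com.google.guava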
After that, submitting from Eclipse no longer threw the error, but the job kept cycling without ever executing, reporting insufficient resources:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
I then raised the core count to 2 and the memory to 1500 MB, but it still reported:
INFO SparkDeploySchedulerBackend: Granted executor ID app-20150111003236-0000/3 on hostPort hadoop:34766 with 2 cores, 512.0 MB RAM

In other words, the core count changed but the executor memory would not, and I don't know why. Moreover, the same program runs normally when submitted from the shell, yet an external submission from Eclipse reports insufficient memory.
Changing the driver's memory didn't help either.
I suspect two possible causes:
1. A Spark bug: the SPARK_DRIVER_MEMORY variable defaults to 512M, but changing it externally does not take effect (see the sketch after this list);
2. The CentOS VM's resources and my local Windows machine's resources are getting confused, because I saw the error
ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 2
and my local machine has 4 cores while the VM has 2.
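A possible explanation for cause 1 (my assumption, not verified here): when the driver runs inside Eclipse, its JVM is already started by the time SparkConf is read, so driver-memory settings can only take effect via the Eclipse launch configuration (-Xmx) or spark-submit, while executor resources can still be requested programmatically. A minimal sketch using the standard spark.executor.memory and spark.cores.max configuration keys:

    // Executor resources can be requested from the application itself;
    // spark.driver.memory, by contrast, must be set before the driver JVM
    // starts, which may be why changing it from inside the program does nothing.
    SparkConf sparkConf = new SparkConf()
            .setAppName("JavaWordCount")
            .setMaster("spark://hadoop:7077")
            // memory requested per executor; must fit within the worker's available memory
            .set("spark.executor.memory", "1500m")
            // cap on the total cores the application may claim on the cluster
            .set("spark.cores.max", "2");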
I don't know why there are no examples online of submitting from Eclipse; either it simply isn't supported (and gets confused with the client machine's resources), or nobody has figured it out yet.