Spark HelloWorld program (Scala version)
This uses local mode, so no Spark installation is required; just pull in the relevant JARs:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
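For sbt users, the equivalent dependencies are a sketch away (same artifacts and versions as above; assumes Scala 2.11 to match the _2.11 suffix):

// build.sbt
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.0",
  "org.apache.spark" %% "spark-sql"  % "2.2.0"
)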
Creating the SparkSession:
val sparkUrl = "local"
val conf = new SparkConf()
//.setJars(Seq("/home/panteng/IdeaProjects/sparkscala/target/spark-scala.jar"))
.set("fs.hdfs.impl.disable.cache", "true")
.set("spark.executor.memory", "8g") val spark = SparkSession
.builder()
.appName("Spark SQL basic example")
.config(conf)
.config("spark.some.config.option", "some-value")
.master(sparkUrl)
.getOrCreate()
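Note that "local" runs with a single worker thread; a common variant uses all logical cores:

val sparkUrl = "local[*]" // one worker thread per logical core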
Loading a local file:
val parquetFileDF = spark.read.parquet("/home/panteng/下载/000001_0")
//spark.read.parquet("hdfs://10.38.164.80:9000/user/root/000001_0")
Querying the data:
parquetFileDF.createOrReplaceTempView("parquetFile")
val descDF = spark.sql("SELECT substring(description,0,3) as pre, description FROM parquetFile LIMIT 100000")
val diffDesc = descDF.distinct().sort("description")
diffDesc.createOrReplaceTempView("pre_desc")
val zhaoshang = spark.sql("select * from pre_desc")
zhaoshang.printSchema()
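Given the SELECT above, printSchema should report two string columns, roughly:

root
 |-- pre: string (nullable = true)
 |-- description: string (nullable = true)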
Iterating over the rows (note: foreach runs on the executors, so mutating driver-side vars the way clustering does below only behaves as intended in local mode):
zhaoshang.foreach(row => clustering(row))
val regexRdd = spark.sparkContext.parallelize(regexList)
regexRdd.repartition(1).saveAsTextFile("/home/panteng/下载/temp6") // repartition(1) writes a single part file
spark.stop()
Supporting functions:
def clustering(row: Row): String = {
  try {
    var tempRegex = new Regex("null")
    if (textPre.equals(row.getAs[String]("pre"))) {
      // Same prefix group: normalize digit runs to 0 and collect the description.
      textList = row.getAs[String]("description").replaceAll("\\d", "0") :: textList
      return "continue"
    } else {
      // Prefix changed: if the finished group is big enough, derive a regex from it.
      if (textList.size > 2) {
        tempRegex = ScalaClient.getRegex(textList)
        regexList = tempRegex :: regexList
      }
      // Start a new group with the current row.
      if (row.getAs[String]("pre") != null && row.getAs[String]("description") != null) {
        textPre = row.getAs[String]("pre")
        textList = textList.dropRight(textList.size)
        textList = row.getAs[String]("description") :: textList
      }
      return "ok - " + tempRegex.toString()
    }
  } catch {
    case e: Exception => println("clustering failed: " + e)
  }
  "error"
}
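clustering relies on three driver-side variables that are not shown in the snippet; a minimal sketch of the declarations they presumably correspond to (names taken from the code, types inferred):

var textPre: String = ""            // prefix of the group currently being accumulated
var textList: List[String] = List() // normalized descriptions in the current group
var regexList: List[Regex] = List() // one generated regex per finished group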
package scala.learn

import top.letsgogo.rpc.ThriftProxy

import scala.util.matching.Regex

object ScalaClient {
def main(args: Array[String]): Unit = {
val client = ThriftProxy.client
val seqList = List("您尾号9081的招行账户入账人民币689.00元",
"您尾号1234的招行一卡通支出人民币11.00元",
"您尾号2345的招行一卡通支出人民币110.00元",
"您尾号5432的招行一卡通支出人民币200.00元",
"您尾号5436的招行一卡通入账人民币142.00元")
var words: List[String] = List()
for (seq <- seqList) {
val list = client.splitSentence(seq)
for (wordIndex <- 0 until list.size()) {
words = list.get(wordIndex) :: words
}
}
val wordlist = words.map(word => (word, 1))
//Approach 1: groupBy first, then map; keep words that occur in every sentence
var genealWords: List[String] = List()
wordlist.groupBy(_._1).map {
case (word, list) => (word, list.size)
}.foreach(row => {
  if (row._2 >= seqList.size) genealWords = row._1 :: genealWords
})
val list = client.splitSentence("您尾号1234的招行一卡通支出人民币200.00元")
val regexSeq: StringBuilder = new StringBuilder
val specialChar = List("[", "]", "(", ")")
for (wordIndex <- 0 until list.size()) {
var word = list.get(wordIndex)
if (genealWords.contains(word) && !("*".equals(word))) {
if (specialChar.contains(word.mkString(""))) {
word = "\\" + word
}
regexSeq.append(word)
} else {
regexSeq.append("(.*)")
}
}
println(regexSeq)
val regex = new Regex(regexSeq.mkString)
for (seq <- seqList) {
println(regex.findAllIn(seq).isEmpty)
}
}

def getRegex(seqList: List[String]): Regex = {
val client = ThriftProxy.client
var words: List[String] = List()
for (seq <- seqList) {
val list = client.splitSentence(seq)
for (wordIndex <- 0 until list.size()) {
words = list.get(wordIndex) :: words
}
}
val wordlist = words.map(word => (word, 1))
//Same approach as in main: groupBy, then keep words common to every sentence
var genealWords: List[String] = List()
wordlist.groupBy(_._1).map {
case (word, list) => (word, list.size)
}.foreach(row => {
  if (row._2 >= seqList.size) genealWords = row._1 :: genealWords
})
val list = client.splitSentence(seqList(0))
val regexSeq: StringBuilder = new StringBuilder
val specialChar = List("[", "]", "(", ")")
for (wordIndex <- 0 until list.size()) {
var word = list.get(wordIndex)
if (genealWords.contains(word) && !("*".equals(word))) {
if (specialChar.contains(word.mkString(""))) {
word = "\\" + word
}
regexSeq.append(word)
} else {
// Avoid appending "(.*)" twice in a row.
if (regexSeq.size > 4) {
  val endStr = regexSeq.substring(regexSeq.size - 4, regexSeq.size)
  if (!"(.*)".equals(endStr)) {
    regexSeq.append("(.*)")
  }
} else {
  regexSeq.append("(.*)")
}
}
}
println(regexSeq + " " + seqList.size)
val regex = new Regex(regexSeq.mkString.replaceAll("0+","\\\\d+"))
//for (seq <- seqList) {
// println(regex.findAllIn(seq).isEmpty)
//}
regex
}
}
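To make the output concrete: digit runs are normalized to 0 upstream and then rewritten to \d+, so for the five sample messages getRegex plausibly yields something like the following (an illustration only; the exact pattern depends on how ThriftProxy segments the text):

// Plausible generated pattern, not the guaranteed output:
val regex = new Regex("您尾号\\d+的招行(.*)人民币\\d+.\\d+元")
println(regex.findFirstIn("您尾号9081的招行账户入账人民币689.00元").isDefined) // true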
Additional notes on batch regex extraction:
Allowing the output directory to be overwritten:
spark.hadoop.validateOutputSpecs false
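A sketch of where this flag goes; spark.hadoop.* settings are forwarded to the Hadoop configuration:

val spark = SparkSession.builder()
  .config("spark.hadoop.validateOutputSpecs", "false") // skip the "output dir already exists" check
  .getOrCreate()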
When running map over a Dataset, an Encoder must be supplied, otherwise compilation fails. For some types DataTypes provides nothing suitable; the only option then is to drop to the RDD API, map there, and convert the RDD back to a DataFrame:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.catalyst.encoders.RowEncoder

val schema = StructType(Seq(
  StructField("pre", StringType),
  StructField("description", StringType)
))
val encoder = RowEncoder(schema)
val replaceRdd = diffDesc.map(row => myReplace(row))(encoder).sort("description")
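myReplace is not shown in the original; a hypothetical version matching the (pre, description) schema, plus the RDD round trip described above:

// Hypothetical row transformer (assumption: it normalizes digits like clustering does)
def myReplace(row: Row): Row =
  Row(row.getAs[String]("pre"), row.getAs[String]("description").replaceAll("\\d", "0"))

// Fallback when no Encoder exists for the element type: map on the RDD,
// then rebuild a DataFrame from the RDD plus the explicit schema.
val replacedDF = spark.createDataFrame(diffDesc.rdd.map(myReplace), schema)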
Submitting the job:
./spark-2.2.0-bin-hadoop2.7/bin/spark-submit --name panteng --num-executors 100 --executor-cores 4 ./spark-scala.jar spark://dommain:7077
Suppressing some log output:
// Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
// Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
// spark.sparkContext.setLogLevel("WARN")
Common configuration:
spark-submit --java 8 \
--cluster xxx --master yarn-cluster \
--class xx.xx.xx.xx.Xxx \
--queue default \
--conf spark.yarn.appMasterEnv.JAVA_HOME=/opt/soft/jdk1.8.0 \
--conf spark.executorEnv.JAVA_HOME=/opt/soft/jdk1.8.0 \
--conf spark.yarn.user.classpath.first=true \
--num-executors 128 \
--conf spark.yarn.job.owners=panteng \
--conf spark.executor.memory=10G \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.shuffle.service.enabled=true \
--conf spark.dynamicAllocation.minExecutors=2 \
--conf spark.yarn.executor.memoryOverhead=4000 \
--conf spark.yarn.driver.memoryOverhead=6000 \
--conf spark.driver.memory=10G \
--conf spark.driver.maxResultSize=4G \
--conf spark.rpc.message.maxSize=512 \
--driver-class-path hdfs://c3prc-hadoop/tmp/u_panteng/lda-lib/guava-14.0.1.jar \
xx-1.0-SNAPSHOT.jar parm1 parm2