No more preamble, straight to the good stuff! Earlier post in this series: Getting Started with Zeppelin (Part 3): Using Zeppelin to Create the Temporary Table UserTable.

1. Run the age-statistics Spark SQL

(1) When entering Spark SQL, the first line must be `%sql`. The `%sql` directive tells the Zeppelin interpreter that the commands that follow are Spark SQL.

```sql
%sql
select age, count(*) counts from UserTable group by age order by age
```

(2) The age statistics are then displayed.
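Outside of a `%sql` paragraph, the same aggregation can be run with `spark.sql` in a Scala paragraph. A minimal sketch, assuming `UserTable` has already been registered as a temporary view with `name`/`age` columns (the sample rows and the `local[*]` master here are assumptions for a self-contained run):

```scala
import org.apache.spark.sql.SparkSession

object AgeStats {
  // Runs the same aggregation as the %sql paragraph and returns (age, count) pairs.
  def ageCounts(spark: SparkSession): Array[(Int, Long)] = {
    spark.sql("select age, count(*) counts from UserTable group by age order by age")
      .collect()
      .map(r => (r.getInt(0), r.getLong(1)))
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ageStats").master("local[*]").getOrCreate()
    import spark.implicits._
    // Hypothetical sample data standing in for the UserTable built in the earlier post
    Seq(("tom", 20), ("jack", 20), ("mary", 25)).toDF("name", "age")
      .createOrReplaceTempView("UserTable")
    ageCounts(spark).foreach(println)
    spark.stop()
  }
}
```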
Before Spark 2.0:

```scala
val sparkConf = new SparkConf().setAppName("soyo")
val spark = new SparkContext(sparkConf)
```

Spark 2.0 and later (the style above is still compatible): use SparkSession directly:

```scala
val spark = SparkSession
  .builder
  .appName("soyo")
  .getOrCreate()
var tc = spark.sparkContext
```
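A runnable sketch of the 2.0+ style, showing that the old `SparkContext` is still reachable from the session (the `local[*]` master is an assumption added for a standalone run):

```scala
import org.apache.spark.sql.SparkSession

object SessionDemo {
  def main(args: Array[String]): Unit = {
    // Spark 2.0+ unified entry point
    val spark = SparkSession.builder
      .appName("soyo")
      .master("local[*]") // assumption: local master for testing outside a cluster
      .getOrCreate()
    // The pre-2.0 SparkContext is still available from the session
    val tc = spark.sparkContext
    println(s"Spark version: ${tc.version}")
    spark.stop()
  }
}
```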
There are many ways to create a DataFrame in Spark, and the official API offers quite a few of them. The following two approaches came up in a couple of scenarios in our company's business.

1. Create a DataFrame from a List

```scala
/**
 * Applies a schema to a List of Java Beans.
 *
 * WARNING: Since there is no guaranteed ordering for fields in a Java Bean,
 * SELECT * queries will return the columns in an undefined order.
 */
```
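A minimal sketch of this approach: a Java-Bean-style class (here built with `@BeanProperty` in Scala; the `Person` class and its fields are illustrative assumptions) passed to `createDataFrame(list, beanClass)`. Because of the warning above, columns are selected by name rather than with `SELECT *`:

```scala
import scala.beans.BeanProperty
import org.apache.spark.sql.SparkSession

// A Java-Bean-style class: Spark derives the schema from its getters/setters
class Person extends Serializable {
  @BeanProperty var name: String = _
  @BeanProperty var age: Int = _
}

object ListToDF {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("listDF").master("local[*]").getOrCreate()
    val p = new Person
    p.setName("soyo")
    p.setAge(30)
    // Field order in the resulting schema is not guaranteed,
    // so reference columns by name instead of relying on position
    val df = spark.createDataFrame(java.util.Arrays.asList(p), classOf[Person])
    df.select("name", "age").show()
    spark.stop()
  }
}
```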
Reading a CSV file with SparkR

The general method for creating SparkDataFrames from data sources is read.df. This method takes in the path for the file to load and the type of data source, and the currently active SparkSession will be used automatically. SparkR supports reading JSON, CSV, and Parquet files natively.
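For comparison with the rest of this series, the same source-typed read looks like this in Scala Spark, where `spark.read.format("csv")` mirrors SparkR's `read.df(path, source = "csv")` (the file path `people.csv` and the header/schema options are placeholder assumptions):

```scala
import org.apache.spark.sql.SparkSession

object CsvRead {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("csvRead").master("local[*]").getOrCreate()
    val df = spark.read
      .format("csv")                 // the data-source type, as in read.df's source argument
      .option("header", "true")      // first line holds column names
      .option("inferSchema", "true") // infer column types instead of reading all strings
      .load("people.csv")            // placeholder path
    df.printSchema()
    spark.stop()
  }
}
```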