一. Importing by database and table
Earlier sections covered loading Hive tables through the underlying files, or importing directly into HDFS. Now we drive the import from the top, through Hive's database and table options:
sqoop import --connect "jdbc:mysql://host03.xyy:3306/sakila" --username root --password root --table payment --where "payment_id<=8000" --hive…
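Once the import finishes, a quick check from spark-shell confirms the table and the row filter (a minimal sketch, assuming Spark 2.x with Hive support and that payment landed in Hive's default database; on older versions use sqlContext.sql instead of spark.sql):

spark.sql("show tables").show()
// the --where filter above means no payment_id should exceed 8000
spark.sql("select count(*), max(payment_id) from payment").show()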
一. Choose the database; here we use the standard MySQL sakila database:
mysql -u root -D sakila -p
二. First try importing the table data into HDFS files, so that later we can process the data with Spark as a DataFrame or an RDD:
sqoop import --connect "jdbc:mysql://host03.xyy:3306/sakila" --username root --password root --table rental --target-dir "Sqo…
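Reading the Sqoop output back from spark-shell looks like this (a sketch; the --target-dir above is truncated, so "SqoopImport/rental" is a hypothetical placeholder path):

val rental = sc.textFile("SqoopImport/rental")
rental.take(5).foreach(println)        // Sqoop writes comma-delimited text by default
val fields = rental.map(_.split(","))  // ready for RDD or DataFrame processing
fields.count()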
一. Get the initial data and build a DataFrame
val ny = sc.textFile("data/new_york/")
val header = ny.first
val filterNY = ny.filter(listing => {
  listing.split(",").size == 14 && listing != header
})
val nyMap = filterNY.map(listing => {
  val listingInfo = listing.…
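The map body is cut off above; a minimal sketch of the usual continuation, splitting each line and converting a few fields to a DataFrame (the column names are assumptions, not the original schema; spark-shell pre-imports the implicits that toDF needs):

val nyDF = filterNY.map(listing => {
  val listingInfo = listing.split(",")
  (listingInfo(0), listingInfo(1), listingInfo(2))
}).toDF("id", "name", "host_id")
nyDF.show(5)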
一. Conditional expressions: case when ... then ... when ... then ... end
select film_id, rpad(title,20," "),
  case when rating in ("G","PG","PG-13") then "YOUNG"
       when rating = "NC-17" then "17 AND UP"…
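The same logic runs unchanged from spark-shell (a sketch assuming the sakila film table is visible in Spark's Hive catalog; the else branch is an assumption, since the original statement is cut off):

spark.sql("""
  select film_id, title,
         case when rating in ('G','PG','PG-13') then 'YOUNG'
              when rating = 'NC-17' then '17 AND UP'
              else 'ADULT'
         end as audience
  from film limit 10
""").show()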
We will be using the sakila database extensively in the rest of the course, so please follow the installation process below.
Importing the Sakila Database
一. Change the file. This step may already have been done in the provided files. Find and replace all "InnoDB…
Log in to the operating system as the hdfs or spark user and run spark-shell:
spark-shell
spark-shell can also take arguments, which override the defaults:
spark-shell --master yarn --num-executors 2 --executor-memory 2G --driver-memory 1536M
The defaults are usually set in /etc/spark/conf/spark-env.sh.
一. Getting data automatically from an Array
1. Generate an array by enumerating the elements:
val arr=Array(1,2,3,4,5,6,7)…
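Beyond literal enumeration there are a few other common ways to build an array, and sc.parallelize turns any of them into an RDD (a sketch of where these notes are headed; the RDD step is an assumption):

val arr = Array(1, 2, 3, 4, 5, 6, 7)
val ranged = (1 to 7).toArray      // same contents built from a Range
val filled = Array.fill(7)(0)      // seven zeros
val rdd = sc.parallelize(arr)      // distribute the local array as an RDD
rdd.sum()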
一. Basic date handling
-- date handling
select current_date;
select current_timestamp;
-- to_date(time); to_date(string)
select to_date(current_timestamp);
select to_date(rental_date) from rental limit 10;
month(date/time), year(date/time), day(date/time), second(time), minute(time), h…
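The same functions work from spark-shell (a sketch, assuming the rental table is registered in the Hive catalog Spark reads from):

spark.sql("""
  select rental_date,
         to_date(rental_date) as rental_day,
         year(rental_date)    as y,
         month(rental_date)   as m,
         day(rental_date)     as d
  from rental limit 10
""").show()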
一. Basic string operations
concat(string, string, string)
concat_ws(separator, string, string)   -- the first argument is the separator
select customer_id, concat_ws(" ", first_name, last_name), email, address_id from customer;
lower(string)
initcap(string)
The if expression:
select customer_id, if(length(first_name)>6, substring…
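Combining these from spark-shell (a sketch, assuming the customer table is visible in the Hive catalog):

spark.sql("""
  select customer_id,
         concat_ws(" ", first_name, last_name) as full_name,
         lower(email)                          as email,
         initcap(lower(first_name))            as pretty_name
  from customer limit 10
""").show()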
// dataframe is the topic
一. Get the base data, first as an RDD
val ny = sc.textFile("data/new_york/")
val header = ny.first
val filterNY = ny.filter(listing => {
  listing.split(",").size == 14 && listing != header
})
// since most of the later processing treats the DataFrame as a table, here we add…
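The note is cut off where it adds the tabular structure; an alternative to the tuple-based toDF shown earlier is a case class, which names the columns in one place (the field names here are assumptions):

case class Listing(id: String, name: String, hostId: String)
val listingDF = filterNY.map(line => {
  val f = line.split(",")
  Listing(f(0), f(1), f(2))
}).toDF()
listingDF.printSchema()
listingDF.show(5)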
// groupByKey
一. Prepare the data
val flights = sc.textFile("data/Flights/flights.csv")
val sampleFlights = sc.parallelize(flights.take(1000))
val header = sampleFlights.first
val filteredFlights = sampleFlights.filter(line => {
  line != header && line.split(",…
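The filter is cut off above; a sketch of the groupByKey step the heading promises, counting flights per origin airport (the column index 7 is an assumed position in the CSV, not the confirmed layout):

val byOrigin = filteredFlights.map(line => {
  val f = line.split(",")
  (f(7), line)                     // (origin airport, full record)
})
val grouped = byOrigin.groupByKey()
grouped.mapValues(_.size).take(10).foreach(println)
// for a plain count, reduceByKey shuffles far less data than groupByKey
byOrigin.mapValues(_ => 1).reduceByKey(_ + _).take(10).foreach(println)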