Notes on data operations on a Spark+Hadoop+Hive cluster
[rc@vq18ptkh01 ~]$ hadoop fs -ls /
drwxr-xr-x+ - jc_rc supergroup 0 2016-11-03 11:46 /dt
[rc@vq18ptkh01 ~]$ hadoop fs -copyFromLocal wifi_phone_list_1030.csv /dt
[rc@vq18ptkh01 ~]$ hadoop fs -copyFromLocal wifi_phone_list_1031.csv /dt
[rc@vq18ptkh01 ~]$ hadoop fs -copyFromLocal wifi_phone_list_1101.csv /dt
[rc@vq18ptkh01 ~]$ hadoop fs -ls /dt
16/11/03 11:53:16 INFO hdfs.PeerCache: SocketCache disabled.
Found 3 items
-rw-r--r--+ 3 jc_rc supergroup 1548749 2016-11-03 11:48 /dt/wifi_phone_list_1030.csv
-rw-r--r--+ 3 jc_rc supergroup 1262964 2016-11-03 11:52 /dt/wifi_phone_list_1031.csv
-rw-r--r--+ 3 jc_rc supergroup 979619 2016-11-03 11:52 /dt/wifi_phone_list_1101.csv
[rc@vq18ptkh01 ~]$ beeline
Connecting to jdbc:hive2://1.8.15.1:24002,10.78.152.24:24002,1.8.15.2:24002,1.8.12.42:24002,1.8.15.62:24002/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;sasl.qop=auth-conf;auth=KERBEROS;principal=hive/hadoop.hadoop.com@HADOOP.COM
Debug is true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
Principal is jc_rc@HADOOP.COM
Commit Succeeded
Connected to: Apache Hive (version 1.3.0)
Driver: Hive JDBC (version 1.3.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.3.0 by Apache Hive
0: jdbc:hive2://1.8.15.2:21066/> use r_hive_db;
No rows affected (0.547 seconds)
0: jdbc:hive2://1.8.15.2:21066/> create table tmp_wifi1030(imisdn string,starttime string,endtime string) row format delimited fields terminated by ',' stored as textfile;
[rc@vq18ptkh01 ~]$ wc -l wifi_phone_list_1030.csv
25390 wifi_phone_list_1030.csv
0: jdbc:hive2://1.8.15.2:21066/> show tables;
+---------------+--+
| tab_name |
+---------------+--+
| tmp_wifi1030 |
+---------------+--+
1 row selected (0.401 seconds)
0: jdbc:hive2://1.8.15.2:21066/> load data inpath 'hdfs:/dt/wifi_phone_list_1030.csv' into table tmp_wifi1030;
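As a format sanity check (a sketch, not part of the session): each line of the CSV carries exactly the three comma-separated fields the table declares, which is why `fields terminated by ','` is the right delimiter. A minimal Python parse of one line, using values taken from the query output below:

```python
# Each line of wifi_phone_list_1030.csv holds the three columns of
# tmp_wifi1030 (imisdn, starttime, endtime), delimited by commas.
def parse_wifi_row(line):
    imisdn, starttime, endtime = line.rstrip("\n").split(",")
    return {"imisdn": imisdn, "starttime": starttime, "endtime": endtime}

row = parse_wifi_row("18806503523,2016-10-30 23:58:56.000,2016-10-31 00:01:07.000")
print(row["imisdn"])   # 18806503523
```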
0: jdbc:hive2://1.8.15.2:21066/> select * from tmp_wifi1030;
+----------------------+--------------------------+--------------------------+--+
| tmp_wifi1030.imisdn | tmp_wifi1030.starttime | tmp_wifi1030.endtime |
+----------------------+--------------------------+--------------------------+--+
| 18806503523 | 2016-10-30 23:58:56.000 | 2016-10-31 00:01:07.000 |
| 15700125216 | 2016-10-30 23:58:57.000 | 2016-10-31 00:01:49.000 |
+----------------------+--------------------------+--------------------------+--+
25,390 rows selected (5.649 seconds)
0: jdbc:hive2://1.8.15.2:21066/> select count(*) from tmp_wifi1030;
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:1
INFO : Submitting tokens for job: job_1475071482566_2471703
INFO : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hacluster, Ident: (HDFS_DELEGATION_TOKEN token 19416140 for jc_rc)
INFO : Kind: HIVE_DELEGATION_TOKEN, Service: HiveServer2ImpersonationToken, Ident: 00 05 6a 63 5f 72 63 05 6a 63 5f 72 63 21 68 69 76 65 2f 68 61 64 6f 6f 70 2e 68 61 64 6f 6f 70 2e 63 6f 6d 40 48 41 44 4f 4f 50 2e 43 4f 4d 8a 01 58 28 57 df 96 8a 01 58 4c 64 63 96 8d 0d 65 ff 8e 03 97
INFO : The url to track the job: https://pc-z1:26001/proxy/application_1475071482566_2471703/
INFO : Starting Job = job_1475071482566_2471703, Tracking URL = https://pc-z1:26001/proxy/application_1475071482566_2471703/
INFO : Kill Command = /opt/huawei/Bigdata/FusionInsight_V100R002C60SPC200/FusionInsight-Hive-1.3.0/hive-1.3.0/bin/..//../hadoop/bin/hadoop job -kill job_1475071482566_2471703
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2016-11-03 12:04:58,351 Stage-1 map = 0%, reduce = 0%
INFO : 2016-11-03 12:05:04,702 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.72 sec
INFO : 2016-11-03 12:05:12,096 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.86 sec
INFO : MapReduce Total cumulative CPU time: 4 seconds 860 msec
INFO : Ended Job = job_1475071482566_2471703
+--------+--+
| _c0 |
+--------+--+
| 25390 |
+--------+--+
1 row selected (25.595 seconds)
0: jdbc:hive2://1.8.15.62:21066/> select * from default.d_s1mme limit 10;
(The full beeline output is omitted here: default.d_s1mme is a very wide table — several dozen columns such as d_s1mme.length, d_s1mme.city, imsi, imei, msisdn, cell_id, ..., plus the partition column p_hour, here 2016101714 — so the rows do not fit on screen.)
10 rows selected (0.6 seconds)
create table tmp_mr_s1_mme1030 as
select a.length,a.city,a.interface,a.xdr_id,a.rat,a.imsi,a.imei,a.msisdn,a.procedure_start_time,a.procedure_end_time,a.mme_ue_s1ap_id,a.mme_group_id,a.mme_code,a.user_ipv4,a.tac,a.cell_id,a.other_tac,a.other_eci
from default.d_s1mme a join r_hive_db.tmp_wifi1030 b on a.msisdn=b.imisdn and a.p_hour>='20161030' and a.p_hour<'20161031';
0: jdbc:hive2://1.8.15.2:21066/> create table tmp_mr_s1_mme_enbs1030 as select cell_id/256 from tmp_mr_s1_mme1030;
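For context, two things happen here: the join keeps only d_s1mme records whose msisdn appears in the WiFi list (restricted to the 2016-10-30 partitions), and cell_id/256 recovers the eNodeB ID, because an LTE E-UTRAN cell identity (ECI) is enb_id * 256 + local_cell_id. A minimal Python sketch of both steps — the rows are made up for illustration; only 209743848 and the msisdns come from the output above:

```python
# Sketch of the two Hive steps on in-memory stand-in data: the WiFi
# list acts as a semi-join filter on msisdn, and cell_id // 256 splits
# the 28-bit ECI into eNodeB ID and local cell ID.
wifi_msisdns = {"18806503523", "15700125216"}   # from tmp_wifi1030

s1mme_rows = [                                   # hypothetical d_s1mme rows
    {"msisdn": "18806503523", "cell_id": 209743848, "p_hour": "2016103013"},
    {"msisdn": "13900000000", "cell_id": 123456789, "p_hour": "2016103013"},
]

matched = [r for r in s1mme_rows
           if r["msisdn"] in wifi_msisdns
           and "20161030" <= r["p_hour"] < "20161031"]

for r in matched:
    enb_id, local_cell = divmod(r["cell_id"], 256)
    print(enb_id, local_cell)   # 819311 232
```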
0: jdbc:hive2://1.8.15.62:21066/> create table tmp_mr_s1_mme_cellids1030 as select distinct cast(cell_id as bigint) as cellid from tmp_mr_s1_mme1030;
0: jdbc:hive2://1.8.15.62:21066/> set hive.merge.mapfiles;
+---------------------------+--+
| set |
+---------------------------+--+
| hive.merge.mapfiles=true |
+---------------------------+--+
1 row selected (0.022 seconds)
0: jdbc:hive2://1.8.15.62:21066/> set hive.merge.mapredfields;
+---------------------------------------+--+
| set |
+---------------------------------------+--+
| hive.merge.mapredfields is undefined |
+---------------------------------------+--+
1 row selected (0.022 seconds)
(Note: "hive.merge.mapredfields" above is a misspelling — the parameter is actually hive.merge.mapredfiles, which is why Hive reports it as undefined.)
0: jdbc:hive2://1.8.15.62:21066/> set hive.merge.size.per.task=1024000000;
No rows affected (0.012 seconds)
0: jdbc:hive2://1.8.15.62:21066/> set hive.merge.smallfiles.avgsize=1024000000;
No rows affected (0.012 seconds)
0: jdbc:hive2://1.8.15.62:21066/> use r_hive_db;
No rows affected (0.031 seconds)
0: jdbc:hive2://1.8.15.62:21066/> insert overwrite directory '/dt/' row format delimited fields terminated by '|' select * from tmp_mr_s1_mme_cellids1030;
INFO : Number of reduce tasks is set to 0 since there's no reduce operator
INFO : number of splits:17
INFO : Submitting tokens for job: job_1475071482566_2477152
INFO : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hacluster, Ident: (HDFS_DELEGATION_TOKEN token 19422634 for jc_rc)
INFO : Kind: HIVE_DELEGATION_TOKEN, Service: HiveServer2ImpersonationToken, Ident: 00 05 6a 63 5f 72 63 05 6a 63 5f 72 63 21 68 69 76 65 2f 68 61 64 6f 6f 70 2e 68 61 64 6f 6f 70 2e 63 6f 6d 40 48 41 44 4f 4f 50 2e 43 4f 4d 8a 01 58 28 d2 8f 0b 8a 01 58 4c df 13 0b 8d 0d 6c 4b 8e 03 98
INFO : The url to track the job: https://pc-z1:26001/proxy/application_1475071482566_2477152/
INFO : Starting Job = job_1475071482566_2477152, Tracking URL = https://pc-z1:26001/proxy/application_1475071482566_2477152/
INFO : Kill Command = /opt/huawei/Bigdata/FusionInsight_V100R002C60SPC200/FusionInsight-Hive-1.3.0/hive-1.3.0/bin/..//../hadoop/bin/hadoop job -kill job_1475071482566_2477152
INFO : Hadoop job information for Stage-1: number of mappers: 17; number of reducers: 0
INFO : 2016-11-03 14:40:52,492 Stage-1 map = 0%, reduce = 0%
INFO : 2016-11-03 14:40:58,835 Stage-1 map = 76%, reduce = 0%, Cumulative CPU 28.78 sec
INFO : 2016-11-03 14:40:59,892 Stage-1 map = 88%, reduce = 0%, Cumulative CPU 33.55 sec
INFO : 2016-11-03 14:41:10,486 Stage-1 map = 94%, reduce = 0%, Cumulative CPU 37.13 sec
INFO : 2016-11-03 14:41:11,549 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 41.13 sec
INFO : MapReduce Total cumulative CPU time: 41 seconds 130 msec
INFO : Ended Job = job_1475071482566_2477152
INFO : Stage-3 is filtered out by condition resolver.
INFO : Stage-2 is selected by condition resolver.
INFO : Stage-4 is filtered out by condition resolver.
INFO : Number of reduce tasks is set to 0 since there's no reduce operator
INFO : number of splits:1
INFO : Submitting tokens for job: job_1475071482566_2477181
INFO : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hacluster, Ident: (HDFS_DELEGATION_TOKEN token 19422663 for jc_rc)
INFO : Kind: HIVE_DELEGATION_TOKEN, Service: HiveServer2ImpersonationToken, Ident: 00 05 6a 63 5f 72 63 05 6a 63 5f 72 63 21 68 69 76 65 2f 68 61 64 6f 6f 70 2e 68 61 64 6f 6f 70 2e 63 6f 6d 40 48 41 44 4f 4f 50 2e 43 4f 4d 8a 01 58 28 d2 8f 0b 8a 01 58 4c df 13 0b 8d 0d 6c 4b 8e 03 98
INFO : The url to track the job: https://pc-z1:26001/proxy/application_1475071482566_2477181/
INFO : Starting Job = job_1475071482566_2477181, Tracking URL = https://pc-z1:26001/proxy/application_1475071482566_2477181/
INFO : Kill Command = /opt/huawei/Bigdata/FusionInsight_V100R002C60SPC200/FusionInsight-Hive-1.3.0/hive-1.3.0/bin/..//../hadoop/bin/hadoop job -kill job_1475071482566_2477181
INFO : Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 0
INFO : 2016-11-03 14:41:22,190 Stage-2 map = 0%, reduce = 0%
INFO : 2016-11-03 14:41:28,571 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 2.2 sec
INFO : MapReduce Total cumulative CPU time: 2 seconds 200 msec
INFO : Ended Job = job_1475071482566_2477181
INFO : Moving data to directory /dt from hdfs://hacluster/dt/.hive-staging_hive_2016-11-03_14-40-43_774_4317869403646242426-140183/-ext-10000
No rows affected (46.604 seconds)
[rc@vq18ptkh01 dt]$ hadoop fs -ls /dt
16/11/03 14:46:18 INFO hdfs.PeerCache: SocketCache disabled.
Found 1 items
-rwxrwxrwx+ 3 jc_rc supergroup 26819 2016-11-03 14:41 /dt/000000_0
[rc@vq18ptkh01 dt]$ hadoop fs -copyToLocal /dt/000000_0
16/11/03 14:46:33 INFO hdfs.PeerCache: SocketCache disabled.
[rc@vq18ptkh01 dt]$ ls
000000_0
[rc@vq18ptkh01 dt]$ ls
000000_0 000001_0 000002_0 000003_0 000004_0 000005_0
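The export produced six part files; before FTPing them one by one, they could also be combined into a single file. On the cluster side, `hadoop fs -getmerge /dt ./merged.txt` does this directly; below is a local Python sketch of the same concatenation (the 00000N_0 naming pattern is assumed):

```python
# Local equivalent of `hadoop fs -getmerge`: concatenate Hive part
# files (000000_0, 000001_0, ...) in name order into one file.
import glob
import os
import shutil

def merge_parts(src_dir, dest_path):
    parts = sorted(glob.glob(os.path.join(src_dir, "[0-9]*_0")))
    with open(dest_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)
    return parts
```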
[rc@vq18ptkh01 dt]$ ftp 10.70.41.126 21
Connected to 10.70.41.126 (10.70.41.126).
220 10.70.41.126 FTP server ready
Name (10.70.41.126:rc): joy
331 Password required for joy.
Password:
230 User joy logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put 000000_0 /Temp/a_dt/
local: 000000_0 remote: /Temp/a_dt/
227 Entering Passive Mode (10,70,41,126,168,163).
550 /Temp/a_dt/: Not a regular file
ftp> put
(local-file) 000000_0
(remote-file) /Temp/a_dt/000000_0
local: 000000_0 remote: /Temp/a_dt/000000_0
227 Entering Passive Mode (10,70,41,126,168,207).
150 Opening BINARY mode data connection for /Temp/a_dt/000000_0
226 Transfer complete.
1049905992 bytes sent in 33 secs (31787.20 Kbytes/sec)
ftp> put 000001_0 /Temp/a_dt/000001_0
local: 000001_0 remote: /Temp/a_dt/000001_0
227 Entering Passive Mode (10,70,41,126,168,255).
150 Opening BINARY mode data connection for /Temp/a_dt/000001_0
452 Transfer aborted. No space left on device
ftp> put 000002_0 /Temp/a_dt/000002_0
local: 000002_0 remote: /Temp/a_dt/000002_0
227 Entering Passive Mode (10,70,41,126,169,20).
150 Opening BINARY mode data connection for /Temp/a_dt/000002_0
452 Transfer aborted. No space left on device
ftp> put 000003_0 /Temp/a_dt/000003_0
local: 000003_0 remote: /Temp/a_dt/000003_0
227 Entering Passive Mode (10,70,41,126,169,40).
150 Opening BINARY mode data connection for /Temp/a_dt/000003_0
452 Transfer aborted. No space left on device
ftp> put 000004_0 /Temp/a_dt/000004_0
local: 000004_0 remote: /Temp/a_dt/000004_0
227 Entering Passive Mode (10,70,41,126,169,66).
150 Opening BINARY mode data connection for /Temp/a_dt/000004_0
452 Transfer aborted. No space left on device
ftp> put 000005_0 /Temp/a_dt/000005_0
local: 000005_0 remote: /Temp/a_dt/000005_0
227 Entering Passive Mode (10,70,41,126,169,85).
150 Opening BINARY mode data connection for /Temp/a_dt/000005_0
226 Transfer complete.
23465237 bytes sent in 0.747 secs (31391.79 Kbytes/sec)
ftp>
To view the contents of a file on HDFS that is too large to load in one go, pipe it through a pager:
hadoop fs -cat /user/my/ab.txt | more
(`hadoop fs -tail <file>` shows just the end of the file.)