Hive GenericUDF2
Let's look at another small example: computing score totals. Suppose the following table exists in Hive:
```
hive> describe tb_test2;
OK
name    string
score_list      array<map<string,int>>
Time taken: 0.074 seconds
hive> select * from tb_test2;
OK
A       [{"math":100,"english":90,"history":85}]
B       [{"math":95,"english":80,"history":100}]
C       [{"math":80,"english":90,"histroy":100}]
Time taken: 0.107 seconds
```
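The original post does not show how `tb_test2` was created. A DDL along these lines would produce the schema above (the row-format delimiters are assumptions, chosen to match Hive's defaults for tab-separated data):

```sql
-- Hypothetical DDL for tb_test2; the delimiters are assumed, not from the post
CREATE TABLE tb_test2 (
  name STRING,
  score_list ARRAY<MAP<STRING, INT>>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY ','
  MAP KEYS TERMINATED BY ':';
```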
Now write the GenericUDF:
```java
package com.wz.udf;

import java.util.ArrayList;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.lazy.LazyMap;
import org.apache.hadoop.hive.serde2.lazy.LazyString;
import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.MapObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;

public class helloGenericUDFNew extends GenericUDF {
    // Object inspectors for the two input arguments
    private StringObjectInspector nameOI;
    private ListObjectInspector listOI;
    private MapObjectInspector mapOI;

    @Override
    public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
        if (arguments.length != 2) {
            throw new UDFArgumentLengthException("hellonew() takes exactly two arguments");
        }
        nameOI = (StringObjectInspector) arguments[0];
        listOI = (ListObjectInspector) arguments[1];
        mapOI = (MapObjectInspector) listOI.getListElementObjectInspector();

        // Define the output struct: (name string, totalScore int)
        ArrayList<String> structFieldNames = new ArrayList<String>();
        ArrayList<ObjectInspector> structFieldObjectInspectors = new ArrayList<ObjectInspector>();
        structFieldNames.add("name");
        structFieldNames.add("totalScore");
        structFieldObjectInspectors.add(PrimitiveObjectInspectorFactory.writableStringObjectInspector);
        structFieldObjectInspectors.add(PrimitiveObjectInspectorFactory.writableIntObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(
                structFieldNames, structFieldObjectInspectors);
    }

    @Override
    public Object evaluate(DeferredObject[] arguments) throws HiveException {
        LazyString lazyName = (LazyString) arguments[0].get();
        String strName = nameOI.getPrimitiveJavaObject(lazyName);
        int nElements = listOI.getListLength(arguments[1].get());
        int nTotalScore = 0;

        // Walk the list, summing every value of every map
        for (int i = 0; i < nElements; i++) {
            LazyMap lazyMap = (LazyMap) listOI.getListElement(arguments[1].get(), i);
            for (Object value : mapOI.getMap(lazyMap).values()) {
                nTotalScore += Integer.parseInt(value.toString());
            }
        }

        // Return the output struct as an Object array
        Object[] result = new Object[2];
        result[0] = new Text(strName);
        result[1] = new IntWritable(nTotalScore);
        return result;
    }

    @Override
    public String getDisplayString(String[] children) {
        assert (children.length > 0);
        StringBuilder sb = new StringBuilder();
        sb.append("helloGenericUDFNew(");
        sb.append(children[0]);
        sb.append(")");
        return sb.toString();
    }
}
```
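Stripped of the Hive `ObjectInspector` plumbing, the core of `evaluate()` is just a nested sum over a list of maps. A minimal sketch in plain Java (the `ScoreSum` class and its method name are illustrative, not part of the Hive API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ScoreSum {
    // Sum every value of every map in the list, as the UDF's evaluate() does
    public static int totalScore(List<Map<String, Integer>> scoreList) {
        int total = 0;
        for (Map<String, Integer> scores : scoreList) {
            for (int v : scores.values()) {
                total += v;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Row "A" from tb_test2: one map with three subject scores
        Map<String, Integer> scores = new LinkedHashMap<String, Integer>();
        scores.put("math", 100);
        scores.put("english", 90);
        scores.put("history", 85);
        List<Map<String, Integer>> scoreList = new ArrayList<Map<String, Integer>>();
        scoreList.add(scores);
        System.out.println(totalScore(scoreList)); // 275
    }
}
```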
Running it in Hive gives the following result:
```
hive> add jar /home/wangzhun/hive/hive-0.8.1/lib/helloGenericUDFNew.jar;
Added /home/wangzhun/hive/hive-0.8.1/lib/helloGenericUDFNew.jar to class path
Added resource: /home/wangzhun/hive/hive-0.8.1/lib/helloGenericUDFNew.jar
hive> create temporary function hellonew as 'com.wz.udf.helloGenericUDFNew';
OK
Time taken: 0.016 seconds
hive> select hellonew(tb_test2.name,tb_test2.score_list) from tb_test2;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201312091733_0018, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201312091733_0018
Kill Command = /home/wangzhun/hadoop/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201312091733_0018
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-12-09 22:31:22,328 Stage-1 map = 0%, reduce = 0%
2013-12-09 22:31:25,354 Stage-1 map = 100%, reduce = 0%
2013-12-09 22:31:28,390 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201312091733_0018
MapReduce Jobs Launched:
Job 0: Map: 1 HDFS Read: 99 HDFS Write: 18 SUCESS
Total MapReduce CPU Time Spent: 0 msec
OK
{"people":"A","totalscore":275}
{"people":"B","totalscore":275}
{"people":"C","totalscore":270}
Time taken: 21.7 seconds
```
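Because the UDF returns a struct, individual fields can be extracted with dot notation through a subquery. A sketch (the field names follow the `initialize()` definition shown above; note the logged output uses `people`/`totalscore`, suggesting the jar in the transcript was built from a slightly different version of the code):

```sql
-- Sketch: pull individual fields out of the UDF's struct result
SELECT s.name, s.totalScore
FROM (
  SELECT hellonew(name, score_list) AS s FROM tb_test2
) t;
```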