Overview

  • Apache Impala (incubating) is the open source, native analytic database for Apache Hadoop.

Features

  • Do BI-style Queries on Hadoop:

    • Low latency and high concurrency for BI/analytic queries on Hadoop (not delivered by batch frameworks such as Apache Hive).
    • Scales linearly, even in multitenant environments.
  • Unify Your Infrastructure: Utilize the same file and data formats, metadata, security, and resource management frameworks as your Hadoop deployment, with no redundant infrastructure or data conversion/duplication.
  • Implement Quickly: Supports SQL.
  • Count on Enterprise-class Security
  • Retain Freedom from Lock-in: open-source
  • Expand the Hadoop User-verse

Architecture

  • Circumvents MapReduce to avoid its latency, accessing the data directly through a specialized distributed query engine that is very similar to those found in commercial parallel RDBMSs.
  • Some advantages:
    • Thanks to local processing on data nodes, network bottlenecks are avoided.
    • A single, open, and unified metadata store can be utilized.
    • Costly data format conversion is unnecessary and thus no overhead is incurred.
    • All data is immediately query-able, with no delays for ETL.
    • All hardware is utilized for Impala queries as well as for MapReduce.
    • Only a single machine pool is needed to scale.

Documentation

... skip

Impala User-Defined Functions (UDFs)

  • UDFs let you code your own application logic for processing column values during an Impala query.

UDF Concepts

  • You can code either scalar functions that produce a result one row at a time, or more complex aggregate functions that perform analysis across multiple rows.

UDFs and UDAFs

  • The most general kind of UDF takes a single input value and produces a single output value. When used in a query, it is called once for each row in the result set. For example:

    select customer_name, is_frequent_customer(customer_id) from customers;
    select obfuscate(sensitive_column) from sensitive_data;
  • A user-defined aggregate function (UDAF) accepts a group of values and returns a single value. You can use UDAFs to summarize and condense sets of rows, in the same style as the built-in COUNT(), MAX(), SUM(), and AVG() functions. When called in a query that uses the GROUP BY clause, the function is called once for each combination of GROUP BY values (a registration sketch appears below). For example:
    -- Evaluates multiple rows but returns a single value.
    select closest_restaurant(latitude, longitude) from places;

    -- Evaluates batches of rows and returns a separate value for each batch.
    select most_profitable_location(store_id, sales, expenses, tax_rate, depreciation) from franchise_data group by year;
  • Currently, Impala does not support other categories of UDFs, such as user-defined table functions (UDTFs) or window functions.
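  • Unlike a scalar UDF, a UDAF is registered with CREATE AGGREGATE FUNCTION. A minimal sketch, assuming a hypothetical sum_of_squares aggregate whose native implementation has been compiled into a shared library at an assumed HDFS path:

    -- Hypothetical library path and function names, for illustration only.
    create aggregate function sum_of_squares(bigint)
    returns bigint
    location '/user/impala/udfs/libudasample.so'
    init_fn='SumOfSquaresInit'
    update_fn='SumOfSquaresUpdate'
    merge_fn='SumOfSquaresMerge'
    finalize_fn='SumOfSquaresFinalize';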

Native Impala UDFs

  • Impala supports UDFs written in C++, in addition to supporting existing Hive UDFs written in Java.
  • Where practical, use C++ UDFs because the compiled native code can yield higher performance, with UDF execution time often 10x faster for a C++ UDF than the equivalent Java UDF.
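  • For illustration, a registration sketch: assuming a scalar UDF has been compiled from C++ into a shared library and copied to a hypothetical HDFS path, it is registered with a CREATE FUNCTION statement whose LOCATION points at the .so and whose SYMBOL names the C++ function, and can then be called like a built-in (the table and columns below are assumed):

    -- Hypothetical library path, symbol name, and table.
    create function add_tax(double, double) returns double
    location '/user/impala/udfs/libmyudfs.so'
    symbol='AddTax';

    select add_tax(price, tax_rate) from sales_table;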

Using Hive UDFs with Impala

  • Impala can run Java-based user-defined functions (UDFs), originally written for Hive, with no changes, subject to the following conditions:

    • The parameters and return value must all use scalar data types supported by Impala; that is to say, complex or nested types are not supported.
    • Currently, Hive UDFs that accept or return the TIMESTAMP type are not supported.
    • Hive UDAFs and UDTFs are not supported.
    • Typically, a Java UDF will execute several times slower in Impala than the equivalent native UDF written in C++.
  • What to do next?
    • Write your UDF.
    • Upload the JAR to an HDFS path where Impala can read it.
    • For each Java-based UDF that you want to call through Impala, issue a CREATE FUNCTION statement, with a LOCATION clause containing the full HDFS path of the JAR file, and a SYMBOL clause with the fully qualified name of the class, using dots as separators and without the .class extension. For example:
      create function my_neg(bigint)
      returns bigint location '/user/hive/udfs/hive.jar'
      symbol = 'org.apache.hadoop.hive.ql.udf.UDFOPNegative';
    • Call the function from your queries, passing arguments of the correct type to match the function signature, as in the sketch below.
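    • A minimal usage sketch (the table t and its BIGINT column x are assumed for illustration, not taken from the Impala docs):

      select x, my_neg(x) from t;

      -- Confirm that the function is registered in the current database.
      show functions;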
