Reposted from: http://www.cnblogs.com/siwei1988/archive/2012/08/06/2624912.html

Pig Latin is a dataflow language. Variable names follow the same rules as Java identifiers. A variable name can be reused (not recommended; doing so effectively creates a new variable and discards the old one):

A = load 'NYSE_dividends' as (exchange, symbol, date, dividends);
A = filter A by dividends > 0;
A = foreach A generate UPPER(symbol);

• Comments: -- for single-line comments; /*……*/ for multi-line comments.

• Pig Latin keywords such as load and foreach are case-insensitive, but variable names and UDF names are case-sensitive: COUNT is a UDF, so it is not the same as count.
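A minimal sketch of both rules, reusing the NYSE_dividends fields from the example above (keyword casing is mixed deliberately):

```pig
/* Keywords may be written in any case,
   but aliases and UDF names are case-sensitive. */
divs = LOAD 'NYSE_dividends' AS (exchange, symbol, date, dividends);
grpd = group divs all;                     -- 'group' and 'GROUP' are equivalent
cnt  = FOREACH grpd GENERATE COUNT(divs);  -- must be COUNT, not count
```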

• load: loading data

By default load reads from the current user's home directory (/users/yourlogin); you can change the current directory with the cd command in grunt.

divs = load '/data/examples/NYSE_dividends'

You can also give a full path:

divs = load 'hdfs://nn.acme.com/data/examples/NYSE_dividends'

The default field delimiter is TAB (\t); use using to specify a different delimiter:

divs = load 'NYSE_dividends' using PigStorage(',');

Note: the delimiter must be a single character.

using can also specify a different load function:

divs = load 'NYSE_dividends' using HBaseStorage();

as defines the schema:

divs = load 'NYSE_dividends' as (exchange, symbol, date, dividends);

You can also load every file in a directory with a wildcard; files in all subdirectories are loaded too. Which wildcards are available depends on the Hadoop filesystem; the table below lists those supported by Hadoop 0.20.

glob comment
? Matches any single character.
* Matches zero or more characters.
[abc] Matches a single character from character set (a,b,c).
[a-z] Matches a single character from the character range (a..z), inclusive. The first character must be lexicographically less than or equal to the second character.
[^abc] Matches a single character that is not in the character set (a, b, c). The ^ character must occur immediately to the right of the opening bracket.
[^a-z] Matches a single character that is not from the character range (a..z) inclusive. The ^ character must occur immediately to the right of the opening bracket.
\c Removes (escapes) any special meaning of character c.
{ab,cd} Matches a string from the string set {ab, cd}
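As a sketch of combining a glob with load (this path and directory layout are hypothetical):

```pig
-- Load every daily file for January and February 2009,
-- assuming one file per trading day under /data/NYSE_daily
daily = load '/data/NYSE_daily/2009-0{1,2}-*' as (exchange, symbol, date, close);
```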

• as defines a schema; it can be used in load … [as (ColumnName[:type])] and in foreach…generate ColumnName [as newColumnName].

• store: storing data. By default it uses PigStorage with TAB as the delimiter.

store processed into '/data/examples/processed';

A full path such as hdfs://nn.acme.com/data/examples/processed also works.

using can invoke another storage function or delimiter:

store processed into 'processed' using HBaseStorage();
store processed into 'processed' using PigStorage(',');

Note: the output is not a single file; it is split into multiple part files, one per reduce task.

• foreach…generate [*] [begin .. end]

* projects all fields and can also be passed to a UDF;

.. projects the range between begin and end, inclusive:

prices = load 'NYSE_daily' as (exchange, symbol, date, open,
high, low, close, volume, adj_close);
beginning = foreach prices generate ..open; -- produces exchange, symbol, date, open
middle = foreach prices generate open..close; -- produces open, high, low, close
end = foreach prices generate volume..; -- produces volume, adj_close

Normally the fields produced by foreach…generate keep their original names and types, but fields produced by expressions do not; use as after the expression to assign a name:

divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
sym = foreach divs generate symbol;
describe sym;
sym: {symbol: chararray}
divs     = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
in_cents = foreach divs generate dividends * 100.0 as dividend, dividends * 100.0;
describe in_cents;
in_cents: {dividend: double,double}

• # performs map lookup; . performs tuple projection:

bball = load 'baseball' as (name:chararray, team:chararray,
position:bag{t:(p:chararray)}, bat:map[]);
avg = foreach bball generate bat#'batting_average';
A = load 'input' as (t:tuple(x:int, y:int));
B = foreach A generate t.x, t.$1;

• Extracting data from a bag

A = load 'input' as (b:bag{t:(x:int, y:int)});
B = foreach A generate b.x;
A = load 'input' as (b:bag{t:(x:int, y:int)});
B = foreach A generate b.(x, y);

The following will not run:

A = load 'foo' as (x:chararray, y:int, z:int);
B = group A by x; -- produces bag A containing all the records for a given value of x
C = foreach B generate SUM(A.y + A.z);

because A.y and A.z are both bags, and the + operator is not defined for bags.

The correct approach:

A = load 'foo' as (x:chararray, y:int, z:int);
A1 = foreach A generate x, y + z as yz;
B = group A1 by x;
C = foreach B generate SUM(A1.yz);

• Nesting statements inside foreach

--distinct_symbols.pig
daily = load 'NYSE_daily' as (exchange, symbol); -- not interested in other fields
grpd = group daily by exchange;
uniqcnt = foreach grpd {
sym = daily.symbol;
uniq_sym = distinct sym;
generate group, COUNT(uniq_sym);
};

Note: inside foreach only the distinct, filter, limit, and order keywords are supported, and the last statement must be generate;

--double_distinct.pig
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray);
grpd = group divs all;
uniq = foreach grpd {
exchanges = divs.exchange;
uniq_exchanges = distinct exchanges;
symbols = divs.symbol;
uniq_symbols = distinct symbols;
generate COUNT(uniq_exchanges), COUNT(uniq_symbols);
};

• flatten removes bag nesting

--flatten.pig
players = load 'baseball' as (name:chararray, team:chararray,
position:bag{t:(p:chararray)}, bat:map[]);
pos = foreach players generate name, flatten(position) as position;
bypos = group pos by position;
--flatten_noempty.pig
players = load 'baseball' as (name:chararray, team:chararray,
position:bag{t:(p:chararray)}, bat:map[]);
noempty = foreach players generate name,
((position is null or IsEmpty(position)) ? {('unknown')} : position)
as position;
pos = foreach noempty generate name, flatten(position) as position;
bypos = group pos by position;

• filter (note: boolean expressions in Pig also short-circuit)

Note: null == anything yields null, not true, so records with a null in the comparison are dropped by both the filter and its negation.
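A sketch of handling nulls explicitly, reusing the NYSE_dividends schema from earlier examples:

```pig
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
-- a record with a null dividends fails BOTH tests below,
-- so test for null explicitly if you need to keep or count such records
positive = filter divs by dividends is not null and dividends > 0;
missing  = filter divs by dividends is null;
```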

• filter combined with matches uses a regular expression (prefix matches with not to negate the match)

Pig regular expressions use the same syntax as Java's; see http://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html

Escaping special characters: write \\ followed by the Unicode escape code.

. (dot)            ==> \\u002E
$ (dollar sign)    ==> \\u0024
^ (caret)          ==> \\u005E
{ (left brace)     ==> \\u007B
[ (left bracket)   ==> \\u005B
( (left paren)     ==> \\u0028
| (pipe)           ==> \\u007C
) (right paren)    ==> \\u0029
* (asterisk)       ==> \\u002A
+ (plus)           ==> \\u002B
? (question mark)  ==> \\u003F
\ (backslash)      ==> \\u005C

The following example finds records whose symbol does not contain "CM.":

-- filter_not_matches.pig
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
notstartswithcm = filter divs by not symbol matches '.*CM\\u002E.*';

• The result of group can be thought of as a map: the key is the value grouped on, and the value is a bag of the matching records.

You can group on several fields at once by putting them in parentheses, and group…all groups the entire relation (note: there is no by with all). The grouped relation has two parts: the first field is named group (this cannot be changed) and holds the key; the second is a bag with the same schema (and name) as the original relation.

--twokey.pig
daily = load 'NYSE_daily' as (exchange, stock, date, dividends);
grpd = group daily by (exchange, stock);
avg = foreach grpd generate group, AVG(daily.dividends);
describe grpd;
grpd: {group: (exchange: bytearray,stock: bytearray),daily: {exchange: bytearray,
stock: bytearray,date: bytearray,dividends: bytearray}}
--countall.pig
daily = load 'NYSE_daily' as (exchange, stock);
grpd = group daily all;
cnt = foreach grpd generate COUNT(daily);

• cogroup groups several relations at once

Note: all records whose key is null are grouped together; cogroup behaves like group here, unlike join.

A = load 'input1' as (id:int, val:float);
B = load 'input2' as (id:int, val2:int);
C = cogroup A by id, B by id;
describe C;
C: {group: int,A: {id: int,val: float},B: {id: int,val2: int}}

• order by

Sorting on a single column:

--order.pig
daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float, close:float,
volume:int, adj_close:float);
bydate = order daily by date;

Sorting on multiple columns:

--order2key.pig
daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float,
close:float, volume:int, adj_close:float);
bydatensymbol = order daily by date, symbol;

The desc keyword sorts in descending order; null sorts lower than any other value.

--orderdesc.pig
daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float, close:float,
volume:int, adj_close:float);
byclose = order daily by close desc, open;
dump byclose; -- open still sorted in ascending order

• distinct removes duplicates of whole tuples only; it cannot deduplicate on a subset of columns

--distinct.pig
-- find a distinct list of ticker symbols for each exchange
-- This load will truncate the records, picking up just the first two fields.
daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray);
uniq = distinct daily;

• join / left join / right join

null does not match anything, not even another null:

-- join2key.pig
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close,
volume, adj_close);
divs = load 'NYSE_dividends' as (exchange, symbol, date, dividends);
jnd = join daily by (symbol, date), divs by (symbol, date);
--leftjoin.pig
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close,
volume, adj_close);
divs = load 'NYSE_dividends' as (exchange, symbol, date, dividends);
jnd = join daily by (symbol, date) left outer, divs by (symbol, date);
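Since null keys match nothing, it can help to filter them out (or route them elsewhere) before a join. A sketch using the relations above:

```pig
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close,
volume, adj_close);
divs = load 'NYSE_dividends' as (exchange, symbol, date, dividends);
-- rows with a null symbol or date can never find a join partner
daily_ok = filter daily by symbol is not null and date is not null;
jnd = join daily_ok by (symbol, date), divs by (symbol, date);
```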

You can also join more than two relations at once, but only as an inner join:

A = load 'input1' as (x, y);
B = load 'input2' as (u, v);
C = load 'input3' as (e, f);
alpha = join A by x, B by u, C by e;

A relation can also be joined with itself, but the data must be loaded twice:

--selfjoin.pig
-- For each stock, find all dividends that increased between two dates
divs1 = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends);
divs2 = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends);
jnd = join divs1 by symbol, divs2 by symbol;
increased = filter jnd by divs1::date < divs2::date and
divs1::dividends < divs2::dividends;

The following does not work:

--selfjoin.pig
-- For each stock, find all dividends that increased between two dates
divs1 = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends);
jnd = join divs1 by symbol, divs1 by symbol;
increased = filter jnd by divs1::date < divs2::date and
divs1::dividends < divs2::dividends;

• union is the counterpart of SQL's union, except that Pig's union accepts two relations with different schemas: if the two schemas are identical, the result has that schema; if one schema can be obtained from the other by implicit casts, the result has the cast schema; otherwise the result has no schema.

A = load 'input1' as (x:int, y:float);
B = load 'input2' as (x:int, y:float);
C = union A, B;
describe C;
C: {x: int,y: float}

A = load 'input1' as (x:double, y:float);
B = load 'input2' as (x:int, y:double);
C = union A, B;
describe C;
C: {x: double,y: double}

A = load 'input1' as (x:int, y:float);
B = load 'input2' as (x:int, y:chararray);
C = union A, B;
describe C;
Schema for C unknown.
Note: the last form of union cannot be executed in Pig 1.0.

Note: union does not remove duplicate rows.
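To get the deduplicating behavior of SQL's UNION, follow the union with distinct; a sketch:

```pig
A = load 'input1' as (x:int, y:float);
B = load 'input2' as (x:int, y:float);
C = union A, B;    -- keeps duplicates, like SQL's UNION ALL
D = distinct C;    -- duplicates within and across A and B removed
```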

To union two relations whose column names differ, use the onschema keyword:

A = load 'input1' as (w: chararray, x:int, y:float);
B = load 'input2' as (x:int, y:double, z:chararray);
C = union onschema A, B;
describe C; C: {w: chararray,x: int,y: double,z: chararray}

• cross computes the Cartesian product: with inputs of m rows and n rows, the output has m*n rows.

--thetajoin.pig
--I recommend running this one on a cluster too
daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float,
close:float, volume:int, adj_close:float);
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
crossed = cross daily, divs;
tjnd = filter crossed by daily::date < divs::date;

• limit

--limit.pig
divs = load 'NYSE_dividends';
first10 = limit divs 10;

Apart from order by, nothing in Pig guarantees row order, so the rows returned by the script above can differ from run to run.

• sample picks a random subset of the data, useful for generating test data. The script below keeps roughly 10% of the rows.

--sample.pig
divs = load 'NYSE_dividends';
some = sample divs 0.1;

• parallel sets the number of Pig's reduce tasks

--parallel.pig
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close,
volume, adj_close);
bysymbl = group daily by symbol parallel 10;

parallel applies to a single statement; to give every statement in the script 10 reduce tasks, use the set default_parallel 10 command:

--defaultparallel.pig
set default_parallel 10;
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close,
volume, adj_close);
bysymbl = group daily by symbol;
average = foreach bysymbl generate group, AVG(daily.close) as avg;
sorted = order average by avg desc;

If both parallel and set default_parallel are used, the parallel clause overrides default_parallel.
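A sketch combining the two: default_parallel covers the whole script, and a single heavy group overrides it:

```pig
set default_parallel 10;
daily = load 'NYSE_daily' as (exchange, symbol, date, open, high, low, close,
volume, adj_close);
-- this group gets 20 reduce tasks; every other reduce stage gets 10
bysymbl = group daily by symbol parallel 20;
average = foreach bysymbl generate group, AVG(daily.close) as avg;
```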

• UDFs

Registering a UDF:

--register.pig
register 'your_path_to_piggybank/piggybank.jar';
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
backwards = foreach divs generate
org.apache.pig.piggybank.evaluation.string.Reverse(symbol);

Defining an alias for a UDF:

--define.pig
register 'your_path_to_piggybank/piggybank.jar';
define reverse org.apache.pig.piggybank.evaluation.string.Reverse();
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
backwards = foreach divs generate reverse(symbol);

A UDF whose constructor takes arguments:

--define_constructor_args.pig
register 'acme.jar';
define convert com.acme.financial.CurrencyConverter('dollar', 'euro');
divs = load 'NYSE_dividends' as (exchange:chararray, symbol:chararray,
date:chararray, dividends:float);
backwards = foreach divs generate convert(dividends);

• Invoking static Java functions (less efficient)

--invoker.pig
define hex InvokeForString('java.lang.Integer.toHexString', 'int');
divs = load 'NYSE_daily' as (exchange, symbol, date, open, high, low,
close, volume, adj_close);
nonnull = filter divs by volume is not null;
inhex = foreach nonnull generate symbol, hex((int)volume);

If the function's parameter is an array, what gets passed is a bag:

define stdev InvokeForDouble('com.acme.Stats.stdev', 'double[]');
A = load 'input' as (id: int, dp:double);
B = group A by id;
C = foreach B generate group, stdev(A.dp);

• multiquery: a script with several store statements shares the common part of the dataflow instead of recomputing it

--multiquery.pig
players = load 'baseball' as (name:chararray, team:chararray,
position:bag{t:(p:chararray)}, bat:map[]);
pwithba = foreach players generate name, team, position,
bat#'batting_average' as batavg;
byteam = group pwithba by team;
avgbyteam = foreach byteam generate group, AVG(pwithba.batavg);
store avgbyteam into 'by_team';
flattenpos = foreach pwithba generate name, team,
flatten(position) as position, batavg;
bypos = group flattenpos by position;
avgbypos = foreach bypos generate group, AVG(flattenpos.batavg);
store avgbypos into 'by_position';

• split

wlogs = load 'weblogs' as (pageid, url, timestamp);
split wlogs into apr03 if timestamp < '20110404',
apr02 if timestamp < '20110403' and timestamp > '20110401',
apr01 if timestamp < '20110402' and timestamp > '20110331';
store apr03 into '20110403';
store apr02 into '20110402';
store apr01 into '20110401';

• Setting the Pig environment

Parameter Value Type Description
debug string Sets the logging level to DEBUG. Equivalent to passing -debug DEBUG on the command line.
default_parallel integer Sets a default parallel level for all reduce operations in the script. See the section called “Parallel” for details.
job.name string Assigns a name to the Hadoop job. By default the name is the filename of the script being run, or a randomly generated name for interactive sessions.
job.priority string If your Hadoop cluster is using the Capacity Scheduler with priorities enabled for queues, this allows you to set the priority of your Pig job. Allowed values are very_low, low, normal, high, very_high.

• parameter: passing parameters into a Pig script

--daily.pig
daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float, close:float,
volume:int, adj_close:float);
yesterday = filter daily by date == '$DATE';
grpd = group yesterday all;
minmax = foreach grpd generate MAX(yesterday.high), MIN(yesterday.low);

Pass parameters with -p; each parameter needs its own -p:

pig -p DATE=2009-12-17 daily.pig

Parameters can also be placed in a file, one per line, with comment lines starting with #; pass the parameter file with -m or -param_file.

The Pig script:

wlogs = load 'clicks/$YEAR$MONTH01' as (url, pageid, timestamp);

The parameter file:

#Param file
YEAR=2009-
MONTH=12-
DAY=17
DATE=$YEAR$MONTH$DAY

Run:

pig -param_file daily.params daily.pig

Parameters can also be defined inside the script with %declare or %default; %default sets a default value that can be overridden when needed.

Note: %declare and %default cannot be used in the following situation:

  • a Pig script that is not itself a macro and is invoked by another script (they work if the script is not invoked from elsewhere)
%default parallel_factor 10;
wlogs = load 'clicks' as (url, pageid, timestamp);
grp = group wlogs by pageid parallel $parallel_factor;
cntd = foreach grp generate group, COUNT(wlogs);

• Defining macros, which act like subroutines

--macro.pig
-- Given daily input and a particular year, analyze how
-- stock prices changed on days dividends were paid out.
define dividend_analysis (daily, year, daily_symbol, daily_open, daily_close)
returns analyzed {
divs = load 'NYSE_dividends' as (exchange:chararray,
symbol:chararray, date:chararray, dividends:float);
divsthisyear = filter divs by date matches '$year-.*';
dailythisyear = filter $daily by date matches '$year-.*';
jnd = join divsthisyear by symbol, dailythisyear by $daily_symbol;
$analyzed = foreach jnd generate dailythisyear::$daily_symbol,
$daily_close - $daily_open;
};

daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float, close:float,
volume:int, adj_close:float);
results = dividend_analysis(daily, '2009', 'symbol', 'open', 'close');

• Importing a Pig file: the imported file is executed once, as if its text were spliced into the importing script; the imported file cannot contain its own variable definitions.

--main.pig
import '../examples/ch6/dividend_analysis.pig';

daily = load 'NYSE_daily' as (exchange:chararray, symbol:chararray,
date:chararray, open:float, high:float, low:float, close:float,
volume:int, adj_close:float);
results = dividend_analysis(daily, '2009', 'symbol', 'open', 'close');

The search path defaults to the current directory; use set pig.import.search.path to change it:

set pig.import.search.path '/usr/local/pig,/grid/pig';
import 'acme/macros.pig';
