As long as the standard output is in the "k1=v1" format, capture-output works for ordinary java-actions and shell-actions alike:

Here we test it with test.py:

##test.py
#! /opt/anaconda3/bin/python
import re
import os
import sys
import traceback

if __name__ == '__main__':
    try:
        print("k1=v1")
        print(aaa)  ## a deliberate error
    except Exception as e:
        print(traceback.format_exc())
    exit(0)  ## Important: if the script exits abnormally, capture-output is discarded. To surface error details, always exit 0 and route the failure through a decision node instead.
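Building on test.py, a slightly more robust sketch reports errors as extra k=v pairs so a downstream decision node can branch on them. The keys "status" and "error" are illustrative additions, not part of the original workflow:

```python
#! /opt/anaconda3/bin/python
# Sketch: always exit 0 so Oozie's capture-output survives; report
# success/failure through k=v lines instead of the exit code.
import traceback

def run_and_capture(func):
    """Call func(emit) and collect the "k=v" lines it emits, appending a
    status key. capture-output parses stdout as one key=value per line,
    so newlines inside values are flattened."""
    lines = []
    def emit(key, value):
        lines.append("%s=%s" % (key, str(value).replace("\n", " | ")))
    try:
        func(emit)
        lines.append("status=ok")
    except Exception:
        lines.append("status=failed")
        lines.append("error=" + traceback.format_exc().replace("\n", " | "))
    return lines

def main(emit):
    emit("k1", "v1")
    print(aaa)  # the same deliberate NameError as in test.py

if __name__ == '__main__':
    for line in run_and_capture(main):
        print(line)
    # in the real action, finish with sys.exit(0): never exit non-zero,
    # or Oozie discards the captured output
```

A decision node can then branch on `${wf:actionData('python-node')['status'] eq 'ok'}` instead of inspecting raw values.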

#workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.4" name="adaf4df46a6597914b9ff6cd80eff542c6a">
    <start to="python-node"/>
    <action name="python-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
            <name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
            <configuration>
                <property>
                    <name>oozie.launcher.mapred.job.queue.name</name>
                    <value>ada.oozielauncher</value>
                </property>
            </configuration>
            <exec>model.py</exec>
            <file>model.py</file>
            <capture-output/>
        </shell>
        <ok to="python-node1"/>
        <error to="fail"/>
    </action>
    <action name="python-node1">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
            <name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
            <configuration>
                <property>
                    <name>oozie.launcher.mapred.job.queue.name</name>
                    <value>ada.oozielauncher</value>
                </property>
            </configuration>
            <exec>echo</exec>
            <argument>k1=${wf:actionData("python-node")["k1"]}</argument>
            <capture-output/>
        </shell>
        <ok to="check-output"/>
        <error to="fail"/>
    </action>
    <decision name="check-output">
        <switch>
            <case to="end">
                ${wf:actionData('python-node1')['k1'] eq 'Hello Oozie'}
            </case>
            <default to="fail"/>
        </switch>
    </decision>
    <kill name="fail">
        <message>Python action failed, error message[${wf:actionData('python-node')['k1']}]</message>
        <!--message>Python action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message-->
    </kill>
    <end name="end"/>
</workflow-app>
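For intuition: the launcher behind a shell action with `<capture-output/>` reads the action's stdout and loads it as key=value pairs, which is what `wf:actionData` later returns. A rough Python imitation of that behavior (a sketch, not Oozie's actual code):

```python
import subprocess
import sys

def capture_output(cmd):
    """Run a command and parse its stdout roughly the way Oozie's
    capture-output does: keep key=value lines, skip comments and
    everything else. Note: Oozie actually uses java.util.Properties,
    which is more permissive (':' and whitespace also act as
    separators) -- that is how raw traceback lines end up mangled
    into keys in the REST API JSON shown later."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    data = {}
    for line in out.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            data[key.strip()] = value.strip()
    return data

# e.g. capture_output([sys.executable, "test.py"])["k1"] should be "v1"
```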
#job.properties
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=108412.server.bigdata.com.cn:8032
nameNode=hdfs://108474.server.bigdata.com.cn:8020
user.name=root
queueName=test
#Do not add hive to the sharelib configuration; it causes an error
#oozie.action.sharelib.for.spark=spark,hive spark-action
#oozie.action.sharelib.for.sqoop=sqoop,hbase
oozie.wf.application.path=${nameNode}/user/lyy/oozie/test

Put the above test.py and workflow.xml into the HDFS directory /user/lyy/oozie/test (note that the workflow above refers to the script as model.py, so keep the names consistent), then submit with the following command:

oozie job -oozie http://10.8.4.46:11000/oozie -config job.properties -run

Also, if the code writes to standard output in a format other than "k=v", the EL function wf:actionData cannot read those values. capture-output nevertheless captures the whole output and stores it in the data column of the Oozie metadata table oozie.WF_ACTIONS. That column has type mediumblob and cannot be inspected directly, but the same information is available as JSON through the REST API, for example:

http://108446.server.bigdata.com.cn:11000/oozie/v1/job/0000106-181129152008300-oozie-oozi-W

{
"appPath":"hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/3a0c7d3a2ed5468087d93c69db651f3f",
"acl":null,
"status":"KILLED",
"createdTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"conf":"<configuration>
<property>
<name>user.name</name>
<value>root</value>
</property>
<property>
<name>oozie.use.system.libpath</name>
<value>True</value>
</property>
<property>
<name>mapreduce.job.user.name</name>
<value>root</value>
</property>
<property>
<name>security_enabled</name>
<value>False</value>
</property>
<property>
<name>queueName</name>
<value>ada.spark</value>
</property>
<property>
<name>nameNode</name>
<value>hdfs://108474.server.bigdata.com.cn:8020</value>
</property>
<property>
<name>dryrun</name>
<value>False</value>
</property>
<property>
<name>jobTracker</name>
<value>108412.server.bigdata.com.cn:8032</value>
</property>
<property>
<name>oozie.wf.application.path</name>
<value>hdfs://108474.server.bigdata.com.cn:8020/user/lyy/oozie/3a0c7d3a2ed5468087d93c69db651f3f</value>
</property>
</configuration>",
"lastModTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"run":0,
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":null,
"appName":"adaf4df46a6597914b9ff6cd80eff542c6a",
"id":"0000106-181129152008300-oozie-oozi-W",
"startTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"parentId":null,
"toString":"Workflow id[0000106-181129152008300-oozie-oozi-W] status[KILLED]",
"group":null,
"consoleUrl":"http://108446.server.bigdata.com.cn:11000/oozie?job=0000106-181129152008300-oozie-oozi-W",
"user":"root",
"actions":[
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":null,
"transition":"python-node",
"externalStatus":"OK",
"cred":"null",
"conf":"",
"type":":START:",
"endTime":"Mon, 10 Dec 2018 03:50:14 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@:start:",
"startTime":"Mon, 10 Dec 2018 03:50:13 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":":start:",
"errorCode":null,
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[:start:] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":"#
#Mon Dec 10 11:50:24 CST 2018
File="./model.py", line 12, in <module>
Traceback=(most recent call last)\:
print(aaa)=
NameError=name 'aaa' is not defined ####this is the stack trace of the deliberate error
k1=v1 ##this is the normal standard output
",
"transition":"python-node1",
"externalStatus":"SUCCEEDED",
"cred":"null",
"conf":"<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property xmlns="">
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
<source>programatically</source>
</property>
</configuration>
<exec>model.py</exec>
<file>model.py</file>
<capture-output />
</shell>",
"type":"shell",
"endTime":"Mon, 10 Dec 2018 03:50:24 GMT",
"externalId":"job_1542533868365_0510",
"id":"0000106-181129152008300-oozie-oozi-W@python-node",
"startTime":"Mon, 10 Dec 2018 03:50:14 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"python-node",
"errorCode":null,
"trackerUri":"108412.server.bigdata.com.cn:8032",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[python-node] status[OK]",
"consoleUrl":"http://108412.server.bigdata.com.cn:8088/proxy/application_1542533868365_0510/",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":"#
#Mon Dec 10 11:51:16 CST 2018
k1=v1
",
"transition":"check-output",
"externalStatus":"SUCCEEDED",
"cred":"null",
"conf":"<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>108412.server.bigdata.com.cn:8032</job-tracker>
<name-node>hdfs://108474.server.bigdata.com.cn:8020</name-node>
<configuration>
<property xmlns="">
<name>oozie.launcher.mapred.job.queue.name</name>
<value>ada.oozielauncher</value>
<source>programatically</source>
</property>
</configuration>
<exec>echo</exec>
<argument>k1=v1</argument> ##the normal k1=v1 stdout has been passed through to the python-node1 node
<capture-output />
</shell>",
"type":"shell",
"endTime":"Mon, 10 Dec 2018 03:51:16 GMT",
"externalId":"job_1542533868365_0511",
"id":"0000106-181129152008300-oozie-oozi-W@python-node1",
"startTime":"Mon, 10 Dec 2018 03:50:24 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"python-node1",
"errorCode":null,
"trackerUri":"108412.server.bigdata.com.cn:8032",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[python-node1] status[OK]",
"consoleUrl":"http://108412.server.bigdata.com.cn:8088/proxy/application_1542533868365_0511/",
"userRetryMax":0
},
{
"errorMessage":null,
"status":"OK",
"stats":null,
"data":null,
"transition":"fail",
"externalStatus":"fail",
"cred":"null",
"conf":"<switch xmlns="uri:oozie:workflow:0.4">
<case to="end">false</case>
<default to="fail" />
</switch>",
"type":"switch",
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@check-output",
"startTime":"Mon, 10 Dec 2018 03:51:16 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"check-output",
"errorCode":null,
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[check-output] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
},
{
"errorMessage":"Python action failed, error message[v1]",
"status":"OK",
"stats":null,
"data":null,
"transition":null,
"externalStatus":"OK",
"cred":"null",
"conf":"Python action failed, error message[${wf:actionData('python-node')['k1']}]",
"type":":KILL:",
"endTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"externalId":"-",
"id":"0000106-181129152008300-oozie-oozi-W@fail",
"startTime":"Mon, 10 Dec 2018 03:51:17 GMT",
"userRetryCount":0,
"externalChildIDs":null,
"name":"fail",
"errorCode":"E0729",
"trackerUri":"-",
"retries":0,
"userRetryInterval":10,
"toString":"Action name[fail] status[OK]",
"consoleUrl":"-",
"userRetryMax":0
}
]
}
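Fetching and unpacking the JSON above can be automated with a few lines of standard-library Python. The Oozie URL and job id are the ones from this example (adjust for your cluster), and parse_properties is a simplified stand-in for the Java properties parsing that produced the data field:

```python
import json
from urllib.request import urlopen

def get_action_data(oozie_url, job_id, action_name):
    """Fetch a workflow's JSON from the Oozie REST API and parse the
    captured 'data' field of one action into a dict.
    (Sketch only; error handling and authentication omitted.)"""
    with urlopen("%s/v1/job/%s" % (oozie_url, job_id)) as resp:
        job = json.load(resp)
    for action in job["actions"]:
        if action["name"] == action_name and action["data"]:
            return parse_properties(action["data"])
    return {}

def parse_properties(text):
    # The data field is stored in java.util.Properties format:
    # '#'-prefixed comment lines (including the timestamp header),
    # then key=value pairs, one per line.
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key] = value
    return props

# e.g.:
# get_action_data("http://108446.server.bigdata.com.cn:11000/oozie",
#                 "0000106-181129152008300-oozie-oozi-W", "python-node")
```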

Two ways to submit Spark jobs through Oozie (workflow.xml):

#shell-action:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
    <start to="shell-node"/>
    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapreduce.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>spark2-submit</exec>
            <argument>--master</argument>
            <argument>yarn</argument>
            <argument>--deploy-mode</argument>
            <argument>cluster</argument>
            <argument>--queue</argument>
            <argument>ada.spark</argument>
            <argument>--name</argument>
            <argument>testYarn</argument>
            <argument>--conf</argument>
            <argument>spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8</argument>
            <argument>--conf</argument>
            <argument>spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8</argument>
            <argument>--jars</argument>
            <argument>hdfs://10.8.18.74:8020/ada/spark/share/tech_component/tc.plat.spark.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata4i-1.0.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata-sparklog-1.0.jar</argument>
            <argument>--files</argument>
            <argument>/etc/hive/conf/hive-site.xml</argument>
            <argument>--class</argument>
            <argument>testYarn.test.Ttest</argument>
            <argument>hdfs://10.8.18.74:8020/user/lyy/App/testYarn.test.jar</argument>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

#spark-action:
<workflow-app xmlns="uri:oozie:workflow:0.4" name="spark-action">
    <start to="spark-node"/>
    <action name="spark-node">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapreduce.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <master>yarn</master>
            <mode>cluster</mode>
            <name>Spark-Action</name>
            <class>testYarn.test.Ttest</class>
            <jar>${nameNode}/user/lyy/App/testYarn.test.jar</jar>
            <spark-opts>--conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.8 --conf spark.executorEnv.JAVA_HOME=/usr/java/jdk1.8 --jars hdfs://10.8.18.74:8020/ada/spark/share/tech_component/tc.plat.spark.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata4i-1.0.jar,hdfs://10.8.18.74:8020/ada/spark/share/tech_component/bigdata-sparklog-1.0.jar --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=/etc/hadoop/conf/log4j.properties --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=/etc/hadoop/conf/log4j.properties --conf spark.yarn.queue=ada.spark --files /etc/hive/conf/hive-site.xml</spark-opts>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Spark action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
