Oozie 4.3

I Introduction

1 Official site

http://oozie.apache.org/

Apache Oozie Workflow Scheduler for Hadoop

A workflow scheduler for the Hadoop ecosystem.

Overview

Oozie is a workflow scheduler system to manage Apache Hadoop jobs.

Oozie Workflow jobs are Directed Acyclical Graphs (DAGs) of actions.

Oozie Coordinator jobs are recurrent Oozie Workflow jobs triggered by time (frequency) and data availability.

Oozie is integrated with the rest of the Hadoop stack supporting several types of Hadoop jobs out of the box (such as Java map-reduce, Streaming map-reduce, Pig, Hive, Sqoop and Distcp) as well as system specific jobs (such as Java programs and shell scripts).

Oozie is a scalable, reliable and extensible system.

2 Deployment

3 Database tables

wf_jobs: workflow instances

wf_actions: workflow action instances

coord_jobs: coordinator instances

coord_actions: coordinator action instances

4 Concepts

- Control Node: nodes that start and end the workflow and decide its execution path (start, end, kill, decision, fork/join)

- Action Node: a computation task executed by the workflow; supported types include HDFS, MapReduce, Java, Shell, SSH, Pig, Hive, E-Mail, Sub-Workflow, Sqoop and DistCp

- Workflow: a flow composed of Control Nodes and a series of Action Nodes

- Coordinator: triggers a workflow according to the configured cron-style schedule

- Bundle: manages a group of Coordinator jobs as a batch, so they can be started and stopped together
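The control and action nodes above can be illustrated with a minimal workflow definition. This is only a sketch: the app name, the shell command, and the `${jobTracker}`/`${nameNode}` property names are illustrative, not taken from this post.

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
    <!-- Control Node: entry point -->
    <start to="run-shell"/>

    <!-- Action Node: a shell task -->
    <action name="run-shell">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>echo</exec>
            <argument>hello</argument>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <!-- Control Nodes: failure and normal exit -->
    <kill name="fail">
        <message>Shell action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

A decision, fork/join, or further actions would be added as extra nodes between start and end in the same way.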

II Code analysis

1 Startup process

On startup, all configured services are loaded:

ServicesLoader.contextInitialized

         Services.init

                  Services.loadServices (oozie.services, oozie.services.ext)

Service structure:

Service

         org.apache.oozie.service.SchedulerService,

         org.apache.oozie.service.InstrumentationService,

         org.apache.oozie.service.MemoryLocksService,

         org.apache.oozie.service.UUIDService,

         org.apache.oozie.service.ELService,

         org.apache.oozie.service.AuthorizationService,

         org.apache.oozie.service.UserGroupInformationService,

         org.apache.oozie.service.HadoopAccessorService,

         org.apache.oozie.service.JobsConcurrencyService,

         org.apache.oozie.service.URIHandlerService,

         org.apache.oozie.service.DagXLogInfoService,

         org.apache.oozie.service.SchemaService,

         org.apache.oozie.service.LiteWorkflowAppService,

         org.apache.oozie.service.JPAService,

         org.apache.oozie.service.StoreService,

         org.apache.oozie.service.SLAStoreService,

         org.apache.oozie.service.DBLiteWorkflowStoreService,

         org.apache.oozie.service.CallbackService,

         org.apache.oozie.service.ActionService,

         org.apache.oozie.service.ShareLibService,

         org.apache.oozie.service.CallableQueueService,

         org.apache.oozie.service.ActionCheckerService,

         org.apache.oozie.service.RecoveryService,

         org.apache.oozie.service.PurgeService,

         org.apache.oozie.service.CoordinatorEngineService,

         org.apache.oozie.service.BundleEngineService,

         org.apache.oozie.service.DagEngineService,

         org.apache.oozie.service.CoordMaterializeTriggerService,

         org.apache.oozie.service.StatusTransitService,

         org.apache.oozie.service.PauseTransitService,

         org.apache.oozie.service.GroupsService,

         org.apache.oozie.service.ProxyUserService,

         org.apache.oozie.service.XLogStreamingService,

         org.apache.oozie.service.JvmPauseMonitorService,

         org.apache.oozie.service.SparkConfigurationService
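The list above is what oozie.services typically contains; extra or replacement services are declared through oozie.services.ext in oozie-site.xml, for example (a sketch, using one of the ZooKeeper services discussed later):

```xml
<property>
    <name>oozie.services.ext</name>
    <value>
        org.apache.oozie.service.ZKLocksService
    </value>
</property>
```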

2 Core engines

BaseEngine

         DAGEngine (runs workflows)

         CoordinatorEngine (runs coordinators)

         BundleEngine (runs bundles)

3 Workflow submission and execution

DAGEngine.submitJob | submitJobFromCoordinator (submit a workflow)

         SubmitXCommand.call

                  execute

                          LiteWorkflowAppService.parseDef (parse the definition into a WorkflowApp)

                                   LiteWorkflowLib.parseDef

                                            LiteWorkflowAppParser.validateAndParse

                                                     parse

                          WorkflowLib.createInstance (create a WorkflowInstance)

                          BatchQueryExecutor.executeBatchInsertUpdateDelete (persist the WorkflowJobBean to wf_jobs)

         StartXCommand.call

                  SignalXCommand.call

                          execute

                                   WorkflowInstance.start

                                            LiteWorkflowInstance.start

                                                     signal

                                                             NodeHandler.enter

                                                                      ActionNodeHandler.enter

                                                                               start

                                                                                         LiteWorkflowStoreService.liteExecute (add a WorkflowActionBean to ACTIONS_TO_START)

                                    WorkflowStoreService.getActionsToStart (take actions from ACTIONS_TO_START)

                                            ActionStartXCommand.call

                                                     ActionExecutor.start

                                                     WorkflowNotificationXCommand.call

                                             BatchQueryExecutor.executeBatchInsertUpdateDelete (persist the WorkflowActionBean to wf_actions)

ActionExecutor.start is asynchronous, so the action's state must still be checked to move the workflow forward. There are two cases:

Case 1, the Oozie server is running normally: the job-end notification (callback) is used:

CallbackServlet.doGet

         DagEngine.processCallback

                  CompletedActionXCommand.call

                          ActionCheckXCommand.call

                                   ActionExecutor.check

                                    ActionEndXCommand.call

                                            SignalXCommand.call

Case 2, the Oozie server has been restarted: the ActionCheckerService is used:

ActionCheckerService.init

         ActionCheckRunnable.run

                  runWFActionCheck (GET_RUNNING_ACTIONS, oozie.service.ActionCheckerService.action.check.delay=600)

                          ActionCheckXCommand.call

                                    ActionExecutor.check

                                   ActionEndXCommand.call

                                            SignalXCommand.call

                  runCoordActionCheck
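The 600-second delay shown above is configurable in oozie-site.xml; a sketch of the property with its default value:

```xml
<property>
    <name>oozie.service.ActionCheckerService.action.check.delay</name>
    <value>600</value>
</property>
```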

4 Coordinator submission and execution

CoordinatorEngine.submitJob (submit a coordinator)

         CoordSubmitXCommand.call

                  submit

                          submitJob

                                   storeToDB

                                             CoordJobQueryExecutor.insert (persist the CoordinatorJobBean to coord_jobs)

                                   queueMaterializeTransitionXCommand

                                            CoordMaterializeTransitionXCommand.call

                                                     execute

                                                              materialize

                                                                      materializeActions

                                                                                CoordCommandUtils.materializeOneInstance (create a CoordinatorActionBean)

                                                                                storeToDB

                                                               performWrites

                                                                       BatchQueryExecutor.executeBatchInsertUpdateDelete (persist CoordinatorActionBeans to coord_actions)

                                                                      CoordActionInputCheckXCommand.call

                                                                               CoordActionReadyXCommand.call

                                                                                        CoordActionStartXCommand.call

                                                                                                DAGEngine.submitJobFromCoordinator

Materialization is also triggered periodically by a scheduled service:

CoordMaterializeTriggerService.init

         CoordMaterializeTriggerRunnable.run

                  CoordMaterializeTriggerService.runCoordJobMatLookup

                          materializeCoordJobs (GET_COORD_JOBS_OLDER_FOR_MATERIALIZATION)

                                   CoordMaterializeTransitionXCommand.call
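A coordinator definition that is materialized this way looks roughly like the following. This is a sketch: the name, frequency, start/end times, and application path are made up for illustration.

```xml
<coordinator-app name="demo-coord" frequency="${coord:days(1)}"
                 start="2019-01-01T00:00Z" end="2020-01-01T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.4">
    <action>
        <workflow>
            <!-- HDFS directory containing the workflow.xml to run -->
            <app-path>${nameNode}/user/oozie/apps/demo-wf</app-path>
        </workflow>
    </action>
</coordinator-app>
```

Each materialized instance of this coordinator becomes one row in coord_actions and, once started, one workflow submitted through DAGEngine.submitJobFromCoordinator.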

5 Distributed deployment

Some internal tasks must only run in one place at a time. In a single-server deployment Oozie guarantees this with MemoryLocksService; in a multi-server deployment it uses ZKLocksService instead. To enable ZooKeeper, several services must be enabled:

org.apache.oozie.service.ZKLocksService,

org.apache.oozie.service.ZKXLogStreamingService,

org.apache.oozie.service.ZKJobsConcurrencyService,

org.apache.oozie.service.ZKUUIDService

oozie.zookeeper.connection.string must be configured as well.
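Put together, a sketch of the oozie-site.xml changes for multi-server mode, assuming a three-node ZooKeeper ensemble (the hostnames are made up):

```xml
<property>
    <name>oozie.services.ext</name>
    <value>
        org.apache.oozie.service.ZKLocksService,
        org.apache.oozie.service.ZKXLogStreamingService,
        org.apache.oozie.service.ZKJobsConcurrencyService,
        org.apache.oozie.service.ZKUUIDService
    </value>
</property>
<property>
    <name>oozie.zookeeper.connection.string</name>
    <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```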

6 Action execution

ActionExecutor is the core abstract base class for action execution; every concrete action type is a subclass of it:

ActionExecutor

         JavaActionExecutor

         SshActionExecutor

         FsActionExecutor

         SubWorkflowActionExecutor

JavaActionExecutor is the most important subclass; many other action executors extend it (for example HiveActionExecutor and SparkActionExecutor).

JavaActionExecutor.start

         prepareActionDir

         submitLauncher

                  JobClient.getJob

                  injectLauncherCallback

                          ActionExecutor.Context.getCallbackUrl

                                   job.end.notification.url

                  createLauncherConf

                          LauncherMapperHelper.setupLauncherInfo

                  JobClient.submitJob

         check

When it runs, JavaActionExecutor submits a one-mapper job, the LauncherMapper, to YARN:

LauncherMapper.map

         LauncherMain.main

LauncherMain is the base class that actually runs the concrete action:

LauncherMain

         JavaMain

         HiveMain

         Hive2Main

         SparkMain

         ShellMain

         SqoopMain
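End to end, submitting and tracking a workflow from the command line looks like this. A sketch only: the server URL and the job id are made up, and the commands assume a prepared job.properties and a deployed workflow application.

```
# Point the CLI at the Oozie server so -oozie can be omitted on each command
export OOZIE_URL=http://oozie-server:11000/oozie

# Submit and start a workflow job
oozie job -config job.properties -run

# Check the job and its actions (using the id returned by -run)
oozie job -info 0000001-190101000000000-oozie-oozi-W
```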
