Starting from VersionedProtocol, part 4: AdminOperationsProtocol, InterTrackerProtocol, JobSubmissionProtocol, TaskUmbilicalProtocol

1. All four of these protocols live in the package org.apache.hadoop.mapred.

2. Start with the simplest of them, AdminOperationsProtocol; it exposes only three administrative operations:

void refreshQueues() throws IOException;

void refreshNodes() throws IOException;

boolean setSafeMode(JobTracker.SafeModeAction safeModeAction) throws IOException;
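These three operations are what administrative commands drive against the JobTracker: reloading the queue configuration, reloading the include/exclude host lists, and toggling safe mode. The interaction can be sketched with a self-contained mock; note that the enum values below mirror what I assume JobTracker.SafeModeAction looks like, and the in-memory implementation is purely illustrative, not the real JobTracker logic:

```java
// Illustrative stand-in for AdminOperationsProtocol (the real implementor is JobTracker).
public class AdminOpsSketch {
    // Name and values assumed to mirror JobTracker.SafeModeAction in Hadoop 1.x.
    public enum SafeModeAction { SAFEMODE_LEAVE, SAFEMODE_ENTER, SAFEMODE_GET }

    public interface AdminOperationsProtocol {
        void refreshQueues() throws java.io.IOException;  // reload queue configuration
        void refreshNodes() throws java.io.IOException;   // reload include/exclude host lists
        boolean setSafeMode(SafeModeAction action) throws java.io.IOException;
    }

    // In-memory mock of the server side, for illustration only.
    public static class MockJobTracker implements AdminOperationsProtocol {
        public boolean safeMode = false;
        public int queueRefreshes = 0;
        public void refreshQueues() { queueRefreshes++; }
        public void refreshNodes() { /* the real server re-reads its hosts files */ }
        public boolean setSafeMode(SafeModeAction a) {
            if (a == SafeModeAction.SAFEMODE_ENTER) safeMode = true;
            else if (a == SafeModeAction.SAFEMODE_LEAVE) safeMode = false;
            return safeMode;  // SAFEMODE_GET just reports the current state
        }
    }
}
```

The return value of setSafeMode is the resulting safe-mode state, which is why SAFEMODE_GET can share the same method.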

3. Compared with the HDFS protocols, these MapReduce-side protocols are modified frequently and are not very stable.

InterTrackerProtocol

HeartbeatResponse heartbeat(TaskTrackerStatus status,
                            boolean restarted,
                            boolean initialContact,
                            boolean acceptNewTasks,
                            short responseId)
  throws IOException;

public String getFilesystemName() throws IOException;

public void reportTaskTrackerError(String taskTracker,
                                   String errorClass,
                                   String errorMessage) throws IOException;

TaskCompletionEvent[] getTaskCompletionEvents(JobID jobid, int fromEventId
    , int maxEvents) throws IOException;

public String getSystemDir();

public String getBuildVersion() throws IOException;

public String getVIVersion() throws IOException;
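The heartbeat() method above is the core of the JobTracker/TaskTracker exchange: the TaskTracker reports its status, and the JobTracker piggybacks directives (launch task, kill task, etc.) on the response, using responseId to detect lost responses. A toy model of that loop; all types here are simplified stand-ins (the real call passes a TaskTrackerStatus and the response carries TaskTrackerAction[]):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the InterTrackerProtocol heartbeat loop.
public class HeartbeatSketch {
    public static class HeartbeatResponse {
        public final short responseId;     // echoed so lost responses can be detected
        public final List<String> actions; // e.g. "launch map_000000"
        HeartbeatResponse(short id, List<String> a) { responseId = id; actions = a; }
    }

    // JobTracker side: piggybacks one pending task per heartbeat.
    public static class MockJobTracker {
        private final List<String> pending = new ArrayList<>();
        public void addTask(String t) { pending.add(t); }
        public HeartbeatResponse heartbeat(String tracker, boolean restarted,
                                           boolean initialContact, boolean acceptNewTasks,
                                           short responseId) {
            List<String> actions = new ArrayList<>();
            if (acceptNewTasks && !pending.isEmpty()) actions.add(pending.remove(0));
            // Response carries responseId + 1; the tracker echoes it on the next call.
            return new HeartbeatResponse((short) (responseId + 1), actions);
        }
    }

    // TaskTracker side: heartbeat until the JobTracker has nothing left for us.
    public static List<String> runTracker(MockJobTracker jt) {
        List<String> launched = new ArrayList<>();
        short responseId = -1;          // first contact
        boolean initialContact = true;
        while (true) {
            HeartbeatResponse r = jt.heartbeat("tracker_host1:50060", false,
                                               initialContact, true, responseId);
            responseId = r.responseId;
            initialContact = false;
            if (r.actions.isEmpty()) break;
            launched.addAll(r.actions);
        }
        return launched;
    }
}
```

In the real system the TaskTracker also sleeps a configurable interval between heartbeats; the mock loops back immediately just to show the handshake.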

JobSubmissionProtocol

public JobID getNewJobId() throws IOException;

public JobStatus submitJob(JobID jobName, String jobSubmitDir, Credentials ts)
throws IOException;

public ClusterStatus getClusterStatus(boolean detailed) throws IOException;

public AccessControlList getQueueAdmins(String queueName) throws IOException;

public void killJob(JobID jobid) throws IOException;

public void setJobPriority(JobID jobid, String priority)
                                                    throws IOException;

public boolean killTask(TaskAttemptID taskId, boolean shouldFail) throws IOException;

public JobProfile getJobProfile(JobID jobid) throws IOException;

public JobStatus getJobStatus(JobID jobid) throws IOException;

public Counters getJobCounters(JobID jobid) throws IOException;

public TaskReport[] getMapTaskReports(JobID jobid) throws IOException;

public TaskReport[] getReduceTaskReports(JobID jobid) throws IOException;

public TaskReport[] getCleanupTaskReports(JobID jobid) throws IOException;

public TaskReport[] getSetupTaskReports(JobID jobid) throws IOException;

public String getFilesystemName() throws IOException;

public JobStatus[] jobsToComplete() throws IOException;

public JobStatus[] getAllJobs() throws IOException;

public TaskCompletionEvent[] getTaskCompletionEvents(JobID jobid
    , int fromEventId, int maxEvents) throws IOException;

public String[] getTaskDiagnostics(TaskAttemptID taskId) throws IOException;

public String getSystemDir();

public String getStagingAreaDir() throws IOException;

public JobQueueInfo[] getQueues() throws IOException;

public JobQueueInfo getQueueInfo(String queue) throws IOException;

public JobStatus[] getJobsFromQueue(String queue) throws IOException;

public QueueAclsInfo[] getQueueAclsForCurrentUser() throws IOException;

public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
    throws IOException, InterruptedException;

public long renewDelegationToken(Token<DelegationTokenIdentifier> token)
    throws IOException, InterruptedException;

public void cancelDelegationToken(Token<DelegationTokenIdentifier> token)
    throws IOException, InterruptedException;

TaskUmbilicalProtocol

JvmTask getTask(JvmContext context) throws IOException;

boolean statusUpdate(TaskAttemptID taskId, TaskStatus taskStatus,
    JvmContext jvmContext) throws IOException, InterruptedException;

void reportDiagnosticInfo(TaskAttemptID taskid, String trace,
    JvmContext jvmContext) throws IOException;

void reportNextRecordRange(TaskAttemptID taskid, SortedRanges.Range range,
    JvmContext jvmContext) throws IOException;

boolean ping(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

void done(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;
void commitPending(TaskAttemptID taskId, TaskStatus taskStatus,
    JvmContext jvmContext) throws IOException, InterruptedException;

boolean canCommit(TaskAttemptID taskid, JvmContext jvmContext) throws IOException;

void shuffleError(TaskAttemptID taskId, String message, JvmContext jvmContext)
    throws IOException;

void fsError(TaskAttemptID taskId, String message, JvmContext jvmContext)
    throws IOException;

void fatalError(TaskAttemptID taskId, String message, JvmContext jvmContext)
    throws IOException;

MapTaskCompletionEventsUpdate getMapCompletionEvents(JobID jobId,
                                                     int fromIndex,
                                                     int maxLocs,
                                                     TaskAttemptID id,
                                                     JvmContext jvmContext)
throws IOException;

void updatePrivateDistributedCacheSizes(org.apache.hadoop.mapreduce.JobID jobId,
                                        long[] sizes) throws IOException;
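TaskUmbilicalProtocol is the private link between a child task JVM and its local TaskTracker: the child fetches its task with getTask(), reports progress via statusUpdate()/ping(), and finishes with the canCommit()/done() handshake so that, with speculative execution, exactly one attempt commits its output. A condensed model of that lifecycle (types simplified: the real methods pass TaskAttemptID, TaskStatus, and JvmContext objects, not strings and floats):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the TaskUmbilicalProtocol lifecycle and commit handshake.
public class UmbilicalSketch {
    public static class MockTaskTracker {
        private String commitWinner = null;          // only one attempt may commit
        public final Set<String> finished = new HashSet<>();

        public String getTask(String jvmId) { return "attempt_001_m_000000_" + jvmId; }

        public boolean statusUpdate(String attempt, float progress) {
            return true;  // true = keep running; false would tell the child to exit
        }

        // First attempt to ask wins the right to commit its output.
        public synchronized boolean canCommit(String attempt) {
            if (commitWinner == null) commitWinner = attempt;
            return commitWinner.equals(attempt);
        }

        public void done(String attempt) { finished.add(attempt); }
    }

    // Child-JVM side: fetch the task, report progress, then try to commit.
    public static boolean runChild(MockTaskTracker tt, String jvmId) {
        String attempt = tt.getTask(jvmId);
        for (float p = 0.25f; p <= 1.0f; p += 0.25f) tt.statusUpdate(attempt, p);
        if (!tt.canCommit(attempt)) return false;  // a speculative twin beat us to it
        tt.done(attempt);
        return true;
    }
}
```

The error-reporting methods (shuffleError, fsError, fatalError) are the child's way of telling the TaskTracker why it is dying, since the parent cannot see inside the child JVM.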
