Initializing the state classes

//org.apache.flink.streaming.runtime.tasks.StreamTask#initializeState
initializeState();

private void initializeState() throws Exception {

    StreamOperator<?>[] allOperators = operatorChain.getAllOperators();

    for (StreamOperator<?> operator : allOperators) {
        if (null != operator) {
            operator.initializeState();
        }
    }
}

operator.initializeState() resolves to org.apache.flink.streaming.api.operators.AbstractStreamOperator#initializeState(). All stream operator classes inherit from this class, and none of them overrides the method (it is declared final, so they cannot).

public final void initializeState() throws Exception {

    // this goes through the state backend machinery; the important part lives inside
    final StreamOperatorStateContext context =
        streamTaskStateManager.streamOperatorStateContext(
            getOperatorID(),
            getClass().getSimpleName(),
            this,
            keySerializer,
            streamTaskCloseableRegistry,
            metrics);

    ......

streamTaskStateManager.streamOperatorStateContext(......) resolves to org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl#streamOperatorStateContext:

......

// -------------- Keyed State Backend: the key part for checkpointing --------------
keyedStatedBackend = keyedStatedBackend(
    keySerializer,
    operatorIdentifierText,
    prioritizedOperatorSubtaskStates,
    streamTaskCloseableRegistry,
    metricGroup);

// -------------- Operator State Backend: the key part for checkpointing --------------
operatorStateBackend = operatorStateBackend(
    operatorIdentifierText,
    prioritizedOperatorSubtaskStates,
    streamTaskCloseableRegistry);

......
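Which concrete backends these two calls create depends on the state backend configured for the job. For reference, a job can select a filesystem-backed state backend like this (a minimal sketch against the public Flink 1.7 API; the HDFS path is a placeholder):

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // placeholder URI; point this at a durable filesystem in real jobs
        env.setStateBackend(new FsStateBackend("hdfs://namenode:8020/flink/checkpoints"));
        // ... define the job topology and call env.execute() ...
    }
}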

At the bottom of keyedStatedBackend() sits a call to org.apache.flink.streaming.api.operators.BackendRestorerProcedure#attemptCreateAndRestore:

private T attemptCreateAndRestore(Collection<S> restoreState) throws Exception {

    ......

    // create a new, empty backend.
    final T backendInstance = instanceSupplier.get();

    // attempt to restore from snapshot (or null if no state was checkpointed).
    backendInstance.restore(restoreState);

    ......
}

backendInstance.restore(restoreState) resolves to org.apache.flink.runtime.state.DefaultOperatorStateBackend#restore:

// the registeredOperatorStates map is the core object here
...

PartitionableListState<?> listState = registeredOperatorStates.get(restoredSnapshot.getName());

if (null == listState) {
    listState = new PartitionableListState<>(restoredMetaInfo);

    // key point: the restored snapshot's state object is stored here
    //********************************************************************
    registeredOperatorStates.put(listState.getStateMetaInfo().getName(), listState);
    //********************************************************************
} else {
    // TODO with eager state registration in place, check here for serializer migration strategies
}

...
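For orientation, PartitionableListState, the "state class" being registered above, is essentially a named wrapper around an in-memory Java list plus the serializer metadata needed to write it out. A rough sketch of its shape (simplified and renamed; the real class also implements ListState<S> and supports deepCopy()):

import java.util.ArrayList;

// simplified stand-in for org.apache.flink.runtime.state.PartitionableListState
final class PartitionableListStateSketch<S> {
    private final String name;               // the state name from the descriptor
    private final ArrayList<S> internalList; // the actual state data, held in memory

    PartitionableListStateSketch(String name) {
        this.name = name;
        this.internalList = new ArrayList<>();
    }

    void add(S value) {
        internalList.add(value); // what ListState#add boils down to
    }

    String getName() {
        return name;
    }
}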

The logic above runs once during initialization. Taking checkpoints afterwards is driven by triggerCheckpoint, which is invoked periodically.
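Note that this periodic trigger only runs if checkpointing has been enabled on the job, for example (a minimal sketch using the public API):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnableCheckpointing {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // the CheckpointCoordinator then fires every 60 seconds
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        // ... define the job topology and call env.execute() ...
    }
}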

Periodically snapshotting the state classes

org.apache.flink.runtime.checkpoint.CheckpointCoordinator#triggerCheckpoint(long, boolean):

......

// send the messages to the tasks that trigger their checkpoint
// (my guess: this is the remote send that triggers the checkpoint on the tasks,
// which is where the state data files eventually get generated)
for (Execution execution : executions) {
    execution.triggerCheckpoint(checkpointID, timestamp, checkpointOptions);
}

......

execution.triggerCheckpoint resolves to org.apache.flink.runtime.executiongraph.Execution#triggerCheckpoint:

/**
 * Trigger a new checkpoint on the task of this execution.
 *
 * @param checkpointId of the checkpoint to trigger
 * @param timestamp of the checkpoint to trigger
 * @param checkpointOptions of the checkpoint to trigger
 */
public void triggerCheckpoint(long checkpointId, long timestamp, CheckpointOptions checkpointOptions) {

    ......

    final LogicalSlot slot = assignedResource;

    if (slot != null) {
        final TaskManagerGateway taskManagerGateway = slot.getTaskManagerGateway();
        taskManagerGateway.triggerCheckpoint(attemptId, getVertex().getJobId(), checkpointId, timestamp, checkpointOptions);
    }

    ......
}

taskManagerGateway.triggerCheckpoint(......) ultimately resolves to org.apache.flink.runtime.taskexecutor.TaskExecutor#triggerCheckpoint:

@Override
public CompletableFuture<Acknowledge> triggerCheckpoint(
        ExecutionAttemptID executionAttemptID, long checkpointId, long checkpointTimestamp, CheckpointOptions checkpointOptions) {

    ......

    final Task task = taskSlotTable.getTask(executionAttemptID);

    if (task != null) {
        task.triggerCheckpointBarrier(checkpointId, checkpointTimestamp, checkpointOptions);

        return CompletableFuture.completedFuture(Acknowledge.get());
    }

    ......
}

task.triggerCheckpointBarrier(......) resolves to org.apache.flink.runtime.taskmanager.Task#triggerCheckpointBarrier:

/**
 * Calls the invokable to trigger a checkpoint.
 *
 * This is where checkpoint execution actually starts on the task side, so it can be
 * considered the entry point; it ends up calling
 * org.apache.flink.streaming.runtime.tasks.StreamTask#triggerCheckpoint, and the
 * AsyncCheckpointRunnable is executed further down this path.
 *
 * @param checkpointID The ID identifying the checkpoint.
 * @param checkpointTimestamp The timestamp associated with the checkpoint.
 * @param checkpointOptions Options for performing this checkpoint.
 */
public void triggerCheckpointBarrier(
        final long checkpointID,
        long checkpointTimestamp,
        final CheckpointOptions checkpointOptions) {

    final AbstractInvokable invokable = this.invokable;
    final CheckpointMetaData checkpointMetaData = new CheckpointMetaData(checkpointID, checkpointTimestamp);

    if (executionState == ExecutionState.RUNNING && invokable != null) {

        // build a local closure
        final String taskName = taskNameWithSubtask;
        final SafetyNetCloseableRegistry safetyNetCloseableRegistry =
            FileSystemSafetyNet.getSafetyNetCloseableRegistryForThread();

        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                // set safety net from the task's context for checkpointing thread
                LOG.debug("Creating FileSystem stream leak safety net for {}", Thread.currentThread().getName());
                FileSystemSafetyNet.setSafetyNetCloseableRegistryForThread(safetyNetCloseableRegistry);

                try {
                    boolean success = invokable.triggerCheckpoint(checkpointMetaData, checkpointOptions);
                    ......
                }
                ......
            }
        };

        // create a single-thread executor and submit the runnable to it
        executeAsyncCallRunnable(runnable, String.format("Checkpoint Trigger for %s (%s).", taskNameWithSubtask, executionId));
    }
}
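executeAsyncCallRunnable lazily creates a single-threaded executor on the Task and hands it the runnable, so the actual triggering runs off the caller's thread. Stripped to its essentials, the pattern looks roughly like this (an illustrative sketch, not the Flink source):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncCallDispatcherSketch {
    // created lazily on the first async call, as in Task#executeAsyncCallRunnable
    private ExecutorService asyncCallDispatcher;

    synchronized void executeAsyncCall(Runnable runnable, String callName) {
        if (asyncCallDispatcher == null) {
            // a single thread: async calls for one task are serialized
            asyncCallDispatcher = Executors.newSingleThreadExecutor(r -> new Thread(r, callName));
        }
        asyncCallDispatcher.submit(runnable);
    }
}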

Inside invokable.triggerCheckpoint(.....), the call chain is:

org.apache.flink.streaming.runtime.tasks.StreamTask#triggerCheckpoint

org.apache.flink.streaming.runtime.tasks.StreamTask#performCheckpoint

// we can do a checkpoint

// All of the following steps happen as an atomic step from the perspective of barriers and
// records/watermarks/timers/callbacks.
// We generally try to emit the checkpoint barrier as soon as possible to not affect downstream
// checkpoint alignments

// Step (1): Prepare the checkpoint, allow operators to do some pre-barrier work.
//           The pre-barrier work should be nothing or minimal in the common case.
operatorChain.prepareSnapshotPreBarrier(checkpointMetaData.getCheckpointId());

// Step (2): Send the checkpoint barrier downstream
// (note from debugging: no state data files had been generated at this point yet)
operatorChain.broadcastCheckpointBarrier(
    checkpointMetaData.getCheckpointId(),
    checkpointMetaData.getTimestamp(),
    checkpointOptions);

// Step (3): Take the state snapshot. This should be largely asynchronous, to not
//           impact progress of the streaming topology
checkpointState(checkpointMetaData, checkpointOptions, checkpointMetrics);

checkpointState(......) eventually lands in org.apache.flink.streaming.runtime.tasks.StreamTask.CheckpointingOperation#executeCheckpointing(). The key part starts here:

......

// invoke each operator's snapshot logic; different operators are backed by different subclasses,
// so for UDF operators this ends up calling the user's snapshot method
for (StreamOperator<?> op : allOperators) {
    checkpointStreamOperator(op);
}

// the actual data is written later; the question is where (traced below)

// this runnable appears to produce only the metadata
// we are transferring ownership over snapshotInProgressList for cleanup to the thread, active on submit
AsyncCheckpointRunnable asyncCheckpointRunnable = new AsyncCheckpointRunnable(
    owner,
    operatorSnapshotsInProgress,
    checkpointMetaData,
    checkpointMetrics,
    startAsyncPartNano);

owner.cancelables.registerCloseable(asyncCheckpointRunnable);
owner.asyncOperationsThreadPool.submit(asyncCheckpointRunnable);

......
First, checkpointStreamOperator(op):

private void checkpointStreamOperator(StreamOperator<?> op) throws Exception {
    if (null != op) {
        // this call is the core of the snapshot
        OperatorSnapshotFutures snapshotInProgress = op.snapshotState(
            checkpointMetaData.getCheckpointId(),
            checkpointMetaData.getTimestamp(),
            checkpointOptions,
            storageLocation);

        operatorSnapshotsInProgress.put(op.getOperatorID(), snapshotInProgress);
    }
}

op.snapshotState() is the core. It resolves to org.apache.flink.streaming.api.operators.AbstractStreamOperator#snapshotState(long, long, org.apache.flink.runtime.checkpoint.CheckpointOptions, org.apache.flink.runtime.state.CheckpointStreamFactory).

Note that op is an instance of a subclass: some operator classes extend AbstractStreamOperator directly, while others extend AbstractUdfStreamOperator. So when snapshotState(snapshotContext) is called below, which method runs depends on the subclass: either org.apache.flink.streaming.api.operators.AbstractStreamOperator#snapshotState(org.apache.flink.runtime.state.StateSnapshotContext) or org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator#snapshotState.

AbstractStreamOperator has 94 implementation classes and AbstractUdfStreamOperator has 42; AbstractUdfStreamOperator itself extends AbstractStreamOperator.

@Override
public final OperatorSnapshotFutures snapshotState(long checkpointId, long timestamp, CheckpointOptions checkpointOptions,
        CheckpointStreamFactory factory) throws Exception {

    OperatorSnapshotFutures snapshotInProgress = new OperatorSnapshotFutures(); // declared earlier in the actual source; shown here so the snippet is self-consistent

    try (StateSnapshotContextSynchronousImpl snapshotContext = new StateSnapshotContextSynchronousImpl(
            checkpointId,
            timestamp,
            factory,
            keyGroupRange,
            getContainingTask().getCancelables())) {

        // operator classes extending AbstractUdfStreamOperator call the user's snapshot method here;
        // classes extending AbstractStreamOperator directly call a version that does essentially nothing
        snapshotState(snapshotContext);

        // at this point the user's snapshot method has run, i.e. the data currently held by the
        // state objects is settled; what follows is reaching those state objects and writing
        // their contents to disk
        snapshotInProgress.setKeyedStateRawFuture(snapshotContext.getKeyedStateStreamFuture());
        snapshotInProgress.setOperatorStateRawFuture(snapshotContext.getOperatorStateStreamFuture());

        // this is where the operator state data files are produced
        if (null != operatorStateBackend) {
            System.out.println(Thread.currentThread().getName() + " :: state data is written to files here"); // debug output added while tracing
            snapshotInProgress.setOperatorStateManagedFuture(
                operatorStateBackend.snapshot(checkpointId, timestamp, factory, checkpointOptions));
        }

        // and this is where the keyed state data files are produced
        if (null != keyedStateBackend) {
            snapshotInProgress.setKeyedStateManagedFuture(
                keyedStateBackend.snapshot(checkpointId, timestamp, factory, checkpointOptions));
        }
    }

    return snapshotInProgress;
}
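For completeness, the AbstractUdfStreamOperator override mentioned above boils down to delegating to the wrapped user function (condensed from the 1.7 source; error handling elided):

// org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator (condensed)
@Override
public void snapshotState(StateSnapshotContext context) throws Exception {
    super.snapshotState(context);
    // hands the snapshot off to the user function, e.g. one implementing CheckpointedFunction
    StreamingFunctionUtils.snapshotFunctionState(context, getOperatorStateBackend(), userFunction);
}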

operatorStateBackend.snapshot(checkpointId, timestamp, factory, checkpointOptions) resolves to org.apache.flink.runtime.state.DefaultOperatorStateBackend#snapshot. The answer lies below:

public RunnableFuture<SnapshotResult<OperatorStateHandle>> snapshot(
        long checkpointId,
        long timestamp,
        @Nonnull CheckpointStreamFactory streamFactory,
        @Nonnull CheckpointOptions checkpointOptions) throws Exception {

    long syncStartTime = System.currentTimeMillis();

    // this is the crucial spot: if you want to know how the state objects inside the
    // user function are reached, it happens in here
    RunnableFuture<SnapshotResult<OperatorStateHandle>> snapshotRunner =
        snapshotStrategy.snapshot(checkpointId, timestamp, streamFactory, checkpointOptions);

    snapshotStrategy.logSyncCompleted(streamFactory, syncStartTime);
    return snapshotRunner;
}

Where snapshotStrategy.snapshot(checkpointId, timestamp, streamFactory, checkpointOptions) goes depends on the state backend the user configured. With the default backend the path is org.apache.flink.runtime.state.DefaultOperatorStateBackend.DefaultOperatorStateBackendSnapshotStrategy#snapshot (DefaultOperatorStateBackendSnapshotStrategy is an inner class of DefaultOperatorStateBackend):

public RunnableFuture<SnapshotResult<OperatorStateHandle>> snapshot(......) throws IOException {

    // the state data lives in the registeredOperatorStates object; the steps below simply
    // write that data to files, so the interesting question is how registeredOperatorStates
    // gets populated in the first place
    //************ key object: registeredOperatorStates ************
    final Map<String, PartitionableListState<?>> registeredOperatorStatesDeepCopies =
        new HashMap<>(registeredOperatorStates.size());
    final Map<String, BackendWritableBroadcastState<?, ?>> registeredBroadcastStatesDeepCopies =
        new HashMap<>(registeredBroadcastStates.size());

    ClassLoader snapshotClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        // eagerly create deep copies of the list and the broadcast states (if any)
        // in the synchronous phase, so that we can use them in the async writing.
        // entry.getValue() is the state object itself; the copies go into the newly created maps
        if (!registeredOperatorStates.isEmpty()) {
            for (Map.Entry<String, PartitionableListState<?>> entry : registeredOperatorStates.entrySet()) {
                PartitionableListState<?> listState = entry.getValue();
                if (null != listState) {
                    listState = listState.deepCopy();
                }
                registeredOperatorStatesDeepCopies.put(entry.getKey(), listState);
            }
        }

        // broadcast state
        if (!registeredBroadcastStates.isEmpty()) {
            for (Map.Entry<String, BackendWritableBroadcastState<?, ?>> entry : registeredBroadcastStates.entrySet()) {
                BackendWritableBroadcastState<?, ?> broadcastState = entry.getValue();
                if (null != broadcastState) {
                    broadcastState = broadcastState.deepCopy();
                }
                registeredBroadcastStatesDeepCopies.put(entry.getKey(), broadcastState);
            }
        }
    } finally {
        Thread.currentThread().setContextClassLoader(snapshotClassLoader); // restores the captured classloader (elided in the original excerpt)
    }

    // the state data files are generated inside this callable
    AsyncSnapshotCallable<SnapshotResult<OperatorStateHandle>> snapshotCallable =
        new AsyncSnapshotCallable<SnapshotResult<OperatorStateHandle>>() {

            @Override
            protected SnapshotResult<OperatorStateHandle> callInternal() throws Exception {

                ......

                // get the registered operator state infos ...
                List<StateMetaInfoSnapshot> operatorMetaInfoSnapshots =
                    new ArrayList<>(registeredOperatorStatesDeepCopies.size());

                for (Map.Entry<String, PartitionableListState<?>> entry :
                        registeredOperatorStatesDeepCopies.entrySet()) {
                    operatorMetaInfoSnapshots.add(entry.getValue().getStateMetaInfo().snapshot());
                }

                // ... get the registered broadcast operator state infos ...
                List<StateMetaInfoSnapshot> broadcastMetaInfoSnapshots =
                    new ArrayList<>(registeredBroadcastStatesDeepCopies.size());

                for (Map.Entry<String, BackendWritableBroadcastState<?, ?>> entry :
                        registeredBroadcastStatesDeepCopies.entrySet()) {
                    broadcastMetaInfoSnapshots.add(entry.getValue().getStateMetaInfo().snapshot());
                }

                // ... write them all in the checkpoint stream ...
                DataOutputView dov = new DataOutputViewStreamWrapper(localOut);

                OperatorBackendSerializationProxy backendSerializationProxy =
                    new OperatorBackendSerializationProxy(operatorMetaInfoSnapshots, broadcastMetaInfoSnapshots);

                backendSerializationProxy.write(dov);

                // ... and then go for the states ...

                ......
            }
        };

    final FutureTask<SnapshotResult<OperatorStateHandle>> task =
        snapshotCallable.toAsyncSnapshotFutureTask(closeStreamOnCancelRegistry);

    if (!asynchronousSnapshots) {
        task.run();
    }

    return task;
}

From the above we can see that the state objects all end up in the registeredOperatorStatesDeepCopies map (deep-copied from registeredOperatorStates). The reason the user can update the data in those state objects at all is that the user function obtained a reference to exactly these state classes:

public void initializeState(FunctionInitializationContext context) throws Exception {

    ......

    checkpointedState = context.getOperatorStateStore().getListState(descriptor);

    ......
}
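To make the user side concrete, here is a minimal CheckpointedFunction following the standard pattern from the Flink documentation (class, field, and state names are illustrative):

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class BufferingSink
        implements SinkFunction<Tuple2<String, Integer>>, CheckpointedFunction {

    private transient ListState<Tuple2<String, Integer>> checkpointedState;
    private final List<Tuple2<String, Integer>> bufferedElements = new ArrayList<>();

    @Override
    public void invoke(Tuple2<String, Integer> value) {
        bufferedElements.add(value); // updates accumulate between checkpoints
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // invoked via AbstractUdfStreamOperator#snapshotState, as traced above
        checkpointedState.clear();
        for (Tuple2<String, Integer> element : bufferedElements) {
            checkpointedState.add(element);
        }
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        ListStateDescriptor<Tuple2<String, Integer>> descriptor =
            new ListStateDescriptor<>(
                "buffered-elements",
                TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {}));
        // this is the call that lands in DefaultOperatorStateBackend#getListState below
        checkpointedState = context.getOperatorStateStore().getListState(descriptor);
    }
}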

That call goes to org.apache.flink.runtime.state.DefaultOperatorStateBackend#getListState(org.apache.flink.api.common.state.ListStateDescriptor):

/**
 * @Description: when returning the state object, it is also put into the map
 *               so that it can later be written to the checkpoint files
 * @Author: intsmaze
 * @Date: 2019/1/18
 */
private <S> ListState<S> getListState(
        ListStateDescriptor<S> stateDescriptor,
        OperatorStateHandle.Mode mode) throws StateMigrationException {

    String name = Preconditions.checkNotNull(stateDescriptor.getName());

    @SuppressWarnings("unchecked")
    PartitionableListState<S> previous = (PartitionableListState<S>) accessedStatesByName.get(name);
    if (previous != null) {
        checkStateNameAndMode(
            previous.getStateMetaInfo().getName(),
            name,
            previous.getStateMetaInfo().getAssignmentMode(),
            mode);
        return previous;
    }

    ......

    PartitionableListState<S> partitionableListState = (PartitionableListState<S>) registeredOperatorStates.get(name);

    if (null == partitionableListState) {
        // no restored state for the state name; simply create new state holder
        partitionableListState = new PartitionableListState<>(
            new RegisteredOperatorStateBackendMetaInfo<>(
                name,
                partitionStateSerializer,
                mode));

        // the state object is registered here as well: this registeredOperatorStates map is the
        // very same object that the snapshot method of DefaultOperatorStateBackendSnapshotStrategy reads
        //************************************************************
        registeredOperatorStates.put(name, partitionableListState);
        //************************************************************
    }

    ......

Because the user-facing getListState() and the snapshot strategy share this one registeredOperatorStates map, whatever the user function writes into its state object between checkpoints is exactly what gets deep-copied and written out when the next snapshot is taken.
