How slots are allocated to a job.

In submitJob, the ExecutionGraph is built, and scheduling eventually reaches:

    executionGraph.scheduleForExecution(scheduler)

Next, ExecutionGraph:

```java
public void scheduleForExecution(SlotProvider slotProvider) throws JobException {
    // simply take the vertices without inputs.
    for (ExecutionJobVertex ejv : this.tasks.values()) {
        if (ejv.getJobVertex().isInputVertex()) {
            ejv.scheduleAll(slotProvider, allowQueuedScheduling);
        }
    }
}
```

Then, ExecutionJobVertex:

```java
public void scheduleAll(SlotProvider slotProvider, boolean queued) throws NoResourceAvailableException {

    ExecutionVertex[] vertices = this.taskVertices;

    // kick off the tasks
    for (ExecutionVertex ev : vertices) {
        ev.scheduleForExecution(slotProvider, queued);
    }
}
```

Next, ExecutionVertex:

```java
public boolean scheduleForExecution(SlotProvider slotProvider, boolean queued) throws NoResourceAvailableException {
    return this.currentExecution.scheduleForExecution(slotProvider, queued);
}
```

Finally, Execution:

```java
public boolean scheduleForExecution(SlotProvider slotProvider, boolean queued) throws NoResourceAvailableException {

    final SlotSharingGroup sharingGroup = vertex.getJobVertex().getSlotSharingGroup();
    final CoLocationConstraint locationConstraint = vertex.getLocationConstraint();

    if (transitionState(CREATED, SCHEDULED)) {

        ScheduledUnit toSchedule = locationConstraint == null ?
                new ScheduledUnit(this, sharingGroup) :
                new ScheduledUnit(this, sharingGroup, locationConstraint);

        // IMPORTANT: To prevent leaks of cluster resources, we need to make sure that slots are returned
        //     in all cases where the deployment failed. we use many try {} finally {} clauses to assure that
        final Future<SimpleSlot> slotAllocationFuture = slotProvider.allocateSlot(toSchedule, queued); // request the slot asynchronously

        // IMPORTANT: We have to use the synchronous handle operation (direct executor) here so
        // that we directly deploy the tasks if the slot allocation future is completed. This is
        // necessary for immediate deployment.
        final Future<Void> deploymentFuture = slotAllocationFuture.handle(new BiFunction<SimpleSlot, Throwable, Void>() {
            @Override
            public Void apply(SimpleSlot simpleSlot, Throwable throwable) {
                if (simpleSlot != null) {
                    try {
                        deployToSlot(simpleSlot); // a slot was obtained, deploy the task to it
                    } catch (Throwable t) {
                        try {
                            simpleSlot.releaseSlot();
                        } finally {
                            markFailed(t);
                        }
                    }
                }
                else {
                    markFailed(throwable);
                }
                return null;
            }
        });

        return true;
    }
```

 

The call goes to slotProvider.allocateSlot; the slotProvider here is the Scheduler:

```java
@Override
public Future<SimpleSlot> allocateSlot(ScheduledUnit task, boolean allowQueued)
        throws NoResourceAvailableException {

    final Object ret = scheduleTask(task, allowQueued);
    if (ret instanceof SimpleSlot) {
        return FlinkCompletableFuture.completed((SimpleSlot) ret); // a SimpleSlot means the allocation already succeeded, so return a completed future
    }
    else if (ret instanceof Future) {
        return (Future) ret; // a Future means there was no free resource; the request is still pending asynchronously
    }
    else {
        throw new RuntimeException();
    }
}
```

 

scheduleTask

```java
/**
 * Returns either a {@link SimpleSlot}, or a {@link Future}.
 */
private Object scheduleTask(ScheduledUnit task, boolean queueIfNoResource) throws NoResourceAvailableException {

    final ExecutionVertex vertex = task.getTaskToExecute().getVertex();

    final Iterable<TaskManagerLocation> preferredLocations = vertex.getPreferredLocations();
    final boolean forceExternalLocation = vertex.isScheduleLocalOnly() &&
            preferredLocations != null && preferredLocations.iterator().hasNext(); // true if there are preferred locations and the vertex may only be scheduled locally

    synchronized (globalLock) { // global lock

        SlotSharingGroup sharingUnit = task.getSlotSharingGroup();

        if (sharingUnit != null) { // the task belongs to a SlotSharingGroup

            // 1) === If the task has a slot sharing group, schedule with shared slots ===

            final SlotSharingGroupAssignment assignment = sharingUnit.getTaskAssignment();
            final CoLocationConstraint constraint = task.getLocationConstraint();

            // get a slot from the group, if the group has one for us (and can fulfill the constraint)
            final SimpleSlot slotFromGroup;
            if (constraint == null) {
                slotFromGroup = assignment.getSlotForTask(vertex); // let the SlotSharingGroupAssignment hand out a slot
            }
            else {
                slotFromGroup = assignment.getSlotForTask(vertex, constraint);
            }

            SimpleSlot newSlot = null;
            SimpleSlot toUse = null;

            // the following needs to make sure any allocated slot is released in case of an error
            try {

                // check whether the slot from the group is already what we want.
                // any slot that is local, or where the assignment was unconstrained is good!
                if (slotFromGroup != null && slotFromGroup.getLocality() != Locality.NON_LOCAL) { // a local (or unconstrained) slot was found
                    updateLocalityCounters(slotFromGroup, vertex);
                    return slotFromGroup; // a suitable slot was found, return it
                }

                // the group did not have a local slot for us. see if we can one (or a better one)
                // our location preference is either determined by the location constraint, or by the
                // vertex's preferred locations
                final Iterable<TaskManagerLocation> locations;
                final boolean localOnly;
                if (constraint != null && constraint.isAssigned()) { // a co-location constraint is already assigned
                    locations = Collections.singleton(constraint.getLocation());
                    localOnly = true;
                }
                else {
                    locations = vertex.getPreferredLocationsBasedOnInputs(); // otherwise use the locations of the slots assigned to the input vertices as the preference
                    localOnly = forceExternalLocation;
                }
                // the group did not have a local slot for us. see if we can one (or a better one)
                newSlot = getNewSlotForSharingGroup(vertex, locations, assignment, constraint, localOnly); // try to allocate a new slot for the sharing group

                if (slotFromGroup == null || !slotFromGroup.isAlive() || newSlot.getLocality() == Locality.LOCAL) { // if the new slot is local, use it
                    // if there is no slot from the group, or the new slot is local,
                    // then we use the new slot
                    if (slotFromGroup != null) {
                        slotFromGroup.releaseSlot();
                    }
                    toUse = newSlot; // use the newly allocated slot
                }
                else {
                    // both are available and usable. neither is local. in that case, we may
                    // as well use the slot from the sharing group, to minimize the number of
                    // instances that the job occupies
                    newSlot.releaseSlot();
                    toUse = slotFromGroup;
                }

                // if this is the first slot for the co-location constraint, we lock
                // the location, because we are going to use that slot
                if (constraint != null && !constraint.isAssigned()) {
                    constraint.lockLocation();
                }

                updateLocalityCounters(toUse, vertex);
            } // (catch blocks that release the slots on error are omitted in this excerpt)

            return toUse; // return the allocated slot
        }
        else { // no slot sharing, the simple case

            // 2) === schedule without hints and sharing ===

            SimpleSlot slot = getFreeSlotForTask(vertex, preferredLocations, forceExternalLocation); // allocate a slot directly
            if (slot != null) {
                updateLocalityCounters(slot, vertex);
                return slot; // got one, return it
            }
            else {
                // no resource available now, so queue the request
                if (queueIfNoResource) { // queueing is allowed
                    CompletableFuture<SimpleSlot> future = new FlinkCompletableFuture<>();
                    this.taskQueue.add(new QueuedTask(task, future)); // queue the task and return the future, i.e. the request continues asynchronously
                    return future;
                }
            }
        }
    }
}
```

 

If the task has a SlotSharingGroup:

First, try to get a slot from the group's SlotSharingGroupAssignment:

    slotFromGroup = assignment.getSlotForTask(vertex)

(see the post Flink – SlotSharingGroup for how the assignment works).

If that does not yield a local slot, try to allocate a new slot for this vertex:

    newSlot = getNewSlotForSharingGroup(vertex, locations, assignment, constraint, localOnly); // try to allocate a new slot for the sharing group

```java
protected SimpleSlot getNewSlotForSharingGroup(ExecutionVertex vertex,
                                               Iterable<TaskManagerLocation> requestedLocations,
                                               SlotSharingGroupAssignment groupAssignment,
                                               CoLocationConstraint constraint,
                                               boolean localOnly)
{
    // we need potentially to loop multiple times, because there may be false positives
    // in the set-with-available-instances
    while (true) {
        Pair<Instance, Locality> instanceLocalityPair = findInstance(requestedLocations, localOnly); // find a (preferably local) instance based on the requested locations

        if (instanceLocalityPair == null) { // no instance is available, return null
            // nothing is available
            return null;
        }

        final Instance instanceToUse = instanceLocalityPair.getLeft();
        final Locality locality = instanceLocalityPair.getRight();

        try {
            JobVertexID groupID = vertex.getJobvertexId();

            // allocate a shared slot from the instance
            SharedSlot sharedSlot = instanceToUse.allocateSharedSlot(vertex.getJobId(), groupAssignment); // allocate a SharedSlot from the instance

            // if the instance has further available slots, re-add it to the set of available resources.
            if (instanceToUse.hasResourcesAvailable()) { // if the instance still has free slots, put it back into instancesWithAvailableResources so it can be used again
                this.instancesWithAvailableResources.put(instanceToUse.getTaskManagerID(), instanceToUse);
            }

            if (sharedSlot != null) {
                // add the shared slot to the assignment group and allocate a sub-slot
                SimpleSlot slot = constraint == null ?
                        groupAssignment.addSharedSlotAndAllocateSubSlot(sharedSlot, locality, groupID) : // register the SharedSlot with the group's SlotSharingGroupAssignment and return a sub-slot of it
                        groupAssignment.addSharedSlotAndAllocateSubSlot(sharedSlot, locality, constraint);

                if (slot != null) {
                    return slot;
                }
                else {
                    // could not add and allocate the sub-slot, so release shared slot
                    sharedSlot.releaseSlot();
                }
            }
        }
        catch (InstanceDiedException e) {
            // the instance died it has not yet been propagated to this scheduler
            // remove the instance from the set of available instances
            removeInstance(instanceToUse);
        }

        // if we failed to get a slot, fall through the loop
    }
}
```

findInstance picks an Instance that still has free slots, preferring one located at a requested TaskManagerLocation:

```java
private Pair<Instance, Locality> findInstance(Iterable<TaskManagerLocation> requestedLocations, boolean localOnly) {

    // drain the queue of newly available instances
    while (this.newlyAvailableInstances.size() > 0) { // BlockingQueue<Instance> newlyAvailableInstances
        Instance queuedInstance = this.newlyAvailableInstances.poll();
        if (queuedInstance != null) {
            this.instancesWithAvailableResources.put(queuedInstance.getTaskManagerID(), queuedInstance); // Map<ResourceID, Instance> instancesWithAvailableResources
        }
    }

    // if nothing is available at all, return null
    if (this.instancesWithAvailableResources.isEmpty()) {
        return null;
    }

    Iterator<TaskManagerLocation> locations = requestedLocations == null ? null : requestedLocations.iterator();

    if (locations != null && locations.hasNext()) { // there are preferred locations, so look for a matching instance first
        // we have a locality preference

        while (locations.hasNext()) {
            TaskManagerLocation location = locations.next();
            if (location != null) {
                Instance instance = instancesWithAvailableResources.remove(location.getResourceID()); // look up the instance at the preferred location
                if (instance != null) {
                    return new ImmutablePair<Instance, Locality>(instance, Locality.LOCAL);
                }
            }
        }

        // no local instance available
        if (localOnly) { // only local slots are allowed and none was found above, so return null
            return null;
        }
        else {
            // take the first instance from the instances with resources
            Iterator<Instance> instances = instancesWithAvailableResources.values().iterator();
            Instance instanceToUse = instances.next();
            instances.remove();

            return new ImmutablePair<>(instanceToUse, Locality.NON_LOCAL); // no local instance was found, so take the first available one with locality NON_LOCAL
        }
    }
    else {
        // no location preference, so use some instance
        Iterator<Instance> instances = instancesWithAvailableResources.values().iterator();
        Instance instanceToUse = instances.next();
        instances.remove();

        return new ImmutablePair<>(instanceToUse, Locality.UNCONSTRAINED); // no constraint either, so take the first available instance with locality UNCONSTRAINED
    }
}
```

Instance.allocateSharedSlot

```java
public SharedSlot allocateSharedSlot(JobID jobID, SlotSharingGroupAssignment sharingGroupAssignment)
        throws InstanceDiedException
{
    synchronized (instanceLock) {
        if (isDead) {
            throw new InstanceDiedException(this);
        }

        Integer nextSlot = availableSlots.poll(); // Queue<Integer> availableSlots
        if (nextSlot == null) {
            return null;
        }
        else {
            SharedSlot slot = new SharedSlot(
                jobID, this, location, nextSlot, taskManagerGateway, sharingGroupAssignment);
            allocatedSlots.add(slot); // Set<Slot> allocatedSlots
            return slot;
        }
    }
}
```

If the newly allocated slot is local, use newSlot. If it is not, and the SlotSharingGroup already holds a (non-local) slot, use the existing slot instead, since there is no point in occupying a new one; in that case newSlot has to be released. A minimal sketch of this preference rule follows.
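
The sketch below uses hypothetical stand-in types (SlotCandidate, choose, isLocal, release are not Flink API) and assumes newSlot is non-null, as in the excerpt above; it only mirrors the decision logic, not the actual implementation:

```java
/** Minimal, self-contained sketch of the slot-preference rule; SlotCandidate stands in for SimpleSlot. */
class SlotPreferenceSketch {

    interface SlotCandidate {
        boolean isAlive();
        boolean isLocal();   // stands in for getLocality() == Locality.LOCAL
        void release();      // stands in for releaseSlot()
    }

    /** Returns the slot to use and releases the other one, mirroring the logic in scheduleTask. */
    static SlotCandidate choose(SlotCandidate slotFromGroup, SlotCandidate newSlot) {
        if (slotFromGroup == null || !slotFromGroup.isAlive() || newSlot.isLocal()) {
            // no usable slot from the group, or the new slot is local: prefer the new slot
            if (slotFromGroup != null) {
                slotFromGroup.release();
            }
            return newSlot;
        } else {
            // neither is local: keep the group slot to occupy fewer instances, drop the new one
            newSlot.release();
            return slotFromGroup;
        }
    }
}
```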

If there is no SlotSharingGroup, the case is simpler; the scheduler just calls:

    SimpleSlot slot = getFreeSlotForTask(vertex, preferredLocations, forceExternalLocation);

```java
protected SimpleSlot getFreeSlotForTask(ExecutionVertex vertex,
                                        Iterable<TaskManagerLocation> requestedLocations,
                                        boolean localOnly) {
    // we need potentially to loop multiple times, because there may be false positives
    // in the set-with-available-instances
    while (true) {
        Pair<Instance, Locality> instanceLocalityPair = findInstance(requestedLocations, localOnly); // find a suitable instance

        if (instanceLocalityPair == null){
            return null;
        }

        Instance instanceToUse = instanceLocalityPair.getLeft();
        Locality locality = instanceLocalityPair.getRight();

        try {
            SimpleSlot slot = instanceToUse.allocateSimpleSlot(vertex.getJobId()); // allocate a SimpleSlot

            // if the instance has further available slots, re-add it to the set of available resources.
            if (instanceToUse.hasResourcesAvailable()) {
                this.instancesWithAvailableResources.put(instanceToUse.getTaskManagerID(), instanceToUse);
            }

            if (slot != null) {
                slot.setLocality(locality);
                return slot;
            }
        }
        catch (InstanceDiedException e) {
            // the instance died it has not yet been propagated to this scheduler
            // remove the instance from the set of available instances
            removeInstance(instanceToUse);
        }

        // if we failed to get a slot, fall through the loop
    }
}
```

The logic is essentially the same as for allocating a SharedSlot, except that it calls Instance.allocateSimpleSlot:

```java
public SimpleSlot allocateSimpleSlot(JobID jobID) throws InstanceDiedException {
    if (jobID == null) {
        throw new IllegalArgumentException();
    }

    synchronized (instanceLock) {
        if (isDead) {
            throw new InstanceDiedException(this);
        }

        Integer nextSlot = availableSlots.poll();
        if (nextSlot == null) {
            return null;
        }
        else {
            SimpleSlot slot = new SimpleSlot(jobID, this, location, nextSlot, taskManagerGateway);
            allocatedSlots.add(slot);
            return slot;
        }
    }
}
```

 

Instance

Where do the Instances used by the Scheduler come from?

The Scheduler implements the InstanceListener interface; its callback is newInstanceAvailable:
```java
@Override
public void newInstanceAvailable(Instance instance) {

    // synchronize globally for instance changes
    synchronized (this.globalLock) {

        // check we do not already use this instance
        if (!this.allInstances.add(instance)) { // check whether this instance is already known
            throw new IllegalArgumentException("The instance is already contained.");
        }

        try {
            // make sure we get notifications about slots becoming available
            instance.setSlotAvailabilityListener(this); // register as SlotAvailabilityListener so the scheduler is notified when a slot becomes ready

            // store the instance in the by-host-lookup
            String instanceHostName = instance.getTaskManagerLocation().getHostname();
            Set<Instance> instanceSet = allInstancesByHost.get(instanceHostName); // HashMap<String, Set<Instance>> allInstancesByHost
            if (instanceSet == null) {
                instanceSet = new HashSet<Instance>();
                allInstancesByHost.put(instanceHostName, instanceSet);
            }
            instanceSet.add(instance);

            // add it to the available resources and let potential waiters know
            this.instancesWithAvailableResources.put(instance.getTaskManagerID(), instance); // Map<ResourceID, Instance> instancesWithAvailableResources

            // add all slots as available
            for (int i = 0; i < instance.getNumberOfAvailableSlots(); i++) { // fire newSlotAvailable once per slot
                newSlotAvailable(instance);
            }
        }
        catch (Throwable t) {
            LOG.error("Scheduler could not add new instance " + instance, t);
            removeInstance(instance);
        }
    }
}
```

 

When is newInstanceAvailable called?

 

JobManager

```scala
case msg @ RegisterTaskManager(
      resourceId,
      connectionInfo,
      hardwareInformation,
      numberOfSlots) =>

  val instanceID = instanceManager.registerTaskManager(
    taskManagerGateway,
    connectionInfo,
    hardwareInformation,
    numberOfSlots)
```

 

InstanceManager

```java
public InstanceID registerTaskManager(
        TaskManagerGateway taskManagerGateway,
        TaskManagerLocation taskManagerLocation,
        HardwareDescription resources,
        int numberOfSlots) {

    synchronized (this.lock) {

        InstanceID instanceID = new InstanceID();

        Instance host = new Instance(
            taskManagerGateway,
            taskManagerLocation,
            instanceID,
            resources,
            numberOfSlots);

        // notify all listeners (for example the scheduler)
        notifyNewInstance(host);

        return instanceID;
    }
}
```

 

```java
private void notifyNewInstance(Instance instance) {
    synchronized (this.instanceListeners) {
        for (InstanceListener listener : this.instanceListeners) {
            try {
                listener.newInstanceAvailable(instance);
            }
            catch (Throwable t) {
                LOG.error("Notification of new instance availability failed.", t);
            }
        }
    }
}
```
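
For this notification to reach the Scheduler at all, the Scheduler must be registered with the InstanceManager as an InstanceListener. The sketch below only illustrates that wiring; the constructor arguments are simplified assumptions and the exact JobManager startup code may differ, but addInstanceListener is the legacy InstanceManager API:

```java
import org.apache.flink.runtime.instance.InstanceManager;
import org.apache.flink.runtime.jobmanager.scheduler.Scheduler;
import scala.concurrent.ExecutionContext;

public class SchedulerWiringSketch {
    // Hedged sketch, not the actual JobManager startup code: the Scheduler is registered as an
    // InstanceListener, so every registerTaskManager(...) ends in Scheduler.newInstanceAvailable(...).
    public static Scheduler wire(InstanceManager instanceManager, ExecutionContext executionContext) {
        Scheduler scheduler = new Scheduler(executionContext);
        instanceManager.addInstanceListener(scheduler);
        return scheduler;
    }
}
```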

 

The Scheduler also implements SlotAvailabilityListener, whose callback is newSlotAvailable.

The logic simply checks whether any task is queued waiting for a slot; when a new slot becomes ready, it completes that queued task's future:

```java
@Override
public void newSlotAvailable(final Instance instance) {

    // WARNING: The asynchrony here is necessary, because we cannot guarantee the order
    // of lock acquisition (global scheduler, instance) and otherwise lead to potential deadlocks:
    //
    // -> The scheduler needs to grab them (1) global scheduler lock
    //                                     (2) slot/instance lock
    // -> The slot releasing grabs (1) slot/instance (for releasing) and
    //                             (2) scheduler (to check whether to take a new task item
    //
    // that leads with a high probability to deadlocks, when scheduling fast

    this.newlyAvailableInstances.add(instance);

    Futures.future(new Callable<Object>() {
        @Override
        public Object call() throws Exception {
            handleNewSlot();
            return null;
        }
    }, executionContext);
}

private void handleNewSlot() {

    synchronized (globalLock) {
        Instance instance = this.newlyAvailableInstances.poll();
        if (instance == null || !instance.hasResourcesAvailable()) {
            // someone else took it
            return;
        }

        QueuedTask queued = taskQueue.peek(); // is there a task waiting for a slot?

        // the slot was properly released, we can allocate a new one from that instance

        if (queued != null) {
            ScheduledUnit task = queued.getTask();
            ExecutionVertex vertex = task.getTaskToExecute().getVertex();

            try {
                SimpleSlot newSlot = instance.allocateSimpleSlot(vertex.getJobId()); // allocate a SimpleSlot from the instance
                if (newSlot != null) {

                    // success, remove from the task queue and notify the future
                    taskQueue.poll();
                    if (queued.getFuture() != null) {
                        try {
                            queued.getFuture().complete(newSlot); // complete the task's future: a slot is now available, no more waiting
                        }
                        catch (Throwable t) {
                            LOG.error("Error calling allocation future for task " + vertex.getSimpleName(), t);
                            task.getTaskToExecute().fail(t);
                        }
                    }
                }
            }
            catch (InstanceDiedException e) {
                if (LOG.isDebugEnabled()) {
                    LOG.debug("Instance " + instance + " was marked dead asynchronously.");
                }

                removeInstance(instance);
            }
        }
        else { // no task is queued, so just put the instance back into instancesWithAvailableResources
            this.instancesWithAvailableResources.put(instance.getTaskManagerID(), instance);
        }
    }
}
```

 

Besides being triggered when a new instance registers, newSlotAvailable is also called from Instance.returnAllocatedSlot, i.e. whenever an allocated slot is released back to its instance.
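
A simplified sketch of that path, reusing the Instance fields shown above (availableSlots, allocatedSlots, instanceLock, and the listener set via setSlotAvailabilityListener); the real method has more bookkeeping, so treat this as an illustration rather than the exact source:

```java
public void returnAllocatedSlot(Slot slot) {
    synchronized (instanceLock) {
        if (allocatedSlots.remove(slot)) {             // the slot really belonged to this instance
            availableSlots.add(slot.getSlotNumber());  // its slot number becomes available again

            if (slotAvailabilityListener != null) {
                // notifies the Scheduler, which runs handleNewSlot() as shown above
                slotAvailabilityListener.newSlotAvailable(this);
            }
        }
    }
}
```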
