In the previous article we analyzed how a job is submitted from the JobClient to the JobTracker. Through the RPC mechanism, the JobTracker receives the job ID and the job's directory on HDFS, constructs a JobInProgress object, and places it in a queue; another thread takes the JobInProgress off the queue and hands it to a thread pool, which runs the JobTracker's initJob method on it. We analyze this step by step.

    public void initJob(JobInProgress job) {
      if (null == job) {
        LOG.info("Init on null job is not valid");
        return;
      }

      try {
        JobStatus prevStatus = (JobStatus)job.getStatus().clone();
        LOG.info("Initializing " + job.getJobID());
        job.initTasks();
        // Inform the listeners if the job state has changed
        // Note : that the job will be in PREP state.
        JobStatus newStatus = (JobStatus)job.getStatus().clone();
        if (prevStatus.getRunState() != newStatus.getRunState()) {
          JobStatusChangeEvent event =
            new JobStatusChangeEvent(job, EventType.RUN_STATE_CHANGED, prevStatus,
                newStatus);
          synchronized (JobTracker.this) {
            updateJobInProgressListeners(event);
          }
        }
      } catch (KillInterruptedException kie) {
        // If job was killed during initialization, job state will be KILLED
        LOG.error("Job initialization interrupted:\n" +
            StringUtils.stringifyException(kie));
        killJob(job);
      } catch (Throwable t) {
        String failureInfo =
          "Job initialization failed:\n" + StringUtils.stringifyException(t);
        // If the job initialization is failed, job state will be FAILED
        LOG.error(failureInfo);
        job.getStatus().setFailureInfo(failureInfo);
        failJob(job);
      }
    }

As can be seen, job.initTasks() runs first to initialize the Map and Reduce tasks, after which all registered listeners are notified:

    synchronized (JobTracker.this) {
      updateJobInProgressListeners(event);
    }

The completion of Map/Reduce task initialization is an event, and the following code delivers the notification:

    // Update the listeners about the job
    // Assuming JobTracker is locked on entry.
    private void updateJobInProgressListeners(JobChangeEvent event) {
      for (JobInProgressListener listener : jobInProgressListeners) {
        listener.jobUpdated(event);
      }
    }

As can be seen, jobAdded was used when the job was placed in the queue, whereas jobUpdated is used here. We will return to what happens after jobUpdated later; for now we analyze the job initialization process that takes place between jobAdded and jobUpdated, which breaks down into several stages.
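For reference, the listener contract these calls go through can be sketched as follows (a simplified, abridged sketch of the Hadoop 1.x JobInProgressListener abstract class; scheduler-side listeners such as EagerTaskInitializationListener extend it):

    // Simplified sketch of the listener contract (abridged, not the full source).
    // jobAdded fires when the JobTracker queues the job;
    // jobUpdated fires after state changes such as the initialization above;
    // jobRemoved fires when the job is retired.
    abstract class JobInProgressListener {
      public abstract void jobAdded(JobInProgress job) throws IOException;
      public abstract void jobRemoved(JobInProgress job);
      public abstract void jobUpdated(JobChangeEvent event);
    }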

The first step is to obtain the split information, which the JobClient has already uploaded to HDFS.

1. Reading the split information:

    //
    // read input splits and create a map per a split
    //
    TaskSplitMetaInfo[] splits = createSplits(jobId);
    if (numMapTasks != splits.length) {
      throw new IOException("Number of maps in JobConf doesn't match number of " +
          "recieved splits for job " + jobId + "! " +
          "numMapTasks=" + numMapTasks + ", #splits=" + splits.length);
    }
    numMapTasks = splits.length;

The code of the createSplits method is:

    TaskSplitMetaInfo[] createSplits(org.apache.hadoop.mapreduce.JobID jobId)
        throws IOException {
      TaskSplitMetaInfo[] allTaskSplitMetaInfo =
        SplitMetaInfoReader.readSplitMetaInfo(jobId, fs, jobtracker.getConf(),
            jobSubmitDir);
      return allTaskSplitMetaInfo;
    }

That is, it reads the job.splitmetainfo file to obtain the split information:

    public static JobSplit.TaskSplitMetaInfo[] readSplitMetaInfo(
        JobID jobId, FileSystem fs, Configuration conf, Path jobSubmitDir)
        throws IOException {
      long maxMetaInfoSize = conf.getLong("mapreduce.jobtracker.split.metainfo.maxsize",
          10000000L);
      Path metaSplitFile = JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir);
      FileStatus fStatus = fs.getFileStatus(metaSplitFile);
      if (maxMetaInfoSize > 0 && fStatus.getLen() > maxMetaInfoSize) {
        throw new IOException("Split metadata size exceeded " +
            maxMetaInfoSize +". Aborting job " + jobId);
      }
      FSDataInputStream in = fs.open(metaSplitFile);
      byte[] header = new byte[JobSplit.META_SPLIT_FILE_HEADER.length];
      in.readFully(header);
      if (!Arrays.equals(JobSplit.META_SPLIT_FILE_HEADER, header)) {
        throw new IOException("Invalid header on split file");
      }
      int vers = WritableUtils.readVInt(in);
      if (vers != JobSplit.META_SPLIT_VERSION) {
        in.close();
        throw new IOException("Unsupported split version " + vers);
      }
      int numSplits = WritableUtils.readVInt(in); //TODO: check for insane values
      JobSplit.TaskSplitMetaInfo[] allSplitMetaInfo =
        new JobSplit.TaskSplitMetaInfo[numSplits];
      final int maxLocations =
        conf.getInt(JobSplitWriter.MAX_SPLIT_LOCATIONS, Integer.MAX_VALUE);
      for (int i = 0; i < numSplits; i++) {
        JobSplit.SplitMetaInfo splitMetaInfo = new JobSplit.SplitMetaInfo();
        splitMetaInfo.readFields(in);
        final int numLocations = splitMetaInfo.getLocations().length;
        if (numLocations > maxLocations) {
          throw new IOException("Max block location exceeded for split: #" + i +
              " splitsize: " + numLocations + " maxsize: " + maxLocations);
        }
        JobSplit.TaskSplitIndex splitIndex = new JobSplit.TaskSplitIndex(
            JobSubmissionFiles.getJobSplitFile(jobSubmitDir).toString(),
            splitMetaInfo.getStartOffset());
        allSplitMetaInfo[i] = new JobSplit.TaskSplitMetaInfo(splitIndex,
            splitMetaInfo.getLocations(),
            splitMetaInfo.getInputDataLength());
      }
      in.close();
      return allSplitMetaInfo;
    }

The part that actually reads the file is:

    FSDataInputStream in = fs.open(metaSplitFile);
    byte[] header = new byte[JobSplit.META_SPLIT_FILE_HEADER.length];
    in.readFully(header);

This first reads the header of the job.splitmetainfo file, which is simply the string "META-SPL", as defined by the following class:

    public class JobSplit {
      static final int META_SPLIT_VERSION = 1;
      static final byte[] META_SPLIT_FILE_HEADER;

      static {
        try {
          META_SPLIT_FILE_HEADER = "META-SPL".getBytes("UTF-8");
        } catch (UnsupportedEncodingException u) {
          throw new RuntimeException(u);
        }
      }
      .......

After the file header has been read, the version information is read next:

    int vers = WritableUtils.readVInt(in);
    if (vers != JobSplit.META_SPLIT_VERSION) {
      in.close();
      throw new IOException("Unsupported split version " + vers);
    }
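As a quick aside, WritableUtils.readVInt reads Hadoop's variable-length integer encoding, which is used for every count and offset in this file. A minimal round-trip sketch (assuming hadoop-core on the classpath; VIntDemo is just an illustrative name):

    // Round-trip sketch of the variable-length integers used in job.splitmetainfo.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableUtils;

    public class VIntDemo {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        WritableUtils.writeVInt(out, 1);      // e.g. META_SPLIT_VERSION
        WritableUtils.writeVInt(out, 500);    // e.g. number of splits
        DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(WritableUtils.readVInt(in));  // 1
        System.out.println(WritableUtils.readVInt(in));  // 500
      }
    }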

After the version (1) has been checked, the number of splits is read:

    int numSplits = WritableUtils.readVInt(in); //TODO: check for insane values
    JobSplit.TaskSplitMetaInfo[] allSplitMetaInfo =
      new JobSplit.TaskSplitMetaInfo[numSplits];

and a JobSplit.TaskSplitMetaInfo array of that size is created. Then, for each split, the loop reads its locations and other metadata:

    for (int i = 0; i < numSplits; i++) {
      JobSplit.SplitMetaInfo splitMetaInfo = new JobSplit.SplitMetaInfo();
      splitMetaInfo.readFields(in);
      final int numLocations = splitMetaInfo.getLocations().length;
      if (numLocations > maxLocations) {
        throw new IOException("Max block location exceeded for split: #" + i +
            " splitsize: " + numLocations + " maxsize: " + maxLocations);
      }
      JobSplit.TaskSplitIndex splitIndex = new JobSplit.TaskSplitIndex(
          JobSubmissionFiles.getJobSplitFile(jobSubmitDir).toString(),
          splitMetaInfo.getStartOffset());
      allSplitMetaInfo[i] = new JobSplit.TaskSplitMetaInfo(splitIndex,
          splitMetaInfo.getLocations(),
          splitMetaInfo.getInputDataLength());
    }

In the code above, splitMetaInfo.readFields(in) reads the location information:

    public void readFields(DataInput in) throws IOException {
      int len = WritableUtils.readVInt(in);
      locations = new String[len];
      for (int i = 0; i < locations.length; i++) {
        locations[i] = Text.readString(in);
      }
      startOffset = WritableUtils.readVLong(in);
      inputDataLength = WritableUtils.readVLong(in);
    }

The "locations" are simply the servers that hold this split's data. Once the locations, the split's input data length, and the rest of the metadata have been read, everything is recorded in a JobSplit.TaskSplitMetaInfo object:

    JobSplit.TaskSplitIndex splitIndex = new JobSplit.TaskSplitIndex(
        JobSubmissionFiles.getJobSplitFile(jobSubmitDir).toString(),
        splitMetaInfo.getStartOffset());
    allSplitMetaInfo[i] = new JobSplit.TaskSplitMetaInfo(splitIndex,
        splitMetaInfo.getLocations(),
        splitMetaInfo.getInputDataLength());

Finally, the allSplitMetaInfo array is returned.
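Putting these reads together, the layout of job.splitmetainfo can be dumped with a small standalone tool along the following lines (a sketch reconstructed from the read path above, not taken from the Hadoop source; SplitMetaDump is a hypothetical helper and assumes a local copy of the file plus hadoop-core on the classpath):

    // Minimal sketch: dump a local copy of job.splitmetainfo.
    // Field order follows readSplitMetaInfo/SplitMetaInfo.readFields shown above.
    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Arrays;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableUtils;

    public class SplitMetaDump {
      public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
          byte[] header = new byte[8];                       // "META-SPL"
          in.readFully(header);
          System.out.println("header  : " + new String(header, "UTF-8"));
          System.out.println("version : " + WritableUtils.readVInt(in));  // 1
          int numSplits = WritableUtils.readVInt(in);
          System.out.println("#splits : " + numSplits);
          for (int i = 0; i < numSplits; i++) {
            int numLocations = WritableUtils.readVInt(in);
            String[] locations = new String[numLocations];
            for (int j = 0; j < numLocations; j++) {
              locations[j] = Text.readString(in);            // hosts holding this split
            }
            long startOffset = WritableUtils.readVLong(in);  // offset into job.split
            long length = WritableUtils.readVLong(in);       // input data length
            System.out.println("split " + i + ": " + Arrays.toString(locations)
                + " offset=" + startOffset + " length=" + length);
          }
        }
      }
    }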

2. Creating one TaskInProgress object per map task:

The size of the returned array records the number of splits and therefore determines the number of map tasks. The hostnames of the servers involved are then validated:

    numMapTasks = splits.length;

    // Sanity check the locations so we don't create/initialize unnecessary tasks
    for (TaskSplitMetaInfo split : splits) {
      NetUtils.verifyHostnames(split.getLocations());
    }

The corresponding counters are then updated in the instrumentation (monitoring) classes:

    jobtracker.getInstrumentation().addWaitingMaps(getJobID(), numMapTasks);
    jobtracker.getInstrumentation().addWaitingReduces(getJobID(), numReduceTasks);
    this.queueMetrics.addWaitingMaps(getJobID(), numMapTasks);
    this.queueMetrics.addWaitingReduces(getJobID(), numReduceTasks);

Next, the TaskInProgress objects are created, one for each map task:

    maps = new TaskInProgress[numMapTasks];
    for(int i=0; i < numMapTasks; ++i) {
      inputLength += splits[i].getInputDataLength();
      maps[i] = new TaskInProgress(jobId, jobFile,
          splits[i],
          jobtracker, conf, this, i, numSlotsPerMap);
    }

A TaskInProgress records all the information related to running a single Map or Reduce task, much as JobInProgress does for the job as a whole. TaskInProgress has two constructors, one for Map tasks and one for Reduce tasks. The one for Map tasks is:

    /**
     * Constructor for MapTask
     */
    public TaskInProgress(JobID jobid, String jobFile,
        TaskSplitMetaInfo split,
        JobTracker jobtracker, JobConf conf,
        JobInProgress job, int partition,
        int numSlotsRequired) {
      this.jobFile = jobFile;
      this.splitInfo = split;
      this.jobtracker = jobtracker;
      this.job = job;
      this.conf = conf;
      this.partition = partition;
      this.maxSkipRecords = SkipBadRecords.getMapperMaxSkipRecords(conf);
      this.numSlotsRequired = numSlotsRequired;
      setMaxTaskAttempts();
      init(jobid);
    }

Here splitInfo records the information of the current split, partition is the index of this map task, and numSlotsRequired is 1.

The created TaskInProgress objects are then placed in a cache:

    if (numMapTasks > 0) {
      nonRunningMapCache = createCache(splits, maxLevel);
    }

nonRunningMapCache is a cache of map tasks that have not yet started running, keyed by host information: its key is a Node (a server or rack in the network topology) and its value is a list of TaskInProgress objects, as declared below. In effect, the servers holding each split are resolved and cached here for the scheduler to use later. For example, with the default maxLevel of 2 (host and rack levels), a TaskInProgress whose split lives on host h1 in rack /rack1 is registered both under h1's leaf node and under the rack node, so the scheduler can look for node-local tasks first and fall back to rack-local ones.

    Map<Node, List<TaskInProgress>> nonRunningMapCache;

The code of createCache is:

    private Map<Node, List<TaskInProgress>> createCache(
        TaskSplitMetaInfo[] splits, int maxLevel)
        throws UnknownHostException {
      Map<Node, List<TaskInProgress>> cache =
        new IdentityHashMap<Node, List<TaskInProgress>>(maxLevel);

      Set<String> uniqueHosts = new TreeSet<String>();
      for (int i = 0; i < splits.length; i++) {
        String[] splitLocations = splits[i].getLocations();
        if (splitLocations == null || splitLocations.length == 0) {
          nonLocalMaps.add(maps[i]);
          continue;
        }

        for(String host: splitLocations) {
          Node node = jobtracker.resolveAndAddToTopology(host);
          uniqueHosts.add(host);
          LOG.info("tip:" + maps[i].getTIPId() + " has split on node:" + node);
          for (int j = 0; j < maxLevel; j++) {
            List<TaskInProgress> hostMaps = cache.get(node);
            if (hostMaps == null) {
              hostMaps = new ArrayList<TaskInProgress>();
              cache.put(node, hostMaps);
              hostMaps.add(maps[i]);
            }
            //check whether the hostMaps already contains an entry for a TIP
            //This will be true for nodes that are racks and multiple nodes in
            //the rack contain the input for a tip. Note that if it already
            //exists in the hostMaps, it must be the last element there since
            //we process one TIP at a time sequentially in the split-size order
            if (hostMaps.get(hostMaps.size() - 1) != maps[i]) {
              hostMaps.add(maps[i]);
            }
            node = node.getParent();
          }
        }
      }

      // Calibrate the localityWaitFactor - Do not override user intent!
      if (localityWaitFactor == DEFAULT_LOCALITY_WAIT_FACTOR) {
        int jobNodes = uniqueHosts.size();
        int clusterNodes = jobtracker.getNumberOfUniqueHosts();

        if (clusterNodes > 0) {
          localityWaitFactor =
            Math.min((float)jobNodes/clusterNodes, localityWaitFactor);
        }
        LOG.info(jobId + " LOCALITY_WAIT_FACTOR=" + localityWaitFactor);
      }

      return cache;
    }
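To make this multi-level indexing concrete, the toy sketch below mimics what the inner loop does for a single split located on one host (maxLevel = 2). Strings stand in for TaskInProgress objects, and the host and rack names are made up for illustration:

    // Toy sketch of the two-level indexing done by createCache:
    // the same task is reachable from both its host node and the host's rack,
    // so the scheduler can look for node-local work first, rack-local work next.
    import java.util.ArrayList;
    import java.util.IdentityHashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.hadoop.net.NetworkTopology;
    import org.apache.hadoop.net.Node;
    import org.apache.hadoop.net.NodeBase;

    public class LocalityCacheSketch {
      public static void main(String[] args) {
        NetworkTopology topology = new NetworkTopology();
        Node host = new NodeBase("h1", "/rack1");
        topology.add(host);

        Map<Node, List<String>> cache = new IdentityHashMap<Node, List<String>>();
        String tip = "task_0001_m_000000";              // placeholder for maps[0]

        Node node = host;
        for (int level = 0; level < 2 && node != null; level++) {   // maxLevel = 2
          cache.computeIfAbsent(node, k -> new ArrayList<String>()).add(tip);
          node = node.getParent();                      // h1 -> /rack1
        }
        // The tip is now listed under both the host node and the rack node.
        System.out.println(cache);
      }
    }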

3. Creating one TaskInProgress object per reduce task:

The code is essentially the same as for the map tasks:

    //
    // Create reduce tasks
    //
    this.reduces = new TaskInProgress[numReduceTasks];
    for (int i = 0; i < numReduceTasks; i++) {
      reduces[i] = new TaskInProgress(jobId, jobFile,
          numMapTasks, i,
          jobtracker, conf, this, numSlotsPerReduce);
      nonRunningReduces.add(reduces[i]);
    }

4. Computing the minimum number of maps that must complete before reduce tasks are started:

By the MapReduce model, the map computation runs first and its intermediate results are then passed to the reduce computation. If the reduce tasks were launched together with the map tasks, the reduces would simply sit waiting, tying up machine resources while the shuffle drags on. Reduce scheduling is therefore delayed until a fraction of the maps have completed; by default the threshold is 5% of the total number of map tasks:

    // Calculate the minimum number of maps to be complete before
    // we should start scheduling reduces
    completedMapsForReduceSlowstart =
      (int)Math.ceil(
          (conf.getFloat("mapred.reduce.slowstart.completed.maps",
              DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART) *
              numMapTasks));

    // ... use the same for estimating the total output of all maps
    resourceEstimator.setThreshhold(completedMapsForReduceSlowstart);

DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART shows that this default is indeed 5%:

    private static float DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART = 0.05f;
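As a worked example (illustrative only, not JobTracker code): with 200 map tasks and the default factor of 0.05, reduces are held back until ceil(0.05 * 200) = 10 maps have completed.

    // Worked example of the slow-start threshold computation (illustrative only).
    public class SlowstartExample {
      public static void main(String[] args) {
        float slowstart = 0.05f;     // mapred.reduce.slowstart.completed.maps (default)
        int numMapTasks = 200;
        int completedMapsForReduceSlowstart = (int) Math.ceil(slowstart * numMapTasks);
        System.out.println(completedMapsForReduceSlowstart);   // 10
      }
    }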

5. Creating the job-cleanup tasks, one map and one reduce:

    // create cleanup two cleanup tips, one map and one reduce.
    cleanup = new TaskInProgress[2];

    // cleanup map tip. This map doesn't use any splits. Just assign an empty
    // split.
    TaskSplitMetaInfo emptySplit = JobSplit.EMPTY_TASK_SPLIT;
    cleanup[0] = new TaskInProgress(jobId, jobFile, emptySplit,
        jobtracker, conf, this, numMapTasks, 1);
    cleanup[0].setJobCleanupTask();

    // cleanup reduce tip.
    cleanup[1] = new TaskInProgress(jobId, jobFile, numMapTasks,
        numReduceTasks, jobtracker, conf, this, 1);
    cleanup[1].setJobCleanupTask();

6. Creating the job-setup tasks, one map and one reduce:

    // create two setup tips, one map and one reduce.
    setup = new TaskInProgress[2];

    // setup map tip. This map doesn't use any split. Just assign an empty
    // split.
    setup[0] = new TaskInProgress(jobId, jobFile, emptySplit,
        jobtracker, conf, this, numMapTasks + 1, 1);
    setup[0].setJobSetupTask();

    // setup reduce tip.
    setup[1] = new TaskInProgress(jobId, jobFile, numMapTasks,
        numReduceTasks + 1, jobtracker, conf, this, 1);
    setup[1].setJobSetupTask();

7. Map/Reduce task initialization is complete:

    synchronized(jobInitKillStatus){
      jobInitKillStatus.initDone = true;

      // set this before the throw to make sure cleanup works properly
      tasksInited = true;

      if(jobInitKillStatus.killed) {
        throw new KillInterruptedException("Job " + jobId + " killed in init");
      }
    }

After initialization completes, listeners are notified via jobUpdated. There are three kinds of job update events:

    static enum EventType {RUN_STATE_CHANGED, START_TIME_CHANGED, PRIORITY_CHANGED}

The notification sent at the end of initialization is of type RUN_STATE_CHANGED. As the listener code below shows, a run-state change performs no action in this case:

    public synchronized void jobUpdated(JobChangeEvent event) {
      JobInProgress job = event.getJobInProgress();
      if (event instanceof JobStatusChangeEvent) {
        // Check if the ordering of the job has changed
        // For now priority and start-time can change the job ordering
        JobStatusChangeEvent statusEvent = (JobStatusChangeEvent)event;
        JobSchedulingInfo oldInfo =
          new JobSchedulingInfo(statusEvent.getOldStatus());
        if (statusEvent.getEventType() == EventType.PRIORITY_CHANGED
            || statusEvent.getEventType() == EventType.START_TIME_CHANGED) {
          // Make a priority change
          reorderJobs(job, oldInfo);
        } else if (statusEvent.getEventType() == EventType.RUN_STATE_CHANGED) {
          // Check if the job is complete
          int runState = statusEvent.getNewStatus().getRunState();
          if (runState == JobStatus.SUCCEEDED
              || runState == JobStatus.FAILED
              || runState == JobStatus.KILLED) {
            jobCompleted(oldInfo);
          }
        }
      }
    }

because the job has not reached a terminal state at this point. From this we can see that once a job has been initialized, the thread pool moves on to initialize other jobs, and the initialized job simply waits for TaskTrackers to come and fetch its tasks.

The heartbeat between the TaskTracker and the JobTracker, and how tasks are actually fetched, are fairly involved and are left to later posts.

Postscript

In terms of the overall flow diagram: building on steps 1, 2, 3 and 4 analyzed in the previous post, this post has analyzed steps 5 and 6, i.e. initializing the job, fetching its resource data from HDFS, and determining the numbers of Map and Reduce tasks. Steps 7, 8, 9, 10 and the subsequent operations will be analyzed in later posts.
