SparkContext can be regarded as the engine of a Spark application; the initialization of the Spark Driver revolves around the initialization of SparkContext.

SparkContext overview

The main components of SparkContext:

  • SparkEnv: the Spark runtime environment. Executors, which execute tasks, depend on SparkEnv; the Driver also contains a SparkEnv so that tasks can run in local mode. SparkEnv itself contains components such as SerializerManager, RpcEnv, BlockManager and MapOutputTracker.
  • LiveListenerBus: SparkContext's event bus. It receives events from the various callers and, after matching them asynchronously, invokes the corresponding methods of the registered SparkListeners.
  • SparkUI: indirectly depends on the computing, scheduling and storage engines. Monitoring data about jobs, stages, storage, executors and so on is posted to LiveListenerBus in the form of SparkListener events; SparkUI reads data from the various SparkListeners and renders it on the web page.
  • SparkStatusTracker: provides monitoring information about jobs, stages and so on. It is a low-level API and offers only weak consistency guarantees.
  • ConsoleProgressBar: uses SparkStatusTracker's API to show stage progress on the console. Because SparkStatusTracker is only weakly consistent, the display usually lags behind.
  • DAGScheduler (very important): the DAG scheduler, responsible for creating jobs, dividing the DAG into stages with its partitioning algorithm, submitting stages, and so on.
  • TaskScheduler (very important): the task scheduler. It performs second-level scheduling: the resources that the cluster manager has already allocated to the application are scheduled again according to the scheduling algorithm and assigned to tasks. The tasks handled by TaskScheduler are created by DAGScheduler.
  • HeartbeatReceiver: the heartbeat receiver. All Executors send heartbeat messages to HeartbeatReceiver; on receiving one it updates the Executor's last-seen time and then hands the information to TaskScheduler for processing.
  • ContextCleaner: asynchronously cleans up application-scoped RDDs, ShuffleDependencies and Broadcasts.
  • JobProgressListener: the job progress listener.
  • EventLoggingListener (optional): a listener that persists events to storage; it is used when spark.eventLog.enabled is true.
  • ExecutorAllocationManager: the Executor dynamic allocation manager.
  • ShutdownHookManager: the manager of shutdown hooks; it performs cleanup work when the JVM exits.
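
To make the walkthrough concrete, here is a minimal driver program (a sketch; the app name and master URL are arbitrary examples) whose new SparkContext(conf) call triggers the whole initialization sequence described below:

import org.apache.spark.{SparkConf, SparkContext}

object SparkContextDemo {
  def main(args: Array[String]): Unit = {
    // spark.master and spark.app.name are mandatory; the constructor throws a
    // SparkException if either is missing (see the checks later in this article).
    val conf = new SparkConf()
      .setAppName("sparkcontext-demo")
      .setMaster("local[*]")

    // Constructing the SparkContext runs the whole initialization sequence
    // covered below: SparkEnv, LiveListenerBus, SparkUI, DAGScheduler,
    // TaskScheduler, HeartbeatReceiver, ContextCleaner, and so on.
    val sc = new SparkContext(conf)
    try {
      println(s"appId = ${sc.applicationId}, parallelism = ${sc.defaultParallelism}")
    } finally {
      sc.stop()
    }
  }
}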

Now let's walk through the initialization of SparkContext.

Creating SparkEnv

// This function allows components created by SparkEnv to be mocked in unit tests:
private[spark] def createSparkEnv(
    conf: SparkConf,
    isLocal: Boolean,
    listenerBus: LiveListenerBus): SparkEnv = {
  SparkEnv.createDriverEnv(conf, isLocal, listenerBus, SparkContext.numDriverCores(master, conf))
}

private[spark] def env: SparkEnv = _env
private[spark] def env: SparkEnv = _env

The createSparkEnv() method is defined first; it simply delegates to SparkEnv.createDriverEnv(). The relevant part of SparkContext's initialization block is shown below.

/* ------------------------------------------------------------------------------------- *
 | Initialization. This code initializes the context in a manner that is exception-safe. |
 | All internal fields holding state are initialized here, and any error prompts the     |
 | stop() method to be called.                                                           |
 * ------------------------------------------------------------------------------------- */

private def warnSparkMem(value: String): String = {
  logWarning("Using SPARK_MEM to set amount of memory to use per executor process is " +
    "deprecated, please use spark.executor.memory instead.")
  value
}

/** Control our logLevel. This overrides any user-defined log settings.
 * @param logLevel The desired log level as a string.
 * Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
 */
def setLogLevel(logLevel: String) {
  // let's allow lowercase or mixed case too
  val upperCased = logLevel.toUpperCase(Locale.ROOT)
  require(SparkContext.VALID_LOG_LEVELS.contains(upperCased),
    s"Supplied level $logLevel did not match one of:" +
      s" ${SparkContext.VALID_LOG_LEVELS.mkString(",")}")
  Utils.setLogLevel(org.apache.log4j.Level.toLevel(upperCased))
}

try {
  _conf = config.clone()
  _conf.validateSettings()

  if (!_conf.contains("spark.master")) {
    throw new SparkException("A master URL must be set in your configuration")
  }
  if (!_conf.contains("spark.app.name")) {
    throw new SparkException("An application name must be set in your configuration")
  }

  // log out spark.app.name in the Spark driver logs
  logInfo(s"Submitted application: $appName")

  // System property spark.yarn.app.id must be set if user code ran by AM on a YARN cluster
  if (master == "yarn" && deployMode == "cluster" && !_conf.contains("spark.yarn.app.id")) {
    throw new SparkException("Detected yarn cluster mode, but isn't running on a cluster. " +
      "Deployment to YARN is not supported directly by SparkContext. Please use spark-submit.")
  }

  if (_conf.getBoolean("spark.logConf", false)) {
    logInfo("Spark configuration:\n" + _conf.toDebugString)
  }

  // Set Spark driver host and port system properties. This explicitly sets the configuration
  // instead of relying on the default value of the config constant.
  _conf.set(DRIVER_HOST_ADDRESS, _conf.get(DRIVER_HOST_ADDRESS))
  _conf.setIfMissing("spark.driver.port", "0")

  _conf.set("spark.executor.id", SparkContext.DRIVER_IDENTIFIER)

  _jars = Utils.getUserJars(_conf)
  _files = _conf.getOption("spark.files").map(_.split(",")).map(_.filter(_.nonEmpty))
    .toSeq.flatten

  _eventLogDir =
    if (isEventLogEnabled) {
      val unresolvedDir = conf.get("spark.eventLog.dir", EventLoggingListener.DEFAULT_LOG_DIR)
        .stripSuffix("/")
      Some(Utils.resolveURI(unresolvedDir))
    } else {
      None
    }

  _eventLogCodec = {
    val compress = _conf.getBoolean("spark.eventLog.compress", false)
    if (compress && isEventLogEnabled) {
      Some(CompressionCodec.getCodecName(_conf)).map(CompressionCodec.getShortName)
    } else {
      None
    }
  }

  _listenerBus = new LiveListenerBus(_conf)

  // Initialize the app status store and listener before SparkEnv is created so that it gets
  // all events.
  _statusStore = AppStatusStore.createLiveStore(conf)
  listenerBus.addToStatusQueue(_statusStore.listener.get)

  // Create the Spark execution environment (cache, map output tracker, etc)
  _env = createSparkEnv(_conf, isLocal, listenerBus)
  SparkEnv.set(_env)

Because many of SparkEnv's components post events onto LiveListenerBus's event queue, LiveListenerBus is created first. Its main responsibilities are:

  • it holds the message queue and is responsible for buffering events;
  • it holds the registered listeners and is responsible for dispatching events to them.

This is a simple listener model; a toy sketch of the pattern follows.
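
The following is only a toy sketch of the listener-bus pattern, not Spark's actual LiveListenerBus: one queue buffers events and a single daemon thread dispatches them to the registered listeners asynchronously, so that posting an event never blocks on listener code. All names here are invented for illustration.

import java.util.concurrent.{CopyOnWriteArrayList, LinkedBlockingQueue}
import scala.collection.JavaConverters._

sealed trait Event
case class JobStarted(jobId: Int) extends Event
case class JobEnded(jobId: Int) extends Event

trait Listener {
  def onEvent(event: Event): Unit
}

// Events are buffered in a queue; a daemon thread drains the queue and fans
// each event out to every registered listener.
class SimpleListenerBus {
  private val queue = new LinkedBlockingQueue[Event]()
  private val listeners = new CopyOnWriteArrayList[Listener]()

  def addListener(l: Listener): Unit = listeners.add(l)
  def post(event: Event): Unit = queue.put(event)

  private val dispatcher = new Thread("simple-listener-bus") {
    override def run(): Unit = while (true) {
      val event = queue.take()
      listeners.asScala.foreach(_.onEvent(event))
    }
  }
  dispatcher.setDaemon(true)
  dispatcher.start()
}

object SimpleListenerBusDemo {
  def main(args: Array[String]): Unit = {
    val bus = new SimpleListenerBus
    bus.addListener(new Listener {
      override def onEvent(event: Event): Unit = println(s"got $event")
    })
    bus.post(JobStarted(1))
    bus.post(JobEnded(1))
    Thread.sleep(200) // give the daemon dispatcher time to deliver before exit
  }
}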

The SparkUI implementation

SparkUI involves too many components to analyze in depth here; it will be covered separately later. The code that creates SparkUI is shown below.

_statusTracker = new SparkStatusTracker(this, _statusStore)

_progressBar =
  if (_conf.get(UI_SHOW_CONSOLE_PROGRESS) && !log.isInfoEnabled) {
    Some(new ConsoleProgressBar(this))
  } else {
    None
  }

_ui =
  if (conf.getBoolean("spark.ui.enabled", true)) {
    Some(SparkUI.create(Some(this), _statusStore, _conf, _env.securityManager, appName, "",
      startTime))
  } else {
    // For tests, do not enable the UI
    None
  }
// Bind the UI before starting the task scheduler to communicate
// the bound port to the cluster manager properly
_ui.foreach(_.bind())

Creating the heartbeat receiver

In local mode the Driver and the Executor run on the same node and can interact locally, so any failure is easy to notice.

In a production environment, however, Executors and the Driver are usually launched on different nodes. In order to keep track of the Executors, the Driver therefore creates a heartbeat receiver.

// We need to register "HeartbeatReceiver" before "createTaskScheduler" because Executor will
// retrieve "HeartbeatReceiver" in the constructor. (SPARK-6640)
_heartbeatReceiver = env.rpcEnv.setupEndpoint(
  HeartbeatReceiver.ENDPOINT_NAME, new HeartbeatReceiver(this))

This code uses the setupEndpoint() method of NettyRpcEnv, a subcomponent of SparkEnv. The method registers HeartbeatReceiver with the RpcEnv's Dispatcher and returns a NettyRpcEndpointRef that references the HeartbeatReceiver.
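
Spark's RPC classes are package-private, so the following toy sketch only illustrates the shape of setupEndpoint(): endpoints are registered by name with a dispatcher, and callers talk to them through a returned reference that supports ask(). Names such as ToyRpcEnv and EndpointRef are invented for illustration; this is not Spark's actual RpcEnv.

import scala.collection.concurrent.TrieMap
import scala.concurrent.Future

trait Endpoint {
  def receive(msg: Any): Unit        // fire-and-forget messages
  def receiveAndReply(msg: Any): Any // messages that expect a reply
}

class EndpointRef(endpoint: Endpoint) {
  def send(msg: Any): Unit = endpoint.receive(msg)
  def ask[T](msg: Any): Future[T] =
    Future.successful(endpoint.receiveAndReply(msg).asInstanceOf[T])
}

class ToyRpcEnv {
  private val endpoints = TrieMap.empty[String, EndpointRef]

  // Mirrors rpcEnv.setupEndpoint(name, endpoint): register under a name and
  // hand back a reference that callers use to talk to the endpoint.
  def setupEndpoint(name: String, endpoint: Endpoint): EndpointRef = {
    val ref = new EndpointRef(endpoint)
    endpoints.put(name, ref)
    ref
  }
}

object HeartbeatDemo {
  case class Heartbeat(executorId: String)

  def main(args: Array[String]): Unit = {
    val rpcEnv = new ToyRpcEnv
    val heartbeatRef = rpcEnv.setupEndpoint("HeartbeatReceiver", new Endpoint {
      override def receive(msg: Any): Unit = ()
      override def receiveAndReply(msg: Any): Any = msg match {
        case Heartbeat(id) => println(s"heartbeat from $id"); true
      }
    })
    heartbeatRef.ask[Boolean](Heartbeat("exec-1"))
  }
}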

Creating and starting the scheduling system

TaskScheduler asks the cluster manager to allocate and launch Executors for the application (first-level scheduling) and assigns Executors to tasks and runs them (second-level scheduling); it can be viewed as the client of task scheduling.

DAGScheduler mainly handles the preparation before tasks are formally handed to TaskSchedulerImpl for submission, including creating jobs, dividing the RDDs of the DAG into different stages, and submitting stages.

// Create and start the scheduler
val (sched, ts) = SparkContext.createTaskScheduler(this, master, deployMode)
_schedulerBackend = sched
_taskScheduler = ts
_dagScheduler = new DAGScheduler(this)
_heartbeatReceiver.ask[Boolean](TaskSchedulerIsSet)

// start TaskScheduler after taskScheduler sets DAGScheduler reference in DAGScheduler's
// constructor
_taskScheduler.start()
createTaskScheduler() returns a pair (a Scala 2-tuple; see the short note below) of the SchedulerBackend and the TaskScheduler. At this point SparkContext's _taskScheduler field already holds the TaskScheduler reference, so when HeartbeatReceiver receives the TaskSchedulerIsSet message it reads SparkContext's _taskScheduler property and stores it in its own scheduler property.
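
As a quick Scala note (filling in the parenthetical above): a method can return a 2-tuple, and the caller can destructure it into two names in a single assignment, which is exactly what happens with (sched, ts) above.

// A 2-tuple groups two values that may have different types.
def createPair(): (String, Int) = ("backend", 42)

// Pattern-matching assignment unpacks both elements at once, which is what
// SparkContext does with the (SchedulerBackend, TaskScheduler) pair.
val (name, size) = createPair()

// Equivalent explicit accessors:
val pair = createPair()
val first = pair._1   // "backend"
val second = pair._2  // 42

The full createTaskScheduler() implementation: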
/**
 * Create a task scheduler based on a given master URL.
 * Return a 2-tuple of the scheduler backend and the task scheduler.
 */
private def createTaskScheduler(
    sc: SparkContext,
    master: String,
    deployMode: String): (SchedulerBackend, TaskScheduler) = {
  import SparkMasterRegex._

  // When running locally, don't try to re-execute tasks on failure.
  val MAX_LOCAL_TASK_FAILURES = 1

  master match {
    case "local" =>
      val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
      val backend = new LocalSchedulerBackend(sc.getConf, scheduler, 1)
      scheduler.initialize(backend)
      (backend, scheduler)

    case LOCAL_N_REGEX(threads) =>
      def localCpuCount: Int = Runtime.getRuntime.availableProcessors()
      // local[*] estimates the number of cores on the machine; local[N] uses exactly N threads.
      val threadCount = if (threads == "*") localCpuCount else threads.toInt
      if (threadCount <= 0) {
        throw new SparkException(s"Asked to run locally with $threadCount threads")
      }
      val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
      val backend = new LocalSchedulerBackend(sc.getConf, scheduler, threadCount)
      scheduler.initialize(backend)
      (backend, scheduler)

    case LOCAL_N_FAILURES_REGEX(threads, maxFailures) =>
      def localCpuCount: Int = Runtime.getRuntime.availableProcessors()
      // local[*, M] means the number of cores on the computer with M failures
      // local[N, M] means exactly N threads with M failures
      val threadCount = if (threads == "*") localCpuCount else threads.toInt
      val scheduler = new TaskSchedulerImpl(sc, maxFailures.toInt, isLocal = true)
      val backend = new LocalSchedulerBackend(sc.getConf, scheduler, threadCount)
      scheduler.initialize(backend)
      (backend, scheduler)

    case SPARK_REGEX(sparkUrl) =>
      val scheduler = new TaskSchedulerImpl(sc)
      val masterUrls = sparkUrl.split(",").map("spark://" + _)
      val backend = new StandaloneSchedulerBackend(scheduler, sc, masterUrls)
      scheduler.initialize(backend)
      (backend, scheduler)

    case LOCAL_CLUSTER_REGEX(numSlaves, coresPerSlave, memoryPerSlave) =>
      // Check to make sure memory requested <= memoryPerSlave. Otherwise Spark will just hang.
      val memoryPerSlaveInt = memoryPerSlave.toInt
      if (sc.executorMemory > memoryPerSlaveInt) {
        throw new SparkException(
          "Asked to launch cluster with %d MB RAM / worker but requested %d MB/worker".format(
            memoryPerSlaveInt, sc.executorMemory))
      }

      val scheduler = new TaskSchedulerImpl(sc)
      val localCluster = new LocalSparkCluster(
        numSlaves.toInt, coresPerSlave.toInt, memoryPerSlaveInt, sc.conf)
      val masterUrls = localCluster.start()
      val backend = new StandaloneSchedulerBackend(scheduler, sc, masterUrls)
      scheduler.initialize(backend)
      backend.shutdownCallback = (backend: StandaloneSchedulerBackend) => {
        localCluster.stop()
      }
      (backend, scheduler)

    case masterUrl =>
      val cm = getClusterManager(masterUrl) match {
        case Some(clusterMgr) => clusterMgr
        case None => throw new SparkException("Could not parse Master URL: '" + master + "'")
      }
      try {
        val scheduler = cm.createTaskScheduler(sc, masterUrl)
        val backend = cm.createSchedulerBackend(sc, masterUrl, scheduler)
        cm.initialize(scheduler, backend)
        (backend, scheduler)
      } catch {
        case se: SparkException => throw se
        case NonFatal(e) =>
          throw new SparkException("External scheduler cannot be instantiated", e)
      }
  }
}

Initializing the BlockManager

BlockManager is one of SparkEnv's components. It encompasses all the components and functionality of Spark's storage system and is the most important part of that system. Spark's storage system will be studied later.

_applicationId = _taskScheduler.applicationId()
_env.blockManager.initialize(_applicationId)

  

Starting the metrics system

Spark has its own monitoring framework; a system's monitoring capability supports testability, performance tuning, operational assessment, statistics and so on. Spark's metrics system is built on Metrics, the third-party library provided by codahale.

Three important concepts in Spark's metrics system:

  • Instance: the instance name within the metrics system; instances include Master, Worker, Application, Driver and Executor.
  • Source: the source of metrics data, for example ApplicationSource, WorkerSource, DAGSchedulerSource and BlockManagerSource.
  • Sink: the output of metrics data. The default is the metrics servlet; ConsoleSink, CsvSink, JmxSink, MetricsServlet, GraphiteSink and others are also provided.

MetricsSystem wraps Sources and Sinks and routes data from the Sources to the various Sinks. It is one of SparkEnv's internal components and serves as the metrics system of the entire Spark application.
// The metrics system for Driver need to be set spark.app.id to app ID.
// So it should start after we get app ID from the task scheduler and set spark.app.id.
_env.metricsSystem.start()
// Attach the driver metrics servlet handler to the web ui after the metrics system is started.
_env.metricsSystem.getServletHandlers.foreach(handler => ui.foreach(_.attachHandler(handler)))

The metrics system's ServletContextHandlers are attached to SparkUI.
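
As an aside, the following minimal example uses the codahale (Dropwizard) Metrics library directly, which Spark's MetricsSystem is built on: the registry plays the role of a Source (where numbers come from) and the console reporter plays the role of a Sink (where numbers go). Metric names and values are illustrative only.

import java.util.concurrent.TimeUnit
import com.codahale.metrics.{ConsoleReporter, Gauge, MetricRegistry}

object MetricsSketch {
  def main(args: Array[String]): Unit = {
    val registry = new MetricRegistry()

    // A gauge is a read-on-demand metric, similar in spirit to what
    // DAGSchedulerSource exposes (e.g. the number of running jobs).
    registry.register("demo.runningJobs", new Gauge[Int] {
      override def getValue: Int = 3
    })

    // A console "sink": print a snapshot of all registered metrics once.
    val reporter = ConsoleReporter.forRegistry(registry)
      .convertRatesTo(TimeUnit.SECONDS)
      .convertDurationsTo(TimeUnit.MILLISECONDS)
      .build()
    reporter.report()
  }
}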

Creating the event logging listener (optional)

_eventLogger =
  if (isEventLogEnabled) {
    val logger =
      new EventLoggingListener(_applicationId, _applicationAttemptId, _eventLogDir.get,
        _conf, _hadoopConfiguration)
    logger.start()
    listenerBus.addToEventLogQueue(logger)
    Some(logger)
  } else {
    None
  }

Creating and starting ExecutorAllocationManager

ExecutorAllocationManager is the agent that dynamically allocates and removes Executors based on the workload. It periodically computes the number of Executors needed for the current workload: if the demand exceeds the number already requested from the cluster manager, it asks the cluster manager for more Executors; otherwise it asks the cluster manager to release some of them. It also periodically asks the cluster manager to remove (kill) Executors whose idle time has expired.

// Optionally scale number of executors dynamically based on workload. Exposed for testing.
val dynamicAllocationEnabled = Utils.isDynamicAllocationEnabled(_conf)
_executorAllocationManager =
  if (dynamicAllocationEnabled) {
    schedulerBackend match {
      case b: ExecutorAllocationClient =>
        Some(new ExecutorAllocationManager(
          schedulerBackend.asInstanceOf[ExecutorAllocationClient], listenerBus, _conf,
          _env.blockManager.master))
      case _ =>
        None
    }
  } else {
    None
  }
_executorAllocationManager.foreach(_.start())
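
As a usage note, dynamic allocation is driven entirely by configuration. A typical setup might look like the following sketch (the values are illustrative); removing executors safely generally also requires the external shuffle service:

import org.apache.spark.SparkConf

// Illustrative values only.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
  .set("spark.shuffle.service.enabled", "true")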

Creating and starting ContextCleaner

ContextCleaner cleans up RDDs that have gone out of application scope, the map task state associated with shuffles, shuffle metadata, Broadcast objects, and RDD checkpoint data.

  • Creating ContextCleaner:

_cleaner =
  if (_conf.getBoolean("spark.cleaner.referenceTracking", true)) {
    Some(new ContextCleaner(this))
  } else {
    None
  }
_cleaner.foreach(_.start())

  • Starting ContextCleaner:

/** Start the cleaner. */
def start(): Unit = {
  cleaningThread.setDaemon(true)
  cleaningThread.setName("Spark Context Cleaner")
  cleaningThread.start()
  periodicGCService.scheduleAtFixedRate(new Runnable {
    override def run(): Unit = System.gc()
  }, periodicGCInterval, periodicGCInterval, TimeUnit.SECONDS)
}

Apart from the periodic GC timer, ContextCleaner works much like the listenerBus: it follows the listener pattern and processes items with an asynchronous thread.
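
The reference-tracking idea (the spark.cleaner.referenceTracking setting above) can be pictured with the following toy sketch based on Java's WeakReference and ReferenceQueue. It illustrates the mechanism only and is not Spark's actual ContextCleaner code; all names are invented.

import java.lang.ref.{ReferenceQueue, WeakReference}

// Each tracked object gets a WeakReference tied to a ReferenceQueue; once the
// object becomes unreachable and is garbage-collected, its reference appears
// in the queue and a daemon thread runs the associated cleanup task.
object CleanerSketch {
  case class CleanupTask(description: String)

  class TrackedRef(obj: AnyRef, val task: CleanupTask, queue: ReferenceQueue[AnyRef])
    extends WeakReference[AnyRef](obj, queue)

  def main(args: Array[String]): Unit = {
    val queue = new ReferenceQueue[AnyRef]()
    // Keep strong references to the TrackedRef wrappers themselves, otherwise
    // they could be collected before their cleanup task ever runs.
    var refs = List.empty[TrackedRef]

    val cleaner = new Thread("toy-context-cleaner") {
      override def run(): Unit = while (true) {
        queue.remove(100) match {
          case ref: TrackedRef => println(s"cleaning: ${ref.task.description}")
          case _               => // timed out, poll again
        }
      }
    }
    cleaner.setDaemon(true)
    cleaner.start()

    var payload: AnyRef = new Array[Byte](1024)
    refs ::= new TrackedRef(payload, CleanupTask("broadcast_0"), queue)
    payload = null    // drop the only strong reference to the payload
    System.gc()       // encourage collection (what periodicGCService does)
    Thread.sleep(500) // give the cleaner thread a chance to run
  }
}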

Updating the Spark environment

  • How are the extra jars or other files that the user adds when submitting a job specified?

During SparkContext initialization, the user-specified jar files and other files are read:

_jars = Utils.getUserJars(_conf)
_files = _conf.getOption("spark.files").map(_.split(",")).map(_.filter(_.nonEmpty))
.toSeq.flatten

The jar files are read first, then the other files configured by the user.

In YARN mode, _jars is the union of the jars specified by spark.jars and spark.yarn.dist.jars.

In other modes, only the jars specified by spark.jars are used.
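
For illustration, the corresponding properties can be set directly on SparkConf (the paths below are placeholders); spark-submit's --jars and --files options map to the same properties:

import org.apache.spark.SparkConf

// Placeholder paths, for illustration only.
val conf = new SparkConf()
  .set("spark.jars", "/path/to/dep1.jar,/path/to/dep2.jar")
  .set("spark.files", "/path/to/lookup.dat")
// In YARN mode, jars listed under spark.yarn.dist.jars are unioned in as well.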

  • How do tasks obtain these jars and files?
def jars: Seq[String] = _jars
def files: Seq[String] = _files

// Add each JAR given through the constructor
if (jars != null) {
  jars.foreach(addJar)
}
if (files != null) {
  files.foreach(addFile)
}

addJar adds the jar file to the Driver's RPC environment.

Because addJar and addFile may affect the application's environment, the environment is updated at the end of SparkContext initialization:

postEnvironmentUpdate()

Wrapping up SparkContext initialization

  postEnvironmentUpdate()
  postApplicationStart()

  // Post init
  _taskScheduler.postStartHook()   // wait for the SchedulerBackend to be ready

  // Register sources with the metrics system
  _env.metricsSystem.registerSource(_dagScheduler.metricsSource)
  _env.metricsSystem.registerSource(new BlockManagerSource(_env.blockManager))
  _executorAllocationManager.foreach { e =>
    _env.metricsSystem.registerSource(e.executorAllocationManagerSource)
  }

  // Make sure the context is stopped if the user forgets about it. This avoids leaving
  // unfinished event logs around after the JVM exits cleanly. It doesn't help if the JVM
  // is killed, though.
  // Add SparkContext's shutdown hook
  logDebug("Adding shutdown hook") // force eager creation of logger
  _shutdownHookRef = ShutdownHookManager.addShutdownHook(
    ShutdownHookManager.SPARK_CONTEXT_SHUTDOWN_PRIORITY) { () =>
    logInfo("Invoking stop() from shutdown hook")
    try {
      stop()
    } catch {
      case e: Throwable =>
        logWarning("Ignoring Exception while stopping SparkContext from shutdown hook", e)
    }
  }
} catch {
  case NonFatal(e) =>
    logError("Error initializing SparkContext.", e)
    try {
      stop()
    } catch {
      case NonFatal(inner) =>
        logError("Error stopping SparkContext after init error.", inner)
    } finally {
      throw e
    }
}

// In order to prevent multiple SparkContexts from being active at the same time, mark this
// context as having finished construction.
// NOTE: this must be placed at the end of the SparkContext constructor.
SparkContext.setActiveContext(this, allowMultipleContexts)

  

Commonly used methods provided by SparkContext

broadcast

/**
 * Broadcast a read-only variable to the cluster, returning a
 * [[org.apache.spark.broadcast.Broadcast]] object for reading it in distributed functions.
 * The variable will be sent to each cluster only once.
 *
 * @param value value to broadcast to the Spark nodes
 * @return `Broadcast` object, a read-only variable cached on each machine
 */
def broadcast[T: ClassTag](value: T): Broadcast[T] = {
  assertNotStopped()
  require(!classOf[RDD[_]].isAssignableFrom(classTag[T].runtimeClass),
    "Can not directly broadcast RDDs; instead, call collect() and broadcast the result.")
  val bc = env.broadcastManager.newBroadcast[T](value, isLocal)
  val callSite = getCallSite
  logInfo("Created broadcast " + bc.id + " from " + callSite.shortForm)
  cleaner.foreach(_.registerBroadcastForCleanup(bc))
  bc
}

Essentially it calls the newBroadcast() method of SparkEnv's BroadcastManager to create the broadcast object.
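
A typical usage sketch (assuming an existing SparkContext sc; the lookup data is illustrative): broadcast a small lookup table once and read it through value inside transformations.

// Ship the lookup table to every executor once instead of serializing it
// with every task closure.
val countryNames = Map("CN" -> "China", "US" -> "United States")
val bc = sc.broadcast(countryNames)

val codes = sc.parallelize(Seq("CN", "US", "CN"))
val named = codes.map(code => bc.value.getOrElse(code, "unknown")).collect()
// named: Array(China, United States, China)

bc.unpersist() // optionally release the executor-side copies when done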

addSparkListener

Used to add a listener that implements the SparkListenerInterface trait to the LiveListenerBus.

/**
 * :: DeveloperApi ::
 * Register a listener to receive up-calls from events that happen during execution.
 */
@DeveloperApi
def addSparkListener(listener: SparkListenerInterface) {
  listenerBus.addToSharedQueue(listener)
}
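
For example, a minimal custom listener might look like the following (an illustrative sketch); once registered via addSparkListener it receives events asynchronously through LiveListenerBus.

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}

class JobLoggingListener extends SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit =
    println(s"Job ${jobStart.jobId} started with ${jobStart.stageInfos.size} stages")

  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
    println(s"Job ${jobEnd.jobId} finished: ${jobEnd.jobResult}")
}

// Usage, assuming an existing SparkContext sc:
// sc.addSparkListener(new JobLoggingListener)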

runJob

SparkContext provides several overloads of runJob; they all eventually call the one below.

/**
 * Run a function on a given set of partitions in an RDD and pass the results to the given
 * handler function. This is the main entry point for all actions in Spark.
 *
 * @param rdd target RDD to run tasks on
 * @param func a function to run on each partition of the RDD
 * @param partitions set of partitions to run on; some jobs may not want to compute on all
 * partitions of the target RDD, e.g. for operations like `first()`
 * @param resultHandler callback to pass each result to
 */
def runJob[T, U: ClassTag](
    rdd: RDD[T],
    func: (TaskContext, Iterator[T]) => U,
    partitions: Seq[Int],
    resultHandler: (Int, U) => Unit): Unit = {
  if (stopped.get()) {
    throw new IllegalStateException("SparkContext has been shutdown")
  }
  val callSite = getCallSite
  val cleanedFunc = clean(func)
  logInfo("Starting job: " + callSite.shortForm)
  if (conf.getBoolean("spark.logLineage", false)) {
    logInfo("RDD's recursive dependencies:\n" + rdd.toDebugString)
  }
  // Call the runJob() method of the DAGScheduler created earlier during SparkContext initialization
  dagScheduler.runJob(rdd, cleanedFunc, partitions, callSite, resultHandler, localProperties.get)
  progressBar.foreach(_.finishAll())
  rdd.doCheckpoint() // save the checkpoint
}
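
As a usage sketch (assuming an existing SparkContext sc), an action such as count() ultimately funnels into a runJob call like the one below, which runs a function on every partition and gathers one result per partition through the result handler.

val rdd = sc.parallelize(1 to 100, numSlices = 4)

// Count the elements of each partition: func runs on every partition and the
// resultHandler receives one result per partition index.
val counts = new Array[Long](rdd.partitions.length)
sc.runJob(
  rdd,
  (iter: Iterator[Int]) => iter.size.toLong,           // func, run on each partition
  rdd.partitions.indices,                              // run on all partitions
  (index: Int, result: Long) => counts(index) = result // resultHandler callback
)
println(counts.sum) // same value as rdd.count()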

setCheckpointDir

Specifies the directory where the RDDs of a job save their checkpoints; setting it is a prerequisite for enabling the checkpoint mechanism.

/**
 * Set the directory under which RDDs are going to be checkpointed.
 * @param directory path to the directory where checkpoint files will be stored
 * (must be HDFS path if running in cluster)
 */
def setCheckpointDir(directory: String) {
  // If we are running on a cluster, log a warning if the directory is local.
  // Otherwise, the driver may attempt to reconstruct the checkpointed RDD from
  // its own local file system, which is incorrect because the checkpoint files
  // are actually on the executor machines.
  if (!isLocal && Utils.nonLocalPaths(directory).isEmpty) {
    logWarning("Spark is not running in local mode, therefore the checkpoint directory " +
      s"must not be on the local filesystem. Directory '$directory' " +
      "appears to be on the local filesystem.")
  }

  checkpointDir = Option(directory).map { dir =>
    val path = new Path(dir, UUID.randomUUID().toString)
    val fs = path.getFileSystem(hadoopConfiguration)
    fs.mkdirs(path)
    fs.getFileStatus(path).toString
  }
}
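
A usage sketch (assuming an existing SparkContext sc; the HDFS path is a placeholder):

sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

val rdd = sc.parallelize(1 to 10).map(_ * 2)
rdd.checkpoint() // mark the RDD; data is written when the next action runs
rdd.count()      // triggers a job, after which doCheckpoint() saves the RDD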

  

References

1. 《Spark内核设计的艺术:架构设计与实现》 (The Art of Spark Kernel Design: Architecture Design and Implementation)

2. Spark 2.4.3 source code
