A Taxonomy for Performance

In this section, we introduce some basic performance metrics. These provide a vocabulary for performance analysis and allow us to frame the objectives of a tuning project in quantitative terms. These objectives are the non-functional requirements that define our performance goals. One common basic set of performance metrics is:

• Throughput

• Latency

• Capacity

• Degradation

• Utilisation

• Efficiency

• Scalability



Throughput

Throughput is a metric that represents the rate of work a system or subsystem can perform. It is usually expressed as the number of units of work completed in some time period. For example, we might be interested in how many transactions per second a system can execute.

For the throughput number to be meaningful in a real performance exercise, it should include a description of the reference platform it was obtained on. For example, the hardware spec, OS and software stack are all relevant to throughput, as is whether the system under test is a single server or a cluster.
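
As an illustrative sketch only (not an example from the book), the following Java fragment shows the basic idea: time a fixed number of calls to a placeholder doTransaction() method and report units of work per second. The doTransaction() method and all figures are invented; a real exercise would use a load-testing harness, proper warm-up and a realistic workload.

    import java.util.concurrent.ThreadLocalRandom;

    public class ThroughputSketch {

        // Placeholder standing in for a real unit of work (e.g. one transaction).
        static long doTransaction() {
            long acc = 0;
            for (int i = 0; i < 10_000; i++) {
                acc += ThreadLocalRandom.current().nextInt(100);
            }
            return acc;
        }

        public static void main(String[] args) {
            final int units = 50_000;
            long sink = 0;                  // accumulate results so the work is not optimised away
            long start = System.nanoTime();
            for (int i = 0; i < units; i++) {
                sink += doTransaction();
            }
            double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;
            System.out.printf("Throughput: %.0f units of work per second (sink=%d)%n",
                    units / elapsedSec, sink);
        }
    }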



Latency

Performance metrics are sometimes explained via metaphors that evoke plumbing. If a water pipe can produce 100 litres per second, then the volume produced in 1 second (100 litres) is the throughput. In this metaphor, the latency is effectively the length of the pipe. That is, it's the time taken to process a single transaction.

It is normally quoted as an end-to-end time. It is dependent on workload, so a common approach is to produce a graph showing latency as a function of increasing workload.
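
As a sketch rather than anything from the book, end-to-end latency can be illustrated by timing each individual call and summarising the recorded values (a median and 99th percentile are shown purely for illustration). The doTransaction() placeholder is again invented.

    import java.util.Arrays;

    public class LatencySketch {

        // Placeholder standing in for a real end-to-end transaction.
        static long doTransaction() {
            long acc = 0;
            for (int i = 0; i < 10_000; i++) {
                acc += i;
            }
            return acc;
        }

        public static void main(String[] args) {
            final int samples = 10_000;
            long[] latencyNanos = new long[samples];
            long sink = 0;
            for (int i = 0; i < samples; i++) {
                long start = System.nanoTime();
                sink += doTransaction();
                latencyNanos[i] = System.nanoTime() - start;
            }
            Arrays.sort(latencyNanos);
            System.out.printf("median = %d us, p99 = %d us, max = %d us (sink=%d)%n",
                    latencyNanos[samples / 2] / 1_000,
                    latencyNanos[(int) (samples * 0.99)] / 1_000,
                    latencyNanos[samples - 1] / 1_000,
                    sink);
        }
    }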



Capacity

The capacity is the amount of work parallelism a system possesses. That is, the number of units of work (e.g. transactions) that can be simultaneously ongoing in the system.

Capacity is obviously related to throughput, and we should expect that as the concurrent load on a system increases, throughput (and latency) will be affected. For this reason, capacity is usually quoted as the processing available at a given value of latency or throughput.



Utilisation

One of the most common performance analysis tasks is to achieve efficient use of a system's resources. Ideally, CPUs should be used for handling units of work, rather than being idle (or spending time handling OS or other housekeeping tasks).

Depending on the workload, there can be a huge difference between the utilisation levels of different resources. For example, a computation-intensive workload (such as graphics processing or encryption) may be running at close to 100% CPU but only be using a small percentage of available memory.
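
In practice, utilisation is usually observed with OS tools (top, vmstat and the like), but as a minimal sketch of the concept, the JVM can also report CPU load through the JDK-specific com.sun.management.OperatingSystemMXBean extension (available on HotSpot/OpenJDK-derived runtimes). This is an illustration, not a recommended monitoring setup.

    import java.lang.management.ManagementFactory;

    public class UtilisationSketch {

        public static void main(String[] args) throws InterruptedException {
            // JDK-specific extension of the standard OperatingSystemMXBean.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();

            for (int i = 0; i < 5; i++) {
                // Both calls return a value in the range 0.0-1.0, or a negative value if unavailable.
                System.out.printf("system CPU: %.0f%%, JVM process CPU: %.0f%%%n",
                        os.getSystemCpuLoad() * 100, os.getProcessCpuLoad() * 100);
                Thread.sleep(1_000);
            }
        }
    }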



Efficiency

Dividing the throughput of a system by the utilised resources gives a measure of the overall efficiency of the system. Intuitively, this makes sense, as requiring more resources to produce the same throughput is one useful definition of being less efficient.

It is also possible, when dealing with larger systems, to use a form of cost accounting to measure efficiency. If Solution A has twice the total dollar cost of ownership (TCO) of Solution B for the same throughput, then it is, clearly, half as efficient.
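
Expressed informally (this formulation is a sketch, not a definition taken from the book):

    efficiency = throughput achieved / resources used to achieve it

For example, with invented figures: if Solution A needs a TCO of $200k per year to sustain 1,000 transactions per second, while Solution B sustains the same 1,000 TPS for $100k per year, then A is half as efficient as B.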



Scalability

The throughput or capacity of a system depends upon the resources available for processing. The change in throughput as resources are added is one measure of the scalability of a system or application. The holy grail of system scalability is to have throughput change exactly in step with resources.

Consider a system based on a cluster of servers. If the cluster is expanded, for example by doubling in size, then what throughput can be achieved? If the new cluster can handle twice the volume of transactions, then the system is exhibiting "perfect linear scaling". This is very difficult to achieve in practice, especially over a wide range of possible loads.

System scalability is dependent upon a number of factors, and is not normally a simple constant factor. It is very common for a system to scale close to linearly for some range of resources, but then at higher loads to encounter some limitation in the system that prevents perfect scaling.
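
One simple way to put a number on this (a sketch, not a formula from the book) is to compare measured throughput against the ideal linear projection:

    scaling factor on doubling = X(2N) / (2 × X(N))

where X(N) is the throughput with N servers. A value of 1.0 corresponds to perfect linear scaling. With invented figures: if 4 servers sustain 10,000 TPS and 8 servers sustain 17,000 TPS, the factor is 17,000 / 20,000 = 0.85.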


Degradation

If we increase the load on a system, either by increasing the number of requests (or clients) or by increasing the speed at which requests arrive, then we may see a change in the observed latency and/or throughput.

Note that this change is dependent on utilisation. If the system is underutilised, then there should be some slack before the observables change, but if resources are fully utilised then we would expect to see throughput stop increasing, or latency increase. These changes are usually called the degradation of the system under additional load.




Connections between the observables

The behaviour of the various performance observables is usually connected in some manner. The details of this connection will depend upon whether the system is running at peak utility. For example, in general, the utilisation will change as the load on a system increases. However, if the system is underutilised, then increasing load may not appreciably increase utilisation. Conversely, if the system is already stressed, then the effect of increasing load may be felt in another observable.

As another example, scalability and degradation both represent the change in behaviour of a system as more load is added. For scalability, as the load is increased, so are available resources, and the central question is whether the system can make use of them. On the other hand, if load is added but additional resources are not provided, degradation of some performance observable (e.g. latency) is the expected outcome.

In rare cases, additional load can cause counter-intuitive results. For example, if the change in load causes some part of the system to switch to a more resource-intensive but higher-performance mode, then the overall effect can be to reduce latency, even though more requests are being received.



Reading notes:

Optimizing Java

by Benjamin J Evans and James Gough

Copyright © 2016 Benjamin Evans, James Gough. All rights reserved.

Printed in the United States of America.

Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
