iostat, from the excellent sysstat suite of utilities, is the go-to tool for evaluating IO performance on Linux. It's obvious why that's the case: sysstat is very useful, solid, and widely installed. System administrators could do a lot worse than taking a look at iostat -x. There are, however, some serious caveats lurking in iostat's output, two of which are greatly magnified on newer machines with solid state drives.

To explain what's wrong, let me compare two lines of iostat output:

Device:         rrqm/s   wrqm/s      r/s     w/s     rkB/s   wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     0.00 13823.00    0.00  55292.00    0.00     8.00     0.78    0.06    0.06    0.00   0.06  78.40

Device:         rrqm/s   wrqm/s      r/s     w/s     rkB/s   wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     0.00 72914.67    0.00 291658.67    0.00     8.00    15.27    0.21    0.21    0.00   0.01 100.00

Both of these lines are from the same device (a Samsung 840 EVO SSD), and both are from read-only 4kB random loads. What differs here is the level of parallelism: in the first load the mean queue depth is only 0.78, and in the second it's 15.27. Same pattern, more threads.

The first problem we run into with this output is the svctm field, widely taken to be the average amount of time an operation takes. The iostat man page describes it as:

The average service time (in milliseconds) for I/O requests that were issued to the device.

and goes on to say:

The average service time (svctm field) value is meaningless, as I/O statistics are now calculated at block level, and we don't know when the disk driver starts to process a request.

The reasons the man page gives for this field being meaningless are true, as are the warnings in the sysstat code. The calculation behind svctm is fundamentally broken, and doesn't have a clear meaning. Inside iostat, svctm for an interval is calculated as (time the device was doing some work) / (number of IOs completed): the amount of time spent doing work, divided by the amount of work done. Going back to our two workloads, we can compare their service times:
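To make that arithmetic concrete, here is a minimal Python sketch of where a svctm-style number comes from. It is not the sysstat source, just the same calculation applied to the per-device counters in /proc/diskstats (the classic field layout is assumed: reads completed, writes completed, and milliseconds spent doing I/Os); the same busy-time counter is also where %util, discussed below, comes from.

# Minimal sketch (not sysstat itself): rebuild svctm- and %util-style numbers
# from the per-device counters in /proc/diskstats.
import time

def read_counters(device):
    # Returns (IOs completed, milliseconds the device was busy) for one device.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                reads = int(fields[3])       # reads completed
                writes = int(fields[7])      # writes completed
                busy_ms = int(fields[12])    # time spent doing I/Os (ms)
                return reads + writes, busy_ms
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

def sample(device, interval=1.0):
    ios0, busy0 = read_counters(device)
    time.sleep(interval)
    ios1, busy1 = read_counters(device)
    ios = ios1 - ios0
    busy_ms = busy1 - busy0
    util_pct = 100.0 * busy_ms / (interval * 1000.0)  # %util: share of wall time busy
    svctm_ms = busy_ms / ios if ios else 0.0          # "service time": busy time / IO count
    return ios, util_pct, svctm_ms

if __name__ == "__main__":
    ios, util, svctm = sample("sdd")
    print(f"{ios} IOs, %util {util:.1f}, svctm {svctm:.2f} ms")

Nothing in that calculation knows how many requests were in flight at once, which is exactly the problem.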

svctm
0.06
0.01

Taken literally, this means the device was responding in 60µs when under little load, and 10µs when under a lot of load. That seems unlikely, and indeed the load generator fio tells us it's not true. So what's going on?
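Before answering, note that svctm can be rebuilt from the other columns in the same row: it is just the busy time implied by %util, divided by the IO count. A quick check with the numbers from the table above:

# svctm is derivable from %util and r/s: busy milliseconds per second / IOs per second.
for util_pct, iops in [(78.40, 13823.00), (100.00, 72914.67)]:
    busy_ms_per_second = util_pct * 10.0   # x% of one second is 10*x milliseconds
    print(f"svctm ~ {busy_ms_per_second / iops:.2f} ms")
# Prints 0.06 and 0.01 -- the table values -- while saying nothing about per-request latency.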

Magnetic hard drives are serial beings. They have a few tiny heads, ganged together, that move over a spinning platter to a single location where they do some IO. Once the IO is done, and no sooner, they move on. Over the years, they've gathered some shiny capabilities like NCQ and TCQ that make them appear parallel (mostly to allow reordering), but they're still the same old horse-and-carriage sequential devices they've always been. Modern hard drives expose some level of concurrency, but no true parallelism. SSDs, like the Samsung 840 EVO in this test, are different. SSDs can and do perform operations in parallel. In fact, the only way to achieve their peak performance is to offer them parallel work to do.

While SSDs vary in the details of their internal construction, most have the ability to access multiple flash packages (groups of chips) at a time. This is a big deal for SSD performance. Individual flash chips don't actually have great bandwidth, so the ability to combine the performance of many chips is essential. The chips are completely independent, and because the controller doesn't need to block on requests to any one chip, the drive is truly doing multiple things at once. Without a single electromechanical head as a bottleneck, even a single SSD can have a fairly large amount of internal parallelism. The high-level architecture is illustrated in a diagram from Agrawal et al.

If Jane does one thing at a time, and doing ten things takes Jane 20 minutes, each thing has taken Jane an average of two minutes. The mean time between asking Jane to do something and Jane completing it is two minutes. Alice, like Jane, can do ten things in twenty minutes, but she works on multiple things in parallel. Looking only at Alice's throughput (the number of things she gets done in a period of time), what can we say about Alice's latency (the amount of time it takes her from start to finish for a task)? Very little. We know it's 20 minutes or less, since everything finished inside that window. If she's busy the whole time, we know the average is 2 minutes or more. That's it.
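That "2 minutes or more" bound is Little's law in disguise: average concurrency equals throughput times average latency. The same law ties iostat's columns together, which makes a handy cross-check. Here is a small sketch using the two samples above, treating avgqu-sz as the average number of requests outstanding (which is what iostat derives it from):

# Little's law: requests in flight = throughput (IO/s) * average latency (s).
samples = [
    {"iops": 13823.00, "r_await_ms": 0.06, "avgqu_sz": 0.78},
    {"iops": 72914.67, "r_await_ms": 0.21, "avgqu_sz": 15.27},
]
for s in samples:
    in_flight = s["iops"] * s["r_await_ms"] / 1000.0
    print(f"predicted in flight {in_flight:.2f} vs reported avgqu-sz {s['avgqu_sz']:.2f}")
# ~0.83 vs 0.78 and ~15.31 vs 15.27: throughput alone says little about latency,
# but throughput, latency, and concurrency are always linked.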

Let's go back to that iostat output:

Device:         rrqm/s   wrqm/s      r/s     w/s     rkB/s   wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     0.00 13823.00    0.00  55292.00    0.00     8.00     0.78    0.06    0.06    0.00   0.06  78.40

Device:         rrqm/s   wrqm/s      r/s     w/s     rkB/s   wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     0.00 72914.67    0.00 291658.67    0.00     8.00    15.27    0.21    0.21    0.00   0.01 100.00

What's going on with %util, then? The first line is telling us that the drive is 78.4% utilized at 13823 reads per second. The second line is telling us that the drive is 100% utilized at 72914 reads per second. If it takes 14 thousand to fill it to 78.4%, wouldn't we expect it to only be able to do 18 thousand in total? How is it doing 73 thousand?
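Written out, the back-of-the-envelope extrapolation looks like this, using the numbers from the table above:

# Naive extrapolation: if 13,823 reads/s is 78.4% busy, a serial device
# should top out around 13,823 / 0.784 reads/s.
predicted_ceiling = 13823.00 / 0.784
observed = 72914.67
print(f"predicted ceiling: {predicted_ceiling:,.0f} IOPS")                       # ~17,632
print(f"observed:          {observed:,.0f} IOPS ({observed / predicted_ceiling:.1f}x)")  # ~4.1x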

The problem here is the same: parallelism. When iostat says %util, it means "Percentage of CPU time during which I/O requests were issued to the device", that is, the percentage of the time the drive was doing at least one thing. If it's doing 16 things at the same time, that doesn't change. Once again, this calculation works just fine for magnetic drives (and Jane), which do only one thing at a time: the amount of time they spend doing that one thing is a great indication of how busy they really are. SSDs (and RAIDs, and Alice), on the other hand, can do multiple things at once, so the percentage of time they're doing at least one thing is a poor predictor of their performance potential. The iostat man page does provide a warning:

Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits.

As a measure of general IO busyness, %util is fairly handy, but as an indication of how much the system is doing compared to what it can do, it's terrible. iostat's svctm has even fewer redeeming qualities: for most modern storage systems and workloads it is simply misleading. On SSD-based storage, both of these fields are likely to mislead more than inform, and should be treated with extreme care.

%util is calculated as the time spent servicing IO requests divided by the length of the iostat interval; it is the percentage of time during which the device was handling IO requests. An SSD services IO requests in parallel, so a %util above 90 does not mean the drive has only 10% of its capability left.

svctm is calculated as the time the device was busy divided by the number of IOs completed during the iostat interval; svctm was originally meant to be the time an IO request spends being serviced by the device. But the statistics in /proc/diskstats are collected at the block layer, so the true queueing time and service time of an IO cannot be recovered.

Original post: http://brooker.co.za/blog/2014/07/04/iostat-pct.html
