Analyzing btt output turned out to be quite difficult. A colleague and I spent a lot of time searching the web for the meaning of every field in btt's output in order to make better sense of the bio statistics, but most articles online simply copy from one another; it is the same handful of posts over and over.

This article draws on the following sources:

1. The official btt documentation: http://git.kernel.dk/cgit/blktrace/tree/btt/doc/btt.tex

On CentOS 7.4, line 32 of /usr/share/doc/blktrace-1.0.5/README points to http://git.kernel.dk/

Much of the btt analysis below is based on btt.tex.

2. The latest blktrace releases, available over HTTP or git:

http://brick.kernel.dk/snaps/

  1. dd if=/dev/zero of=/dev/sdb bs=1M count=200
    mount -t debugfs none /sys/kernel/debug/
    blktrace -d /dev/sdb
  2. blkparse -i sdb -d sdb.blktrace.bin
  3. btt -i sdb.blktrace.bin

dd sync/flush related options worth experimenting with (a sketch follows this list):

  1. conv=fdatasync
  2. conv=fsync
  3. conv=sync
  4. oflag=dsync
  5. oflag=sync
  6. oflag=direct

By default dd does buffered IO and the dirty pages are written back by pdflush (the kernel writeback threads). Add these options yourself and look at the analysis results again; they may surprise you.
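To make the comparison concrete, here is a minimal sketch of how one might trace a buffered run and an O_DIRECT run back to back and print the btt summary for each. The device name /dev/sdb and the trace file names are placeholders, and the script overwrites the target device, so only run something like this against a disposable test disk:

    #!/bin/bash
    # Sketch: compare buffered vs. O_DIRECT dd runs under blktrace/btt.
    # WARNING: this writes directly to $DEV -- use a disposable test disk.
    DEV=/dev/sdb                                         # placeholder device
    mount -t debugfs none /sys/kernel/debug 2>/dev/null  # no-op if already mounted

    for mode in buffered direct; do
        FLAGS=""
        [ "$mode" = "direct" ] && FLAGS="oflag=direct"
        blktrace -d "$DEV" -o "trace_$mode" &            # start tracing in the background
        TRACE_PID=$!
        dd if=/dev/zero of="$DEV" bs=1M count=200 $FLAGS
        sync
        kill -INT "$TRACE_PID"                           # stop blktrace cleanly
        wait "$TRACE_PID"
        blkparse -i "trace_$mode" -d "trace_$mode.bin" > /dev/null
        echo "=== btt summary ($mode) ==="
        btt -i "trace_$mode.bin" | sed -n '/All Devices/,/Device Overhead/p'
    done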

The btt output looks like this:

  1. btt -i sdb.blktrace.bin | grep -A 100 "All Devices"
  2.  
  3. ==================== All Devices ====================
  4.  
  5. ALL MIN AVG MAX N
  6. --------------- ------------- ------------- ------------- -----------
  7.  
  8. Q2Q 0.000000479 0.007676882 5.711474239
  9. Q2A 0.000080329 0.000887075 0.029918892
  10. Q2G 0.000000441 0.000296168 0.029764843
  11. S2G 0.014563643 0.021455586 0.029762593
  12. G2I 0.000000837 0.001079771 0.017816477
  13. Q2M 0.000000134 0.000000299 0.000001491
  14. I2D 0.000019345 0.055941845 0.106661743
  15. M2D 0.000029110 0.000044429 0.000053098
  16. D2C 0.000176810 0.023186230 0.053523069
  17. Q2C 0.000327039 0.079803764 0.146987975
  18.  
  19. ==================== Device Overhead ====================
  20.  
  21. DEV | Q2G G2I Q2M I2D D2C
  22. ---------- | --------- --------- --------- --------- ---------
  23. ( , ) | 0.3666% 1.3365% 0.0000% 69.2422% 29.0541%
  24. ---------- | --------- --------- --------- --------- ---------
  25. Overall | 0.3666% 1.3365% 0.0000% 69.2422% 29.0541%
  26.  
  27. ==================== Device Merge Information ====================
  28.  
  29. DEV | #Q #D Ratio | BLKmin BLKavg BLKmax Total
  30. ---------- | -------- -------- ------- | -------- -------- -------- --------
  31. ( , ) | 1.0 |
  32.  
  33. ==================== Device Q2Q Seek Information ====================
  34.  
  35. DEV | NSEEKS MEAN MEDIAN | MODE
  36. ---------- | --------------- --------------- --------------- | ---------------
  37. ( , ) | 1667079.8 | ()
  38. ---------- | --------------- --------------- --------------- | ---------------
  39. Overall | NSEEKS MEAN MEDIAN | MODE
  40. Average | 1667079.8 | ()
  41.  
  42. ==================== Device D2D Seek Information ====================
  43.  
  44. DEV | NSEEKS MEAN MEDIAN | MODE
  45. ---------- | --------------- --------------- --------------- | ---------------
  46. ( , ) | 1687714.7 | ()
  47. ---------- | --------------- --------------- --------------- | ---------------
  48. Overall | NSEEKS MEAN MEDIAN | MODE
  49. Average | 1687714.7 | ()
  50.  
  51. ==================== Plug Information ====================
  52.  
  53. DEV | # Plugs # Timer Us | % Time Q Plugged
  54. ---------- | ---------- ---------- | ----------------
  55. ( , ) | ( ) | 1.348107347%
  56.  
  57. DEV | IOs/Unp IOs/Unp(to)
  58. ---------- | ---------- ----------
  59. ( , ) | 0.0 0.0
  60. ( , ) | 14.8 10.7
  61. ---------- | ---------- ----------
  62. Overall | IOs/Unp IOs/Unp(to)
  63. Average | 14.8 10.7
  64.  
  65. ==================== Active Requests At Q Information ====================
  66.  
  67. DEV | Avg Reqs @ Q
  68. ---------- | -------------
  69. ( , ) | 71.0
  70.  
  71. ==================== I/O Active Period Information ====================
  72.  
  73. DEV | # Live Avg. Act Avg. !Act % Live
  74. ---------- | ---------- ------------- ------------- ------
  75. ( , ) | 0.000000000 0.000000000 0.00
  76. ( , ) | 0.134758122 1.351199997 10.09
  77. ---------- | ---------- ------------- ------------- ------
  78. Total Sys | 0.134758122 1.351199997 10.09
  79.  
  80. # Total System
  81. # Total System : q activity
  82. 0.000001613 0.0
  83. 0.000001613 0.4
  84. 0.124339757 0.4
  85. 0.124339757 0.0
  86. 0.301083076 0.0
  87. 0.301083076 0.4
  88. 0.400630592 0.4
  89. 0.400630592 0.0
  90. 0.573512380 0.0
  91. 0.573512380 0.4
  92. 0.680941099 0.4
  93. 0.680941099 0.0
  94. 0.855548103 0.0
  95. 0.855548103 0.4
  96. 0.954668973 0.4
  97. 0.954668973 0.0
  98. 4.983438230 0.0
  99. 4.983438230 0.4
  100. 4.983900171 0.4
  101. 4.983900171 0.0
  102. 5.964745346 0.0
  103. 5.964745346 0.4
  104. 6.210325030 0.4
  105. 6.210325030 0.0
  106. 11.921799269 0.0
  107. 11.921799269 0.4
  108. 11.922199421 0.4
  109. 11.922199421 0.0
  110.  
  111. # Total System : c activity
  112. 0.002123105 0.5
  113. 0.002123105 0.9
  114. 1.057812876 0.9
  115. 1.057812876 0.5
  116. 4.983851980 0.5
  117. 4.983851980 0.9
  118. 5.080603626 0.9
  119. 5.080603626 0.5
  120. 5.966766526 0.5
  121. 5.966766526 0.9
  122. 6.311149346 0.9
  123. 6.311149346 0.5
  124. 11.922139935 0.5
  125. 11.922139935 0.9
  126. 11.922139935 0.9
  127. 11.922139935 0.5
  128. 12.022423074 0.5
  129. 12.022423074 0.9
  130. 12.022423074 0.9
  131. 12.022423074 0.5
  132.  
  133. # Per device
  134. # , : q activity
  135. 0.000001613 1.0
  136. 0.000001613 1.4
  137. 0.124339757 1.4
  138. 0.124339757 1.0
  139. 0.301083076 1.0
  140. 0.301083076 1.4
  141. 0.400630592 1.4
  142. 0.400630592 1.0
  143. 0.573512380 1.0
  144. 0.573512380 1.4
  145. 0.680941099 1.4
  146. 0.680941099 1.0
  147. 0.855548103 1.0
  148. 0.855548103 1.4
  149. 0.954668973 1.4
  150. 0.954668973 1.0
  151. 4.983438230 1.0
  152. 4.983438230 1.4
  153. 4.983900171 1.4
  154. 4.983900171 1.0
  155. 5.964745346 1.0
  156. 5.964745346 1.4
  157. 6.210325030 1.4
  158. 6.210325030 1.0
  159. 11.921799269 1.0
  160. 11.921799269 1.4
  161. 11.922199421 1.4
  162. 11.922199421 1.0
  163.  
  164. # , : c activity
  165. 0.002123105 1.5
  166. 0.002123105 1.9
  167. 1.057812876 1.9
  168. 1.057812876 1.5
  169. 4.983851980 1.5
  170. 4.983851980 1.9
  171. 5.080603626 1.9
  172. 5.080603626 1.5
  173. 5.966766526 1.5
  174. 5.966766526 1.9
  175. 6.311149346 1.9
  176. 6.311149346 1.5
  177. 11.922139935 1.5
  178. 11.922139935 1.9
  179. 11.922139935 1.9
  180. 11.922139935 1.5
  181. 12.022423074 1.5
  182. 12.022423074 1.9
  183. 12.022423074 1.9
  184. 12.022423074 1.5
  185.  
  186. # Per process
  187. # blktrace : q activity
  188.  
  189. # blktrace : c activity
  190. 0.635480135 2.5
  191. 0.635480135 2.9
  192. 0.719490098 2.9
  193. 0.719490098 2.5
  194. 0.923163074 2.5
  195. 0.923163074 2.9
  196. 0.923163074 2.9
  197. 0.923163074 2.5
  198.  
  199. # dd : q activity
  200.  
  201. # dd : c activity
  202. 0.644083682 3.5
  203. 0.644083682 3.9
  204. 0.656542273 3.9
  205. 0.656542273 3.5
  206. 0.878359453 3.5
  207. 0.878359453 3.9
  208. 0.901143153 3.9
  209. 0.901143153 3.5
  210.  
  211. # jbd2 : q activity
  212. 4.983438230 4.0
  213. 4.983438230 4.4
  214. 4.983900171 4.4
  215. 4.983900171 4.0
  216. 11.921799269 4.0
  217. 11.921799269 4.4
  218. 11.922199421 4.4
  219. 11.922199421 4.0
  220.  
  221. # jbd2 : c activity
  222.  
  223. # ksoftirqd : q activity
  224.  
  225. # ksoftirqd : c activity
  226. 0.113965122 5.5
  227. 0.113965122 5.9
  228. 0.198731512 5.9
  229. 0.198731512 5.5
  230. 0.307666215 5.5
  231. 0.307666215 5.9
  232. 0.409794955 5.9
  233. 0.409794955 5.5
  234. 0.616305736 5.5
  235. 0.616305736 5.9
  236. 0.676618319 5.9
  237. 0.676618319 5.5
  238. 0.912045716 5.5
  239. 0.912045716 5.9
  240. 0.952781433 5.9
  241. 0.952781433 5.5
  242. 6.020433404 5.5
  243. 6.020433404 5.9
  244. 6.061857376 5.9
  245. 6.061857376 5.5
  246.  
  247. # kworker : q activity
  248. 0.000001613 6.0
  249. 0.000001613 6.4
  250. 0.124339757 6.4
  251. 0.124339757 6.0
  252. 0.301083076 6.0
  253. 0.301083076 6.4
  254. 0.400630592 6.4
  255. 0.400630592 6.0
  256. 0.573512380 6.0
  257. 0.573512380 6.4
  258. 0.680941099 6.4
  259. 0.680941099 6.0
  260. 0.855548103 6.0
  261. 0.855548103 6.4
  262. 0.954668973 6.4
  263. 0.954668973 6.0
  264. 5.964745346 6.0
  265. 5.964745346 6.4
  266. 6.210325030 6.4
  267. 6.210325030 6.0
  268.  
  269. # kworker : c activity
  270. 0.002123105 6.5
  271. 0.002123105 6.9
  272. 0.123954697 6.9
  273. 0.123954697 6.5
  274. 0.303307770 6.5
  275. 0.303307770 6.9
  276. 0.398885246 6.9
  277. 0.398885246 6.5
  278. 0.578128551 6.5
  279. 0.578128551 6.9
  280. 0.680640684 6.9
  281. 0.680640684 6.5
  282. 0.857973954 6.5
  283. 0.857973954 6.9
  284. 0.866998251 6.9
  285. 0.866998251 6.5
  286. 5.966766526 6.5
  287. 5.966766526 6.9
  288. 6.189072144 6.9
  289. 6.189072144 6.5
  290.  
  291. # pid000000000 : q activity
  292.  
  293. # pid000000000 : c activity
  294. 0.028265966 7.5
  295. 0.028265966 7.9
  296. 1.057812876 7.9
  297. 1.057812876 7.5
  298. 4.983851980 7.5
  299. 4.983851980 7.9
  300. 5.080603626 7.9
  301. 5.080603626 7.5
  302. 6.014711621 7.5
  303. 6.014711621 7.9
  304. 6.311149346 7.9
  305. 6.311149346 7.5
  306. 11.922139935 7.5
  307. 11.922139935 7.9
  308. 11.922139935 7.9
  309. 11.922139935 7.5
  310. 12.022423074 7.5
  311. 12.022423074 7.9
  312. 12.022423074 7.9
  313. 12.022423074 7.5
  314.  
  315. # rcu_sched : q activity
  316.  
  317. # rcu_sched : c activity
  318. 0.916222959 8.5
  319. 0.916222959 8.9
  320. 0.916659526 8.9
  321. 0.916659526 8.5

Detailed analysis of each section:

  1. ALL MIN AVG MAX N
  2. --------------- ------------- ------------- ------------- -----------
  3.  
  4. Q2Q 0.000000479 0.007676882 5.711474239
  5. Q2A 0.000080329 0.000887075 0.029918892
  6. Q2G 0.000000441 0.000296168 0.029764843 1535
  7. S2G 0.014563643 0.021455586 0.029762593
  8. G2I 0.000000837 0.001079771 0.017816477 1535
  9. Q2M 0.000000134 0.000000299 0.000001491
  10. I2D 0.000019345 0.055941845 0.106661743 1535
  11. M2D 0.000029110 0.000044429 0.000053098
  12. D2C 0.000176810 0.023186230 0.053523069 1554
  13. Q2C 0.000327039 0.079803764 0.146987975 1554

The rows we care about are the highlighted ones (shown in red in the original post, i.e. the rows that carry a count in the N column, such as Q2G, G2I, I2D, D2C and Q2C). All times are in seconds, and N is the number of samples.
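If you want to track these numbers programmatically rather than read them by eye, a small sketch like the following pulls out the average D2C and Q2C latencies (it assumes the "ALL MIN AVG MAX N" layout shown above, so the average is the third field):

    # Sketch: extract the AVG column for D2C and Q2C from a btt report.
    # Assumes the "ALL  MIN  AVG  MAX  N" layout above, i.e. AVG is field 3.
    btt -i sdb.blktrace.bin |
    awk '$1 == "D2C" || $1 == "Q2C" { printf "%s avg = %s s\n", $1, $3 }'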

  1. ==================== Device Overhead ====================
  2.  
  3. DEV | Q2G G2I Q2M I2D D2C
  4. ---------- | --------- --------- --------- --------- ---------
  5. ( , ) | 0.3666% 1.3365% 0.0000% 69.2422% 29.0541%
  6. ---------- | --------- --------- --------- --------- ---------
  7. Overall | 0.3666% 1.3365% 0.0000% 69.2422% 29.0541%

In this test, I2D (the time an IO request spends in the IO scheduler) accounts for roughly 69% of the total and is by far the most expensive stage, followed by D2C (the time the request spends in the hardware device) at roughly 29%. Each column is that stage's share of the total Q2C time, so the five columns should add up to roughly 100%.
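A quick sanity check of that claim, using the Overall numbers above:

    # Sketch: the Overall overhead columns should sum to ~100% of Q2C time.
    echo "0.3666 1.3365 0.0000 69.2422 29.0541" |
    awk '{ for (i = 1; i <= NF; i++) s += $i; printf "total = %.4f%%\n", s }'
    # prints: total = 99.9994%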

  1. ==================== Device Merge Information ====================
  2.  
  3. DEV | #Q #D Ratio | BLKmin BLKavg BLKmax Total
  4. ---------- | -------- -------- ------- | -------- -------- -------- --------
  5. ( , ) | 1.0 |

btt.tex explains this section as follows:

A key measurement when making changes in the system (software \emph{or} hardware) is to understand the block IO layer ends up merging incoming requests into fewer, but larger, IOs to the underlying driver. In this section, we show the number of incoming requests (Q), the number of issued requests (D) and the resultant ratio. We also provide values for the minimum, average and maximum IOs generated.

This is the request-merge information: #Q is the number of incoming IO requests and #D is the number of requests actually issued to the driver after merging. A smaller #D relative to #Q means larger IOs and a higher merge ratio, and a higher ratio is better. The #Q and #D columns came out empty in this capture, so a sketch of the arithmetic follows.
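The numbers below are made up purely to illustrate how the ratio is read:

    # Sketch: merge ratio = incoming requests (#Q) / issued requests (#D).
    # Q and D below are hypothetical values, just to show the arithmetic.
    Q=1600; D=400
    awk -v q="$Q" -v d="$D" \
        'BEGIN { printf "ratio = %.1f (on average ~%.0f queued requests per issued IO)\n", q/d, q/d }'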

  1. ==================== Device Q2Q Seek Information ====================
  2.  
  3. DEV | NSEEKS MEAN MEDIAN | MODE
  4. ---------- | --------------- --------------- --------------- | ---------------
  5. ( , ) | 1667079.8 | ()
  6. ---------- | --------------- --------------- --------------- | ---------------
  7. Overall | NSEEKS MEAN MEDIAN | MODE
  8. Average | 1667079.8 | ()
  9.  
  10. ==================== Device D2D Seek Information ====================
  11.  
  12. DEV | NSEEKS MEAN MEDIAN | MODE
  13. ---------- | --------------- --------------- --------------- | ---------------
  14. ( , ) | 1687714.7 | ()
  15. ---------- | --------------- --------------- --------------- | ---------------
  16. Overall | NSEEKS MEAN MEDIAN | MODE
  17. Average | 1687714.7 | ()

The Q2Q and D2D seek sections report the seek distance, in sectors, between successive IOs as seen at queue time and at dispatch time respectively. The activity sections that follow list, for the whole system, for each device, and for each process, the time ranges during which queue (q) and completion (c) events were observed:

  1. # Total System
  2. # Total System : q activity
  3. # Total System : c activity
  4.  
  5. # Per device
  6. # , : q activity
  7. # , : c activity
  8.  
  9. # Per process
  10. # blktrace : q activity
  11. # blktrace : c activity
  12. # dd : q activity
  13. # dd : c activity
  14. # jbd2 : q activity
  15. # jbd2 : c activity
  16. # ksoftirqd : q activity
  17. # ksoftirqd : c activity
  18. # kworker : q activity
  19. # kworker : c activity
  20. # pid000000000 : q activity
  21. # pid000000000 : c activity
  22. # rcu_sched : q activity
  23. # rcu_sched : c activity

kworker: a kernel worker thread that runs deferred work from the kernel's workqueues; in this trace it is most likely writing back the dirty page-cache pages produced by the buffered dd run.
jbd2 (journaling block device 2): the kernel thread that implements filesystem journaling; the journal is used to guarantee the integrity of on-disk data.
rcu_sched: a kernel thread belonging to RCU (Read-Copy Update).
ksoftirqd: the per-CPU softirq handling thread; block IO completions are processed in softirq context, which is why it shows up with c activity.

For the definition of "activity", see http://git.kernel.dk/cgit/blktrace/tree/btt/output.c

Part of the relevant code:

    int output_regions(FILE *ofp, char *header, struct region_info *reg,
                       float base)
    {
        /* nothing to report for this header if there are no q or c ranges */
        if (list_len(&reg->qranges) == 0 && list_len(&reg->cranges) == 0)
            return 0;

        fprintf(ofp, "# %16s : q activity\n", header);
        __output_ranges(ofp, &reg->qranges, base);        /* q band plotted at y = base */
        fprintf(ofp, "\n");

        fprintf(ofp, "# %16s : c activity\n", header);
        __output_ranges(ofp, &reg->cranges, base + 0.5);  /* c band offset by 0.5 */
        fprintf(ofp, "\n");

        return 1;
    }
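Each "... : q activity" / "... : c activity" block in the output is therefore just two columns: a timestamp in seconds and a y value (base, base + 0.4, base + 0.5, base + 0.9) used to stack the bands, so the data can be plotted directly. A minimal sketch, assuming the activity lines for one process have been saved to a file called dd_activity.dat (a hypothetical name):

    # Sketch: plot one activity band with gnuplot (the .dat file name is hypothetical).
    {
        echo "set terminal png size 800,300"
        echo "set output 'dd_activity.png'"
        echo "set xlabel 'time (s)'"
        echo "set ylabel 'activity level'"
        echo "plot 'dd_activity.dat' using 1:2 with lines title 'dd c activity'"
    } | gnuplot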

References:

http://git.kernel.dk/cgit/blktrace/tree/btt/doc/btt.tex
