The difference between the Linux filesystem cache parameters dirty_ratio and dirty_background_ratio
Tags: linux, filesystem cache, cache, dirty_ratio, dirty_background_ratio |
Category: Professional Learning |
Over the past couple of days, while tuning database performance, I needed to reduce the impact of the operating system's file cache on the database, so I looked into ways of shrinking the filesystem cache. One of them is to adjust the /proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio parameters. I read quite a few blog posts about them but could never quite work out how the two parameters differ; it wasn't until I read the English post below that the distinction became clear.
The original post is attached below:
Better Linux Disk Caching & Performance with vm.dirty_ratio & vm.dirty_background_ratio
by BOB PLANKERS on DECEMBER 22, 2013
in BEST PRACTICES, CLOUD, SYSTEM ADMINISTRATION, VIRTUALIZATION
This is post #16 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.”
In previous posts on vm.swappiness and using RAM disks we talked about how the memory on a Linux guest is used for the OS itself (the kernel, buffers, etc.), applications, and also for file cache. File caching is an important performance improvement, and read caching is a clear win in most cases, balanced against applications using the RAM directly. Write caching is trickier. The Linux kernel stages disk writes into cache, and over time asynchronously flushes them to disk. This has a nice effect of speeding disk I/O but it is risky. When data isn’t written to disk there is an increased chance of losing it.
There is also the chance that a lot of I/O will overwhelm the cache, too. Ever written a lot of data to disk all at once, and seen large pauses on the system while it tries to deal with all that data? Those pauses are a result of the cache deciding that there’s too much data to be written asynchronously (as a non-blocking background operation, letting the application process continue), and switching to writing synchronously (blocking and making the process wait until the I/O is committed to disk). Of course, a filesystem also has to preserve write order, so when it starts writing synchronously it first has to destage the cache. Hence the long pause.
The nice thing is that these are controllable options, and based on your workloads & data you can decide how you want to set them up. Let’s take a look:
$ sysctl -a | grep dirty
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000
vm.dirty_background_ratio is the percentage of system memory that can be filled with “dirty” pages — memory pages that still need to be written to disk — before the pdflush/flush/kdmflush background processes kick in to write it to disk. My example is 10%, so if my virtual server has 32 GB of memory that’s 3.2 GB of data that can be sitting in RAM before something is done.
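As a quick sanity check you can compute that threshold yourself from /proc/meminfo. This is a minimal sketch, assuming the simple model above where the ratio applies to total RAM (newer kernels actually calculate it against free plus reclaimable memory, so treat the result as an approximation):
$ awk '/^MemTotal/ { printf "background threshold ~= %.1f GB\n", $2 * 0.10 / 1048576 }' /proc/meminfo
background threshold ~= 3.2 GB
(The output shown assumes a 32 GB machine, matching the example above.)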
vm.dirty_ratio is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk. When the system gets to this point all new I/O blocks until dirty pages have been written to disk. This is often the source of long I/O pauses, but is a safeguard against too much data being cached unsafely in memory.
vm.dirty_background_bytes and vm.dirty_bytes are another way to specify these parameters. If you set the _bytes version the _ratio version will become 0, and vice-versa.
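You can see that behavior by setting the byte-based knob and reading back its ratio counterpart; the 256 MB value here is arbitrary, purely for illustration:
$ sudo sysctl -w vm.dirty_background_bytes=268435456
vm.dirty_background_bytes = 268435456
$ sysctl vm.dirty_background_ratio
vm.dirty_background_ratio = 0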
vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it’s 30 seconds. When the pdflush/flush/kdmflush processes kick in they will check to see how old a dirty page is, and if it’s older than this value it’ll be written asynchronously to disk. Since holding a dirty page in memory is unsafe this is also a safeguard against data loss.
vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check to see if work needs to be done.
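Both values are expressed in hundredths of a second. If, say, you wanted dirty pages to expire after 10 seconds and the flusher threads to wake every second, that would look like the following (values purely illustrative, not a recommendation):
$ sudo sysctl -w vm.dirty_expire_centisecs=1000
$ sudo sysctl -w vm.dirty_writeback_centisecs=100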
You can also see statistics on the page cache in /proc/vmstat:
$ cat /proc/vmstat | egrep "dirty|writeback"
nr_dirty 878
nr_writeback 0
nr_writeback_temp 0
In my case I have 878 dirty pages waiting to be written to disk.
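To watch these counters move while you generate I/O, something like the following works; watch’s -d flag highlights the values that changed between refreshes:
$ watch -d 'grep -E "dirty|writeback" /proc/vmstat'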
Approach 1: Decreasing the Cache
As with most things in the computer world, how you adjust these depends on what you’re trying to do. In many cases we have fast disk subsystems with their own big, battery-backed NVRAM caches, so keeping things in the OS page cache is risky. Let’s try to send I/O to the array in a more timely fashion and reduce the chance our local OS will, to borrow a phrase from the service industry, be “in the weeds.” To do this we lower vm.dirty_background_ratio and vm.dirty_ratio by adding new numbers to /etc/sysctl.conf and reloading with “sysctl -p”:
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
This is a typical approach on virtual machines, as well as Linux-based hypervisors. I wouldn’t suggest setting these parameters to zero, as some background I/O is nice to decouple application performance from short periods of higher latency on your disk array & SAN (“spikes”).
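If you’d rather experiment before making the change permanent in /etc/sysctl.conf, sysctl -w applies a value at runtime only (it will not survive a reboot), which makes it handy for testing:
$ sudo sysctl -w vm.dirty_background_ratio=5
$ sudo sysctl -w vm.dirty_ratio=10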
Approach 2: Increasing the Cache
There are scenarios where raising the cache dramatically has positive effects on performance. These situations are where the data contained on a Linux guest isn’t critical and can be lost, and usually where an application is writing to the same files repeatedly or in repeatable bursts. In theory, by allowing more dirty pages to exist in memory you’ll rewrite the same blocks over and over in cache, and just need to do one write every so often to the actual disk. To do this we raise the parameters:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
Sometimes folks also increase the vm.dirty_expire_centisecs parameter to allow more time in cache. Beyond the increased risk of data loss, you also run the risk of long I/O pauses if that cache gets full and needs to destage, because on large VMs there will be a lot of data in cache.
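Put together, a cache-heavy setup might look something like this; the numbers are illustrative and should be validated against your own workload:
$ sudo sysctl -w vm.dirty_background_ratio=50
$ sudo sysctl -w vm.dirty_ratio=80
$ sudo sysctl -w vm.dirty_expire_centisecs=12000   # two minutes in cache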
Approach 3: Both Ways
There are also scenarios where a system has to deal with infrequent, bursty traffic to slow disk (batch jobs at the top of the hour, midnight, writing to an SD card on a Raspberry Pi, etc.). In that case an approach might be to allow all that write I/O to be deposited in the cache so that the background flush operations can deal with it asynchronously over time:
vm.dirty_background_ratio = 5
vm.dirty_ratio = 80
Here the background processes will start writing right away when dirty data hits that 5% threshold, but the system won’t force synchronous I/O until the cache gets to 80% full. From there you just size your system RAM and vm.dirty_ratio to be able to absorb all the written data. Again, there are tradeoffs with data consistency on disk, which translates into risk to data. Buy a UPS and make sure you can destage cache before the UPS runs out of power. :)
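One way to observe this burst-absorbing behavior is to write a chunk of data, watch nr_dirty climb, and then force a destage; the file path and size below are arbitrary test values:
$ dd if=/dev/zero of=/tmp/burst.dat bs=1M count=2048
$ grep nr_dirty /proc/vmstat
$ sync   # blocks until the cache is destaged; re-check nr_dirty afterwards
Because dd here doesn’t fsync, the data lands in the page cache first, which is exactly what the high vm.dirty_ratio is making room for.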
No matter the route you choose you should always be gathering hard data to support your changes and help you determine if you are improving things or making them worse. In this case you can get data from many different places, including the application itself, /proc/vmstat, /proc/meminfo, iostat, vmstat, and many of the things in /proc/sys/vm. Good luck!
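For example, running these side by side while you apply load gives a reasonable picture of writeback behavior (iostat ships in the sysstat package on most distributions):
$ vmstat 1
$ iostat -x 1
$ grep -E 'Dirty|Writeback' /proc/meminfo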
Reposted from:
http://blog.sina.com.cn/s/blog_448574810101k1va.html