Translated Redis documentation: Using Redis as an LRU cache
When Redis is used as a cache, it is often convenient to let it automatically evict old data as you add new data. This behavior is very well known in the community of developers, since it is the default behavior of the popular memcached system. LRU is actually only one of the supported eviction methods. This page covers the more general topic of the Redis maxmemory directive, which is used in order to limit the memory usage to a fixed amount, and it also covers in depth the LRU algorithm used by Redis, which is actually an approximation of exact LRU.
Maxmemory configuration directive
The maxmemory configuration directive is used in order to configure Redis to use a specified amount of memory for the data set. It is possible to set the configuration directive using the redis.conf file, or later at runtime using the CONFIG SET command.
For example, in order to configure a memory limit of 100 megabytes, the following directive can be used inside the redis.conf file:
maxmemory 100mb
Setting maxmemory to zero results in no memory limit. This is the default behavior for 64-bit systems, while 32-bit systems use an implicit memory limit of 3GB.
When the specified amount of memory is reached, it is possible to select among different behaviors, called policies. Redis can either return errors for commands that could result in more memory being used, or it can evict some old data in order to return back under the specified limit every time new data is added.
Eviction policies
The exact behavior Redis follows when the maxmemory limit is reached is configured using the maxmemory-policy configuration directive. The following policies are available:
- noeviction: return an error when the memory limit has been reached and the client tries to execute a command that could result in more memory being used (most write commands, but DEL and a few more exceptions).
- allkeys-lru: evict keys trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
- volatile-lru: evict keys trying to remove the less recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
- allkeys-random: evict random keys in order to make space for the new data added.
- volatile-random: evict random keys in order to make space for the new data added, but only evict keys with an expire set.
- volatile-ttl: in order to make space for the new data, evict only keys with an expire set, and try to evict keys with a shorter time to live (TTL) first.
The policies volatile-lru, volatile-random and volatile-ttl behave like noeviction if there are no keys matching the prerequisites for eviction.
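For example, a dedicated cache instance might combine the memory limit with an eviction policy in redis.conf (the 100mb value here is just an illustration):

maxmemory 100mb
maxmemory-policy allkeys-lru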
Picking the right eviction policy is important and depends on the access pattern of your application. However, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis INFO output in order to tune your setup.
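As a sketch of that tuning loop, the hit rate can be derived from the keyspace_hits and keyspace_misses counters that appear in the INFO output. The parser below assumes a plain "field:value" text like the one INFO returns; the sample values are made up for illustration:

```python
def hit_rate(info_text):
    """Compute the cache hit rate from Redis INFO 'Stats' counters.

    Expects lines like 'keyspace_hits:123' and 'keyspace_misses:45',
    as produced by the INFO command; '#' section headers are skipped."""
    stats = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            stats[key] = value
    hits = int(stats.get("keyspace_hits", 0))
    misses = int(stats.get("keyspace_misses", 0))
    total = hits + misses
    return hits / total if total else 0.0

# Example INFO excerpt (illustrative numbers, not real output):
sample = "# Stats\nkeyspace_hits:900\nkeyspace_misses:100\n"
print(hit_rate(sample))  # -> 0.9
```

A hit rate that drops after switching policies is a good hint that the new policy fits your access pattern worse.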
In general, as a rule of thumb:
- Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, when you expect that a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.
- Use allkeys-random if you have a cyclic access pattern where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements accessed with roughly the same probability).
- Use volatile-ttl if you want to be able to provide hints to Redis about what are good candidates for expiration, by using different TTL values when you create your cache objects.
The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance both for caching and for a set of persistent keys. However, it is usually a better idea to run two Redis instances to solve such a problem.
It is also worth noting that setting an expire on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since there is no need to set an expire for a key to be evicted under memory pressure.
How the eviction process works
It is important to understand that the eviction process works like this:
- A client runs a new command, resulting in more data added.
- Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
- A new command is executed, and so forth.
So we continuously cross the boundary of the memory limit by going over it, and then by evicting keys to return back under the limit. If a command results in a lot of memory being used (like a big set intersection stored into a new key), the memory limit can be surpassed by a noticeable amount for some time.
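The loop above can be sketched in a few lines of Python. This is a toy model with a made-up one-unit-per-key memory cost, not Redis' actual implementation; the allkeys-random policy is used here because it needs no access-time metadata:

```python
import random

MAXMEMORY = 100  # pretend each key/value pair costs 1 "unit" of memory

store = {}

def used_memory():
    return len(store)  # simplistic cost model: one unit per key

def set_key(key, value):
    store[key] = value                    # 1. the write happens first...
    while used_memory() > MAXMEMORY:      # 2. ...then the limit is checked
        victim = random.choice(list(store))
        del store[victim]                 # 3. evict until back under it

for i in range(150):
    set_key(f"key:{i}", "value")

print(used_memory())  # back at MAXMEMORY after every write returns
```

Note how the limit is briefly exceeded inside set_key before eviction brings usage back down, mirroring the "go over, then return under" behavior described above.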
Approximated LRU algorithm
The Redis LRU algorithm is not an exact implementation. This means that Redis is not able to pick the best candidate for eviction, that is, the key whose last access lies furthest in the past. Instead it will try to run an approximation of the LRU algorithm, by sampling a small number of keys, and evicting the one that is the best (with the oldest access time) among the sampled keys.
However, since Redis 3.0 (currently in beta at the time of writing) the algorithm was improved to also keep a pool of good candidates for eviction. This improved the performance of the algorithm, making it able to approximate the behavior of a real LRU algorithm more closely.
What is important about the Redis LRU algorithm is that you are able to tune the precision of the algorithm by changing the number of samples to check for every eviction. This parameter is controlled by the following configuration directive:
maxmemory-samples 5
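The sampling idea can be sketched as follows. This is a simplified model: last_access is a stand-in for the per-key idle-time clock Redis keeps internally, and SAMPLES mirrors the maxmemory-samples directive:

```python
import random

SAMPLES = 5  # corresponds to the maxmemory-samples directive

last_access = {}   # key -> logical timestamp of its last access
clock = 0

def touch(key):
    """Record an access to key at the current logical time."""
    global clock
    clock += 1
    last_access[key] = clock

def evict_one_approx_lru():
    """Evict the least recently used key among SAMPLES random keys.

    Unlike exact LRU, this never scans the whole keyspace: it only
    looks at a small random sample and evicts the oldest within it."""
    sample = random.sample(list(last_access), min(SAMPLES, len(last_access)))
    victim = min(sample, key=last_access.get)
    del last_access[victim]
    return victim

for i in range(100):
    touch(f"key:{i}")     # key:0 is now the oldest, key:99 the newest

victim = evict_one_approx_lru()
print(victim)  # usually an old key, though not necessarily key:0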
The reason why Redis does not use a true LRU implementation is because it costs more memory. However the approximation is virtually equivalent for the application using Redis. The following is a graphical comparison of how the LRU approximation used by Redis
compares with true LRU.
test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so that the first keys are the best candidates for eviction using an LRU algorithm. Later more 50% of keys are added, in
order to force half of the old keys to be evicted.
key从第一行到最后一行被訪问,那么第一个key是LUR算法中最好的逐出候选者。
之后有50%的key被加入,那么一半的旧key被逐出。
You can see three kind of dots in the graphs, forming three distinct bands.
在上图中你能够看见3个明显的差别:
- The light gray band are objects that were evicted.
- The gray band are objects that were not evicted.
- The green band are objects that were added.
a theoretical LRU implementation we expect that, among the old keys, the first half will be expired. The Redis LRU algorithm will instead only probabilistically expire
the older keys.
As you can see Redis 3.0 does a better job with 5 samples compared to Redis 2.8, however most objects that are among the latest accessed are still retained by Redis 2.8. Using a sample size of 10 in Redis 3.0 the
approximation is very close to the theoretical performance of Redis 3.0.
如你所见,3.0的工作比2.8更好。然而在2.8版本号中,大多数最新訪问对象的仍然保留。在3.0使用样品为10 时,性能很接近理论上的LRU算法。
Note that LRU is just a model to predict how likely a given key will be accessed in the future. Moreover, if your data access pattern closely resembles the power law, most of the accesses will be in the set of keys
that the LRU approximated algorithm will be able to handle well.
注意:LRU不过一个预測模式。给出的key非常可能在未来被訪问。此外,假设你的数据訪问模式类似于幂律(线性的)。大多数key都可能被訪问那么这个LRU算法的处理就是非常好的。
In simulations we found that using a power law access pattern, the difference between true LRU and Redis approximation were minimal or non-existent.
在实战中 。我们发现使用幂律(线性的)的訪问模式,在真正的LRU算法和Redis的LRU算法之间差异非常小或者不存在差异。
However you can raise the sample size to 10 at the cost of some additional CPU usage in order to closely approximate true LRU, and check if this makes a difference in your cache misses rate.
你能够提升样品大小配置到10。它将接近真正的LRU算法,而且有不同错过率。可是要消耗很多其它的CPU。
To experiment in production with different values for the sample size by using the CONFIG
command, is very simple.
SET maxmemory-samples <count>
在调试时使用不同的样品大小去调试很easy,使用命令CONFIG SET maxmemory-samples <count> 实现。
redis文档翻译_LRU缓存的更多相关文章
- Net分布式系统之五:C#使用Redis集群缓存
本文介绍系统缓存组件,采用NOSQL之Redis作为系统缓存层. 一.背景 系统考虑到高并发的使用场景.对于并发提交场景,通过上一章节介绍的RabbitMQ组件解决.对于系统高并发查询,为了提供性能减 ...
- Spring Boot使用redis做数据缓存
1 添加redis支持 在pom.xml中添加 <dependency> <groupId>org.springframework.boot</groupId> & ...
- C#使用Redis集群缓存
C#使用Redis集群缓存 本文介绍系统缓存组件,采用NOSQL之Redis作为系统缓存层. 一.背景 系统考虑到高并发的使用场景.对于并发提交场景,通过上一章节介绍的RabbitMQ组件解决.对于系 ...
- 在AspNetCore 中 使用Redis实现分布式缓存
AspNetCore 使用Redis实现分布式缓存 上一篇讲到了,Core的内置缓存:IMemoryCache,以及缓存的基础概念.本篇会进行一些概念上的补充. 本篇我们记录的内容是怎么在Core中使 ...
- SpringMVC + MyBatis + Mysql + Redis(作为二级缓存) 配置
2016年03月03日 10:37:47 标签: mysql / redis / mybatis / spring mvc / spring 33805 项目环境: 在SpringMVC + MyBa ...
- redis哈希缓存数据表
redis哈希缓存数据表 REDIS HASH可以用来缓存数据表的数据,以后可以从REDIS内存数据库中读取数据. 从内存中取数,无疑是很快的. var FRedis: IRedisClient; F ...
- redis删除单个key和多个key,ssdb会落地导致重启redis无法清除缓存
redis删除单个key和多个key,ssdb会落地导致重启redis无法清除缓存,需要针对单个key进行删除 删除单个:del key 删除多个:redis-cli -a pass(密码) keys ...
- MySQL与Redis实现二级缓存
redis简介 Redis 是完全开源免费的,遵守BSD协议,是一个高性能的key-value数据库 Redis 与其他 key - value 缓存产品有以下三个特点: Redis支持数据的持久化, ...
- Redis 集群缓存测试要点--关于 线上 token 失效 BUG 的总结
在测试账户系统过程中遇到了线上大面积用户登录态失效的严重问题,事后对于其原因及测试盲点做了一些总结记录以便以后查阅,总结分为以下7点,其中原理性的解释有些摘自网络. 1.账户系统token失效问题复盘 ...
随机推荐
- python输出字典中的中文
如果不用本文指定的方法,会有如下报错: UnicodeDecodeError: 'utf8' codec can't decode byte 0xbf in position 2: invalid s ...
- P1340 送礼物
时间: 1000ms / 空间: 131072KiB / Java类名: Main 描述 作为惩罚,GY被遣送去帮助某神牛给女生送礼物(GY:貌似是个好差事)但是在GY看到礼物之后,他就不这么认为了. ...
- net9:磁盘目录文件保存到XML文档及其XML文档的读写操作,以及绑定XML到treeview
原文发布时间为:2008-08-10 -- 来源于本人的百度文章 [由搬家工具导入] directorytoxml类: using System;using System.Data;using Sys ...
- linux 下高精度时间
今天在公司代码中看到了使用select函数的超时功能作定时器的用法,便整理了如下几个Linux下的微秒级别的定时器.在我的Ubutu10.10 双核环境中,编译通过. /* * @FileName: ...
- LeetCode OJ--Remove Duplicates from Sorted Array
http://oj.leetcode.com/problems/remove-duplicates-from-sorted-array/ 删除数组中的重复元素,要求为原地算法. 进行一遍遍历,记录下一 ...
- HTML-在canvas画图中,图片的线上链接已配置允许跨域后,仍然出错提示跨域,怎么解决?
这个问题我已经遇到了2次,第一次解决了后,第二次又遇到了,所以这次做个笔记,怕以后再次遇到 举例: 1.要实现的问题:我需要在canvas画布上画上我的微信头像 2.后台配置已经完成了允许我头像地址的 ...
- 2017 [六省联考] T5 分手是祝愿
4872: [Shoi2017]分手是祝愿 Time Limit: 20 Sec Memory Limit: 512 MBSubmit: 458 Solved: 299[Submit][Statu ...
- module has no attribute 'seq2seq'
tensorflow 中tf.nn.seq2seq.sequence_loss_by_example to tf.contrib.legacy_seq2seq.sequence_loss_by_exa ...
- Item 51:写new和delete时请遵循惯例
Item 51: Adhere to convention when writing new and delete. Item 50介绍了怎样自己定义new和delete但没有解释你必须遵循的惯例. ...
- Error building Player: Win32Exception: ApplicationName='E:/adt-20140702/sdk\tools\zipalign.exe', Com
1.原因 更新sdk后报错..由于版本号不同,zipalign.exe所处路径不同 2.解决的方法 在sdk路径下搜索zipalign.exe .然后拷贝到报错内容中制定的路径即可了.