MariaDB cache
Since MariaDB Galera Cluster versions 5.5.40 and 10.0.14 you can use the query cache; earlier versions do NOT support the query cache.
http://www.fromdual.com/regularly-flushing-mysql-query-cache
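Before digging into per-query statistics, you may want to confirm the query cache is actually available and enabled on your server. A minimal sketch using the standard server variables (the 64 MB size is only a placeholder; put the settings in my.cnf if you want them to survive a restart):

-- check that the server was built with query cache support and see the current settings
SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache%';
-- enable it for the running server
SET GLOBAL query_cache_type = ON;
SET GLOBAL query_cache_size = 64 * 1024 * 1024;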
MariaDB 5.5.31 and the incredible new query cache information plugin
Hi guys, I was reading about the new query cache information plugin from Roland Bouman, now shipped by default in MariaDB 5.5.31.
This is a very old feature request at MySQL (27 Oct 2006 12:31):
http://bugs.mysql.com/bug.php?id=23714
And a more recent one at MariaDB (thanks Sergei for reading my MDEV =) ) (2012-05-04 01:22):
https://mariadb.atlassian.net/browse/MDEV-249
Well this is a very nice piece of code...
Every time I google "mysql performance", "mysql cache", etc., I get something like this:
http://www.cyberciti.biz/tips/how-does-query-caching-in-mysql-works-and-how-to-find-find-out-my-mysql-query-cache-is-working-or-not.html
or this:
http://stackoverflow.com/questions/4139936/query-cache-efficiency
But... what do global statistics tell you about your specific query? How do you
know whether your query is cached? You can check via status variables,
but they will not tell you whether your good global cache hit rate comes from only
one query, or only one database.
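For reference, this is the usual server-wide approach with status variables: it gives you one aggregate number and nothing per query. A sketch using the standard Qcache_* counters; the hit-ratio formula is just one common approximation:

SHOW GLOBAL STATUS LIKE 'Qcache%';
-- rough global hit ratio: hits vs. inserts plus statements that could not be cached
SELECT h.VARIABLE_VALUE / (h.VARIABLE_VALUE + i.VARIABLE_VALUE + n.VARIABLE_VALUE) AS hit_ratio
FROM information_schema.GLOBAL_STATUS h
JOIN information_schema.GLOBAL_STATUS i ON i.VARIABLE_NAME = 'QCACHE_INSERTS'
JOIN information_schema.GLOBAL_STATUS n ON n.VARIABLE_NAME = 'QCACHE_NOT_CACHED'
WHERE h.VARIABLE_NAME = 'QCACHE_HITS';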
Well, the MariaDB plugin tells you some of the information you need... but
less than might be useful for good per-query/per-table statistics.
You can read the query text and the size the query occupies in cache memory. What does that
tell you about "does my query have a high hit rate?"? Hmm... it tells you the
query is inside the query cache, and that's the only information...
Reading more and studying the MariaDB source code, I created a patch at MDEV-4581 (https://mariadb.atlassian.net/browse/MDEV-4581). OK, MariaDB guys, I don't know how to use Launchpad to help MariaDB yet, but you can use the patch =)
Well, I will not explain the statistics; let's just show the results for 4 queries in the cache:
SELECT
query_hits/(select max(query_hits) from information_schema.query_cache_queries)*100 as p_query_hit,
select_expend_time_ms/(select max(select_expend_time_ms) from information_schema.query_cache_queries)*100 as p_select_expend_time_ms,
select_rows_read/(select max(select_rows_read) from information_schema.query_cache_queries)*100 AS p_select_rows_read,
result_found_rows/(select max(result_found_rows) from information_schema.query_cache_queries)*100 AS p_result_found_rows,
select_rows_read/(select max(select_rows_read) from information_schema.query_cache_queries)*100 AS p_select_rows_read,
`ENTRY_POSITION_IN_CACHE`,
`STATEMENT_SCHEMA`,
`STATEMENT_TEXT`,
`RESULT_FOUND_ROWS`, `QUERY_HITS`, `SELECT_EXPEND_TIME_MS`,
`SELECT_LOCK_TIME_MS`, `SELECT_ROWS_READ`, `TABLES`,
from_unixtime(`QUERY_INSERT_TIME`)
as time, `RESULT_LENGTH`, `RESULT_BLOCKS_COUNT`,
`RESULT_BLOCKS_SIZE`, `RESULT_BLOCKS_SIZE_USED`, `RESULT_TABLES_TYPE`,
`FLAGS_CLIENT_LONG_FLAG`, `FLAGS_CLIENT_PROTOCOL_41`,
`FLAGS_PROTOCOL_TYPE`, `FLAGS_MORE_RESULTS_EXISTS`, `FLAGS_IN_TRANS`,
`FLAGS_AUTOCOMMIT`, `FLAGS_PKT_NR`, `FLAGS_CHARACTER_SET_CLIENT_NUM`,
`FLAGS_CHARACTER_SET_RESULTS_NUM`, `FLAGS_COLLATION_CONNECTION_NUM`,
`FLAGS_LIMIT`, `FLAGS_SQL_MODE`, `FLAGS_MAX_SORT_LENGTH`,
`FLAGS_GROUP_CONCAT_MAX_LEN`, `FLAGS_DIV_PRECISION_INCREMENT`,
`FLAGS_DEFAULT_WEEK_FORMAT`
FROM `information_schema`.`QUERY_CACHE_QUERIES`
ORDER BY statement_schema,`QUERY_HITS`
You can see the result if you have a very very big monitor =)
p_query_hit | p_select_expend_time_ms | p_select_rows_read | p_result_found_rows | p_select_rows_read | ENTRY_POSITION_IN_CACHE | STATEMENT_SCHEMA | STATEMENT_TEXT | RESULT_FOUND_ROWS | QUERY_HITS | SELECT_EXPEND_TIME_MS | SELECT_LOCK_TIME_MS | SELECT_ROWS_READ | TABLES | time | RESULT_LENGTH | RESULT_BLOCKS_COUNT | RESULT_BLOCKS_SIZE | RESULT_BLOCKS_SIZE_USED | RESULT_TABLES_TYPE | FLAGS_CLIENT_LONG_FLAG | FLAGS_CLIENT_PROTOCOL_41 | FLAGS_PROTOCOL_TYPE | FLAGS_MORE_RESULTS_EXISTS | FLAGS_IN_TRANS | FLAGS_AUTOCOMMIT | FLAGS_PKT_NR | FLAGS_CHARACTER_SET_CLIENT_NUM | FLAGS_CHARACTER_SET_RESULTS_NUM | FLAGS_COLLATION_CONNECTION_NUM | FLAGS_LIMIT | FLAGS_SQL_MODE | FLAGS_MAX_SORT_LENGTH | FLAGS_GROUP_CONCAT_MAX_LEN | FLAGS_DIV_PRECISION_INCREMENT | FLAGS_DEFAULT_WEEK_FORMAT |
null | 100 | 1,7857 | 3,5714 | 1,7857 | 0 | dev_cadastros | SELECT SQL_CACHE SQL_SMALL_RESULT moeda FROM moedas | 1 | 0 | 1 | 0 | 1 | `dev_cadastros`.`moedas` | 2013-05-25 22:35:52.000 | 91 | 1 | 512 | 155 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 8 | 8 | 8 | -1 | 33554434 | 1024 | 1024 | 4 | 0 |
null | 100 | 100 | 100 | 100 | 1 | dev_cadastros | SELECT indice,nome,grupo FROM analise_credito_indices ORDER BY grupo,ordem | 28 | 0 | 1 | 1 | 56 | `dev_cadastros`.`analise_credito_indices` | 2013-05-25 22:35:52.000 | 1234 | 1 | 1304 | 1298 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 8 | 8 | 8 | -1 | 33554434 | 1024 | 1024 | 4 | 0 |
null | 0 | 1,7857 | 3,5714 | 1,7857 | 2 | shared | SELECT SQL_CACHE SQL_SMALL_RESULT inteiro,inteiros,centavo,centavos,decimais,precisao_fatores,cod_bcb,ultima_alteracao,nome FROM moedas_atual WHERE moeda="R$" | 1 | 0 | 0 | 0 | 1 | `shared`.`moedas_atual` | 2013-05-25 22:35:52.000 | 759 | 1 | 824 | 823 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 8 | 8 | 8 | -1 | 33554434 | 1024 | 1024 | 4 | 0 |
null | 0 | 1,7857 | 3,5714 | 1,7857 | 3 | shared | SELECT SQL_CACHE SQL_SMALL_RESULT fator_venda,fator_compra,ultima_alteracao,nome FROM moedas_atual WHERE moeda="R$" | 1 | 0 | 0 | 0 | 1 | `shared`.`moedas_atual` | 2013-05-25 22:35:52.000 | 393 | 1 | 512 | 457 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 8 | 8 | 8 | -1 | 33554434 | 1024 | 1024 | 4 | 0 |
------------
OK, you don't have a 180" monitor? Here are the columns:
p_query_hit
p_select_expend_time_ms
p_select_rows_read
p_result_found_rows
p_select_rows_read
ENTRY_POSITION_IN_CACHE
STATEMENT_SCHEMA
STATEMENT_TEXT
RESULT_FOUND_ROWS
QUERY_HITS
SELECT_EXPEND_TIME_MS
SELECT_LOCK_TIME_MS
SELECT_ROWS_READ
TABLES
time (from_unixtime of QUERY_INSERT_TIME, which is stored as a Unix timestamp)
RESULT_LENGTH
RESULT_BLOCKS_COUNT
RESULT_BLOCKS_SIZE
RESULT_BLOCKS_SIZE_USED
RESULT_TABLES_TYPE
FLAGS_CLIENT_LONG_FLAG
FLAGS_CLIENT_PROTOCOL_41
FLAGS_PROTOCOL_TYPE
FLAGS_MORE_RESULTS_EXISTS
FLAGS_IN_TRANS
FLAGS_AUTOCOMMIT
FLAGS_PKT_NR
FLAGS_CHARACTER_SET_CLIENT_NUM
FLAGS_CHARACTER_SET_RESULTS_NUM
FLAGS_COLLATION_CONNECTION_NUM
FLAGS_LIMIT
FLAGS_SQL_MODE
FLAGS_MAX_SORT_LENGTH
FLAGS_GROUP_CONCAT_MAX_LEN
FLAGS_DIV_PRECISION_INCREMENT
FLAGS_DEFAULT_WEEK_FORMAT
------------
What more do you need now?
You can see: how many hits each query has
How much time it takes to execute if the query cache entry is "lost"
How many rows it reads to build the result, and many, many other details
What the oldest query entry is (for example, with the queries below)
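For example, two small queries against the patched QUERY_CACHE_QUERIES table (assuming the columns exposed by the MDEV-4581 patch shown above) that answer the "oldest entry" and "least useful entries" questions:

-- oldest entry still sitting in the cache
SELECT from_unixtime(QUERY_INSERT_TIME) AS inserted, STATEMENT_SCHEMA, STATEMENT_TEXT
FROM information_schema.QUERY_CACHE_QUERIES
ORDER BY QUERY_INSERT_TIME ASC LIMIT 1;

-- entries with the fewest hits but the longest original execution time
SELECT QUERY_HITS, SELECT_EXPEND_TIME_MS, STATEMENT_SCHEMA, STATEMENT_TEXT
FROM information_schema.QUERY_CACHE_QUERIES
ORDER BY QUERY_HITS ASC, SELECT_EXPEND_TIME_MS DESC LIMIT 10;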
Hmm, do you want to know which queries are cached for table X?
select * from information_schema.query_cache_queries where tables like '%`my_database`.`my_table`%'
And you get all cached queries that use that table.
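Along the same lines, you can roll the plugin's rows up per schema; a sketch, again assuming the same MDEV-4581 columns shown above:

SELECT STATEMENT_SCHEMA,
       COUNT(*)                AS cached_queries,
       SUM(QUERY_HITS)         AS total_hits,
       SUM(RESULT_BLOCKS_SIZE) AS bytes_reserved_in_cache
FROM information_schema.QUERY_CACHE_QUERIES
GROUP BY STATEMENT_SCHEMA
ORDER BY total_hits DESC;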
You can do many things now =)
Now, we have a nice (very nice) tool to improve query cache statistics =)
Thanks to Sergei from MariaDB for the help with the MariaDB source code, and many, many thanks to Roland Bouman for this nice piece of code.
New life to query cache!