D:\NormalSoftware>python mysql_filter_slow_log.py ./mysql1-slow.log --no-duplicates --sort-avg-query-time --top=100 >> mysql_slow_test05.txt

Line 469 needs to be changed to:

query_time = (float(numbers[1].split()[0]), float(numbers[2].split()[0]),
              float(numbers[3].split()[0]), float(numbers[4]))

Comment out lines 150 and 151:

#locale.setlocale(locale.LC_NUMERIC,
# os.name == 'nt' and 'en' or 'en_US.ISO8859-1')
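
Commenting the call out works, but the usual root cause is that the 'en_US.ISO8859-1' locale is simply not installed on the machine, so locale.setlocale() raises locale.Error. A gentler alternative (my own sketch, not part of the original tool) is to wrap the call in try/except so the script just falls back to the default locale:

import locale
import os

# Alternative to commenting the call out: try to set the locale and fall back
# to the default "C" locale if it is not installed on this system.
try:
    locale.setlocale(locale.LC_NUMERIC,
                     'en' if os.name == 'nt' else 'en_US.ISO8859-1')
except locale.Error:
    pass  # locale not available; number parsing still works with the default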

mysql_filter_slow_log.py

Usage (only the Python version is covered here):

python mysql_filter_slow_log.py  ./mysql1-slow.log --no-duplicates --sort-execution-count --top=10  >> mysql_slow_test.txt

Notes: mysql1-slow.log is the slow query log to analyze

 --no-duplicates  report each unique query only once, with aggregated statistics

 --sort-execution-count  sort the unique queries by execution count (see the sketch after this list)

 --top=10  keep only the top ten entries

 mysql_slow_test.txt  the file the analysis report is written to
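
To make the two options above more concrete, here is a minimal sketch (my own illustration, not the tool's actual code) of what --no-duplicates combined with --sort-execution-count conceptually does: group identical query strings, accumulate statistics, and rank the groups by count. The `queries` input is an assumption made for the example.

from collections import defaultdict

# Illustrative sketch only: aggregate duplicate queries and rank by execution count.
# `queries` is assumed to be a list of (normalized_sql, query_time) tuples
# already extracted from the slow log.
def top_by_execution_count(queries, top=10):
    stats = defaultdict(lambda: {'count': 0, 'sum_time': 0.0, 'max_time': 0.0})
    for sql, query_time in queries:
        entry = stats[sql]
        entry['count'] += 1
        entry['sum_time'] += query_time
        entry['max_time'] = max(entry['max_time'], query_time)
    return sorted(stats.items(), key=lambda item: item[1]['count'], reverse=True)[:top]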

Appendix:

Official usage examples:

=====================================

# Filter slow queries executed for at least 3 seconds not from root, remove duplicates,
# apply execution count as first sorting value and save first 10 unique queries to file.
# In addition, remember last input file position and statistics.
php mysql_filter_slow_log.php -T=3 -eu=root --no-duplicates --sort-execution-count --top=10 --incremental linux-slow.log > mysql-slow-queries.log
# Start permanent filtering of all slow queries from now on: at least 3 seconds or examining 10000 rows, exclude users root and test
tail -f -n 0 linux-slow.log | python mysql_filter_slow_log.py -T=3 -R=10000 -eu=root -eu=test &
# (-n 0 outputs only lines generated after start of tail)
# Stop permanent filtering
kill `ps auxww | grep 'tail -f -n 0 linux-slow.log' | egrep -v grep | awk '{print $2}'`
====================================

Official command-line parameters:

==================================

-T=min_query_time
-R=min_rows_examined
-ih, --include-host
-eh, --exclude-host
-iu, --include-user
-eu, --exclude-user
-iq, --include-query
--date=date_first-date_last Include only queries between date_first (and date_last).
                            Input:                    Date Range:
                            13.11.2006             -> 13.11.2006 - 14.11.2006 (exclusive)
                            13.11.2006-15.11.2006  -> 13.11.2006 - 16.11.2006 (exclusive)
                            15-11-2006-11/13/2006  -> 13.11.2006 - 16.11.2006 (exclusive)
                            >13.11.2006            -> 14.11.2006 - later
                            13.11.2006-            -> 13.11.2006 - later
                            <13.11.2006            -> earlier    - 13.11.2006 (exclusive)
                            -13.11.2006            -> earlier    - 14.11.2006 (exclusive)
                            Please do not forget to escape the greater or lesser than symbols (><, i.e. '--date=>13.11.2006').
                            Short dates are supported if you include a trailing separator (i.e. 13.11.-11/15/).
--incremental Remember input file positions and optionally --no-duplicates statistics between executions in mysql_filter_slow_log.sqlite3
--no-duplicates Powerful option to output only unique query strings with additional statistics:
                Execution count, first and last timestamp.
                Query time: avg / max / sum.
                Lock time: avg / max / sum.
                Rows examined: avg / max / sum.
                Rows sent: avg / max / sum.
--no-output Do not print statistics, just update database with incremental statistics
Default ordering of unique queries:
--sort-sum-query-time    [ 1. position]
--sort-avg-query-time    [ 2. position]
--sort-max-query-time    [ 3. position]
--sort-sum-lock-time     [ 4. position]
--sort-avg-lock-time     [ 5. position]
--sort-max-lock-time     [ 6. position]
--sort-sum-rows-examined [ 7. position]
--sort-avg-rows-examined [ 8. position]
--sort-max-rows-examined [ 9. position]
--sort-execution-count   [10. position]
--sort-sum-rows-sent     [11. position]
--sort-avg-rows-sent     [12. position]
--sort-max-rows-sent     [13. position]
--sort=sum-query-time,avg-query-time,max-query-time,...   You can include multiple sorting values separated by commas.
--sort=sqt,aqt,mqt,slt,alt,mlt,sre,are,mre,ec,srs,ars,mrs Every long sorting option has an equivalent short form (first character of each word).
--top=max_unique_query_count Output maximal max_unique_query_count different unique queries
--details                    Enables output of timestamp based unique query time lines after user list
                             (i.e. # Query_time: 81  Lock_time: 0  Rows_sent: 884  Rows_examined: 2448350).
--help Output this message only and quit
[multiple] options can be passed more than once to set multiple values.
[position] options take the position of their first occurrence into account.
           The first passed option will replace the default first sorting, ...
           Remaining default ordering options will keep their relative positions.
====================================
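
For example, the short sorting forms listed above can be combined in a single run; the command below (output file name is just a placeholder I chose) ranks unique queries first by execution count and then by average query time:

python mysql_filter_slow_log.py --no-duplicates --sort=ec,aqt --top=20 ./mysql1-slow.log > mysql_slow_report.txt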

Official example of the slow-query-log settings in the configuration file (my.cnf / my.ini):

====================================

# I.e. you could add the following lines under the [mysqld] section of your my.ini or my.cnf configuration file:
# Log all queries taking more than 3 seconds
long_query_time=3  # minimum: 1, default: 10
# MySQL >= 5.1.21 (or patched): 3 seconds = 3000000 microseconds
# long_query_time=3.000000  # minimum: 0.000001 (1 microsecond)
# Activate the Slow Query Log
slow_query_log  # >= 5.1.29
# log-slow-queries  # deprecated since 5.1.29
# Write to a custom file name (>= 5.1.29)
# slow_query_log_file=file_name  # default: /data_dir/host_name-slow.log
# Log all queries without indexes
# log-queries-not-using-indexes
# Log only queries which examine at least N rows (>= 5.1.21)
# min_examined_row_limit=1000  # default: 0
# Log slow OPTIMIZE TABLE, ANALYZE TABLE, and ALTER TABLE statements
# log-slow-admin-statements
# Log slow queries executed by replication slaves (>= 5.1.21)
# log-slow-slave-statements
# MySQL 5.1.6 through 5.1.20 had a default value of log-output=TABLE, so you should force
# Attention: logging to TABLE only includes whole seconds information
log-output=FILE
## Admin query for online activation is possible since MySQL 5.1 (without server restart)
## SET @@global.slow_query_log=1
## SET @@global.long_query_time=1
## Show current variables related to the Slow Query Log
## SHOW GLOBAL VARIABLES WHERE Variable_name REGEXP 'admin|min_examined|log_output|log_queries|log_slave|long|slow_quer'
======================================
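
With these settings active, each slow statement is appended to the log as a block like the following (an entry I sketched for illustration; the exact Time format and the query itself vary by MySQL version and workload). With microsecond precision, Query_time is fractional, which is exactly what the type fix described below is about:

# Time: 231113 10:15:02
# User@Host: appuser[appuser] @ localhost []
# Query_time: 3.251432  Lock_time: 0.000212  Rows_sent: 10  Rows_examined: 2448350
SET timestamp=1699870502;
SELECT * FROM orders WHERE customer_id = 42;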

Note: running the script raises a data-type error that points to line 469. On inspection, Query_time in the actual slow query log is a float, while the script parses it as an int, so I changed it myself.

Default:

======================

query_time = (int(numbers[1].split()[0]), int(numbers[2].split()[0]),
              int(numbers[3].split()[0]), int(numbers[4]))

======================

Changed to:

======================

query_time = (float(numbers[1].split()[0]), float(numbers[2].split()[0]),
              float(numbers[3].split()[0]), float(numbers[4]))
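
As a quick standalone check (again my own sketch, not the tool's internal parsing code), pulling the values out of a real Query_time line shows why int() blows up and float() is required:

import re

# Standalone check: the values on a modern Query_time line are fractional
# (microsecond precision), so int() cannot parse them.
line = "# Query_time: 3.251432  Lock_time: 0.000212  Rows_sent: 10  Rows_examined: 2448350"
values = re.findall(r'[\d.]+', line)        # ['3.251432', '0.000212', '10', '2448350']
query_time = tuple(float(v) for v in values)
# int('3.251432') raises "ValueError: invalid literal for int() with base 10",
# which is the error that points at line 469 of the script.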
