Handling GC Buffer Busy Waits (repost)
Unlike a single-instance database, a RAC environment spans multiple nodes, so contention for resources between nodes gives rise to GC-class waits, and among these, GC Buffer Busy Waits is the most common. From a performance standpoint, RAC is a double-edged sword: used well, it can deliver a substantial performance gain; used badly, internal resource contention can severely drag down database performance.
Simply put, to use RAC you must isolate the nodes from one another, whether by business function, by region, or by some other means. The ultimate goal is for the workload carried by each node to access different data objects, reducing inter-node resource contention as much as possible; only then can a RAC cluster deliver its full performance.
Therefore, if a RAC database shows large numbers of GC Buffer Busy Waits, it very likely indicates a serious performance problem, and a targeted investigation is warranted.
The article reposted below describes how to work through a GC Buffer Busy Waits event; it is offered for reference.
http://www.ardentperf.com/2007/09/12/gc-buffer-busy-waits-in-rac-finding-hot-blocks/
GC Buffer Busy Waits in RAC: Finding Hot Blocks
Well I don’t have a lot of time to write anything up… sheesh - it’s like 10pm and I’m still messing with this. I should be in bed. But before I quit for the night I thought I’d just do a quick post with some queries that might be useful for anyone working on a RAC system who sees a lot of the event “gc buffer busy”.
Now you’ll recall that this event simply means that we’re waiting for another instance that holds the block. But generally, if you see lots of these waits, it’s an indication of contention across the cluster. So here’s how I got to the bottom of a problem on a pretty active 6-node cluster here in NYC.
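(A quick aside before we dig into history: if you want to see who is stuck on these waits right now, a minimal sketch against the standard gv$session view - not part of the original investigation - looks like this:)
-- count sessions currently waiting on gc buffer busy, by instance and SQL
select inst_id, sql_id, event, count(*) sessions
from gv$session
where state='WAITING'
and event like 'gc buffer busy%'
group by inst_id, sql_id, event
order by 4;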
Using the ASH
I’ll show two different ways here to arrive at the same conclusion. First, we’ll look at the ASH to see what the sampled sessions today were waiting on. Second, we’ll look at the segment statistics captured by the AWR.
First of all some setup. I already knew what the wait events looked like from looking at dbconsole but here’s a quick snapshot using the ASH data from today:
select min(begin_interval_time) min, max(end_interval_time) max
from dba_hist_snapshot
where snap_id between 12831 and 12838;
MIN MAX
------------------------------ ------------------------------
12-SEP-07 09.00.17.451 AM 12-SEP-07 05.00.03.683 PM
This is the window I’m going to use; 9am to 5pm today.
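(If you need to find the snap IDs for your own window first - just a sketch, assuming the same date - you can reverse the lookup:)
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where begin_interval_time >= timestamp'2007-09-12 09:00:00'
and end_interval_time <= timestamp'2007-09-12 17:00:00'
order by snap_id;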
select wait_class_id, wait_class, count(*) cnt
from dba_hist_active_sess_history
where snap_id between 12831 and 12838
group by wait_class_id, wait_class
order by 3;
WAIT_CLASS_ID WAIT_CLASS CNT
------------- ------------------------------ ----------
3290255840 Configuration 169
2000153315 Network 934
4108307767 System I/O 7199
3386400367 Commit 7809
4217450380 Application 12248
3875070507 Concurrency 14754
1893977003 Other 35499
97762
3871361733 Cluster 104810
1740759767 User I/O 121999
You can see that there were a very large number of cluster events recorded in the ASH. (The unlabeled row is the one with a null wait class - sessions that were sampled on CPU rather than in a wait.) Let’s look a little closer.
select event_id, event, count(*) cnt from dba_hist_active_sess_history
where snap_id between 12831 and 12838 and wait_class_id=3871361733
group by event_id, event
order by 3;
EVENT_ID EVENT CNT
---------- ---------------------------------------- ----------
3905407295 gc current request 4
3785617759 gc current block congested 10
2705335821 gc cr block congested 15
512320954 gc cr request 16
3794703642 gc cr grant congested 17
3897775868 gc current multi block request 17
1742950045 gc current retry 18
1445598276 gc cr disk read 148
1457266432 gc current split 229
2685450749 gc current grant 2-way 290
957917679 gc current block lost 579
737661873 gc cr block 2-way 699
2277737081 gc current grant busy 991
3570184881 gc current block 3-way 1190
3151901526 gc cr block lost 1951
111015833 gc current block 2-way 2078
3046984244 gc cr block 3-way 2107
661121159 gc cr multi block request 4092
3201690383 gc cr grant 2-way 4129
1520064534 gc cr block busy 4576
2701629120 gc current block busy 14379
1478861578 gc buffer busy 67275
Notice the huge gap between the number of buffer busy waits and everything else. Other statistics I checked also confirmed that this wait event was the most significant on the cluster. So now we’ve got an event, and we know that 67,275 sampled sessions were waiting on it during ASH snapshots between 9am and 5pm today. Let’s see what SQL these sessions were executing when they got snapped. In fact, let’s even include the “gc current block busy” events, since there was a bit of a gap for them too.
select sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 12831 and 12838
and event_id in (2701629120, 1478861578)
group by sql_id
having count(*)>1000
order by 2;
SQL_ID CNT
------------- ----------
6kk6ydpp3u8xw 1011
2hvs3mpab5j0w 1022
292jxfuggtsqh 1168
3mcxaqffnzgfw 1226
a36pf34c87x7s 1328
4vs8wgvpfm87w 1390
22ggtj4z9ak3a 1574
gsqhbt5a6d4uv 1744
cyt90uk11a22c 2240
39dtqqpr7ygcw 4251
8v3b2m405atgy 42292
Wow - another big leap - 4,000 to 42,000! Clearly there’s one SQL statement which is the primary culprit. What’s the statement?
select sql_text from dba_hist_sqltext where sql_id='8v3b2m405atgy';
SQL_TEXT
---------------------------------------------------------------------------
insert into bigtable(id, version, client, cl_business_id, cl_order_id, desc
I’ve changed the table and field names so you can’t guess who my client might be. :) But it gets the idea across - an insert statement. Hmmm. Any guesses yet about what the problem might be? Well an insert statement could access a whole host of objects (partitions and indexes)… and even more in this case since there are a good number of triggers on this table. Conveniently, the ASH in 10g records what object is being waited on so we can drill down even to that level.
select count(distinct(current_obj#)) from dba_hist_active_sess_history
where snap_id between 12831 and 12838
and event_id=1478861578 and sql_id='8v3b2m405atgy';
COUNT(DISTINCT(CURRENT_OBJ#))
-----------------------------
14
select current_obj#, count(*) cnt from dba_hist_active_sess_history
where snap_id between 12831 and 12838
and event_id=1478861578 and sql_id='8v3b2m405atgy'
group by current_obj#
order by 2;
CURRENT_OBJ# CNT
------------ ----------
3122841 1
3122868 3
3173166 4
3324924 5
3325122 8
3064307 8
-1 10
3064369 331
0 511
3122795 617
3064433 880
3208619 3913
3208620 5411
3208618 22215
Well a trend is emerging. Another very clear outlier - less than a thousand sessions waiting on most objects but the last one is over twenty-two thousand. Let’s have a look at all three of the biggest ones.
select object_id, owner, object_name, subobject_name, object_type from dba_objects
where object_id in (3208618, 3208619, 3208620);
OBJECT_ID OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE
---------- ---------- ------------------------------ ------------------------------ -------------------
3208618 JSCHDER BIGTABLE_LOG P_2007_09 TABLE PARTITION
3208619 JSCHDER BIGTABL_LG_X_ID P_2007_09 INDEX PARTITION
3208620 JSCHDER BIGTABL_LG_X_CHANGE_DATE P_2007_09 INDEX PARTITION
Now wait just a moment… this isn’t even the object we’re updating!! Well, I’ll spare you the details, but one of the triggers logs every change to BIGTABLE with about 7 inserts into this log table. It’s all PL/SQL, so we get bind variables and everything - it’s just the sheer number of accesses that is causing all the contention.
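(To picture the mechanism, here is a hypothetical sketch of that kind of audit trigger - the client’s real trigger and the log table’s real columns aren’t shown in this post:)
create or replace trigger bigtable_audit_trg
after insert or update on bigtable
for each row
begin
  -- one insert per audited column; the real trigger does roughly seven
  -- (the column list here is invented for illustration)
  insert into bigtable_log (id, change_date, changed_column, new_value)
  values (:new.id, sysdate, 'VERSION', to_char(:new.version));
  insert into bigtable_log (id, change_date, changed_column, new_value)
  values (:new.id, sysdate, 'CLIENT', to_char(:new.client));
  -- ...and so on; every statement lands on the same current insert blocks
end;
/
Every row-level change funnels several inserts into the same log partition and its indexes, which is exactly where the waits piled up.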
One further thing we can do is actually see which blocks are getting most contended for - the ASH records this too. (Isn’t the ASH great?)
select current_file#, current_block#, count(*) cnt
from dba_hist_active_sess_history
where snap_id between 12831 and 12838
and event_id=1478861578 and sql_id='8v3b2m405atgy'
and current_obj# in (3208618, 3208619, 3208620)
group by current_file#, current_block#
having count(*)>50
order by 3;
CURRENT_FILE# CURRENT_BLOCK# CNT
------------- -------------- ----------
1330 238073 51
1542 22645 55
1487 237914 56
1330 238724 61
1330 244129 76
1487 233206 120
One thing that I immediately noticed is that there does not seem to be a single hot block!!! (What?) Out of 40,000 sessions accessing these three objects no more than 120 ever tried to hit the same block. Let’s quickly check if any of these are header blocks on the segments.
select segment_name, header_file, header_block
from dba_segments where owner='JSCHDER' and partition_name='P_2007_09'
and segment_name in ('BIGTABLE_LOG','BIGTABL_LG_X_ID',
'BIGTABL_LG_X_CHANGE_DATE');
SEGMENT_NAME HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
BIGTABL_LG_X_CHANGE_DATE 1207 204809
BIGTABL_LG_X_ID 1207 196617
BIGTABLE_LOG 1209 16393
No - all seem to be data blocks. Why so much contention? Maybe the RAC and OPS experts out there already have some guesses… but first let’s explore one alternative method to check the same thing and see if the numbers line up.
AWR Segment Statistics
Here’s a handy little query I made up the other day to quickly digest any of the segment statistics from the AWR and grab the top objects for the cluster, reporting on each instance. I’m not going to explain the whole thing but I’ll just copy it verbatim - feel free to use it but you’ll have to figure it out yourself. :)
col object format a60
col i format 99
select * from (
select o.owner||'.'||o.object_name||decode(o.subobject_name,NULL,'','.')||
o.subobject_name||' ['||o.object_type||']' object,
instance_number i, stat
from (
select obj#||'.'||dataobj# obj#, instance_number, sum(
GC_BUFFER_BUSY_DELTA
) stat
from dba_hist_seg_stat
where (snap_id between 12831 and 12838)
and (instance_number between 1 and 6)
group by rollup(obj#||'.'||dataobj#, instance_number)
having obj#||'.'||dataobj# is not null
) s, dba_hist_seg_stat_obj o
where o.obj#||'.'||o.dataobj#=s.obj#
order by max(stat) over (partition by s.obj#) desc,
o.owner||o.object_name||o.subobject_name, nvl(instance_number,0)
) where rownum<=40;
OBJECT I STAT
---------------------------------------- -- -------
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 2529540
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 1 228292
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 2 309684
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 3 289147
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 4 224155
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 5 1136822
JSCHDER.BIGTABLE_LOG.P_2007_09 [TABLE PARTITION] 6 341440
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 2270221
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 1 220094
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 2 313038
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 3 299509
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 4 217489
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 5 940827
JSCHDER.BIGTABL_LG_X_CHANGE_DATE.P_2007_09 [INDEX PARTITION] 6 279264
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 1793931
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 1 427482
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 2 352305
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 3 398699
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 4 268045
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 5 269230
JSCHDER.BIGTABLE.P_WAREHOUSE [TABLE PARTITION] 6 78170
JSCHDER.BIGTABL_LG_X_ID.P_2007_09 [INDEX PARTITION] 771060
JSCHDER.BIGTABL_LG_X_ID.P_2007_09 [INDEX PARTITION] 1 162296
JSCHDER.BIGTABL_LG_X_ID.P_2007_09 [INDEX PARTITION] 2 231141
JSCHDER.BIGTABL_LG_X_ID.P_2007_09 [INDEX PARTITION] 3 220573
JSCHDER.BIGTABL_LG_X_ID.P_2007_09 [INDEX PARTITION] 4 157050
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 393663
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 1 66277
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 2 10364
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 3 6930
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 4 3484
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 5 266722
JSCHDER.BIGTABLE.P_DEACTIVE [TABLE PARTITION] 6 39886
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 276637
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 1 13750
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 2 12207
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 3 23522
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 4 28336
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 5 99704
JSCHDER.BIGTABLE.P_ACTIVE_APPROVED [TABLE PARTITION] 6 99118
40 rows selected.
As an aside, there is a line in the middle that says “GC_BUFFER_BUSY_DELTA”. You can replace that line with any of these values to see the top objects for the corresponding statistic during the reporting period (a worked example follows the list):
LOGICAL_READS_DELTA
BUFFER_BUSY_WAITS_DELTA
DB_BLOCK_CHANGES_DELTA
PHYSICAL_READS_DELTA
PHYSICAL_WRITES_DELTA
PHYSICAL_READS_DIRECT_DELTA
PHYSICAL_WRITES_DIRECT_DELTA
ITL_WAITS_DELTA
ROW_LOCK_WAITS_DELTA
GC_CR_BLOCKS_SERVED_DELTA
GC_CU_BLOCKS_SERVED_DELTA
GC_BUFFER_BUSY_DELTA
GC_CR_BLOCKS_RECEIVED_DELTA
GC_CU_BLOCKS_RECEIVED_DELTA
SPACE_USED_DELTA
SPACE_ALLOCATED_DELTA
TABLE_SCANS_DELTA
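For example - a minimal variant reusing the inner aggregate of the query above - ranking segments by row lock waits over the same window would start from:
select obj#||'.'||dataobj# obj#, instance_number, sum(
ROW_LOCK_WAITS_DELTA
) stat
from dba_hist_seg_stat
where (snap_id between 12831 and 12838)
and (instance_number between 1 and 6)
group by rollup(obj#||'.'||dataobj#, instance_number)
having obj#||'.'||dataobj# is not null;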
Now as you can see, these statistics confirm what we observed from the ASH: the top waits in the system are for the BIGTABLE_LOG table. However this also reveals something the ASH didn’t - that the date-based index on the same table is a close second.
The Real Culprit
Any time you see heavy concurrency problems during inserts on table data blocks, the first place to look should always be space management. Since ancient versions of OPS it has been a well-known fact that freelists are the enemy of concurrency. In this case, that was exactly the culprit.
select distinct tablespace_name from dba_tab_partitions
where table_name='BIGTABLE_LOG';
TABLESPACE_NAME
------------------------------
BIGTABLE_LOG_DATA
select extent_management, allocation_type, segment_space_management
from dba_tablespaces where tablespace_name='BIGTABLE_LOG_DATA';
EXTENT_MAN ALLOCATIO SEGMEN
---------- --------- ------
LOCAL USER MANUAL
select distinct freelists, freelist_groups from dba_tab_partitions
where table_name='BIGTABLE_LOG';
FREELISTS FREELIST_GROUPS
---------- ---------------
1 1
And there you have it. The busiest table on their 6-node OLTP RAC system is using MSSM with a single freelist group. I’m pretty sure this could cause contention problems!
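So what might the fix look like? Two common directions - sketched here with assumed names and placeholder sizes, not taken from the original post - are to give the existing MSSM segment more freelists and freelist groups, or (usually the better option) to rebuild it in an ASSM tablespace:
-- sketch 1: spread inserts across more freelists/freelist groups;
-- new settings only affect blocks allocated after the change
alter table jschder.bigtable_log
modify partition p_2007_09
storage (freelists 8 freelist groups 6);

-- sketch 2: rebuild in an ASSM tablespace so free space is tracked in
-- bitmaps instead of freelists (datafile clause and size are placeholders)
create tablespace bigtable_log_assm
datafile size 10g
extent management local
segment space management auto;

alter table jschder.bigtable_log
move partition p_2007_09 tablespace bigtable_log_assm;

-- moving the partition invalidates its local index partitions
alter index jschder.bigtabl_lg_x_id rebuild partition p_2007_09;
alter index jschder.bigtabl_lg_x_change_date rebuild partition p_2007_09;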