[20180118] Problems with tstats.txt

--//For background on using tstats to manage object statistics, see http://blog.itpub.net/267265/viewspace-1987839/

TSTATS in a Nutshell (Tony Hasler, Expert Oracle SQL, p. 97):
The removal of time-sensitive data from object statistics is the main idea behind TSTATS. Here is the essence of
a process that can be used on the back of that idea:
1. Gather statistics on a selected "master" test system.
2. Fabricate statistics for all global temporary tables.
3. Remove time-sensitive data from object statistics.
4. Perform initial performance testing on the master test system and make adjustments as necessary.
5. Copy the statistics to all other test systems and ultimately to production.
6. Lock the object statistics of your application schemas on all systems.
7. Drop all statistics-gathering jobs for application schemas on all your systems.
8. TSTATS only applies to application schemas, so any jobs that gather dictionary statistics are unaffected.
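
--//Steps 2 and 6 of this process map directly onto standard DBMS_STATS calls. A minimal sketch (the schema APP and
--//table GTT_WORK are hypothetical names, not from the book; the numbers are the same defaults the tstats package's
--//set_temp_table_stats procedure uses):

```sql
--//Step 2: fabricate statistics for a global temporary table.
BEGIN
   DBMS_STATS.set_table_stats
   (
      ownname => 'APP'        -- hypothetical schema
     ,tabname => 'GTT_WORK'   -- hypothetical GTT
     ,numrows => 20000
     ,numblks => 1000
     ,avgrlen => 400
   );
END;
/

--//Step 6: lock the adjusted statistics so scheduled gathering jobs cannot overwrite them.
EXEC DBMS_STATS.lock_schema_stats(ownname => 'APP');
```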

--//I suddenly wanted to try out the TSTATS idea on a small production system, and found some problems with it.
--//Earlier I had already noticed that histogram information cannot simply be deleted. The script it provides
--//actually deletes the column min/max values, which is not the same as step 3, "Remove time-sensitive data from
--//object statistics". In fact, once the min/max values are removed, any histogram becomes useless. The following
--//tests demonstrate the problem.

1. Environment:
SCOTT@book> @ &r/ver1
PORT_STRING                    VERSION        BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx            11.2.0.4.0     Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

create table t as select rownum id ,lpad('x',32,'x') name ,'Y' flag  from dual connect by level<=1e5;
update t set flag='N' where id=1e5;
commit;
--//The plan below uses index I_T_FLAG, so create it as well:
create index i_t_flag on t(flag);

execute sys.dbms_stats.gather_table_stats(OwnName => user, TabName => 't', Estimate_Percent => NULL, Method_Opt => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS FLAG SIZE 254', Cascade => True, No_Invalidate => false);

2. Test 1:
SCOTT@book> select * from t where flag='N';
        ID NAME                 F
---------- -------------------- -
    100000 xxxxxxxxxxxxxxxxxxxx N
           xxxxxxxxxxxx

SCOTT@book> @ &r/dpc '' ''
PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  0h7g0tqtzcvzn, child number 0
-------------------------------------
select * from t where flag='N'
Plan hash value: 120143814
-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |        |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T        |      1 |    40 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_T_FLAG |      1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

--//Note that the frequency histogram is used: the optimizer estimates 1 row for flag='N' and picks the index range scan.
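
--//The 1-row estimate comes from the frequency histogram on FLAG; it can be inspected before any adjustment with a
--//query such as this sketch:

```sql
--//In a frequency histogram the bucket counts are the differences between
--//consecutive ENDPOINT_NUMBER values: here 1 row for 'N', 99999 for 'Y'.
SELECT column_name, endpoint_number, endpoint_value, endpoint_actual_value
  FROM dba_tab_histograms
 WHERE owner = 'SCOTT'
   AND table_name = 'T'
   AND column_name = 'FLAG'
 ORDER BY endpoint_number;
```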

3. Test 2:
--//Now delete the min/max values:
SCOTT@book> exec system.tstats.adjust_column_stats_v3( 'SCOTT','T');
PL/SQL procedure successfully completed.

SCOTT@book> select * from dba_histograms where owner=user and table_name='T';
OWNER  TABLE_NAME COLUMN_NAME          ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_A
------ ---------- -------------------- --------------- -------------- ----------
SCOTT  T          FLAG                          100000     4.6211E+35
SCOTT  T          FLAG                               1     4.0500E+35
SCOTT  T          ID                                 0
SCOTT  T          NAME                               0
SCOTT  T          ID                                 1
SCOTT  T          NAME                               1
6 rows selected.

SCOTT@book> select COLUMN_NAME,DATA_TYPE,DATA_LENGTH,DATA_PRECISION,DATA_SCALE,NUM_DISTINCT,LOW_VALUE,HIGH_VALUE,DENSITY,NUM_NULLS,NUM_BUCKETS,HISTOGRAM from dba_tab_cols where owner=user and table_name='T';
COLUMN_NAME          DATA_TYPE  DATA_LENGTH DATA_PRECISION DATA_SCALE NUM_DISTINCT LOW_VALUE  HIGH_VALUE    DENSITY  NUM_NULLS NUM_BUCKETS HISTOGRAM
-------------------- ---------- ----------- -------------- ---------- ------------ ---------- ---------- ---------- ---------- ----------- ---------------
ID                   NUMBER              22                                 100000                           .00001          0           1 NONE
NAME                 VARCHAR2            32                                      1                                1          0           1 NONE
FLAG                 CHAR                 1                                      2                               .5          0           2 FREQUENCY
--//The histogram information still exists, but the min/max values have been deleted.

SCOTT@book> select * from t where flag='N';
        ID NAME                 F
---------- -------------------- -
    100000 xxxxxxxxxxxxxxxxxxxx N
           xxxxxxxxxxxx

--//The execution plan:
Plan hash value: 1601196873
---------------------------------------------------------------------------
| Id  | Operation         | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |       |   177 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    |  50000 |  1953K|   177   (1)| 00:00:03 |
---------------------------------------------------------------------------
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1 / T@SEL$1
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("FLAG"='N')

--//The execution plan has changed to a full table scan. With LOW_VALUE/HIGH_VALUE removed the frequency histogram can
--//no longer be applied, and the estimate falls back to num_rows * density = 100000 * 0.5 = 50000 rows. In other
--//words, when a histogram exists, the min/max information must not be deleted.
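
--//If the min/max values have been wiped by mistake, they can be put back without a full re-gather via
--//DBMS_STATS.prepare_column_values, the same mechanism the tstats package itself uses. A sketch for FLAG, assuming
--//the known extremes 'N' and 'Y'. Caution: setting this srec also replaces the existing histogram on the column, so
--//re-gathering statistics is usually the safer route:

```sql
DECLARE
   srec     DBMS_STATS.statrec;
   charvals DBMS_STATS.chararray;
BEGIN
   srec.epc    := 2;                 -- two endpoints: low and high
   srec.bkvals := NULL;              -- no histogram in this record
   charvals := DBMS_STATS.chararray('N', 'Y');
   DBMS_STATS.prepare_column_values(srec, charvals);
   DBMS_STATS.set_column_stats
   (
      ownname       => 'SCOTT'
     ,tabname       => 'T'
     ,colname       => 'FLAG'
     ,srec          => srec
     ,no_invalidate => FALSE
     ,force         => TRUE
   );
END;
/
```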

4. In practice the real situation can be even more complicated. Here is a problem I ran into on a production system.
SYSTEM@192.168.xx.xx:1521/zzzzz> @ &r/sqlid 22h1xj9x22jf6
SQL_ID        SQLTEXT
------------- ---------------------------------------------------------------------------------------
22h1xj9x22jf6 SELECT COUNT ( *) FROM ZY_ZYJS A WHERE A.JZRQ IS NULL AND A.CZGH =:1 AND A.JSRQ < :2
1 row selected.

SYSTEM@192.168.xx.xx:1521/zzzzz> @ bind_cap 22h1xj9x22jf6 ''
SQL_ID        CHILD_NUMBER WAS NAME                   POSITION MAX_LENGTH LAST_CAPTURED       DATATYPE_STRING VALUE_STRING
------------- ------------ --- -------------------- ---------- ---------- ------------------- --------------- -------------------------------
22h1xj9x22jf6            0 YES :1                            1         32 2018-01-17 12:05:36 CHAR(32)        829
                           YES :2                            2          7 2018-01-17 12:05:36 DATE            2018/01/01 00:00:00

--//The table has two indexes: a composite index on CZGH+JZRQ (operator ID + closing date), and a single-column index
--//on JSRQ (settlement date). Using JSRQ is clearly the wrong choice, because many rows have JSRQ earlier than
--//2018/01/01. Yet the actual execution plan (after processing with the tstats package) was:

Plan hash value: 2747456857
----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name                | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                     |      1 |        |       |     5 (100)|          |      1 |00:00:00.01 |      20 |       |       |          |
|   1 |  SORT AGGREGATE     |                     |      1 |      1 |    21 |            |          |      1 |00:00:00.01 |      20 |       |       |          |
|*  2 |   VIEW              | index$_join$_001    |      1 |      1 |    21 |     5   (0)| 00:00:01 |      0 |00:00:00.01 |      20 |       |       |          |
|*  3 |    HASH JOIN        |                     |      1 |        |       |            |          |      0 |00:00:00.01 |      20 |  1368K|  1368K| 1649K (0)|
|*  4 |     INDEX RANGE SCAN| IDX_ZY_ZYJS_JSRQ    |      1 |      1 |    21 |     3  (34)| 00:00:01 |   5356 |00:00:00.01 |      14 |       |       |          |
|*  5 |     INDEX RANGE SCAN| I_ZY_ZYJS_CZGH_JZRQ |      1 |      1 |    21 |     4  (25)| 00:00:01 |     14 |00:00:00.01 |       6 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

--//Restore the statistics:
BEGIN
  SYS.DBMS_STATS.GATHER_TABLE_STATS (
     OwnName           => 'XXXXX'
    ,TabName           => 'ZY_ZYJS'
    ,Estimate_Percent  => SYS.DBMS_STATS.AUTO_SAMPLE_SIZE
    ,Method_Opt        => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree            => 4
    ,Cascade           => TRUE
    ,No_Invalidate  => FALSE);
END;
/
SYSTEM@192.168.xx.xx:1521/zzzzz> SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
       DENSITY, NUM_NULLS, NUM_BUCKETS, HISTOGRAM
  FROM dba_tab_cols
 WHERE table_name  = 'ZY_ZYJS'
   AND column_name = 'JSRQ';
COLUMN_NAME          DATA_TYPE  DATA_LENGTH DATA_PRECISION DATA_SCALE NUM_DISTINCT LOW_VALUE  HIGH_VALUE    DENSITY  NUM_NULLS NUM_BUCKETS HISTOGRAM
-------------------- ---------- ----------- -------------- ---------- ------------ ---------- ---------- ---------- ---------- ----------- ---------------
JSRQ                 DATE                 7                                   5606 78740B0C10 7876011209  .00017838          0           1 NONE
                                                                                   012C       1017
1 row selected.
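
--//The RAW LOW_VALUE/HIGH_VALUE shown above can be decoded back into readable dates with
--//DBMS_STATS.convert_raw_value. A sketch (requires SET SERVEROUTPUT ON):

```sql
DECLARE
   d DATE;
BEGIN
   FOR r IN (SELECT low_value, high_value
               FROM dba_tab_cols
              WHERE table_name  = 'ZY_ZYJS'
                AND column_name = 'JSRQ')
   LOOP
      DBMS_STATS.convert_raw_value(r.low_value, d);
      DBMS_OUTPUT.put_line('LOW_VALUE  = ' || TO_CHAR(d, 'YYYY-MM-DD HH24:MI:SS'));
      DBMS_STATS.convert_raw_value(r.high_value, d);
      DBMS_OUTPUT.put_line('HIGH_VALUE = ' || TO_CHAR(d, 'YYYY-MM-DD HH24:MI:SS'));
   END LOOP;
END;
/
```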

--//The execution plan then became:

Plan hash value: 3335725722
----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                     |      1 |        |       |     8 (100)|          |      1 |00:00:00.01 |       5 |
|   1 |  SORT AGGREGATE              |                     |      1 |      1 |    21 |            |          |      1 |00:00:00.01 |       5 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| ZY_ZYJS             |      1 |     16 |   336 |     8   (0)| 00:00:01 |      0 |00:00:00.01 |       5 |
|*  3 |    INDEX RANGE SCAN          | I_ZY_ZYJS_CZGH_JZRQ |      1 |     17 |       |     1   (0)| 00:00:01 |     14 |00:00:00.01 |       2 |
----------------------------------------------------------------------------------------------------------------------------------------------

--//So there is no such thing as a one-size-fits-all optimization scheme; every case needs comprehensive analysis.
--//Unfortunately, most DBAs in China rarely put effort into tuning SQL statements, myself included....

5. Appendix: the tstats source code, with many of my own modifications:

--//Note: I usually create the package under the SYSTEM user. Also, for it to compile you must create a dummy table:
CREATE TABLE SAMPLE_PAYMENTS
(
  PAYGRADE         INTEGER,
  PAYMENT_DATE     DATE,
  JOB_DESCRIPTION  CHAR(20 BYTE)
);

--//I was too lazy to restructure the code, so I added a procedure adjust_column_stats_v4 that preserves both
--//histograms and min/max values; that is, columns that carry a histogram are left completely unmodified.
--//Use this package with great care!!
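
--//Typical calls, matching the tests above (sketch):

```sql
--//v3: keeps histogram endpoints but wipes column min/max
--//(shown above to make frequency histograms unusable).
EXEC system.tstats.adjust_column_stats_v3('SCOTT', 'T');

--//v4: only adjusts columns with HISTOGRAM='NONE'; columns that carry a
--//histogram are left untouched, min/max included.
EXEC system.tstats.adjust_column_stats_v4('SCOTT', 'T');
```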

CREATE OR REPLACE PACKAGE tstats AUTHID CURRENT_USER
AS
   PROCEDURE adjust_column_stats_v1 (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE);

   PROCEDURE adjust_column_stats_v2 (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE);

   PROCEDURE adjust_column_stats_v3 (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE);

   PROCEDURE adjust_column_stats_v4 (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE);

   PROCEDURE amend_time_based_statistics (
      effective_date    DATE DEFAULT SYSDATE);

   PROCEDURE adjust_global_stats (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
     ,p_mode          VARCHAR2 DEFAULT 'PMOP');

   PROCEDURE gather_table_stats (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE);

   PROCEDURE set_temp_table_stats (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
     ,p_numrows       INTEGER DEFAULT 20000
     ,p_numblks       INTEGER DEFAULT 1000
     ,p_avgrlen       INTEGER DEFAULT 400);

   PROCEDURE import_table_stats (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
     ,p_statown       all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA')
     ,p_stat_table    all_tab_col_statistics.table_name%TYPE);
END tstats;
/

CREATE OR REPLACE PACKAGE BODY SYSTEM.tstats
AS
   FUNCTION get_srec
      RETURN DBMS_STATS.statrec
   IS
      srec   DBMS_STATS.statrec;
   BEGIN
      /*
      Workaround for issue in 12.1.0.1
      that produces wrong join cardinality
      when both tables have NULL for high
      and low values.  As a workaround this
      function sets the high value very high
      and the low value very low.
      */
      $IF DBMS_DB_VERSION.version >= 12
      $THEN
         srec.epc := 2;                                       -- Two endpoints
         srec.bkvals := NULL;                                  -- No histogram
         DBMS_STATS.prepare_column_values
         (
            srec
           ,DBMS_STATS.rawarray
            (
               HEXTORAW
               (
                  -- Min
                  '0000000000000000000000000000000000000000000000000000000000000000'
               )
              -- Max
              ,HEXTORAW
               (
                  'ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff'
               )
            )
         );
         RETURN srec;
      $ELSE
         RETURN NULL;
      $END
   END get_srec;

PROCEDURE adjust_column_stats_v1
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
   )
   AS
      CURSOR c1
      IS
         SELECT *
           FROM all_tab_col_statistics
          WHERE     owner = p_owner
                AND table_name = p_table_name
                AND last_analyzed IS NOT NULL;
   BEGIN
      FOR r IN c1
      LOOP
         DBMS_STATS.delete_column_stats
         (
            ownname         => r.owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,cascade_parts   => TRUE
           ,no_invalidate   => TRUE
           ,force           => TRUE
         );
         DBMS_STATS.set_column_stats
         (
            ownname         => r.owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,distcnt         => r.num_distinct
           ,density         => r.density
           ,nullcnt         => r.num_nulls
           ,srec            => get_srec             -- No HIGH_VALUE/LOW_VALUE
           ,avgclen         => r.avg_col_len
           ,no_invalidate   => FALSE
           ,force           => TRUE
         );
      END LOOP;
   END adjust_column_stats_v1;

PROCEDURE adjust_column_stats_v2
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
   )
   AS
      CURSOR c1
      IS
         SELECT *
           FROM all_tab_col_statistics
          WHERE     owner = p_owner
                AND table_name = p_table_name
                AND last_analyzed IS NOT NULL;

      v_num_distinct   all_tab_col_statistics.num_distinct%TYPE;
   BEGIN
      FOR r IN c1
      LOOP
         DBMS_STATS.delete_column_stats
         (
            ownname         => r.owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,cascade_parts   => TRUE
           ,no_invalidate   => TRUE
           ,force           => TRUE
         );

IF r.num_distinct = 1
         THEN
            v_num_distinct := 1 + 1e-14;
         ELSE
            v_num_distinct := r.num_distinct;
         END IF;

DBMS_STATS.set_column_stats
         (
            ownname         => r.owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,distcnt         => v_num_distinct
           ,density         => 1 / v_num_distinct
           ,nullcnt         => r.num_nulls
           ,srec            => get_srec             -- No HIGH_VALUE/LOW_VALUE
           ,avgclen         => r.avg_col_len
           ,no_invalidate   => FALSE
           ,force           => TRUE
         );
      END LOOP;
   END adjust_column_stats_v2;

--  Preserve histogram information
   PROCEDURE adjust_column_stats_v3
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
   )
   AS
      CURSOR c1
      IS
         SELECT *
           FROM all_tab_col_statistics
          WHERE     owner = p_owner
                AND table_name = p_table_name
                AND last_analyzed IS NOT NULL;

      v_num_distinct   all_tab_col_statistics.num_distinct%TYPE;
      z_distcnt        NUMBER;
      z_density        NUMBER;
      z_nullcnt        NUMBER;
      z_srec           DBMS_STATS.statrec;
      z_avgclen        NUMBER;
   BEGIN
      FOR r IN c1
      LOOP
         DBMS_STATS.get_column_stats
         (
            ownname   => r.owner
           ,tabname   => r.table_name
           ,colname   => r.column_name
           ,distcnt   => z_distcnt
           ,density   => z_density
           ,nullcnt   => z_nullcnt
           ,srec      => z_srec
           ,avgclen   => z_avgclen
         );

DBMS_STATS.delete_column_stats
         (
            ownname         => r.owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,cascade_parts   => TRUE
           ,no_invalidate   => TRUE
           ,force           => TRUE
         );

z_srec.minval := NULL;
         z_srec.maxval := NULL;

IF r.num_distinct = 1
         THEN
            v_num_distinct := 1 + 1e-14;
         ELSE
            v_num_distinct := r.num_distinct;
         END IF;

IF r.num_distinct <> 0
         THEN
            DBMS_STATS.set_column_stats
            (
               ownname         => r.owner
              ,tabname         => r.table_name
              ,colname         => r.column_name
              ,distcnt         => v_num_distinct
              ,density         => 1 / v_num_distinct
              ,nullcnt         => r.num_nulls
              ,srec            => z_srec            -- No HIGH_VALUE/LOW_VALUE
              ,avgclen         => r.avg_col_len
              ,no_invalidate   => FALSE
              ,force           => TRUE
            );
         END IF;
      END LOOP;
   END adjust_column_stats_v3;

--  Preserve histograms and min/max values; i.e., columns that carry a histogram are left unmodified.
   PROCEDURE adjust_column_stats_v4
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
   )
   AS
      CURSOR c1
      IS
         SELECT *
           FROM all_tab_col_statistics
          WHERE     owner = p_owner
                AND table_name = p_table_name
                AND histogram='NONE'
                AND last_analyzed IS NOT NULL ;

      v_num_distinct   all_tab_col_statistics.num_distinct%TYPE;
      z_distcnt        NUMBER;
      z_density        NUMBER;
      z_nullcnt        NUMBER;
      z_srec           DBMS_STATS.statrec;
      z_avgclen        NUMBER;
   BEGIN
      FOR r IN c1
      LOOP
         DBMS_STATS.get_column_stats
         (
            ownname   => r.owner
           ,tabname   => r.table_name
           ,colname   => r.column_name
           ,distcnt   => z_distcnt
           ,density   => z_density
           ,nullcnt   => z_nullcnt
           ,srec      => z_srec
           ,avgclen   => z_avgclen
         );

DBMS_STATS.delete_column_stats
         (
            ownname         => r.owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,cascade_parts   => TRUE
           ,no_invalidate   => TRUE
           ,force           => TRUE
         );

z_srec.minval := NULL;
         z_srec.maxval := NULL;

IF r.num_distinct = 1
         THEN
            v_num_distinct := 1 + 1e-14;
         ELSE
            v_num_distinct := r.num_distinct;
         END IF;

IF r.num_distinct <> 0
         THEN
            DBMS_STATS.set_column_stats
            (
               ownname         => r.owner
              ,tabname         => r.table_name
              ,colname         => r.column_name
              ,distcnt         => v_num_distinct
              ,density         => 1 / v_num_distinct
              ,nullcnt         => r.num_nulls
              ,srec            => z_srec            -- No HIGH_VALUE/LOW_VALUE
              ,avgclen         => r.avg_col_len
              ,no_invalidate   => FALSE
              ,force           => TRUE
            );
         END IF;
      END LOOP;
   END adjust_column_stats_v4;

PROCEDURE amend_time_based_statistics
   (
      effective_date    DATE DEFAULT SYSDATE
   )
   IS
      distcnt   NUMBER;
      density   NUMBER;
      nullcnt   NUMBER;
      srec      DBMS_STATS.statrec;

      avgclen   NUMBER;
   BEGIN
      --
      -- Step 1: Remove data from previous run
      --
      DELETE FROM sample_payments;

--
      -- Step 2:  Add data for standard pay for standard employees
      --
      INSERT INTO sample_payments (paygrade, payment_date, job_description)
         WITH payment_dates
              AS (    SELECT ADD_MONTHS
                             (
                                TRUNC (effective_date, 'MM') + 19
                               ,1 - ROWNUM
                             )
                                standard_paydate
                        FROM DUAL
                  CONNECT BY LEVEL <= 12)
             ,paygrades
              AS (    SELECT ROWNUM + 1 paygrade
                        FROM DUAL
                  CONNECT BY LEVEL <= 9)
             ,multiplier
              AS (    SELECT ROWNUM rid
                        FROM DUAL
                  CONNECT BY LEVEL <= 100)
         SELECT paygrade
               ,CASE MOD (standard_paydate - DATE '1001-01-06', 7)
                   WHEN 5 THEN standard_paydate - 1
                   WHEN 6 THEN standard_paydate - 2
                   ELSE standard_paydate
                END
                   payment_date
               ,'AAA' job_description
           FROM paygrades, payment_dates, multiplier;

--
      -- Step 3:  Add data for paygrade 1
      --
      INSERT INTO sample_payments (paygrade, payment_date, job_description)
         WITH payment_dates
              AS (    SELECT ADD_MONTHS
                             (
                                LAST_DAY (TRUNC (effective_date))
                               ,1 - ROWNUM
                             )
                                standard_paydate
                        FROM DUAL
                  CONNECT BY LEVEL <= 12)
         SELECT 1 paygrade
               ,CASE MOD (standard_paydate - DATE '1001-01-06', 7)
                   WHEN 5 THEN standard_paydate - 1
                   WHEN 6 THEN standard_paydate - 2
                   ELSE standard_paydate
                END
                   payment_dates
               ,'zzz' job_description
           FROM payment_dates;

--
      -- Step 4:  Add rows for exceptions.
      --
      INSERT INTO sample_payments (paygrade, payment_date, job_description)
         WITH payment_dates
              AS (    SELECT ADD_MONTHS
                             (
                                TRUNC (effective_date, 'MM') + 19
                               ,1 - ROWNUM
                             )
                                standard_paydate
                        FROM DUAL
                  CONNECT BY LEVEL <= 12)
             ,paygrades
              AS (    SELECT ROWNUM + 1 paygrade
                        FROM DUAL
                  CONNECT BY LEVEL <= 7)
         SELECT paygrade
               ,CASE MOD (standard_paydate - DATE '1001-01-06', 7)
                   WHEN 5 THEN standard_paydate - 2 + paygrade
                   WHEN 6 THEN standard_paydate - 3 + paygrade
                   ELSE standard_paydate - 1 + paygrade
                END
                   payment_date
               ,'AAA' job_description
           FROM paygrades, payment_dates;

--
      -- Step 5:  Gather statistics for SAMPLE_PAYMENTS
      --
      DBMS_STATS.gather_table_stats
      (
         ownname      => SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA')
        ,tabname      => 'SAMPLE_PAYMENTS'
        ,method_opt   =>    'FOR COLUMNS SIZE 1 JOB_DESCRIPTION '
                         || 'FOR COLUMNS SIZE 254 PAYGRADE,PAYMENT_DATE, '
                         || '(PAYGRADE,PAYMENT_DATE)'
      );

--
      -- Step 6:  Copy column statistics from SAMPLE_PAYMENTS to PAYMENTS
      --
      FOR r IN (SELECT column_name, histogram
                  FROM all_tab_cols
                 WHERE table_name = 'SAMPLE_PAYMENTS')
      LOOP
         DBMS_STATS.get_column_stats
         (
            ownname   => SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA')
           ,tabname   => 'SAMPLE_PAYMENTS'
           ,colname   => r.column_name
           ,distcnt   => distcnt
           ,density   => density
           ,nullcnt   => nullcnt
           ,srec      => srec
           ,avgclen   => avgclen
         );

DBMS_STATS.set_column_stats
         (
            ownname   => SYS_CONTEXT ('USERENV', 'CURRENT_SCHEMA')
           ,tabname   => 'PAYMENTS'
           ,colname   => r.column_name
           ,distcnt   => distcnt
           ,density   => density
           ,nullcnt   => nullcnt
           ,srec      => srec
           ,avgclen   => avgclen
         );
      END LOOP;
   END amend_time_based_statistics;

PROCEDURE adjust_global_stats
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
     ,p_mode          VARCHAR2 DEFAULT 'PMOP'
   )
   IS
      -- This helper function updates the statistic for the number of blocks in the
      -- table so that the average size of a partition is unaltered.  We sneak
      -- this value away in the unused CACHEDBLK statistic
      --
      numblks     NUMBER;
      numrows     NUMBER;
      avgrlen     NUMBER;
      cachedblk   NUMBER;
      cachehit    NUMBER;
   BEGIN
      DBMS_STATS.get_table_stats
      (
         ownname     => p_owner
        ,tabname     => p_table_name
        ,numrows     => numrows
        ,avgrlen     => avgrlen
        ,numblks     => numblks
        ,cachedblk   => cachedblk
        ,cachehit    => cachehit
      );

IF p_mode = 'PMOP'
      THEN
         --
         -- Resetting NUMBLKS based on CACHEDBLK
         -- average segment size and current number
         -- of partitions.
         --
         IF cachedblk IS NULL
         THEN
            RETURN;                                          -- No saved value
         END IF;

--
         -- Recalculate the number of blocks based on
         -- the current number of partitions and the
         -- saved average segment size
         -- Avoid reference to DBA_SEGMENTS in case
         -- there is no privilege.
         --
         SELECT cachedblk * COUNT (*)
           INTO numblks
           FROM all_objects
          WHERE     owner = p_owner
                AND object_name = p_table_name
                AND object_type = 'TABLE PARTITION';
      ELSIF p_mode = 'GATHER'
      THEN
         --
         -- Save average segment size in CACHEDBLK based on NUMBLKS
         -- and current number of partitions.
         --
         SELECT numblks / COUNT (*), TRUNC (numblks / COUNT (*)) * COUNT (*)
           INTO cachedblk, numblks
           FROM all_objects
          WHERE     owner = p_owner
                AND object_name = p_table_name
                AND object_type = 'TABLE PARTITION';
      ELSE
         RAISE PROGRAM_ERROR;
      -- Only gets here if p_mode not set to PMOP or GATHER
      END IF;

DBMS_STATS.set_table_stats
      (
         ownname     => p_owner
        ,tabname     => p_table_name
        ,numblks     => numblks
        ,cachedblk   => cachedblk
        ,force       => TRUE
      );
   END adjust_global_stats;

PROCEDURE gather_table_stats
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
   )
   IS
   BEGIN
      DBMS_STATS.unlock_table_stats
      (
         ownname   => p_owner
        ,tabname   => p_table_name
      );

      FOR r IN (SELECT *
                  FROM all_tables
                 WHERE owner = p_owner AND table_name = p_table_name)
      LOOP
         DBMS_STATS.gather_table_stats
         (
            ownname       => p_owner
           ,tabname       => p_table_name
           ,granularity   => CASE r.partitioned
                               WHEN 'YES' THEN 'GLOBAL'
                               ELSE 'ALL'
                            END
           ,method_opt    => 'FOR ALL COLUMNS SIZE repeat'
         );

         adjust_column_stats_v3
         (
            p_owner        => p_owner
           ,p_table_name   => p_table_name
         );

         IF r.partitioned = 'YES'
         THEN
            adjust_global_stats
            (
               p_owner        => p_owner
              ,p_table_name   => p_table_name
              ,p_mode         => 'GATHER'
            );
         END IF;
      END LOOP;

      DBMS_STATS.lock_table_stats (ownname => p_owner, tabname => p_table_name);
   END gather_table_stats;

PROCEDURE set_temp_table_stats
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
     ,p_numrows       INTEGER DEFAULT 20000
     ,p_numblks       INTEGER DEFAULT 1000
     ,p_avgrlen       INTEGER DEFAULT 400
   )
   IS
      distcnt   NUMBER;
   BEGIN
      DBMS_STATS.unlock_table_stats
      (
         ownname   => p_owner
        ,tabname   => p_table_name
      );
      $IF DBMS_DB_VERSION.version >= 12
      $THEN
         DBMS_STATS.set_table_prefs
         (
            ownname   => p_owner
           ,tabname   => p_table_name
           ,pname     => 'GLOBAL_TEMP_TABLE_STATS'
           ,pvalue    => 'SHARED'
         );
      $END
      DBMS_STATS.delete_table_stats
      (
         ownname   => p_owner
        ,tabname   => p_table_name
      );
      DBMS_STATS.set_table_stats
      (
         ownname         => p_owner
        ,tabname         => p_table_name
        ,numrows         => p_numrows
        ,numblks         => p_numblks
        ,avgrlen         => p_avgrlen
        ,no_invalidate   => FALSE
      );
      /*
      We must now set column statistics to limit the effect of predicates on cardinality
      calculations; by default cardinality is reduced by a factor of 100 for each predicate.

      We use a value of 2 for the number of distinct columns to reduce this factor to 2.  We
      do not use 1 because predicates of the type "column_1 <> 'VALUE_1'" would reduce the
      cardinality to 1.
      */
      distcnt := 2;

      FOR r IN (SELECT *
                  FROM all_tab_columns
                 WHERE owner = p_owner AND table_name = p_table_name)
      LOOP
         DBMS_STATS.set_column_stats
         (
            ownname         => p_owner
           ,tabname         => r.table_name
           ,colname         => r.column_name
           ,distcnt         => distcnt
           ,density         => 1 / distcnt
           ,avgclen         => 5
           ,srec            => get_srec
           ,no_invalidate   => FALSE
         );
      END LOOP;

      DBMS_STATS.lock_table_stats (ownname => p_owner, tabname => p_table_name);
   END set_temp_table_stats;

PROCEDURE import_table_stats
   (
      p_owner         all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_table_name    all_tab_col_statistics.table_name%TYPE
     ,p_statown       all_tab_col_statistics.owner%TYPE DEFAULT SYS_CONTEXT
                                                                (
                                                                   'USERENV'
                                                                  ,'CURRENT_SCHEMA'
                                                                )
     ,p_stat_table    all_tab_col_statistics.table_name%TYPE
   )
   IS
   BEGIN
      DECLARE
         already_up_to_date   EXCEPTION;
         PRAGMA EXCEPTION_INIT (already_up_to_date, -20000);
      BEGIN
         DBMS_STATS.upgrade_stat_table
         (
            ownname   => p_statown
           ,stattab   => p_stat_table
         );
      EXCEPTION
         WHEN already_up_to_date
         THEN
            NULL;
      END;

      DBMS_STATS.unlock_table_stats
      (
         ownname   => p_owner
        ,tabname   => p_table_name
      );
      DBMS_STATS.delete_table_stats
      (
         ownname         => p_owner
        ,tabname         => p_table_name
        ,no_invalidate   => FALSE
      );
      DBMS_STATS.import_table_stats
      (
         ownname         => p_owner
        ,tabname         => p_table_name
        ,statown         => p_statown
        ,stattab         => p_stat_table
        ,no_invalidate   => FALSE
      );

      -- For partitioned tables the number of (sub)partitions on the target
      -- system may not match the number on the source system.
      FOR r
         IN (SELECT *
               FROM all_tables
              WHERE     owner = p_owner
                    AND table_name = p_table_name
                    AND partitioned = 'YES')
      LOOP
         adjust_global_stats (p_owner, p_table_name, 'PMOP');
      END LOOP;

      DBMS_STATS.lock_table_stats (ownname => p_owner, tabname => p_table_name);
   END import_table_stats;
END tstats;
/
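--//A minimal usage sketch of the package above. The table names MY_GTT, MY_TABLE
--//and the statistics table MY_STAT_TABLE are illustrative assumptions; the
--//statistics table is expected to have been created beforehand with
--//DBMS_STATS.CREATE_STAT_TABLE and populated on the master test system.

```sql
BEGIN
   -- Step 2 of the process: fabricate statistics for a global temporary table.
   tstats.set_temp_table_stats (p_table_name => 'MY_GTT');

   -- Step 5: import the prepared (time-insensitive) statistics for a
   -- regular table from the statistics table; the procedure relocks the
   -- object statistics when it is done.
   tstats.import_table_stats
   (
      p_table_name   => 'MY_TABLE'
     ,p_stat_table   => 'MY_STAT_TABLE'
   );
END;
/
```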
