APPLIES TO:

Oracle Product Hub - Version 12.1.1 to 12.1.1 [Release 12.1]

Oracle Inventory Management - Version 12.1.1 to 12.1.1 [Release 12.1]

Oracle Item Master - Version 12.0.6 to 12.0.6 [Release 12]

Information in this document applies to any platform.

ABSTRACT

This document discusses the Import Catalog Items (ICI) program and how to achieve optimal performance for high volumes. It lists a number of functional areas that you can tune for optimal scalability. ICI processing and performance depend on which features
you use, as well as how you use and configure your system. This document identifies opportunities for setting up your system to scale for high volumes. It also lists and briefly discusses other item bulk import options available in Oracle Product Hub (also
known as Oracle Product Information Management Data Hub (PIMDH)).



Performance of ICI varies depending on what features you use. Performance also depends on system configuration. It is not possible to make absolute hardware recommendations because hardware requirements vary based on the features used and the processing volume.
Hardware requirements also vary depending on the time window allowed for processing. For instance, more powerful hardware is required to process 10,000 items in one hour than in three hours. 



Although it is not possible to predict throughput for a given hardware configuration, it is possible to identify throughput observed with a specific hardware configuration. Refer to the Performance Results section of this document for information about throughput
observed using a specific set of features and data. The data is not a benchmark.

HISTORY

Author: Oracle Product Hub Development Team

Create Date: 30-Jul-2010

Update Date: 19-Oct-2010

Expire Date: Not Applicable

DETAILS

Bulk Import Capabilities in PIMDH

Product Hub uses the following methods for importing items in bulk:

  • Import Items (INCOIN) 

    Import Items is the core inventory program provided to upload items in bulk from interface tables. It is available to Inventory and PIMDH users. This program is for creating and/or updating basic items in bulk, without any of the associated PIMDH features such
    as UDAs. You must load data into the items/revisions interface table before launching this program.
  • Import Catalog Items (ICI) 

    This program is a wrapper on top of the Import Items program and supports bulk load of additional PIMDH child entities, such as user defined attributes (UDAs) and associations. Details of the additional entities supported are discussed in the next section,
    Functional Background and Process Flow for ICI. As with the Import Items program, you must load data into the interface table before launching this program.
  • Excel Upload (EGOIJAVA) 

    This feature enables business end users to enter data into Excel and upload it to PIMDH. The data from Excel is automatically loaded into an interface table, then the “EGO Spreadsheet Java Concurrent Program” launches and processes the interface records. This
    program internally calls the same APIs called by Import Catalog Items, so the item import performance after uploading the data from Excel into interface tables is the same as the ICI program. Most of the discussion in this document also applies to this part
    of the Excel upload process. Time required for loading data from Excel into the interface tables is additional. 

    In addition to loading items and their child entities, this program can also create change orders, new item requests and structures, all of which are out of scope of this document.
  • Batch Import using Import Workbench 

    Access the Import Workbench through the Product Information Management Data Librarian responsibility. In the Import Workbench, you can create batches, then add items to the batch, match the items against production to eliminate duplicates, and submit the batch
    for loading the items into production tables. The Import Workbench internally calls the same EGOIJAVA program mentioned above to load items and their child entities.

Functional Background and Process Flow for ICI

This document specifically focuses on the functionality and performance of the Import Catalog Items program. As noted in the previous section, the Excel Upload and Import Workbench features are wrappers on top of this program, so the discussion also applies
to these programs.

Functional Background

The Import Catalog Items program can import the following entities:

  • Items (both master and organization items)
  • Revisions
  • Item Intersections
    • Suppliers
    • Supplier Site
    • Supplier Site Store
  • User Defined Attributes at all data levels
  • Item People
  • Approved Manufacturer Lists
  • Source System Cross References

While validating and importing the above entities, the program accomplishes the following tasks based on your setup:

  • Defaulting of UDAs.
  • Applying templates.
  • Copying information from the source item.
  • Generating Item Number/Description using designated functions where applicable.
  • Applying Rules (assignment and validation rules).
  • Checking for change policies and creating change orders/NIRs where required.

Process Flow

The following list shows a sequence for data processing:

  1. Process standard/style items

    1. Preprocess interface records.
    2. Run Rules.
    3. Generate Item Number/Description if function generation is specified.
    4. Process the item records to create items.
    5. Call the Change Management pre-process API.
    6. Call APIs to process child entities (such as UDAs, Intersections, AML, People).
    7. Call the Change Management post-process API.
  2. Process SKU items (same flow as step 1)
    1. Preprocess interface records.
    2. Run Rules.
    3. Generate Item Number/Description if function generation is specified.
    4. Process the item records to create items.
    5. Call the Change Management pre-process API.
    6. Call APIs to process child entities (such as UDAs, Intersections, AML, People).
    7. Call the Change Management post-process API.

Architecturally, the Import Catalog Items program is divided into smaller programs to handle items and each item's entities. The processing is also split across the Java and PL/SQL layers. The validation and processing of entities such as items and UDAs
are mainly performed in PL/SQL while supporting activities like number/description generation and Rules are performed in the Java layer. The Java layer program processes each interface record individually due to the functional nature of Rules and Function
Generation.

The PL/SQL programs mainly handle the corresponding interface records in bulk rather than processing one record at a time. However, certain validations and processing, due to their functional nature, are performed on individual interface records or in a
smaller chunk of records.

Depending on the availability of resources, customers can submit multiple batches in parallel. Finding the optimal number of threads and the optimal batch size for processing in each thread on your system is an iterative process, which is described in the
next section.

Factors Affecting the Performance of Import Catalog Items

The performance of the Import Catalog Items program depends on numerous factors. This section discusses each of these factors in detail.

  1. Number of items: Larger batches take a longer time to complete. However, having too large a batch size puts excessive memory requirements on the program, especially in cases involving rules and change management. The ideal batch size depends
    on resource availability.
  2. Transaction Type: You can submit batches with CREATE or UPDATE transaction types (or a combination of both). Performance varies with each transaction type. CREATE transactions have additional processing requirements, like creating revisions
    and applying default values.
  3. Number of UDAs Per Item: Each user defined attribute value you specify is represented as a separate row in the UDA interface table. Therefore, the volume of this table for a batch could potentially grow very large depending on the number
    of items in the batch and the number of UDAs per item. 

    In addition to the UDAs provided by the user, rows are inserted into this interface table from templates and UDA defaults. Also, more UDAs may be added depending on the rules configuration for the Item Catalog Category/Attribute Groups participating in this
    batch.
  4. Associations and other child entities: Adding supplier associations, role assignments, AMLs and cross references in the same batch as the item creation batch requires additional processing and  more time for the program to complete. Supplier
    associations also add more UDAs to the interface table (and further processing time) depending on the setup.
  5. Rules: You can define rules at the Attribute Group level or the Item Catalog Category level. Performance is affected by the number of rules, the number of attributes participating in these rules, and the types of rules. As described earlier,
    assignment rules add more records to the UDA interface table, causing additional processing time. 

    Rules processing is performed in a Java program, unlike the rest of the import program, which is PL/SQL. To avoid multiple database trips, all information required to process these rules is cached in memory in chunks of 100 items at a time. This
    makes rules a memory-intensive process. If more memory is needed for this process, consider adding the following parameters to the concurrent program definition form: -Xms<memorysize>M -Xmx<memorysize>M. Also, review Document 362851.1, JVM: Guidelines to Setup the Java Virtual Machine in Apps Ebusiness Suite 11i and R12, to ensure optimal performance.
  6. Change Management requirements: If a batch is created using the Import Workbench function, you can select the change management option to add changes to a change order or add new items to a new item request. This is a Java program that is
    memory-intensive and requires additional processing time.
  7. Type of Items (Style/SKU): Due to the functional nature of SKU items, they are created after the Style items. In batches where SKU items coexist along with Style/Standard items, certain sections of the program are repetitively called, first
    to process style/standard items, then again to process SKU items. Including SKU items in same batch as Style items also increases the number of records for processing in multiple interface tables, since UDAs, organizations, associations, and so on are defaulted
    from the Style item to the SKU item.
  8. Items for Child Organizations: Assigning items to child organizations requires the same amount of processing time as creating an item in the child organizations. If you create 1000 items and assign each item to two child organizations in
    the same batch, the batch essentially creates 3000 items in the items table. In addition, extra processing, such as validating for master controlled attributes, copying master controlled attributes from master records, and so on, increases the processing time
    for child organization item records. Consider these factors while measuring performance of batches.
  9. Number of Instances of this Program Running: Under ideal circumstances, the time expected for a single ICI program to run should remain the same while running more than one ICI process in parallel. However, this is seldom the case since
    each program shares the same resources with the other programs running in parallel. CPU and memory utilization determine the number of parallel threads. Ideally, CPU utilization should not exceed 80% at any point while all threads are running. 

    In addition, despite best efforts to tune all SQLs, there are still a few queries that perform a Full Table Scan on the interface tables, mainly due to the data distribution. These queries degrade while running more threads of the program in parallel, since
    each program scans through all records of the other batches processing at the same time.
  10. Hardware configuration: The throughput of the ICI program greatly depends on hardware capabilities such as the number of processors and the amount of memory in the configuration of the various tiers (DB tier, middle tier, Concurrent Manager,
    and so on). If your installation uses Real Application Clusters (RAC) with multiple nodes for each tier, Oracle recommends bringing down all nodes and retaining only one node per tier for best performance during the initial data load. Additionally, refer to
    the "Concurrent Processing on RAC Information Center" section in Document 1304305.1, Concurrent Processing - Product Information Center (PIC), and ensure that
    all DB parameters are optimized and the same for all nodes.

The above factors affect the performance of the item import programs. Next, we will discuss how to measure and, if required, tune the performance of the program.

Measuring and Tuning the Performance of the ICI Program

The process of tuning the performance of the ICI program involves multiple steps:

  1. Ensuring the system configuration prerequisites are met.
  2. Measuring the current performance of the ICI program.
  3. Tuning and measuring the post-tuning performance.

The following sections discuss each of the above steps in detail.

System Configuration Prerequisites

Before using PIMDH, ensure the system configuration steps specified in the following My Oracle Support documents have been performed. Some of these documents are specific to the Linux x86-64 platform, but documents for other platforms are also available in My
Oracle Support.

396009.1 - Database Initialization Parameters for Oracle Applications Release 12 

761566.1 - Oracle Applications Installation and Upgrade Notes (Linux X86-64 bit) 

361323.1 - HugePages on Linux 

859397.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11.1.0.7 (if you use 11gR2, review note 823587.1 instead)

466649.1 - Using Oracle 11g Release 1 (11.1.0.7) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12

The list of documents above is not comprehensive; it is only indicative for a specific configuration. If you use a different platform or configuration, look for similar documents relevant to your configuration.



Verify the following init.ora parameters in your environment since they can impact the performance of batch processes.

"optimizer_dynamic_sampling" = 2 

"optimizer_index_cost_adj" = 100 

"_optim_peek_user_binds" = false 

"_optimizer_extended_cursor_sharing" = NONE 

"_optimizer_extended_cursor_sharing_rel" = NONE

Gather statistics regularly on your INV/EGO/APPLSYS schema using the Gather Schema Statistics program.
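The supported route is the Gather Schema Statistics concurrent request; as a minimal sketch only, the underlying FND_STATS call for a single schema looks roughly like the following (run from SQL*Plus as the APPS user, repeating for the EGO, INV, and APPLSYS schemas).

    -- Gather statistics for one applications schema. The Gather Schema
    -- Statistics concurrent request is the supported way to do the same.
    BEGIN
      fnd_stats.gather_schema_statistics('INV');
    END;
    /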

Measuring the Current Performance of the ICI Program

This section describes the steps needed to measure performance of the Import Catalog Items program.

  1. Ensure the test environment is comparable in configuration to your final production instance configuration. Ensure the instance is isolated and not subjected to varying loads (for example: accessed by other users, other programs) during the time of the
    test.
  2. Ensure the performance measurement run is not the first run after a bounce of the database, middle tier, or concurrent manager. If you have bounced any of these components, perform a warm up run for a small batch of items. This eliminates overhead times
    taken for activities such as class loading and query parsing during the actual performance measurement runs.
  3. One of the factors impacting performance is the batch size. Ensure that you measure performance for the batch size most commonly used by your business process. However, Oracle recommends limiting batch sizes to functionally manageable limits to facilitate
    the review and correction of errors and warnings. A functionally manageable size depends on the setup, which dictates the number of records in the interface tables. For example, if you have 5 UDAs per item, you would have 25,000 records in the UDA interface
    table for a batch of 5,000 items. However, if you had 100 UDAs per item, a 5,000-item batch results in 500,000 records in the UDA interface table.
  4. Ensure that statistics are gathered for the EGO, INV and APPLSYS schema regularly. If you have performed any major data load or data update activities recently, ensure that the statistics are collected on these schema again before measuring performance.
  5. Ensure that the following interface tables are truncated and do not contain any data:
    1. MTL_SYSTEM_ITEMS_INTERFACE
    2. MTL_ITEM_REVISIONS_INTERFACE
    3. MTL_INTERFACE_ERRORS
    4. EGO_ITM_USR_ATTR_INTRFC
    5. EGO_ITEM_ASSOCIATIONS_INTF
    6. EGO_AML_INTF
    7. EGO_ITEM_PEOPLE_INTF
    8. EGO_BULKLOAD_INTF
  6. Ensure tracing is disabled.
    1. Set the profile "Initialization SQL Statement - Custom" to Null.
    2. Clear the “Enable Trace” check box in the Concurrent Programs form for Import Catalog Items.
  7. Ensure statistics gathering is disabled.
    1. Set the profile "EGO: Enable Stats Collection" to No.
    2. Set the profile "EGO: Stats Collection Threshold" to 999999999.
  8. Ensure logging is disabled.
    1. Set the profile “INV:Debug Level” to Null.
    2. Set the profile “FND: Debug Log Enabled” to No.
  9. Load data into relevant interface tables. Ensure the quality of data you are using for the performance measurement is good. If too many records fail the program validation, the program performance degrades. Preferably, measure performance on a batch of
    items that all process successfully. Also, ensure the data loaded is representative of your production data. For example, if you expect your production items to have an average of 100 UDAs each, load around the same number of UDAs per item during this measurement
    run. 

    Keep the setup similar to your production setup. For example, if you plan to use 10 business rules involving 20 attributes in production, set up similar rules in the performance measurement run. In addition, follow these tips while setting up:

    1. If rules functionality is enabled, then set up to run assignment rules before validation rules.
    2. For bigger data loads, avoid using approval type rules since approvals significantly impact performance.
    3. Consider creating separate batches for items with different properties (for example, items with a different set of attribute groups or rules written using different attribute groups).
    4. For simple validations and assignments, perform these tasks before loading the data into interface tables instead of running the Rules framework during import.
  10. Submit the Import Catalog Items program to process the data.
  11. Once the program completes, ensure there are no errors or warnings. Note the start and end time of the program from the Concurrent Program log to get the elapsed time of the program, or compute the elapsed time as shown in the sketch below.
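If you prefer to compute the elapsed time directly instead of reading it off the log header, a minimal sketch against FND_CONCURRENT_REQUESTS follows; &request_id is the request ID of the Import Catalog Items run.

    -- Elapsed time in minutes for a completed concurrent request.
    SELECT request_id,
           actual_start_date,
           actual_completion_date,
           ROUND((actual_completion_date - actual_start_date) * 24 * 60, 1) elapsed_minutes
      FROM fnd_concurrent_requests
     WHERE request_id = &request_id;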

The performance of multiple batches running in parallel is different from the performance measured for a single batch. To measure the performance of multiple batches running in parallel, follow the same steps outlined above, except load data for multiple
batches in step 9 and launch a program for each batch in step 10.

If the performance of the ICI program does not meet your business requirements, check the hardware configuration. If the performance is still unsatisfactory, try to tune it using the steps provided in the next section.

Tuning the Performance of ICI Program

Ensure the following patches are applied to your instance:

In addition to tuning, performance predominantly depends on the statistics of your instance. The steps below explain how to gather and maintain a good set of statistics. These steps consider the import of items and UDAs. If your import includes other entities
(such as suppliers), include the relevant interface tables in these steps.

  1. Decide on an optimal batch size that you plan to use for your data loads.
  2. Truncate the following interface tables:

    MTL_SYSTEM_ITEMS_INTERFACE
    MTL_ITEM_REVISIONS_INTERFACE
    EGO_ITM_USR_ATTR_INTRFC
    MTL_INTERFACE_ERRORS
  3. Load data into the interface tables for one batch (use the size of the batch chosen in step 1).
  4. Run the ICI Program to process the batch.
  5. Ensure there are no errors in this batch and that all records in all interface tables are processed successfully.
  6. Create a backup of the three interface tables that have processed data.
    CREATE TABLE msii_bak_batch1
      AS (SELECT * FROM MTL_SYSTEM_ITEMS_INTERFACE);
    CREATE TABLE miri_bak_batch1
      AS (SELECT * FROM MTL_ITEM_REVISIONS_INTERFACE);
    CREATE TABLE eiuai_bak_batch1
      AS (SELECT * FROM EGO_ITM_USR_ATTR_INTRFC);
    COMMIT;
  7. Repeat steps 2 through 5 for another batch id.
  8. Copy the data backed up in step 6 to the respective interface tables. 
    INSERT INTO MTL_SYSTEM_ITEMS_INTERFACE
      (SELECT * FROM msii_bak_batch1);
    INSERT INTO MTL_ITEM_REVISIONS_INTERFACE
      (SELECT * FROM miri_bak_batch1);
    INSERT INTO EGO_ITM_USR_ATTR_INTRFC
      (SELECT * FROM eiuai_bak_batch1);
    COMMIT;
  9. Gather statistics on these two sets of processed data in the interface tables using the code below. This code includes some recommended tweaks to the statistics to achieve optimal performance. 
    DECLARE
      l_schema        VARCHAR2(30);
      l_schema_status VARCHAR2(1);
      l_industry      VARCHAR2(1);
    BEGIN
      fnd_stats.load_histogram_cols('DELETE', 401, 'MTL_SYSTEM_ITEMS_INTERFACE', 'PROCESS_FLAG');
      fnd_stats.load_histogram_cols('DELETE', 401, 'MTL_SYSTEM_ITEMS_INTERFACE', 'SET_PROCESS_ID');

      IF (FND_INSTALLATION.GET_APP_INFO('EGO', l_schema_status, l_industry, l_schema))
      THEN
        FND_STATS.GATHER_TABLE_STATS(
          OWNNAME => l_schema,
          TABNAME => 'EGO_ITM_USR_ATTR_INTRFC',
          CASCADE => TRUE);
      END IF;

      IF (FND_INSTALLATION.GET_APP_INFO('INV', l_schema_status, l_industry, l_schema))
      THEN
        FND_STATS.GATHER_TABLE_STATS(
          OWNNAME => l_schema,
          TABNAME => 'MTL_SYSTEM_ITEMS_INTERFACE',
          CASCADE => TRUE);
        FND_STATS.GATHER_TABLE_STATS(
          OWNNAME => l_schema,
          TABNAME => 'MTL_ITEM_REVISIONS_INTERFACE',
          CASCADE => TRUE);
      END IF;

      dbms_stats.set_column_stats('EGO',
                                  'EGO_ITM_USR_ATTR_INTRFC',
                                  'PROCESS_STATUS',
                                  distcnt => 4,
                                  density => 0.25);

      dbms_stats.lock_table_stats('EGO', 'EGO_ITM_USR_ATTR_INTRFC');
      dbms_stats.lock_table_stats('INV', 'MTL_SYSTEM_ITEMS_INTERFACE');
      dbms_stats.lock_table_stats('INV', 'MTL_ITEM_REVISIONS_INTERFACE');
    END;
    /
  10. Perform the steps listed in the “Measuring the Current Performance of the ICI Program” section to measure the performance of a single batch and multiple batches.
  11. Back up the statistics into backup tables for future use in case of an instance refresh. This avoids running all of the above steps to collect a good set of statistics. You can upload the statistics from the backup files directly instead. 
    drop table ego.EIUAI_STATS;
    exec dbms_stats.create_stat_table('EGO','EIUAI_STATS');
    exec dbms_stats.export_table_stats('EGO', 'EGO_ITM_USR_ATTR_INTRFC', null, 'EIUAI_STATS');

    drop table inv.MSII_STATS;
    exec dbms_stats.create_stat_table('INV','MSII_STATS');
    exec dbms_stats.export_table_stats('INV', 'MTL_SYSTEM_ITEMS_INTERFACE', null, 'MSII_STATS');

    drop table inv.MIRI_STATS;
    exec dbms_stats.create_stat_table('INV','MIRI_STATS');
    exec dbms_stats.export_table_stats('INV', 'MTL_ITEM_REVISIONS_INTERFACE', null, 'MIRI_STATS');

    Dump the data from EIUAI_STATS, MSII_STATS, and MIRI_STATS to files.

    In case of an instance refresh, restore these tables from the file dumps and then import the statistics from them, as shown in the sketch after this list.
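After a refresh, once the backup tables have been restored from the file dumps, a minimal sketch for re-importing and re-locking the statistics looks like the following. If the table statistics come across already locked, unlock them first with dbms_stats.unlock_table_stats (or use the force option where your database version supports it).

    -- Re-import the interface-table statistics from the backup stat tables.
    exec dbms_stats.import_table_stats(ownname => 'EGO', tabname => 'EGO_ITM_USR_ATTR_INTRFC', stattab => 'EIUAI_STATS', statown => 'EGO');
    exec dbms_stats.import_table_stats(ownname => 'INV', tabname => 'MTL_SYSTEM_ITEMS_INTERFACE', stattab => 'MSII_STATS', statown => 'INV');
    exec dbms_stats.import_table_stats(ownname => 'INV', tabname => 'MTL_ITEM_REVISIONS_INTERFACE', stattab => 'MIRI_STATS', statown => 'INV');
    -- Lock the statistics again so scheduled jobs do not overwrite them.
    exec dbms_stats.lock_table_stats('EGO', 'EGO_ITM_USR_ATTR_INTRFC');
    exec dbms_stats.lock_table_stats('INV', 'MTL_SYSTEM_ITEMS_INTERFACE');
    exec dbms_stats.lock_table_stats('INV', 'MTL_ITEM_REVISIONS_INTERFACE');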

Functional Recommendations

Deferring Sync of Item Text Index

The ICI program synchronizes the Item Text Index (used for item simple searches) with the latest item information after creating/updating items. This process has contention issues when executed in parallel across multiple concurrent programs. For initial
data load scenarios, where you do not expect other users to search for items at that time, Oracle recommends that you customize the product code to comment out the call to sync the index, and run the sync manually after each set of parallel threads completes. To perform
this customization:

  1. In INVPOPIF, comment out the call to INV_ITEM_PVT.SYNC_IM_INDEX.
  2. Run your data load batches.
  3. After a set of parallel batches, call INV_ITEM_PVT.SYNC_IM_INDEX manually.
  4. After you complete the data load process, uncomment the change you made in step 1.
Note: Commenting the call to INV_ITEM_PVT.SYNC_IM_INDEX stops the Item Text Index sync as well as the iProcurement (ICX) Item Text Index sync. If you do not want to stop the Item Text Index sync, but do want to stop the iProcurement (ICX) Item
Text Index sync, then:

  • Set the profile option 'INV:Sync iProcurement Item Index' to No.
  • Execute the concurrent request 'Rebuild Catalog interMedia Index' manually to sync the Item Text Index with Oracle iProcurement.

Disabling ENI Sync

The items created through the ICI program are also synced into the ENI tables. If you do not use ENI, Oracle recommends that you disable this using the following steps.

  1. Confirm that you do not use ENI by running the following query. The output of this query should not be "I".

    SELECT FA.APPLICATION_SHORT_NAME, FA.APPLICATION_ID,
           PI.STATUS, PI.PRODUCT_VERSION, PI.PATCH_LEVEL
      FROM FND_PRODUCT_INSTALLATIONS PI, FND_APPLICATION FA
     WHERE FA.APPLICATION_ID = PI.APPLICATION_ID
       AND UPPER(FA.APPLICATION_SHORT_NAME) = 'ENI';
  2. If the above query does not return an "I", then truncate table ENI_OLTP_ITEM_STAR. This disables ENI sync during item import.
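A minimal sketch of the truncate described in step 2, assuming ENI_OLTP_ITEM_STAR is owned by the ENI schema on your instance (confirm the owner and take a backup of the table before truncating):

    -- Only run this when the query above confirms ENI is not installed/used.
    TRUNCATE TABLE eni.eni_oltp_item_star;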

Table Value Sets

Queries used in table-validated value sets also affect the performance of the Import Catalog Items program. Ensure that the queries used in value sets based on a table are tuned for optimal performance.
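As a starting point for that review, the following minimal sketch lists table-validated value sets together with the table, value column, and additional WHERE clause behind each one; narrow the result down to the value sets actually attached to your item attributes.

    -- Table-validated value sets and the SQL elements behind them.
    SELECT vs.flex_value_set_name,
           vt.application_table_name,
           vt.value_column_name,
           vt.additional_where_clause
      FROM fnd_flex_value_sets vs,
           fnd_flex_validation_tables vt
     WHERE vs.flex_value_set_id = vt.flex_value_set_id
       AND vs.validation_type = 'F';  -- 'F' = table-validated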

Troubleshooting

If you have implemented the tips explained in this document and you still have a performance issue, you may need development assistance. This section discusses the information you need to provide when logging a Service Request (SR) to involve the
Oracle Product Hub development team. When logging an SR, provide the following information in addition to the standard information provided in all SRs (such as DB version and RUP):

  1. Your hardware configuration:

    1. Are DB/Middle and Concurrent tiers co-located on the same box or on different boxes?
    2. Number of CPUs in each box.
    3. Number of cores per CPU.
    4. Memory
  2. Number of items that already exist in your instance.
    1. select count(*) from mtl_system_items_b;
    2. select count(*) from mtl_system_items_tl;
    3. select count(*) from ego_mtl_sy_items_ext_b;
    4. select count(*) from ego_mtl_sy_items_ext_tl;
    5. select count(*) from mtl_item_catalog_groups_b;
  3. Number of items you are trying to load.
  4. Number of items you are loading per batch.
  5. Number of child entities (UDAs, Intersections, People, AML, etc.).
  6. Number of batches you are trying to run in parallel.
  7. Provide the following reports:
    1. Concurrent Program Log(s) for a debug-enabled run. To enable debug logging:

      1. Set profile FND: Debug Log Enabled to Yes.
      2. Set profile FND: Debug Log Level to Statement.
      3. Set profile INV:Debug Level to 102.
    2. Tkprof of the trace generated for the request. 

      To enable the trace:

      1. Set the profile "Initialization SQL Statement - Custom" to "BEGIN 

        FND_CTL.FND_SESS_CTL('','','','','','ALTER SESSION SET EVENTS ='||''''||' 

        10046 TRACE NAME CONTEXT FOREVER, LEVEL 8 '||''''); END;"
      2. Deselect the “Enable Trace” check box in the Concurrent Programs form for Import Catalog Items.
      3. Once the program is complete, run the query below to get a trace file name and trace location: 
        SELECT 'Request id: '||request_id,
               'Trace id: '||oracle_process_id,
               'Trace Flag: '||req.enable_trace,
               'Trace Name: '||dest.value||'/'||dbnm.value||'_ora_'||oracle_process_id||'.trc',
               'Prog. Name: '||prog.user_concurrent_program_name,
               'File Name: '||execname.execution_file_name||execname.subroutine_name,
               'Status : '||DECODE(phase_code, 'R', 'Running')||'-'||DECODE(status_code, 'R', 'Normal'),
               'SID Serial: '||ses.sid||','||ses.serial#,
               'Module : '||ses.module
          FROM fnd_concurrent_requests req, v$session ses, v$process proc,
               v$parameter dest, v$parameter dbnm, fnd_concurrent_programs_vl prog,
               fnd_executables execname
         WHERE req.request_id = &request_id
           AND req.oracle_process_id = proc.spid(+)
           AND proc.addr = ses.paddr(+)
           AND dest.name = 'user_dump_dest'
           AND dbnm.name = 'db_name'
           AND req.concurrent_program_id = prog.concurrent_program_id
           AND req.program_application_id = prog.application_id
           AND prog.application_id = execname.application_id
           AND prog.executable_id = execname.executable_id;
    3. AWR report for the exact duration of the program (see the sketch after this list).
    4. ADDM and ASH reports for the exact duration of the program.
    5. Sar report for the duration of the program. 

      If you expect your program to run for 20 minutes, then, on the DB tier of your instance, run the command “sar 2 600” at the Unix prompt. Spool that output and upload it.
    6. Top output for the duration of the program. 

      Run the top command at the Unix prompt on the DB node every 30 seconds while the program is running and provide the output.
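A minimal sketch for generating the AWR, ADDM, and ASH reports on the DB tier (run as a DBA user from SQL*Plus; each script prompts for the snapshot or time range that brackets the program run and for an output file name):

    -- Standard report scripts shipped with the database.
    @?/rdbms/admin/awrrpt.sql
    @?/rdbms/admin/addmrpt.sql
    @?/rdbms/admin/ashrpt.sql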

Performance Results

This section lists some performance data observed using various hardware configurations during a performance tuning exercise. These numbers are only indicative and are not meant to be a benchmark. You should not use them for sizing your hardware. As stated
earlier, the performance numbers can vary greatly depending on the hardware configuration, functional setup, and so on.

Test Case 1

Hardware Setup
  • DB Server (also hosts the Concurrent Manager): 2-node RAC; each node has 2 CPUs (4 cores each) and 32 GB of memory.
  • Middle Tier: 2 nodes; each node has 2 CPUs (4 cores each) and 32 GB of memory.
Functional Setup
  • Number of UDAs per item (including user entered and defaulted): 70
  • Number of rules: 27
  • Item Number is Function Generated.

Timing Data
Number of items per thread | Number of orgs to which each item is assigned | Number of threads | Average time in minutes per thread
5000                       | 1 (master org)                                | 1                 | 9
10000                      | 1 (master org)                                | 1                 | 19
5000                       | 1 (master org)                                | 4                 | 24
5000                       | 1 (master org)                                | 6                 | 32
10000                      | 1 (master org)                                | 4                 | 53
10000                      | 1 (master org)                                | 6                 | 83
5000                       | 2 (1 master, 1 child)                         | 4                 | 53
5000                       | 2 (1 master, 1 child)                         | 6                 | 90

Test Case 2

Hardware Setup
  • DB Server (also hosts the Concurrent Manager): Non-RAC; 32 processors and 64 GB of memory in total.
  • Middle Tier: 1 processor with 32 cores and 16 GB of memory in total.
Functional Setup
  • Number of UDAs per item (including user entered and defaulted): 30-40
  • No rules. No function generated item number/description.
Timing Data
Number of items per thread | Number of orgs to which each item is assigned | Number of threads | Average time in minutes per thread
5000                       | 1 (master org)                                | 1                 | 6.5
5000                       | 1 (master org)                                | 2                 | 7.0
10000                      | 1 (master org)                                | 1                 | 11.0
10000                      | 1 (master org)                                | 2                 | 19.0
5000                       | 1 (master org)                                | 4                 | 11.0

Test Case 3

Hardware Setup
  • DB Server (also hosts the Concurrent Manager): Non-RAC; 34 processors with 4 cores each and 48 GB of memory in total.
  • Middle Tier: 3 FE servers (2 forms/web tiers, 1 Concurrent Manager tier); 2 processors with 4 cores each and 8 GB of memory in total.
Functional Setup
  • Number of UDAs per item (including user entered and defaulted): 50-60
  • No rules. No function generated item number/description.
Timing Data
Number of items per thread | Number of orgs to which each item is assigned | Number of threads | Average time in minutes per thread
5000                       | 1 (master org)                                | 1                 | 10
5000                       | 1 (master org)                                | 2                 | 15
10000                      | 1 (master org)                                | 1                 | 20
10000                      | 1 (master org)                                | 2                 |
5000                       | 1 (master org)                                | 4                 |
Functional Setup
  • Number of UDAs per item (including user entered and defaulted): 50-60
  • Number of Rules: 600.
  • Item Number and Description generated using PLSQL function.
Timing Data
Number of items per thread | Number of orgs to which each item is assigned | Number of threads | Average time in minutes per thread
1000                       | 1 (master org)                                | 1                 | 16
250                        | 1 (master org)                                | 4                 | 5

