Saturday, 23 January 2016

WSH_NEW_DELIVERIES_S Sequence Generating Delivery_ids in Large Gaps Since R12 Implementation (Doc ID 1665930.1)
http://paymentclearingquery.blogspot.com/2016/01/wshnewdeliveriess-sequence-generating.html

APPLIES TO:
Oracle Shipping Execution - Version 12.1.3 and later
Information in this document applies to any platform.

SYMPTOMS
-----------------------
On : 12.1.3 version
During the R12 implementation it has been observed that the WSH_NEW_DELIVERIES_S sequence is generating delivery_ids with large gaps. The sequence does not always increment by 1 as expected: it mostly increments by 1, but occasionally jumps by just under 1000. To eliminate confusion and possible errors in processing shipments, the delivery_ids need to be as sequential as possible. The issue does not occur in the 11i instance. Users use the Shipping Transactions Form (STF) to autocreate deliveries.
This is an example of the large gap on 25-March:
DATE               DELIVERY  ORG
-----------------  --------  ---
25-MAR-2014 13:06  410457    282
25-MAR-2014 13:06  410458    282
25-MAR-2014 13:06  410459    282
25-MAR-2014 13:07  410460    282
25-MAR-2014 13:07  410461    282
25-MAR-2014 13:07  410462    282
25-MAR-2014 13:31  410463    284
25-MAR-2014 13:34  410464    284
25-MAR-2014 13:39  410465    284
25-MAR-2014 13:40  410466    284
25-MAR-2014 13:58  410467    282
25-MAR-2014 13:59  410468    282   ==> gap of 989 occurs here
25-MAR-2014 18:00  411457    282
25-MAR-2014 18:01  411458    282
25-MAR-2014 18:06  411459    282
The query below was used to obtain the results seen above:
select to_char(creation_date, 'DD-MON-YYYY HH24:MI'), delivery_id, organization_id
from wsh.wsh_new_deliveries
where trunc(creation_date) = trunc(sysdate)
order by 1;
STEPS
-----------------------
The issue can be reproduced as follows:
1) Open the Shipping Transactions Form (STF).
2) Select a Detail.
3) Select Actions--Autocreate Delivery.
4) Repeat this many times, then wait some time (a few hours).
5) Select Actions--Autocreate Delivery again; the new delivery_id shows a large gap.
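To spot the gaps without scanning the full list by eye, a gap-detection query along the following lines can be used. This is a sketch, not part of the original note; it assumes the same wsh.wsh_new_deliveries table queried above and uses the standard LAG analytic function:

-- Sketch: list adjacent delivery_ids created today whose gap exceeds 1.
select prev_id, delivery_id, delivery_id - prev_id as gap
from (select delivery_id,
             lag(delivery_id) over (order by delivery_id) as prev_id
      from wsh.wsh_new_deliveries
      where trunc(creation_date) = trunc(sysdate))
where delivery_id - prev_id > 1
order by delivery_id;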
CAUSE
-----------------------
The large gaps in delivery_ids occur because in R12 the default value for CACHE_SIZE is 1000, whereas in 11i the default was 20. With CACHE_SIZE = 1000, the database keeps the next 1000 sequence values in memory for pick release to use. When pick release is done and the database flushes objects from the shared pool, whatever cached numbers were not used are lost. The next time pick release or autocreating deliveries runs, another 1000 numbers are allocated and the process repeats.
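The effect can be demonstrated with a throwaway test sequence. This is a sketch for a Test instance only; the sequence name gap_demo_s is hypothetical, and ALTER SYSTEM FLUSH SHARED_POOL requires DBA privileges and should never be run on a production system:

create sequence gap_demo_s cache 1000;     -- hypothetical test sequence
select gap_demo_s.nextval from dual;       -- returns 1; values 2-1000 are cached
alter system flush shared_pool;            -- simulates the sequence aging out
select gap_demo_s.nextval from dual;       -- returns 1001, not 2: 999 values lost
drop sequence gap_demo_s;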
SOLUTION
-----------------------
To implement the solution, execute the following steps:

1. Perform the alter statement outside business hours, in a Test instance first.
2. Log in as the WSH user and run this statement to change the cache size to 20:

ALTER SEQUENCE WSH_NEW_DELIVERIES_S CACHE 20
/

3. Observe the performance of processes such as Pick Release and Process Deliveries (SRS) for a couple of days. If everything is fine, keep the cache at 20.
4. Migrate the solution as appropriate to other environments.
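To confirm the new setting took effect, a quick check can be run as WSH (a minimal sketch using the standard USER_SEQUENCES dictionary view):

select sequence_name, cache_size
from user_sequences
where sequence_name = 'WSH_NEW_DELIVERIES_S';   -- cache_size should now show 20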
NOTES on the WSH_NEW_DELIVERIES_S sequence:
To obtain the values for the sequence, run the following:
select * from all_sequences
where sequence_name = 'WSH_NEW_DELIVERIES_S';
The output will be similar to this:
SEQUENCE_OWNER = WSH
SEQUENCE_NAME  = WSH_NEW_DELIVERIES_S
MIN_VALUE      = 1
MAX_VALUE      = 2147483647
INCREMENT_BY   = 1
CYCLE_FLAG     = N
ORDER_FLAG     = N
CACHE_SIZE     = 1000
LAST_NUMBER    = 406457
=================================================
The delivery_id gaps can happen because of CACHE_SIZE = 1000.
This parameter keeps the next 1000 sequence values in memory.
For example, run select WSH_NEW_DELIVERIES_S.nextval from dual.
The value returned is the next sequence value, and the following values up to the cached high-water mark are held in memory, so that subsequent WSH_NEW_DELIVERIES_S.nextval calls do not have to update sys.seq$ and commit.
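This can be seen by comparing NEXTVAL with LAST_NUMBER (a sketch; the numbers in the comments are illustrative only):

select wsh_new_deliveries_s.nextval from dual;   -- e.g. returns 406458

select last_number                               -- the high-water mark recorded in
from all_sequences                               -- sys.seq$: roughly CACHE_SIZE ahead
where sequence_name = 'WSH_NEW_DELIVERIES_S';    -- of the value just returned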
But if either of the scenarios below happens, the cached values will be lost:
1) The database restarts (unlikely), or
2) The sequence is not used for a while and the database, which controls the flushing of objects in the shared pool, ages it out. If that flushing happens, you will lose the up-to-999 numbers that were held in memory.
Please be advised that it is nearly impossible to keep the sequence gap free. However, to control it to some extent, you can make the cached values more likely to survive by increasing the size of the shared pool. But even if you make your shared pool 1000 TB in size, the database is still free to flush objects (including the sequence) out of the pool whenever it chooses.
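One mitigation worth knowing about, not mentioned in the original note, is to pin the sequence in the shared pool so that normal aging does not discard it. DBMS_SHARED_POOL.KEEP accepts flag 'Q' for sequences; the package may need to be installed from ?/rdbms/admin/dbmspool.sql, and the pin must be reapplied after every instance restart:

begin
  -- Pin the sequence so shared pool aging does not discard its cached values.
  dbms_shared_pool.keep('WSH.WSH_NEW_DELIVERIES_S', 'Q');
end;
/
=================================================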
In 11i, the cache size defaulted to 20.
From 12.1.3, to improve scalability of EBS in general, and especially on RAC, sequence caches were increased to a minimum value of 1000 for non-setup tables, or tables that can grow into the tens of thousands or more rows for large customers.
=================================================
You need to log in as the WSH user and run the alter statement to change the cache. Notes:
1) Perform the alter statement outside business hours.
2) The chances of the cache size being overwritten are low, but if a patch changes the cache size of WSH_NEW_DELIVERIES_S, the patch can overwrite it.
3) If any shipping patch is applied that includes wshseq.odf, check whether wshseq.odf contains an alter to WSH_NEW_DELIVERIES_S.
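Accordingly, after applying any shipping patch that includes wshseq.odf, it is worth re-checking the cache setting (a sketch using the standard DBA_SEQUENCES view):

select sequence_owner, sequence_name, cache_size
from dba_sequences
where sequence_owner = 'WSH'
and sequence_name = 'WSH_NEW_DELIVERIES_S';   -- confirm cache_size is still 20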