Wednesday, October 9, 2013

INVENTORY MANAGEMENT (in BW)

Symptom
A query with non-cumulative key figures does not display the expected values. You want to use the transaction LISTCUBE to find out whether the data in the cube is already wrong.

Since non-cumulative key figures are calculated during query runtime (they are not physically stored in the fact tables), you do not know how to derive the query result from the data in the cube.
Other terms
inventory cubes, ncum, 0IC_C03, 2LIS_03_BX, 2LIS_03_BF, 2LIS_03_UM
Reason and Prerequisites
In order to interpret query results correctly, it is very important to understand the basic concept of cubes with non-cumulative key figures. This note should help you to acquire this knowledge by providing sufficient technical background information.


CONTENTS

[I]   Definitions and Metadata
[II]  Process Flow
      [II.a] Example
      [II.b] Loading
      [II.c] Compression & Updating Reference Points
      [II.d] Calculation of Non-Cumulatives in a Query (Reporting)
      [II.e] Validity Table
[III] Stock Initialization & DTP
[IV]  Data Mart
[V]   Performance & Memory Consumption
[VI]  Analysis of Problems
  • [I] Definitions and Metadata (sample key figure 0TOTALSTCK)

(A) Values of non-cumulative (ncum) key figures cannot be cumulated (summed up) in a meaningful way over time. The 'result' is calculated during query runtime; in the majority of cases the exception aggregation LAST is used.

(B) Key Figure Definition
In RSD1 you can see that 'normal' cumulative key figures are assigned to such a non-cumulative key figure; they describe the inflow and outflow (or the value change (delta)). In the query the ncum key figure is then calculated (for the requested days) from the data provided by the 'inflow' and 'outflow' key figures.
Example:
non-cumulative key figure 0TOTALSTCK:
outflow 0ISSTOTSTCK (cumulative key figure)
inflow  0RECTOTSTCK (cumulative key figure)

(C) Time Reference Characteristic
It is the most detailed time characteristic of the cube (often 0CALDAY) and is used to determine the values of all other time characteristics.

(D) Fact Tables
If you use the key figure 0TOTALSTCK in an infocube, the actual fact tables will only contain columns for the cumulative inflow and outflow key figures 0RECTOTSTCK and 0ISSTOTSTCK. There is no column for the ncum key figure 0TOTALSTCK.
The values are only calculated during query runtime and are not stored in the fact tables! That is why you cannot find this key figure in the transaction LISTCUBE either.

(E) Reference Points
Technically, only non-cumulative changes (movements) and the stock referring to a certain point in time (so-called reference points or markers) are stored in the fact tables. In order to distinguish between these two kinds of values, the technical infoobject 0RECORDTP (part of the package dimension) was introduced.
For movements we have 0RECORDTP=0; for the reference points 0RECORDTP is always equal to 1 and the time reference characteristic is set to 'infinity' (e.g. 0CALDAY = 31.12.9999).
In case the cube is fully compressed and you are only interested in the current stock values of a certain material, the query only needs to read the marker and does not have to carry out calculations using the inflow and outflow.
By default the transaction LISTCUBE only reads data records where 0RECORDTP=0! Hence, if you want to get the reference points displayed, you need to restrict this infoobject explicitly to 1 (or to the interval [0,1]).

(F) Validity
As the values of non-cumulatives are computed by reading the reference points and the non-cumulative changes, it is in principle possible to compute a value for every point in time. To be able to do this calculation, you need to know for which time interval the non-cumulative can be calculated. By default, the minimum and maximum loaded values of the time reference characteristic are used as the boundaries of this validity area.


(G) Online Help / How-to Paper
Please search for the term 'Non-Cumulatives' in the online help or directly open the URL
http://help.sap.com/saphelp_nw70/helpdata/en/8f/da1640dc88e769e10000000a155106/frameset.htm

There is a "How to..." Paper for Inventory Management Scenarios in BI which can be found in SAP Developer Network (SDN), under https://www.sdn.sap.com/irj/sdn/howtoguides choose SAP NetWeaver 2004 -> Business Intelligence -> How to Handle Inventory Management Scenarios
  • [II] Process Flow

In this chapter we always refer to a very simple example that should help you understand the process steps and definitions more quickly.
  • [II.a] Example

STEP 1:
A stock initialization is performed on 06.05.2010 for the material A. The current stock value is 100 PC.
STEP 2:
For this material there are also movements for calday < 06.05.2010. Only the movements from the last 2 days before the initialization are extracted.
STEP 3:
A day after the stock initialization a new movement is posted and loaded into the cube. The current stock is then 90.
STEP 4: Compression of stock initialization
STEP 5: Compression of historical data
STEP 6: Compression of new movements
STEP 7: Running queries

relevant stock values in ERP (transaction MB5B):
03.05.2010:  40 PC
04.05.2010:  60 PC
05.05.2010: 100 PC
06.05.2010: 100 PC
07.05.2010:  90 PC
Please note that the stock for a certain day always already includes all changes of that day. Hence, by looking at the table above, we know that e.g. on 05.05. 40 PC must have been added and that 'nothing' happened on 06.05.
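
To make the relation between end-of-day stock and movements explicit, here is a minimal Python sketch (illustrative only, not SAP code; dates and values are taken from the table above) that derives the daily deltas:

# Derive the daily movements (deltas) from the end-of-day stock values.
stock = {                       # end-of-day stock in PC, as shown by MB5B
    "03.05.2010": 40,
    "04.05.2010": 60,
    "05.05.2010": 100,
    "06.05.2010": 100,
    "07.05.2010": 90,
}
days = list(stock)
for prev, cur in zip(days, days[1:]):
    delta = stock[cur] - stock[prev]   # > 0: receipt (inflow), < 0: issue (outflow)
    print(f"{cur}: delta {delta:+d} PC")
# Output: +20 on 04.05., +40 on 05.05., +0 on 06.05. ('nothing' happened), -10 on 07.05.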
  • [II.b] Loading

For inventory cubes there is a special loading process called stock initialization. It transfers the current non-cumulatives once from the source system to the BW system. With the initialization you are only able to post one initial value for each combination of characteristics of the data source. Afterwards you can start loading 'historical data' and new movements (changes) using a different data source:

With 2LIS_03_BX the initialization of the current stock is carried out.
With 2LIS_03_BF you can load the 'historical data' and the new data (movements).

For all data records of the stock initialization we have 0RECORDTP=1. They are called reference points (markers) and play an important role when the non-cumulative values are calculated during query runtime.
Since the stock initialization refers to a certain point in time, we must distinguish between movements that happened before and after this moment. The former are called 'historical data', the latter are new movements. Once the delta initialization (not the stock initialization!) has been carried out, the historical data (back as far as required) can be loaded into the cube. (It is not allowed to have historical data and new movements in one request.)

It is not mandatory to carry out a stock initialization (but it is necessary e.g. in case not all movements still exist in the source system, or if you do not want to load all movements because of the amount of data). If you skip it and load only movements, the system assumes that the initial stock is zero for all materials. New markers are then created whenever requests with 'new' materials are compressed.
Please review also point [III]Stock Initialization & DTP.
The standard extractor for 2LIS_03_BX can also deliver records with stock values = 0 from the source system. In order to activate this option you have to set the flag "Zero stock to be transferred" in the transaction MCNB. Please be aware that this may lead to a considerable runtime increase.


                            EXAMPLE

STEP 1: Loading of stock initialization
RequestID 1000
initialization: using 2LIS_03_BX
record:
material A -> 100 PC (on 06.05.2010)

STEP 2: Loading of historical data
RequestID 2000
historical data: using 2LIS_03_BF
records:
material A -> 40 PC added on 05.05.2010
material A -> 20 PC added on 04.05.2010

STEP 3: Loading of new movements(deltas)
RequestID 3000
delta (new movement): using 2LIS_03_BF
record:
material A -> 10 PC removed on 07.05.2010


F-table:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
2000        0      04.05.2010                  20 PC
2000        0      05.05.2010                  40 PC
3000        0      07.05.2010     10 PC
1000        1      31.12.9999                 100 PC

Please note that the marker refers to the calendar day 31.12.9999 and that recordtp is equal to 1!

E-table: still empty (we have not compressed yet)
  • [II.c] Compression & Updating Reference Points

After the initialization the reference point represents the stock value of a certain material at the time of the initialization. If movements (not historical data) are loaded, the reference points are updated during compression of the requests, provided the flag 'No Marker Update' (RSA1 -> Manage -> tab Collapse) is NOT set (this is the default setting). Hence, for a fully compressed cube, the reference point tells you the current stock value of a certain material.
*******************      Important      ****************
There is one very important exception regarding the update of the marker: if you compress historical data, the reference points must NOT be adjusted! Otherwise you get wrong data in the cube, since these movements are already included in the marker of the stock initialization! Such inconsistencies cannot be repaired automatically; once the marker is wrong, a reload of the data into the cube is necessary.
Please also note that the request with the historical data must be compressed, since the query 'cannot know' that these records should not be added to the marker (see Example: STEP 7, point (A))!
*********************************************************

                            EXAMPLE

STEP 4: Compression of request 1000 (with marker update)

F-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
2000        0      04.05.2010                  20 PC
2000        0      05.05.2010                  40 PC
3000        0      07.05.2010     10 PC


E-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
   0        1      31.12.9999                 100 PC


STEP 5: Compression of request 2000, historical data (WITHOUT marker update!)

F-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
3000        0      07.05.2010     10 PC


E-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
   0        0      04.05.2010                  20 PC
   0        0      05.05.2010                  40 PC
   0        1      31.12.9999                 100 PC


STEP 6: Compression of request 3000, new deltas (with marker update)

F-table after compression: empty

E-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
   0        0      04.05.2010                  20 PC
   0        0      05.05.2010                  40 PC
   0        0      07.05.2010     10 PC
   0        1      31.12.9999                  90 PC


As we already know, the marker is defined in such a way that it refers to the date 31.12.9999. For inventory cubes this date is exclusively reserved for the marker; it is not allowed to use this date for movements!

Even if you use 'No Marker Update = X', it may happen that the number of records with recordtp = 1 increases during compression. The reason is that the compression adds marker records (it is assumed that the stock value was zero before the change) for movements for which no corresponding marker records exist yet (see also note 1795551).
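
The marker update logic during compression can be modeled in a few lines. The following Python sketch is a simplified illustration (not SAP code) and reproduces the marker values of the E-tables shown above:

# Simplified model of request compression for an inventory cube.
# The marker is the reference point (0RECORDTP = 1, calday 31.12.9999).
def compress(request_deltas, marker, historical=False):
    # Move a request from the F- to the E-table; the marker is updated
    # unless the request contains historical data ('No Marker Update' set).
    if not historical:
        marker += sum(request_deltas)
    return marker

marker = 0
marker = compress([+100], marker)                        # STEP 4: request 1000
marker = compress([+20, +40], marker, historical=True)   # STEP 5: request 2000
marker = compress([-10], marker)                         # STEP 6: request 3000
print(marker)   # 90 PC, as in the E-table after STEP 6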
  • [II.d] Calculation of Non-Cumulatives in a Query (Reporting)

The query calculates the stock for the requested points in time (t) by using the movements (inflow and outflow) and the marker.
It starts with the calculation of the current stock value (CSV): [marker + (total delta (change, movement) of all uncompressed requests)]:
CSV = [M + SUM(deltas with requestid > 0)]
Then it calculates 'backwards' the stock value for the requested day (t):
Value(t) = CSV - SUM(deltas with time > t)


                              EXAMPLE

STEP 7: Running Queries

(A) Query executed after STEP 4 was carried out:

0TOTALSTCK (on 04.05.) = [100 (marker in E-table) + 50 (uncompressed requests)] + 10 (movement 07.05.) - 40 (movement 05.05.) = 120 PC

120 PC is wrong! The value is 60 PC too large because the query also took the (still uncompressed) historical movements into account, although they are already contained in the marker. That is why you must always compress the request with the historical data records!

(B) Query executed after STEP 5 was carried out:

0TOTALSTCK (on 04.05.) = [100 (marker in E-table) - 10 (uncompressed request)] + 10 (movement 07.05.) - 40 (movement 05.05.) = 60 PC

(C) Query executed after STEP 6 was carried out:

0TOTALSTCK (on 07.05.) = [90 (marker in E-table)] = 90 PC
Please note that in this case (fully compressed cube, current stock requested) the calculation was very simple (and fast): the query only had to read the marker.
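
The two formulas from the beginning of this chapter can be written down directly. The following Python sketch (illustrative only, not SAP code) reproduces the three query results (A)-(C) above:

# CSV      = marker + sum of the deltas of all uncompressed requests
# Value(t) = CSV - sum of the deltas with time > t
def value_at(t, marker, compressed, uncompressed):
    # compressed/uncompressed: lists of (calday, delta), calday as 'YYYYMMDD'
    csv = marker + sum(delta for _, delta in uncompressed)
    later = sum(delta for day, delta in compressed + uncompressed if day > t)
    return csv - later

# (A) after STEP 4: the historical request is still uncompressed -> wrong result
print(value_at("20100504", 100, [],
               [("20100504", 20), ("20100505", 40), ("20100507", -10)]))      # 120
# (B) after STEP 5: historical data compressed without marker update
print(value_at("20100504", 100, [("20100504", 20), ("20100505", 40)],
               [("20100507", -10)]))                                          # 60
# (C) after STEP 6: fully compressed, current stock = marker
print(value_at("20100507", 90,
               [("20100504", 20), ("20100505", 40), ("20100507", -10)], []))  # 90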
  • [II.e] Validity Table

By default the so-called validity table contains only the time reference characteristic and the interval for which non-cumulative values are defined (and can be displayed in a query). The validity interval is automatically adjusted when new data is loaded into the cube (by default it spans the minimal and the maximal value of the time reference characteristic over all entries that have been loaded into the infocube).
Please also see note 1501907 and the online help (see above).
It might be necessary to add an additional characteristic to the validity table, e.g. if the validity interval should depend on the plant (or on the source system). Please add such validity-determining characteristics only if really necessary, since too many of them (or a validity-determining characteristic with a lot of characteristic values) decrease the performance considerably.

The validity table has the naming convention /BIC/L"ICUBE" (where "ICUBE" stands for the name of the infocube). The query only takes into account data records where SID_0REQUID is equal to -1 (or -2). These records contain the relevant validity interval for the entire cube (which is calculated by taking into account all requests). You can change the validity independently of the data loaded into the cube by using the transaction RSDV; in such a case the corresponding record in the validity table has SID_0REQUID=-2.

If the cube is already filled, an adaptation (removing/adding validity objects) of the validity table is only possible with the report RSDG_CUBE_VALT_MODIFY.

If the validity table is inconsistent, it can be rebuilt using the function module RSDV_VALID_RECREATE. For the parameter I_REBUILD_FROM_FACT you can choose between two values:

= space : the values for request -1 are recomputed by aggregating all reporting-relevant requests (only the validity table itself is read)
= X     : the values are recomputed; the complete fact tables must be read


                          EXAMPLE

In our example we would have the following entry in the L-table:

SID_0REQUID  SID_0CALDAY_F SID_0CALDAY_T MODEFROM MODETO
--------------------------------------------------------
    -1        20100504      20100507

The record with SID_0REQUID=-1 is the relevant one for the OLAP engine. If we change this interval manually in the transaction RSDV to [04.05.2010 - 10.05.2010], the L-table gets an additional entry (with SID_0REQUID=-2):

SID_0REQUID  SID_0CALDAY_F SID_0CALDAY_T MODEFROM MODETO
--------------------------------------------------------
    -1        20100504      20100507
    -2           0          20100510              F

A query (without any restrictions) in which the infoobject 0CALDAY is drilled down now displays the following result:

calday          0TOTALSTCK
04.05.2010      60 PC
05.05.2010     100 PC
06.05.2010     100 PC
07.05.2010      90 PC
08.05.2010      90 PC  <-
09.05.2010      90 PC  <-
10.05.2010      90 PC  <-

Please note that for the days 08.05.-10.05. (the extended part) the value from 07.05. was carried forward.
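
This carry-forward behavior corresponds to the exception aggregation LAST. The following minimal Python sketch (illustrative only, not SAP code; dates and values taken from the example) reproduces the result table above:

from datetime import date, timedelta

# Stock values resulting from marker and movements (see chapter [II.d]).
stock_by_day = {date(2010, 5, 4): 60, date(2010, 5, 5): 100, date(2010, 5, 7): 90}
valid_from, valid_to = date(2010, 5, 4), date(2010, 5, 10)   # extended via RSDV

last = None
day = valid_from
while day <= valid_to:
    last = stock_by_day.get(day, last)       # carry the last known value forward
    print(day.strftime("%d.%m.%Y"), last, "PC")
    day += timedelta(days=1)
# 08.05.-10.05. print 90 PC, the value carried forward from 07.05.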
In case you have, in addition to 0CALDAY, e.g. the infoobject 0PLANT as a validity object, the processing of the query gets more complicated. Let's assume we have the following validities:

SID_0REQUID SID_0PLANT SID_0CALDAY_F SID_0CALDAY_T MODEFROM MODETO
-------------------------------------------------------------------
      -1        1         20100504    20100507
      -1        2         20100504    20100512

Please note that the validity intervals for the two plants are not identical. If non-cumulative key figures are aggregated across several validity characteristics, it may be necessary to determine non-cumulative values outside the corresponding validity periods. For this the system determines a kind of superset of all involved validity periods; please see note 1501907 for more details on this. In case the validity object is drilled down, the system highlights the values which are outside the validity by enclosing them in square brackets.
Let's discuss this again with the help of our example. We assume that the total stock for plant 2 on 10.05.2010 is 75 PC. If we restrict a query to this day, we get the following overall result:

0TOTALSTCK
165 PC

In case 0plant gets drilled down, the query displays the following

0PLANT  0TOTALSTCK
   1      [90]
   2       75
Result    165

Please note that the value 90 for plant 1 is actually outside of the validity interval and is therefore displayed in brackets. If 0PLANT were restricted to the value 1 (a hard filter under 'Characteristic Restrictions' that cannot be changed during query navigation), the query would not display the value in brackets but would issue the message 'No applicable data found' (since an aggregation over different plants does not take place anyway).
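
The bracketing logic can be sketched as follows (illustrative Python, not SAP code; it reproduces the drilldown above):

# Per-plant validity intervals (calday as 'YYYYMMDD') and the stock values
# carried forward up to the requested day 10.05.2010.
validity = {"1": ("20100504", "20100507"), "2": ("20100504", "20100512")}
value = {"1": 90, "2": 75}
requested = "20100510"

total = 0
for plant in sorted(value):
    lo, hi = validity[plant]
    outside = not (lo <= requested <= hi)    # outside the plant's own validity?
    print(plant, f"[{value[plant]}]" if outside else f" {value[plant]}")
    total += value[plant]
print("Result", total)   # 165 PC, as in the query result above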

Important Remark: The non-cumulative logic works only if no posted non-cumulative movements exist outside the validity interval! Please review note 1501907 for more details.
  • [III] Stock Initialization & DTP

If you use a BW 7.0 data source and a DTP in order to carry out the stock initialization, the following points have to be fulfilled:

- for the data source the setting 'Opening Balance' has to be active (field STOCKUPD of table ROOSOURCE (source system) or RSOLTPSOURCE (BW 3.x)/RSDS (BW 7.x), respectively)
- the data target has to be a cube (with non-cumulative key figures; NCUMTIM in table RSDCUBE filled)
- only then does the DTP offer the extraction mode 'Initial Non-Cumulative for Non-Cumulative Values', which must be chosen in order to get the time reference characteristic and 0RECORDTP filled correctly
  • [IV] Data Mart in BW7x

If you want to use the data mart interface to update data from an inventory cube into another one (see note 375098 for further details; the procedure depends on the BW release!), please note the following:

- if both the target and the source cube contain non-cumulative key figures (the system checks whether the field NCUMTIM in table RSDCUBE is filled), the DTP offers the extraction mode 'Initial Non-Cumulative for Non-Cumulative Values'. This mode has to be used for the stock initialization. The system then automatically reads all records with 0RECORDTP=1 from the source and updates the data records correctly into the target.

-the source cube must be completely compressed, otherwise data consistency cannot be guaranteed!
  • [V] Performance & Memory Consumption
    • As for all basic cubes, you should make sure that as many requests as possible are compressed (see note 590370). For inventory cubes the system has to take into account all uncompressed requests when the marker is calculated; this additionally contributes to a long runtime in the case of uncompressed cubes.
    • Please use validity-determining characteristics only if really necessary, since too many of them (or a validity-determining characteristic with a lot of characteristic values) decrease the performance considerably. See also chapter [II.e].
    • It is important to restrict the query as far as possible regarding time. This obviously improves the runtime by simply lowering the number of points in time for which ncum values have to be calculated. Keep in mind that fewer calculations during query runtime also reduce the memory requirements.

  • [VI] Analysis of Problems

In case it is necessary to look at a non-cumulative value in a query more closely, you can follow the step-by-step description below:

1) Simplify the query: this helps a lot to work fast and to keep track of things
    • identify a material and set a corresponding filter
    • add 0CALDAY (or the time reference characteristic) to the free characteristics
    • add time filters and focus on one certain point in time
    • if necessary add validity characteristics to the drill down
    • take all 3 relevant key figures (0TOTALSTCK, 0RECTOTSTCK, 0ISSTOTSTCK) into the structure (no restricted/calculated key figures)

2) Compare query result with LISTCUBE
    • under 'field selections for output' choose the material, the time reference characteristic, the validity characteristics, requestid, recordtp, 0ISSTOTSTCK and 0RECTOTSTCK
    • restrict material to the chosen value
    • display the marker (0RECORDTP = 1) and remember the value
    • display the deltas (0RECORDTP = 0) and sort them by time
    • check whether the cumulative key figures in the query match the values in LISTCUBE (i.e. the deltas have been posted correctly)
    • check how many records are not yet compressed (not yet added to the marker) and hence have to be used in the calculation of the current stock
    • calculate the marker and check it for correctness
    • calculate the stock for the requested point in time (see chapter [II.d] and the sketch below)
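
For step 2) the cross-check of the current stock against the LISTCUBE output can be done with a few lines. A minimal Python sketch (not SAP code; the rows correspond to the fully compressed cube of our example):

# LISTCUBE rows: (requestid, recordtp, calday, 0ISSTOTSTCK, 0RECTOTSTCK)
rows = [
    (0, 0, "20100504",  0, 20),
    (0, 0, "20100505",  0, 40),
    (0, 0, "20100507", 10,  0),
    (0, 1, "99991231",  0, 90),   # the marker (restrict 0RECORDTP to 1 to see it)
]
marker       = sum(rec - iss for req, tp, _, iss, rec in rows if tp == 1)
uncompressed = sum(rec - iss for req, tp, _, iss, rec in rows if tp == 0 and req > 0)
print(marker + uncompressed)   # current stock value (CSV) = 90 PC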

3a) if the query and LISTCUBE results don't match, please search for relevant notes under the component BW-BEX-OT-NC. In case no note can be found, please contact SAP support with the technical name of the simplified query and the corresponding LISTCUBE report (see note 1479893).

3b) if the query and LISTCUBE results match, the data is already wrong in the cube - continue with 4)

4) check the PSA tables for the data sources 2LIS_03_BX, 2LIS_03_BF and 2LIS_03_UM: do the values correspond to the LISTCUBE output?

5a) if no, the data extraction from the source system has to be analyzed in detail. The corresponding component is BW-BCT-MM-IM (if the delivered data sources 2LIS_03_BF and 2LIS_03_BX are used); see also note 586163 for more background information.

5b) if yes, continue with the typical issues listed under 6).

6) typical issues:
    • a) Did you compress the requests correctly, with or without marker update? In case a request was compressed with 'No Marker Update', you should find the following message in the SLG1 log: Loading of historical data switched on
    • b) Did you carry out the stock initialization with a DTP using the extraction mode 'Initial Non-Cumulative for Non-Cumulative Values'?
    • c) In case the query displays no values at all, please check the validity table
    • d) Do you have characteristics in the drilldown that were not loaded by the data source 2LIS_03_BX (e.g. storage location)? Please see note 589024 for further details.
    • e) Please pay attention to the order of the loaded requests. First the initialization needs to be done (data source 2LIS_03_BX), then the movements can be loaded. Please see note 1795551 for further details.

Friday, October 4, 2013

Differences Between Articles and Materials


In the case of the Retail solution, the central object in all logistics processes is the article, whereas in industry it is the material.
Retail and industry have different needs in terms of the corresponding master data and maintenance options. For example, to optimally process all its fashion processes, retail needs variants (such as color and size). Industry, however, uses bills of material for special requirements in the area of production planning. Due to the large number of articles in the retail industry, master data maintenance must also be integrated into processing.
To best support the different processes in retail and industry, ECC provides different maintenance transactions for materials and articles. In contrast to the material master, the article master offers the following additional features:
·  Specific additional data is integrated, for example, purchasing info records, listings and sales price calculations.
·  Several organizational levels (for example, sites and distribution chains) and, in the case of generic articles, several articles can be processed online in one single maintenance transaction.
Whereas in the material master you can process only one material or one plant at a time, in the article master the system transfers data from higher levels to the dependent levels, provided you have not maintained these levels differently. For example, the system transfers data from a generic article to all its variants. It also transfers data from reference sites to individual sites.