Wednesday, October 9, 2013

INVENTORY MANAGEMENT (in BW)

Symptom
A query with non-cumulative key figures does not display the expected values. You want to use the transaction LISTCUBE to find out whether the data in the cube is already incorrect.

Since non-cumulative key figures are calculated during query runtime (they are not physically stored in the fact tables), you do not know how to derive the query result from the data in the cube.
Other terms
inventory cubes, ncum, 0IC_C03, 2LIS_03_BX, 2LIS_03_BF, 2LIS_03_UM
Reason and Prerequisites
In order to be able to interpret query results correctly, it is very important to understand the basic concept of cubes with non-cumulative key figures. This note should help you to acquire this knowledge by providing sufficient technical background information.


CONTENTS

[I]   Definitions and Metadata
[II]  Process Flow
      [II.a] Example
      [II.b] Loading
      [II.c] Compression & Updating Reference Points
      [II.d] Calculation of Non-Cumulatives in a Query (Reporting)
      [II.e] Validity Table
[III] Stock Initialization & DTP
[IV]  Data Mart
[V]   Performance & Memory Consumption
[VI]  Analysis of Problems
  • [I] Definitions and Metadata (sample key figure 0TOTALSTCK)

(A)Values of non-cumulative (ncum) key figures cannot be cumulated (summed up) in a meaningful way over time. The 'result' is calculated during query runtime; in the majority of cases the exception aggregation LAST is used.

(B)Key Figure Definition
In RSD1 you can see that 'normal' cumulative key figures, which describe the inflow and outflow (or the value change, i.e. the delta), are assigned to such a non-cumulative key figure. In the query, the ncum key figure is then calculated (for the requested days) from the data provided by the 'inflow' and 'outflow' key figures.
Example:
non-cumulative key figure 0TOTALSTCK:
outflow 0ISSTOTSTCK (cumulative key figure)
inflow  0RECTOTSTCK (cumulative key figure)

(C)Time Reference Characteristic
It's the most detailed time characteristic of the cube (often 0calday) and is used to determine the values of all other time characteristics.

(D)Fact Tables
If you use the key figure 0TOTALSTCK in an InfoCube, the actual fact tables will only contain columns for the cumulative inflow and outflow key figures 0RECTOTSTCK and 0ISSTOTSTCK. There is no column for the ncum key figure 0TOTALSTCK.
The values are only calculated during query runtime and are not stored in the fact tables! That is why you cannot find this key figure in the transaction LISTCUBE either.

(E)Reference Points
Technically, only non-cumulative changes (movements) and the stock referring to a certain point in time (so-called reference points or markers) are stored in the fact tables. In order to distinguish between these two kinds of values, the technical InfoObject 0RECORDTP (part of the package dimension) was introduced.
For movements we have 0RECORDTP = 0; for the reference points 0RECORDTP is always equal to 1 and the time reference characteristic is set to 'infinity' (e.g. 0CALDAY = 31.12.9999).
If the cube is completely compressed and you are only interested in the current stock values of a certain material, the query only has to read the marker and does not have to carry out calculations using the inflow and outflow.
By default the transaction LISTCUBE only reads data records where 0RECORDTP = 0! Hence, if you want the reference points to be displayed, you need to restrict this InfoObject explicitly to 1 (or to the interval [0,1]).

(F)Validity
As the values of non-cumulatives are computed by reading the reference points and the non-cumulative changes, it is in principle possible to compute a value for every point in time. To be able to do this calculation, you need to know for which time interval the non-cumulative can be calculated. By default, the minimum and maximum loaded values of the time reference characteristic are used as the boundaries of the validity area.


(G)Online Help/ How to Paper
Please search for the term 'Non-Cumulatives' in the online help or directly execute the URL
http://help.sap.com/saphelp_nw70/helpdata/en/8f/da1640dc88e769e10000000a155106/frameset.htm

There is a "How to..." paper for Inventory Management Scenarios in BI, which can be found in the SAP Developer Network (SDN): under https://www.sdn.sap.com/irj/sdn/howtoguides choose SAP NetWeaver 2004 -> Business Intelligence -> How to Handle Inventory Management Scenarios.
  • [II]Process Flow

In this chapter we always refer to a very simple example, which should help you understand the process steps and definitions faster.
  • [II.a] Example

STEP 1:
A stock initialization is performed on 06.05.2010 for the material A. The current stock value is 100 PC.
STEP 2:
For this material there are also movements for calday < 06.05.2010. Only the movements from the last 2 days before the initialization are extracted.
STEP 3:
A day after the stock initialization a new movement is posted and loaded into the cube. The current stock is then 90.
STEP 4: Compression of stock initialization
STEP 5: Compression of historical data
STEP 6: Compression of new movements
STEP 7: Running queries

relevant stock values in ERP(transaction MB5B):
03.05.2010:  40 PC
04.05.2010:  60 PC
05.05.2010: 100 PC
06.05.2010: 100 PC
07.05.2010:  90 PC
Please note that the stock for a certain day already includes all changes of that day. Hence, by looking at the table above, we know that e.g. 40 PC must have been added on 05.05. and that 'nothing' happened on 06.05.
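To make the relationship between daily closing stock and movements explicit, here is a minimal Python sketch (the figures are the MB5B values from above; the script itself is only an illustration and not part of the SAP note):

# Derive the daily stock changes (movements) from the closing stock per day.
# The closing stock of a day already contains all movements of that day,
# so the movement of day t is simply stock(t) - stock(t-1).
daily_stock = {
    "03.05.2010": 40,   # PC
    "04.05.2010": 60,
    "05.05.2010": 100,
    "06.05.2010": 100,
    "07.05.2010": 90,
}

days = list(daily_stock)
for prev, curr in zip(days, days[1:]):
    change = daily_stock[curr] - daily_stock[prev]
    print(f"{curr}: change = {change:+d} PC")
# Output: 04.05.2010: +20 PC, 05.05.2010: +40 PC, 06.05.2010: +0 PC, 07.05.2010: -10 PC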
  • [II.b] Loading

For inventory cubes there is a special loading process called stock initialization. It transfers the current non-cumulatives once from the source system to the BW system. With the initialization you are only able to post one initial value for each combination of characteristics of the data source. Afterwards you can start loading 'historical data' and new movements (changes) using a different data source:

With 2LIS_03_BX the initialization of the current stock is carried out.
With 2LIS_03_BF you can load the 'historical data' and the new data (movements)

For all data records of the stock initialization we have 0RECORDTP=1. They are called reference points (marker) and play an important role when the non-cumulative values are calculated during query runtime.
Since the stock initialization refers to a certain point in time, we must distinguish between movements that happened before and after this moment. The former are called 'historical data', the latter are new movements. When the delta (not stock!) initialization is carried out, the historical data (back to a required date) can be loaded into the cube. (It is not allowed to have historical data and movements in one request.)

It is not mandatory to carry out a stock initialization (but it is necessary, for example, if not all movements still exist in the source system or if you do not want to load all movements because of the amount of data). If you skip it and load only movements, the system assumes that the initial stock is zero for all materials. New markers are then created whenever requests with 'new' materials are compressed.
Please review also point [III]Stock Initialization & DTP.
The standard extractor for 2LIS_03_BX can also deliver records with stock values = 0 from the source system. In order to activate this option you have to set the flag "Zero stock to be transferred" in the transaction MCNB. Please be aware that this may lead to a considerable runtime increase.


                            EXAMPLE

STEP 1: Loading of stock initialization
RequestID 1000
initialization: using 2LIS_03_BX
record:
material A -> 100 PC (on 06.05.2010)

STEP 2: Loading of historical data
RequestID 2000
historical data: using 2LIS_03_BF
records:
material A -> 40 PC added on 05.05.2010
material A -> 20 PC added on 04.05.2010

STEP 3: Loading of new movements(deltas)
RequestID 3000
delta (new movement): using 2LIS_03_BF
record:
material A -> 10 PC removed on 07.05.2010


F-table:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
2000        0      04.05.2010                  20 PC
2000        0      05.05.2010                  40 PC
3000        0      07.05.2010     10 PC
1000        1      31.12.9999                  100 PC

Please note that the marker refers to the calendar day 31.12.9999 and that recordtp is equal to 1!

E-table: still empty (we have not compressed yet)
  • [II.c] Compression & Updating Reference Points

After the initialization the reference point represents the stock value for a certain material at the time of the initialization. If movements (not historical data) are loaded, the reference points get updated during compression of the requests if the flag 'No Marker Update' (RSA1 -> Manage->tab Collapse) is NOT set (this is the default setting). Hence, for a fully compressed cube, the reference point tells you the current stock value for a certain material.
*******************      Important      ****************
There is one very important exception regarding the update of the marker: if you compress historical data, the reference points must NOT be adjusted!! Otherwise you get wrong data in the cube (since these movements are already included in the marker of the stock initialization)! Such inconsistencies cannot be repaired automatically; once the marker is wrong, a reload of the data into the cube is necessary.
Please also note that the request with the historical data must still be compressed, since the query 'can't know' that these records should not get added to the marker (see Example: STEP 7, point (A))!
*********************************************************

                            EXAMPLE

STEP 4: Compression of request 1000 (with marker update)

F-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
2000        0      04.05.2010                  20 PC
2000        0      05.05.2010                  40 PC
3000        0      07.05.2010     10 PC


E-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
   0        1      31.12.9999                  100 PC


STEP 5: Compression of request 2000 Historical Data (WITHOUT! marker update)

F-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
3000        0      07.05.2010     10 PC


E-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
   0        0      04.05.2010                  20 PC
   0        0      05.05.2010                  40 PC
   0        1      31.12.9999                  100 PC


STEP 6: Compression of request 3000 new deltas (with marker update)

F-table after compression: empty

E-table after compression:

requestid recordtp  calday      0ISSTOTSTCK   0RECTOTSTCK
------------------------------------------------------------
   0        0      04.05.2010                  20 PC
   0        0      05.05.2010                  40 PC
   0        0      07.05.2010     10 PC
   0        1      31.12.9999                  90 PC


As we already know, the marker is defined in such a way that it refers to the date 31.12.9999. For inventory cubes this date is exclusively reserved for the marker; it is not allowed to use this date for movements!

Even if you use 'No Marker Update = X', the number of records with recordtp = 1 may increase during compression. The reason is that the compression adds marker records (it is assumed that the stock value was zero before the change) for movements where no corresponding marker records exist (see also note 1795551).
  • [II.d]Calculation of Non-Cumulatives in a Query (Reporting)

The query calculates the stock for the requested points in time (t) by using the movements (inflow and outflow) and the marker.
It starts with the calculation of the current stock value (CSV): [marker + (total delta (change, movement) of all uncompressed requests)]:
CSV = M + SUM(deltas with requestid > 0)
Then it calculates the stock value for the requested day (t) 'backwards':
Value(t) = CSV - SUM(deltas with time > t)
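As a rough illustration of this two-step logic (not the actual OLAP implementation), the following Python sketch computes the stock for a given day from the marker, the compressed movements, and the uncompressed requests. The records correspond to the example below, with delta = 0RECTOTSTCK - 0ISSTOTSTCK:

from datetime import date

# Each movement: (requestid, calday, delta) where delta = inflow - outflow.
# requestid == 0 means the record is already compressed (E-table).
movements = [
    (0,    date(2010, 5, 4),  +20),   # compressed historical data
    (0,    date(2010, 5, 5),  +40),
    (3000, date(2010, 5, 7),  -10),   # uncompressed new delta
]
marker = 100  # reference point (0RECORDTP = 1), stock stored on 31.12.9999

def stock_on(day, marker, movements):
    # Step 1: current stock value = marker + deltas of all uncompressed requests
    csv = marker + sum(d for req, _, d in movements if req > 0)
    # Step 2: go backwards from 'now' to the requested day
    return csv - sum(d for _, t, d in movements if t > day)

print(stock_on(date(2010, 5, 4), marker, movements))  # 60 PC (case (B) in STEP 7)
print(stock_on(date(2010, 5, 7), marker, movements))  # 90 PC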


                              EXAMPLE

STEP 7: Running Queries

(A)query executed after STEP 4 was carried out:

0TOTALSTCK (on 04.05) = [100 (marker in E-table) + 50 (uncompressed requests)] + 10 (movement 07.05) - 40 (movement 05.05) = 120 PC

120 PC is wrong! The result is 60 PC too large because the query also added the historical movements, which are already contained in the marker of the stock initialization. That is why you must always compress the request with the historical data records!

(B)query executed after STEP 5 was carried out:

0TOTALSTCK (on 04.05) = [100 (marker in E-table) - 10 (uncompressed request)] + 10 (movement 07.05) - 40 (movement 05.05) = 60 PC

(C)query executed after STEP 6 was carried out:

0TOTALSTCK (on 07.05) = [90 (marker in E-table)] = 90 PC
Please note that in this case (fully compressed cube, current stock requested) the calculation was very simple (and fast): the query only had to read the marker.
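Using the same illustrative helper as above, case (A) can be reproduced by treating the historical request 2000 as uncompressed, which makes the double counting visible (again only a sketch, not the OLAP code):

from datetime import date

# State after STEP 4: marker = 100, requests 2000 (historical) and 3000 (delta)
# are still uncompressed, so their deltas are added on top of the marker.
uncompressed = [
    (2000, date(2010, 5, 4), +20),  # historical data, already contained in the marker!
    (2000, date(2010, 5, 5), +40),
    (3000, date(2010, 5, 7), -10),
]
marker = 100

csv = marker + sum(d for _, _, d in uncompressed)            # 150
value_0405 = csv - sum(d for _, t, d in uncompressed
                       if t > date(2010, 5, 4))              # 150 - 30 = 120
print(value_0405)  # 120 PC instead of the correct 60 PC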
  • [II.e] Validity Table

By default the so-called validity table contains only the time reference characteristic and the interval for which non-cumulative values are defined (and can be displayed in a query). The validity interval is automatically adjusted when new data is loaded into the cube (by default it spans the minimum and maximum values of the time reference characteristic over all entries that have been loaded into the InfoCube).
Please also see note 1501907 and the online help (see above).
It might be necessary to add an additional characteristic to the validity table, e.g. if the validity interval should depend on the plant (or on the source system). Please only add such validity-determining characteristics if it is really necessary, since too many of them (or a validity-determining characteristic with a lot of characteristic values) decrease the performance considerably.

The validity table has the naming convention /BIC/L<InfoCube>. The query only takes into account data records where SID_0REQUID is equal to -1 (or -2). These records contain the relevant validity interval for the entire cube (which is calculated by taking into account all requests). You can also change the validity independently of the data loaded into the cube by using the transaction RSDV. In such a case the corresponding record in the validity table has SID_0REQUID = -2.

If the cube is already filled, then an adaptation (removing/adding validity objects) of the validity table is only possible with the report RSDG_CUBE_VALT_MODIFY.

If the validity table is inconsistent, it can be rebuilt using the function module RSDV_VALID_RECREATE. For the parameter I_REBUILD_FROM_FACT you can choose between two values:

= space : recomputation of the values for request -1 by aggregating all reporting-relevant requests (only the validity table itself is read)
= X     : recomputation for which the complete fact tables must be read


                          EXAMPLE

In our example we would have the following entry in the L-table

SID_0REQUID  SID_0CALDAY_F SID_0CALDAY_T MODEFROM MODETO
--------------------------------------------------------
    -1        20100504      20100507

The record with SID_0REQUID = -1 is the relevant one for the OLAP engine. If we change this interval manually in the transaction RSDV to [04.05.2010 - 10.05.2010], the L-table gets an additional entry (with SID_0REQUID = -2):

SID_0REQUID  SID_0CALDAY_F SID_0CALDAY_T MODEFROM MODETO
--------------------------------------------------------
    -1        20100504      20100507
    -2           0          20100510              F

A query (without any restrictions) in which the InfoObject 0CALDAY is drilled down now displays the following result:

calday          0TOTALSTCK
04.05.2010      60 PC
05.05.2010     100 PC
06.05.2010     100 PC
07.05.2010      90 PC
08.05.2010      90 PC  <-
09.05.2010      90 PC  <-
10.05.2010      90 PC  <-

Please note that for the days 08.05.-10.05. (the extended part of the validity interval) the value from 07.05. was carried forward.
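This 'carry the last known value forward' behaviour within the validity interval can be sketched as follows (again just an illustration in Python, not the OLAP code; the per-day stock values are the ones from the example above):

from datetime import date, timedelta

# Stock values that can be derived from the loaded movements and the marker.
known_stock = {
    date(2010, 5, 4): 60,
    date(2010, 5, 5): 100,
    date(2010, 5, 6): 100,
    date(2010, 5, 7): 90,
}

# Validity interval as maintained in RSDV (SID_0REQUID = -2 record).
valid_from, valid_to = date(2010, 5, 4), date(2010, 5, 10)

last_value = None
day = valid_from
while day <= valid_to:
    # Exception aggregation LAST: inside the validity interval, days without
    # movements simply show the last known stock value.
    last_value = known_stock.get(day, last_value)
    print(day.strftime("%d.%m.%Y"), last_value, "PC")
    day += timedelta(days=1)
# 08.05.-10.05. print 90 PC, matching the extended query result above.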
If, in addition to 0CALDAY, you have e.g. the InfoObject 0PLANT as a validity object, the processing of the query gets more complicated. Let us assume we have the following validities:

SID_0REQUID SID_0PLANT SID_0CALDAY_F SID_0CALDAY_T MODEFROM MODETO
-------------------------------------------------------------------
      -1        1         20100504    20100507
      -1        2         20100504    20100512

Please note that the validity intervals for the two plants are not identical. If non-cumulative key figures are aggregated across several validity characteristics, it may be necessary to determine non-cumulative values outside the corresponding validity periods. For this, the system determines a kind of superset of all involved validity periods; please see note 1501907 for more details. If the validity object is drilled down, the system highlights the values that lie outside their validity by putting them in square brackets.
Let us discuss this again with the help of our example. We assume that the total stock of plant 2 on 10.05.2010 is 75 PC. If we restrict a query to this day, we get the following overall result:

0TOTALSTCK
165 PC

If 0PLANT is drilled down, the query displays the following:

0PLANT  0TOTALSTCK
   1      [90]
   2       75
Result    165

Please note that the value 90 for plant 1 is actually outside its validity and is therefore displayed in brackets. If 0PLANT were restricted to the value 1 (a hard filter under 'Characteristic Restrictions' that cannot be changed during query navigation), the query would not display the value in brackets but would issue the message 'No applicable data found' (since an aggregation over different plants does not take place anyway).

Important Remark: The non-cumulative logic works only if no posted non-cumulative movements exist outside the validity interval! Please review note 1501907 for more details.
  • [III] Stock Initialization & DTP

If you use a BW 7.0 DataSource and a DTP in order to carry out the stock initialization, the following points have to be fulfilled:

- for the DataSource the setting 'Opening Balance' has to be active (field STOCKUPD of table ROOSOURCE (source system) or RSOLTPSOURCE (BW 3.x)/RSDS (BW 7.x), respectively)
- the data target has to be a cube with non-cumulative key figures (NCUMTIM in table RSDCUBE is filled)
- only then does the DTP offer the extraction mode 'Initial Non-Cumulative for Non-Cumulative Values', which must be chosen in order to get the time reference characteristic and 0RECORDTP filled correctly
  • [IV] Data Mart in BW7x

If you want to use the data mart interface to update data from one inventory cube into another (see note 375098 for further details; the procedure depends on the BW release!), please note the following:

-if both the target and the source cube contain non-cumulative key figures (the system checks whether the field NCUMTIM in table RSDCUBE is filled), the DTP offers the extraction mode 'Initial Non-Cumulative for Non-Cumulative Values'. This mode has to be used for the stock initialization. The system then automatically reads all records with 0RECORDTP=1 from the source and updates the data records correctly into the target.

-the source cube must be completely compressed, otherwise data consistency cannot be guaranteed!
  • [V] Performance & Memory Consumption
    • As for all basis cubes, make sure that as many requests as possible are compressed (see note 590370). For inventory cubes the system has to take into account all uncompressed requests when the marker is calculated, which additionally contributes to a long runtime in the case of uncompressed cubes.
    • Please only use validity-determining characteristics if it is really necessary, since too many of them (or a validity-determining characteristic with a lot of characteristic values) decrease the performance considerably. See also chapter [II.e].
    • It is important to restrict the query as far as possible regarding time. This obviously improves the runtime by simply lowering the number of points in time for which ncum values have to be calculated. Keep in mind that fewer calculations during query runtime also decrease memory requirements.

  • [VI] Analysis of Problems

If it is necessary to look at a non-cumulative value in a query more closely, you can follow the step-by-step description below:

1) Simplify the query: this makes the analysis much faster and easier to follow
    • identify a material and set a corresponding filter
    • add 0CALDAY (or the time reference characteristic) to the free characteristics
    • add time filters and focus on one certain point in time
    • if necessary add validity characteristics to the drill down
    • take all 3 relevant key figures (0TOTALSTCK, 0RECTOTSTCK, 0ISSTOTSTCK) into the structure (no restricted/calculated key figures)

2) Compare query result with LISTCUBE
    • under 'field selections for output' choose material, the time reference characteristic, validity char., requestID, recordtp, 0isstotstck, 0rectotstck
    • restrict material to the chosen value
    • display the marker (0RECORDTP = 1) and remember the value
    • display the deltas (0RECORDTP = 0) and sort them by time
    • check whether the cumulative key figures in the query match the values in LISTCUBE (i.e. the deltas have been posted correctly)
    • check how many records aren't already compressed (not added to the marker) and hence need to be used in the calculation of the current stock
    • calculate the marker and check for correctness
    • calculate the stock for the requested point in time (see chapter [II.d])

3a) if query and LISTCUBE results don't match, please search for relevant notes under the component BW-BEX-OT-NC. In case no note can be found, please contact SAP support with the technical name of the simplified query and the corresponding LISTCUBE report (see note 1479893).

3b) if query and LISTCUBE results match, it means that the data is already wrong in the cube - continue with 4)

4) check PSA tables for the data sources 2LIS_03_BX, 2LIS_03_BF and 2LIS_03_UM: do the values correspond with the LISTCUBE output?

5a) if no, the data extraction from the source system has to be analyzed in detail. The corresponding component is BW-BCT-MM-IM (if the delivered data sources 2LIS_03_BF and 2LIS_03_BX are used), see also note 586163 in order to get more background information.

5b) if yes, continue with 6).

6) typical issues:
    • a) Did you compress the requests correctly, with or without marker update? In case a request was compressed with 'no marker update', you should find the following message in the SLG1 log: Loading of historical data switched on
    • b) Did you carry out the stock initialization with a DTP using the extraction mode 'Initial Non-Cumulative for Non-Cumulative Values'?
    • c) If the query displays no values at all, please check the validity table
    • d) Do you have characteristics in the drilldown that were not loaded by the data source 2LIS_03_BX (e.g. storage location)? Please see note 589024 for further details.
    • e) Please pay attention to the order of the loaded requests. First the initialization needs to be done (DataSource 2LIS_03_BX); then the movements can be loaded. Please see note 1795551 for further details.

Friday, October 4, 2013

Differences Between Articles and Materials


In the case of the Retail solution, the central object in all logistics processes is the article, whereas in industry it is the material.
Retail and industry have different needs in terms of the corresponding master data and maintenance options. For example, to optimally process all its fashion processes, retail needs variants (such as color and size). Industry, however, uses bills of material for special requirements in the area of production planning. Due to the large number of articles in the retail industry, master data maintenance must also be integrated into processing.
To best support the different processes in retail and industry, ECC provides different maintenance transactions for materials and articles. Unlike the material master, the article master contains the following additional points:
·  Specific additional data is integrated, for example, purchasing info records, listings and sales price calculations.
·  Several organizational levels, for example, sites and distribution chains, and for generic articles several articles, can be processed online in one single maintenance transaction.
In the material master you can process only one material or one plant at a time, but in the article master the system transfers data from higher levels to the dependent levels, provided you have not maintained these differently. For example, the system transfers data from a generic article to all its variants. It also transfers data from reference sites to individual sites.

Sunday, September 29, 2013

Exception CX_RSBK_REQUEST_LOCKED logged

1. The previous request has status 'red' and
2. the exception CX_RSBK_REQUEST_LOCKED is logged.

We are getting this error at DTP level. Once the error is triggered, we need to delete the failed request and reload it.

Ans:

It seems that the system is trying to load data when there is another lock (mostly another request being loaded or another target being loaded).

See if you have enough background processes while executing the DTP.
Also check SM12 for locks and SM37 to see whether the job has finished.

Now, delete the request which has failed (in RED) and if you are using a process chain, repeat the step in the chain.
Or else, load manually.

To avoid this every day, check whether any conflicting jobs are running at that time.
If yes, please run/schedule this load before that job starts or after it finishes.

Thursday, September 26, 2013

BW to HANA migration Workplan

Many organizations are planning to move BW to HANA and are looking for information on how to do this. In this blog we take a look at some of the major tasks you should consider when moving BW to HANA. We also include references to notes and transactions so that you can get started on writing your project plan.

Introduction
These are some of the major tasks for a BW to HANA migration. Once you have added all sub-tasks, a typical workplan has 400-600 detailed items, including status reports, team meetings, issues, hardware validation, portal integration and tests of front-end tools. 
However, these first 76 steps should give you enough to get you started on writing your project plan.

Pre-Activity Steps for each environment (Dev, QA & Prod)

1. Clean the Persistent Staging Area (PSA) for data already loaded to DSOs
2. Delete the aggregates (summary tables). They will not be needed again.
3. Compress the E and F tables in all InfoCubes. This will make InfoCubes much smaller.
4. Remove data from the statistical cubes (they start with the technical name 0TCT_xxx). These contain performance information for the BW system running on the relational database. You can do this using the transaction RSDDSTAT or the program RSDDSTAT_DATA_DELETE to help you.
5. Look at log files, bookmarks and unused BEx queries and templates (transaction RSZDELETE).
6. Remove as much as possible of the DTP temporary storage, DTP error logs, and temporary database objects. Help and programs to do this are found in SAP Notes 1139396 and 1106393.
7. For write-optimized DSOs that push data to reportable DSOs (LSA approach), remove data in the write-optimized DSOs. It is already available in higher level objects.
8. Migrate old data to Near-Line Storage (NLS) on a small server. This will still provide access to the data for the few users who infrequently need to see this old data. You will also be able to query it when BW is on HANA, but it does not need to be in-memory.
10. Remove data in unused DSOs, InfoCubes and files used for staging in the BW system. This includes possible reorganization of master data texts and attributes using the corresponding process type in RSPC.
11. You may also want to clean up background information stored in the table RSBATCHDATA. This table can get very big if not managed.
12. You should also consider archiving any IDocs and clean the tRFC queues. All of this will reduce size of the HANA system and help you fit the system tables on the master node.
13. Size reduction of basis tables. In SAP Note 706478, SAP provides some ideas on how to keep the basis tables from growing too fast in the future.
14. As of Service Pack 23 on BW 7.0, we can also delete unwanted master data directly (see SAP Note: 1370848).
15. Use the program RSDDCVER_DIM_UNUSED to delete any unused dimension entries in your InfoCubes to reduce the overall system size (not a factor if you plan on using the optimized HANA infocubes)

Pre-Validation steps

16. Run the BW to HANA Migration Readiness Check (v2.0) from SAP Note 1729988 and resolve all high-priority issues, including connectivity checks, service packs, security, basis, DB notes and the general checks in the tool, for SB, Dev, QA and Prod.
17. Run ZBW_ABAP_ANALYZER (see SAP Note 1847431) to check the ETL ABAP code and fix outstanding issues in SB, Dev, QA and Prod. Add HANA hints where required.
18. Run full delta load and gather benchmark performance statistics for load and activations
20. Run key queries, reports and dashboards for collection of benchmark statistics and performance, including any BEx broadcaster jobs, pre-calculations and workbooks.

- Other: 
a) Conduct HANA development workshops
b) Complete HANA Admin workshop and training
c) Create an SLA and Support plan for SAP HANA internally
d) Confirm Backup, Disaster recovery and HA plan

Environment staging and Migration (post hardware install, validation and handover)

21. Run backup of database in SAP BW Sandbox
22. Suspend jobs and SB development activities
23. Migrate Database to SAP HANA (export and import)
24. Connectivity validations and security checks (roles, authentication and authorization)
25. External job scheduling checks (openhub and outbound interfaces)
26. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
27. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
28. Run full Delta Loads from process chains (create a few records in the source environment if required) and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
29. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
30. Go/No-Go decision
31. Run backup of database in SAP BW Development environment
32. Suspend jobs and development activities in the development environment
33. Migrate Database to SAP HANA (technical migration export and import)
34. Connectivity validations and security checks (roles, authentication and authorization)
35. External job scheduling checks (openhub and outbound interfaces)
36. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
37. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
38. Run full Delta Loads from process chains (create a few records in the source environment if required) and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
39. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
40. Go/No-Go decision
41. Refresh QA box from Prod if feasible
42. Run backup of database in SAP BW QA environment
43. Suspend jobs and transports in the QA environment
44. Migrate Database to SAP HANA (technical migration export and import)
45. Connectivity validations and security checks (roles, authentication and authorization)
46. External job scheduling checks (openhub and outbound interfaces)
47. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
48. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
49. Run full Delta Loads from process chains (create a few records in the source environment if required) and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
50. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
Other:
a) Test backup/restore capabilities
b) Test and collect benchmarks for High Availability and/or Disaster Recovery
51. Go/No-Go decision
----------
52. Develop a detailed production cutover plan with timed steps for the export and import of files.
53. Develop production validation test plan and assign resources
54. Run backup of database in SAP BW Production environment
55. Suspend jobs and transports in the Production environment
56. Migrate Database to SAP HANA (technical migration export and import)
57. Connectivity validations and security checks (roles, authentication and authorization)
58. External job scheduling checks (openhub and outbound interfaces)
59. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
60. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
61. Run full Delta Loads from process chains  and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
62. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
63. User migration and log-on validation
64. Setup Alerts in SAP HANA Studio
65. Assign System roles and privileges in all environments
66. Setup basepaths for all backups
67. Validate CPU usage, memory footprint and system load during test run for reports and data loads.
68. Create a standard operating procedure for SAP HANA support staff
69. Setup update site and process for HANA studio (SUM), license keys and support
70. Go/No-Go Decision (restore)
71. Optional: Optimize key DSOs to new HANA optimized DSOs
72. Optional: Optimize key InfoCubes to new HANA optimized InfoCubes (flattened dimensions)
73. Optional: remove partitions from LSA and simplify data architecture by reducing layers
74. Optional: Create a New capabilities assessment to take advantage of near-time data warehousing using SLA
75. Optional: setup external data movement into new Data Services ETL processing chains
76. Optional: Start removing InfoCubes and build HANA views on DSOs where appropriate, thereby reducing data replication, size and data latency (non-IP, BPC and non-cumulative KFs).

Summary

While not a complete workplan, these tasks can function as the skeleton and starting point for your more detailed workplan with timelines and assigned resources specific to your organization and scope.
Hope it helps..



                                                  Copied from Dr. Berg's document

Thursday, September 19, 2013

Migrating Existing Data Flows

Use

If you want to benefit from the features of the new concepts and technology while working with an existing data flow, we recommend that you migrate the data flow.
The following sections outline what you need to consider and how you proceed when migrating a data flow.

Constraints

You cannot migrate hierarchy DataSources, DataSources that use the IDoc transfer method, DataSources from BAPI source systems, or export DataSources (namespace 8* or /*/8*) that are used by the data mart interface.

Prerequisites

In the procedure outlined here, we make the following simple assumptions:
      The data flow is to be migrated without making any changes to the transformation logic and the InfoSource (which is optional in the new data flow) is to be retained. 
      In the data flow used, one DataSource is connected to multiple InfoProviders. All objects required in the data flow exist as active versions.

Preliminary Remarks

      Always migrate an entire data flow, that is, always include all the objects that it contains, starting from the InfoProvider down to the DataSource.
      Do not migrate the data flow in a production system. The recommended procedure differs depending on whether your landscape is a two-system (development system and production system) or three-system (development system, test system, and production system) landscape. You generate the new objects in the development system, run tests in the development or test system (depending on your system landscape), and then transport the new objects, as well as the deletions of the 3.x objects that are no longer required, into the production system. Note that the transports are made in the same sequence in which they are created.
      When you model a data flow using the new object types, you use the emulation for the DataSource. This affects in particular the evaluation of settings in the InfoPackage, because in the new data flow, it is only used to load data into the PSA. The emulation makes it difficult to use InfoPackages because only a subset of the settings made here is evaluated. In the new data flow, the settings not evaluated by the InfoPackage must be made in the data transfer process. We therefore recommend that you consider these dependencies when you emulate the DataSource and that you emulate your DataSource in a development or test system only.

Procedure

Carry out steps 1-6 and 8-9 in the development system:
       1.      For each InfoProvider that is supplied with data from the DataSource, generate a transformation from each of the update rules.
In doing this, copy the 3.x InfoSource to a new InfoSource.
       2.      Generate a transformation from each of the corresponding transfer rules.
When you do this, use the existing InfoSource that was created when the update rules were migrated.
       3.      Make any necessary adjustments to the transformation.
You need to postprocess a transformation, for example, if you use formulas or inverse routines in your transfer rules. You can also adjust the routines to improve performance.
For more information about steps 1-3, see Migrating Update Rules, 3.x InfoSources and Transfer Rules.
       4.      Create the data transfer processes for further updating the data from the PSA.
In the new data flow, the data transfer process uses the settings from the InfoPackage that are relevant for updating the data from the PSA. For more information, see InfoPackage -> Data Transfer Process.
                            a.      In InfoPackage maintenance, navigate to the Data Targets tab page.
                            b.      Create a data transfer process for each of the InfoProviders that is supplied with data from the InfoPackage.
                            c.      To do this, under Create/Maintain DTP, choose the icon with the quick info text Create New Data Transfer Process for This Data Target.
                            d.      Check all DTP settings that the DTP is to use in the new data flow for the InfoPackage.
       On the Update tab page, check the settings for error handling.
       On the Extraction tab page, check the extraction mode.
For more information about data transfer processes, see Creating Data Transfer Processes.
       5.      In the InfoPackage, check on the Data Targets tab page that any InfoProviders that are supplied with data by means of update rules are not selected. This allows you to ensure that the InfoProviders are subsequently only supplied with data by the new data flow.
       6.      Check the process chains that are used in the data flow and make any necessary adjustments.
                            a.      Open the process chain from InfoPackage maintenance by choosing Process Chain Maintenance.
                            b.      Insert the DTPs into the process chain directly after the InfoPackage.
                            c.      Make any adjustments required in the subsequent process (previously InfoPackages, but now DTPs).
Note
If the process variants previously referred to the object type InfoPackage, you now need to specify the corresponding DTP, for example, when activating the data in the DataStore object.
       7.      Test the data flow.
       If you have a two-system landscape, perform this test in the development system.
       If you have a three-system landscape, first transport the affected objects to the test system and test your new data flow there.
       8.      Migrate the DataSource.
The migration process generates a new DataSource. The 3.x DataSource and the transfer structure and mapping are deleted. The PSA and InfoPackage are used in the new migrated DataSource.
Note
When the DataSource is migrated, the InfoPackage is not migrated in the sense of a new object being created. However, after migration, only the specifications about how data is loaded into the PSA are used in the InfoPackage.  Existing delta processes continue to run. The delta process does not need to be reinitialized.
Note
You can convert a migrated DataSource to a 3.x DataSource if you export the 3.x objects during migration.
       9.      To maintain clarity, we recommend that you delete the 3.x InfoSource and update rules.
   10.      If you have a three-system landscape, transport the migrated DataSource and the deletion of the 3.x InfoSources and update rules into the test system.
   11.      Examine the data flow for completeness and check that any objects that are no longer required are deleted.
       In a two-system landscape, do this in the development system.
       In a three-system landscape, do this in the test system.
   12.      Finally, transport the objects and deletions from the development or test system into your production system.

Result

You have migrated a 3.x data flow and can now benefit from the features of the new concepts and technology.

Migrating Data Flows That Use the Data Mart Interface

You cannot migrate export DataSources that are used when the data mart interface is used to update data from one InfoProvider to another InfoProvider. However, a data flow that uses export DataSources can be migrated to the new object types, because the export DataSource and its transfer and update rules can be modeled in the new data flow using DTPs and transformations. The data flow can be migrated as outlined above. However, the actual migration of the DataSource and the corresponding steps involved are omitted.


                                                                                                                          source is:  help.sap.com

Friday, September 13, 2013

Early Delta Initialization


With early delta initialization, you have the option of writing the data into the delta queue or into the delta tables for the application during the initialization request in the source system.
This means that you are able to execute the initialization of the delta process (the init request) without having to stop the updating of data in the source system.
You can only execute an early delta initialization if the DataSource extractor called in the source system with this data request supports this.
Extractors that support early delta initialization were delivered with Plug-Ins as of Plug-In (-A) 2002.1.
You cannot run an initialization simulation together with an early delta initialization.


While filling the setup tables, the first step we think about is locking the users. Why we think about locking users: we might miss the records posted by users while we are filling the setup tables. The next doubt we get is: "Can't we fill them without locking users?"
Yes, we can fill the setup tables without locking users if the DataSource supports that feature.

With early delta initialization, we do the initialization before filling the setup tables, so that users can post records while the setup tables are being filled. We will get the posted records in the next delta run.

In this method we need to do the steps below.

i) Run the InfoPackage with the option 'Early Delta Initialization'. This enables the delta queue and sets up the delta timestamp in the system.
ii) Delete the data in the setup tables for the specific application component.
iii) Fill the setup tables.
iv) Load the setup table data to BI using a full repair InfoPackage.

How do you check whether a DataSource supports early delta initialization?

We can check this in table ROOSOURCE.
i) Go to SE16, enter the table name ROOSOURCE and press Enter.
ii) On the next screen enter the DataSource name and execute.
iii) If the field ZDD_ABLE in the result has the value X, the DataSource supports early delta initialization.
iv) If it is blank, early delta initialization is not supported.
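Instead of checking manually in SE16, the same lookup can be scripted, for example with the pyrfc library and the generic RFC_READ_TABLE function module. This is only a hedged sketch: the connection parameters are placeholders, the DataSource name is just an example, and your system's authorizations must of course allow the call.

from pyrfc import Connection  # requires the SAP NW RFC SDK and the pyrfc package

# Placeholder connection data for the OLTP source system (adjust to your landscape).
conn = Connection(ashost="oltp-host", sysnr="00", client="100",
                  user="RFC_USER", passwd="secret")

datasource = "2LIS_03_BF"  # example DataSource
result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="ROOSOURCE",
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "OLTPSOURCE"}, {"FIELDNAME": "ZDD_ABLE"}],
    OPTIONS=[{"TEXT": f"OLTPSOURCE = '{datasource}' AND OBJVERS = 'A'"}],
)

for row in result["DATA"]:
    name, zdd_able = (f.strip() for f in row["WA"].split("|"))
    supported = "supports" if zdd_able == "X" else "does not support"
    print(f"{name} {supported} early delta initialization")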


Tuesday, September 10, 2013

Important tables in Delta / Data mart Scenarios:


RSSDLINIT:  (BW system): Last Valid Initializations to an OLTP Source
ROOSPRMSC (OLTP system):  Control Parameter Per DataSource Channel
ROOSPRMSF : Control Parameters Per DataSource
RSSDLINITSEL: Last Valid Initializations to an OLTP Source
RSDMDELTA: Data Mart Delta Management
This table contains the list of all the requests which are already fetched by the target system.
ROOSGENDLM: Generic Delta Management for DataSources (Client-Dep.)
RSBKREQUEST:  DTP Request
RSSTATMANREQMAP: Mapping of Higher- to Lower-Level Status Manager Request
RSDMSTAT: Data mart Status Table
This table is used for processing a Repeat Delta. Using field DMCOUNTER we can follow up in RSDMDELTA.
RSMDATASTATE: Status of the data in the InfoCubes
This table is used to know the state of the data target regarding compression, aggregation etc. For data marts the two fields DMEXIST and DMALL are used.
RODELTAM: BW Delta Process
ROOSGEN: Generated Objects for OLTP Source
RSSELDONE: Monitor: Selections for executed request
RSSTATMANPART: Store for G_T_PART of Request Display in DTA Administration
RSBODSLOGSTATE: Change log Status for ODS Object
ROOSOURCE: Table Header for SAP BW OLTP Sources
Mainly used to get data regarding the initialization of the extractor. It contains the header metadata of DataSources, e.g. whether a DataSource is capable of delta extraction.