Sunday, September 29, 2013

Exception CX_RSBK_REQUEST_LOCKED logged

1. Previous request has status 'red', and
2. Exception CX_RSBK_REQUEST_LOCKED logged.

We are getting this error at DTP level. Once the error is triggered, we need to delete the failed request and reload it.

Ans:

It seems that the system is trying to load data while another lock is held (most likely another request or another target is being loaded at the same time).

Check that you have enough background processes available while executing the DTP.
Also check SM12 for lock entries and SM37 to see whether the job has finished (a sketch of a programmatic lock check follows below).

Now delete the failed (red) request and, if you are using a process chain, repeat the step in the chain.
Otherwise, load manually.

To avoid this happening every day, check whether any conflicting jobs are running at that time.
If so, schedule this load before that job starts or after it finishes.
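
For completeness, here is a minimal ABAP sketch that lists lock entries programmatically instead of via SM12. It calls the standard function module ENQUEUE_READ; the parameter names and the SEQG3 fields used below follow its usual signature but should be treated as assumptions and verified in SE37/SE11 before use.

REPORT z_list_locks.
* Sketch only: lists lock entries held by the current user, as an
* alternative to checking SM12 manually. The ENQUEUE_READ parameters
* and SEQG3 fields used here are assumptions; verify in SE37/SE11.

DATA: lt_enq TYPE STANDARD TABLE OF seqg3,
      ls_enq TYPE seqg3.

CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    gclient = sy-mandt    " current client
    guname  = sy-uname    " locks held by the current user; adjust as needed
  TABLES
    enq     = lt_enq.

LOOP AT lt_enq INTO ls_enq.
  WRITE: / ls_enq-gname,   " name of the locked table
           ls_enq-guname,  " lock owner
           ls_enq-garg.    " lock argument (key values)
ENDLOOP.

IF sy-subrc <> 0.
  WRITE: / 'No lock entries found for user', sy-uname.
ENDIF.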

Thursday, September 26, 2013

BW to HANA migration Workplan

Many organizations are planning to move BW to HANA and are looking for information on how to do this. In this blog we take a look at some of the major tasks you should consider when moving BW to HANA, and we include references to notes and transactions so that you can get started on writing your project plan.

Introduction
These are some of the major tasks for a BW to HANA migration. Once you have added all sub-tasks, a typical workplan has 400-600 detailed items, including status reports, team meetings, issues, hardware validation, portal integration and tests of front-end tools. 
However, these first 76 steps should give you enough to get you started on writing your project plan.

Pre-Activity Steps for each environment (Dev, QA & Prod)

1. Clean the Persistent Staging Area (PSA) for data already loaded to DSOs
2. Delete the aggregates (summary tables). They will not be needed again.
3. Compress the E and F tables in all InfoCubes. This will make InfoCubes much smaller.
4. Remove data from the statistical cubes (they start with the technical name 0TCT_xxx). These contain performance information for the BW system running on the relational database. You can do this using transaction RSDDSTAT or the program RSDDSTAT_DATA_DELETE.
5. Review and delete log files, bookmarks and unused BEx queries and templates (transaction RSZDELETE).
6. Remove as much as possible of the DTP temporary storage, DTP error logs, and temporary database objects. Help and programs to do this are found in SAP Notes 1139396 and 1106393.
7. For write-optimized DSOs that push data to reportable DSOs (LSA approach), remove data in the write-optimized DSOs. It is already available in higher level objects.
8. Migrate old data to Near-Line Storage (NLS) on a small server. This will still provide access to the data for the few users who infrequently need to see this old data. You will also be able to query it when BW is on HANA, but it does not need to be in-memory.
10. Remove data in unused DSOs, InfoCubes and files used for staging in the BW system. This includes possible reorganization of master data texts and attributes using the corresponding process type in RSPC.
11. You may also want to clean up background information stored in the table RSBATCHDATA. This table can get very big if not managed (see the sketch after this list for a quick size check).
12. You should also consider archiving any IDocs and clean the tRFC queues. All of this will reduce size of the HANA system and help you fit the system tables on the master node.
13. Size reduction of basis tables. In SAP Note 706478, SAP provides some ideas on how to keep the basis tables from growing too fast in the future.
14. As of Service Pack 23 on BW 7.0, we can also delete unwanted master data directly (see SAP Note: 1370848).
15. Use the program RSDDCVER_DIM_UNUSED to delete any unused dimension entries in your InfoCubes to reduce the overall system size (not a factor if you plan on using the HANA-optimized InfoCubes).
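
As a small illustration of steps 4 and 11, the following sketch (the Z report name is made up) first reports how large RSBATCHDATA currently is and then hands over to the standard statistics deletion program via its selection screen, so nothing is deleted without explicit selections and confirmation.

REPORT z_housekeeping_check.
* Hypothetical helper supporting steps 4 and 11: it reports the current
* size of RSBATCHDATA and then calls the standard program
* RSDDSTAT_DATA_DELETE with its selection screen, so the actual deletion
* of BW statistics data is still restricted and confirmed manually.

DATA lv_rows TYPE i.

* Step 11: how big has RSBATCHDATA grown?
SELECT COUNT( * ) FROM rsbatchdata INTO lv_rows.
WRITE: / 'Rows currently in RSBATCHDATA:', lv_rows.

* Step 4: delete old BW statistics data via the standard program
SUBMIT rsddstat_data_delete VIA SELECTION-SCREEN AND RETURN.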

Pre-Validation steps

16. Run the BW to HANA Migration Readiness Check (v2.0) from SAP Note 1729988 and resolve all high-priority issues, including connectivity checks, service packs, security, basis and DB notes, and the general checks in the tool, for SB, Dev, QA and Prod.
17. Run ZBW_ABAP_ANALYZER to check the ETL ABAP code and fix outstanding issues in SB, Dev, QA and Prod. Add HANA hints where required. This tool is found in SAP Note 1847431.
18. Run full delta loads and gather benchmark performance statistics for loads and activations.
20. Run key queries, reports and dashboards to collect benchmark statistics and performance figures, including any BEx Broadcaster jobs, pre-calculations and workbooks.

- Other: 
a) Conduct HANA development workshops
b) Complete HANA Admin workshop and training
c) Create an SLA and Support plan for SAP HANA internally
d) Confirm Backup, Disaster recovery and HA plan

Environment staging and Migration (post hardware install, validation and handover)

21. Run backup of database in SAP BW Sandbox
22. Suspend jobs and SB development activities
23. Migrate Database to SAP HANA (export and import)
24. Connectivity validations and security checks (roles, authentication and authorization)
25. External job scheduling checks (openhub and outbound interfaces)
26. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
27. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
28. Run full Delta Loads from process chains (create a few records in the source environment if required) and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
29. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
30. Go / No-Go decision
31. Run backup of database in SAP BW Development environment
32. Suspend jobs and development activities in the development environment
33. Migrate Database to SAP HANA (technical migration export and import)
34. Connectivity validations and security checks (roles, authentication and authorization)
35. External job scheduling checks (openhub and outbound interfaces)
36. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
37. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
38. Run full Delta Loads from process chains (create a few records in the source environment if required) and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
39. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
40. Go / No-Go decision
41. Refresh QA box from Prod if feasible
42. Run backup of database in SAP BW QA environment
43. Suspend jobs and transports in the QA environment
44. Migrate Database to SAP HANA (technical migration export and import)
45. Connectivity validations and security checks (roles, authentication and authorization)
46. External job scheduling checks (openhub and outbound interfaces)
47. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
48. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
49. Run full Delta Loads from process chains (create a few records in the source environment if required) and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
50. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
Other:
a) Test backup/restore capabilities
b) Test and collect benchmarks for High Availability and/or Disaster Recovery
51. Go / No-Go decision
----------
52. Develop a detailed production cutover plan with timed steps for the export and import of files.
53. Develop production validation test plan and assign resources
54. Run backup of database in SAP BW Production environment
55. Suspend jobs and transports in the Production environment
56. Migrate Database to SAP HANA (technical migration export and import)
57. Connectivity validations and security checks (roles, authentication and authorization)
58. External job scheduling checks (openhub and outbound interfaces)
59. Execute key queries identified in step 20 and reconcile data at summary level and detailed as required.
60. Collect new HANA performance statistics for dashboards, BEX broadcaster, reports, workbooks and WAD templates and analyze performance gains.
61. Run full Delta Loads from process chains  and collect benchmark statistics from ETL processing and activation. Analyze these against those collected in step 18.
62. Rerun the HANA Migration Readiness Check  in  SAP Note 1729988 and resolve any issues
63. User migration and log-on validation
64. Set up alerts in SAP HANA Studio
65. Assign system roles and privileges in all environments
66. Set up base paths for all backups
67. Validate CPU usage, memory footprint and system load during test run for reports and data loads.
68. Create a standard operating procedure for SAP HANA support staff
69. Set up the update site and process for HANA Studio (SUM), license keys and support
70. Go/No-Go Decision (restore)
71. Optional: Convert key DSOs to the new HANA-optimized DSOs
72. Optional: Convert key InfoCubes to the new HANA-optimized InfoCubes (flattened dimensions)
73. Optional: Remove partitions from the LSA and simplify the data architecture by reducing layers
74. Optional: Conduct a new-capabilities assessment to take advantage of near-real-time data warehousing using SLT
75. Optional: Set up external data movement in new Data Services ETL processing chains
76. Optional: Start removing InfoCubes and building HANA views on DSOs where appropriate, thereby reducing data replication, size and data latency (not applicable to IP, BPC or non-cumulative key figures).

Summary

While not a complete workplan, these tasks can function as the skeleton and starting point for your more detailed workplan with timelines and assigned resources specific to your organization and scope.
Hope it helps.



                                                  Copied from Dr. Berg's document

Thursday, September 19, 2013

Migrating Existing Data Flows

Use

If you want to benefit from the features of the new concepts and technology while working with an existing data flow, we recommend that you migrate the data flow.
The following sections outline what you need to consider and how you proceed when migrating a data flow.

Constraints

You cannot migrate hierarchy DataSources, DataSources that use the IDoc transfer method, DataSources from BAPI source systems, or export DataSources (namespace 8* or /*/8*) that are used by the data mart interface.

Prerequisites

In the procedure outlined here, we make the following simple assumptions:
      The data flow is to be migrated without making any changes to the transformation logic and the InfoSource (which is optional in the new data flow) is to be retained. 
      In the data flow used, one DataSource is connected to multiple InfoProviders. All objects required in the data flow exist as active versions.

Preliminary Remarks

      Always migrate an entire data flow, that is, always include all the objects that it contains, starting from the InfoProvider down to the DataSource.
      Do not migrate the data flow in a production system. The recommended procedure differs depending on whether your landscape is a two-system (development system and production system) or three-system (development system, test system, and production system) landscape. You generate the new objects in the development system, run tests in the development or test system (depending on your system landscape), and then transport the new objects, as well as the deletions of the 3.x objects that are no longer required, into the production system. Note that the transports are made in the same sequence in which they are created.
      When you model a data flow using the new object types, you use the emulation for the DataSource. This affects in particular the evaluation of settings in the InfoPackage, because in the new data flow, it is only used to load data into the PSA. The emulation makes it difficult to use InfoPackages because only a subset of the settings made here is evaluated. In the new data flow, the settings not evaluated by the InfoPackage must be made in the data transfer process. We therefore recommend that you consider these dependencies when you emulate the DataSource and that you emulate your DataSource in a development or test system only.

Procedure

Carry out steps 1-6 and 8-9 in the development system:
       1.      For each InfoProvider that is supplied with data from the DataSource, generate a transformation from each of the update rules.
In doing this, copy the 3.x InfoSource to a new InfoSource.
       2.      Generate a transformation from each of the corresponding transfer rules.
When you do this, use the existing InfoSource that was created when the update rules were migrated.
       3.      Make any necessary adjustments to the transformation.
You need to postprocess a transformation, for example, if you use formulas or inverse routines in your transfer rules. You can also adjust the routines to improve performance.
For more information about steps 1-3, see Migrating Update Rules, 3.x InfoSources and Transfer Rules.
       4.      Create the data transfer processes for further updating the data from the PSA.
In the new data flow, the data transfer process uses the settings from the InfoPackage that are relevant for updating the data from the PSA. For more information, see InfoPackage -> Data Transfer Process.
                            a.      In InfoPackage maintenance, navigate to the Data Targets tab page.
                            b.      Create a data transfer process for each of the InfoProviders that is supplied with data from the InfoPackage.
                             c.      To do this, under Create/Maintain DTP, choose the pushbutton with the quick info text Create New Data Transfer Process for This Data Target.
                            d.      Check all DTP settings that the DTP is to use in the new data flow for the InfoPackage.
       On the Update tab page, check the settings for error handling.
       On the Extraction tab page, check the extraction mode.
For more information about data transfer processes, see Creating Data Transfer Processes.
       5.      In the InfoPackage, check on the Data Targets tab page that any InfoProviders that are supplied with data by means of update rules are not selected. This allows you to ensure that the InfoProviders are subsequently only supplied with data by the new data flow.
       6.      Check the process chains that are used in the data flow and make any necessary adjustments.
                            a.      Open the process chain from InfoPackage maintenance by choosing Process Chain Maintenance.
                            b.      Insert the DTPs into the process chain directly after the InfoPackage.
                            c.      Make any adjustments required in the subsequent process (previously InfoPackages, but now DTPs).
Note
If the process variants previously referred to the object type InfoPackage, you now need to specify the corresponding DTP, for example, when activating the data in the DataStore object.
       7.      Test the data flow.
       If you have a two-system landscape, perform this test in the development system.
       If you have a three-system landscape, first transport the affected objects to the test system and test your new data flow there.
       8.      Migrate the DataSource.
The migration process generates a new DataSource. The 3.x DataSource and the transfer structure and mapping are deleted. The PSA and InfoPackage are used in the new migrated DataSource.
Note
When the DataSource is migrated, the InfoPackage is not migrated in the sense of a new object being created. However, after migration, only the specifications about how data is loaded into the PSA are used in the InfoPackage.  Existing delta processes continue to run. The delta process does not need to be reinitialized.
Note
You can convert a migrated DataSource to a 3.x DataSource if you export the 3.x objects during migration.
       9.      To maintain clarity, we recommend that you delete the 3.x InfoSource and update rules.
   10.      If you have a three-system landscape, transport the migrated DataSource and the deletion of the 3.x InfoSources and update rules into the test system.
   11.      Examine the data flow for completeness and check that any objects that are no longer required are deleted.
       In a two-system landscape, do this in the development system.
       In a three-system landscape, do this in the test system.
   12.      Finally, transport the objects and deletions from the development or test system into your production system.

Result

You have migrated a 3.x data flow and can now benefit from the features of the new concepts and technology.

Migrating Data Flows That Use the Data Mart Interface

You cannot migrate export DataSources that are used when the data mart interface is used to update data from one InfoProvider to another InfoProvider. However, a data flow that uses export DataSources can be migrated to the new object types, because the export DataSource and its transfer and update rules can be modeled in the new data flow using DTPs and transformations. The data flow can be migrated as outlined above. However, the actual migration of the DataSource and the corresponding steps involved are omitted.


                                                                                                                          Source: help.sap.com

Friday, September 13, 2013

Early Delta Initialization


With early delta initialization, you have the option of writing the data into the delta queue or into the delta
tables for the application during the initialization request in the source system.
This means that you are able to execute the initialization of the delta process (the init request), without
having to stop the updating of data in the source system.
You can only execute an early delta initialization if the DataSource extractor called in the source system with
this data request supports this.
Extractors that support early delta initialization were delivered with Plug-Ins as of Plug-In (-A) 2002.1
You cannot run an initialization simulation together with an early delta initialization.


While filling setup tables, the first step we usually think about is locking the users, because otherwise we might miss records posted by users while the setup tables are being filled. The next question is: can't we fill the setup tables without locking users?
Yes, we can fill the setup tables without locking users if the DataSource supports this feature.

With early delta initialization we run the initialization before filling the setup tables, so users can keep posting records while the setup tables are being filled; those records are picked up in the next delta run.

In this method we need to perform the steps below.

i) Run the InfoPackage with the option "Early Delta Initialization". This enables the delta queue and sets the delta timestamp in the source system.
ii) Delete the data in the setup tables for the specific application component.
iii) Fill the setup tables.
iv) Load the setup table data to BI using a full repair InfoPackage.

How do we check whether a DataSource supports early delta initialization?

We can check this in table ROOSOURCE (a small ABAP sketch follows below):
i) Go to SE16, enter the table name and press Enter.
ii) On the next screen, enter the DataSource name and execute.
iii) If the field ZDD_ABLE in the result has the value X, the DataSource supports early delta initialization.
iv) If it is blank, it does not.
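
If you prefer to check this in code rather than in SE16, the following sketch reads ZDD_ABLE from ROOSOURCE in the source system. The report and parameter names are made up for illustration, and the key fields OLTPSOURCE and OBJVERS are assumptions based on the standard table layout; verify them in SE11.

REPORT z_check_early_delta.
* Sketch only: checks in the source (OLTP) system whether a DataSource
* supports early delta initialization by reading ZDD_ABLE from ROOSOURCE.
* The key fields OLTPSOURCE / OBJVERS are assumptions; verify in SE11.

PARAMETERS p_ds TYPE roosource-oltpsource OBLIGATORY.

DATA lv_zdd TYPE roosource-zdd_able.

SELECT SINGLE zdd_able FROM roosource
  INTO lv_zdd
  WHERE oltpsource = p_ds
    AND objvers    = 'A'.        " active version of the DataSource

IF sy-subrc <> 0.
  WRITE: / 'DataSource not found in ROOSOURCE:', p_ds.
ELSEIF lv_zdd = 'X'.
  WRITE: / p_ds, 'supports early delta initialization'.
ELSE.
  WRITE: / p_ds, 'does not support early delta initialization'.
ENDIF.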


Tuesday, September 10, 2013

Important tables in Delta / Data mart Scenarios:


RSSDLINIT (BW system): Last Valid Initializations to an OLTP Source (see the sketch at the end of this list)
ROOSPRMSC (OLTP system): Control Parameter Per DataSource Channel
ROOSPRMSF: Control Parameters Per DataSource
RSSDLINITSEL: Last Valid Initializations to an OLTP Source
RSDMDELTA: Data Mart Delta Management
This table contains the list of all the requests which are already fetched by the target system.
ROOSGENDLM: Generic Delta Management for DataSources (Client-Dep.)
RSBKREQUEST:  DTP Request
RSSTATMANREQMAP: Mapping of Higher- to Lower-Level Status Manager Request
RSDMSTAT: Data mart Status Table
This table is used for processing a Repeat Delta. Using field DMCOUNTER we can follow up in RSDMDELTA.
RSMDATASTATE: Status of the data in the InfoCubes
This table is used to know the state of the data target regarding compression, aggregation, etc. For data marts the two fields DMEXIST and DMALL are used.
RODELTAM: BW Delta Process
ROOSGEN: Generated Objects for OLTP Source
RSSELDONE: Monitor: Selections for executed request
RSSTATMANPART: Store for G_T_PART of Request Display in DTA Administration
RSBODSLOGSTATE: Change log Status for ODS Object
ROOSOURCE: Table Header for SAP BW OLTP Sources
Mainly used to get information about the initialization of the extractor. It contains the header metadata of DataSources, e.g. whether a DataSource is capable of delta extraction.
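
To illustrate how these tables can be used, here is a minimal sketch that lists the entries of RSSDLINIT (last valid initializations) for one DataSource and source system on the BW side. The report and parameter names are made up, and the field names OLTPSOURCE and LOGSYS in the WHERE clause are assumptions based on the standard layout; verify them in SE11 before using anything like this.

REPORT z_show_delta_init.
* Sketch only: shows the RSSDLINIT entries (last valid initializations)
* for one DataSource and source system on the BW side. The field names
* OLTPSOURCE and LOGSYS are assumptions; verify the table layout in SE11.

PARAMETERS: p_ds  TYPE rssdlinit-oltpsource OBLIGATORY,
            p_sys TYPE rssdlinit-logsys OBLIGATORY.

DATA: lt_init TYPE STANDARD TABLE OF rssdlinit,
      lo_alv  TYPE REF TO cl_salv_table,
      lv_cnt  TYPE i.

SELECT * FROM rssdlinit
  INTO TABLE lt_init
  WHERE oltpsource = p_ds
    AND logsys     = p_sys.

IF lt_init IS INITIAL.
  WRITE: / 'No valid initialization found for', p_ds.
ELSE.
  TRY.
      cl_salv_table=>factory( IMPORTING r_salv_table = lo_alv
                              CHANGING  t_table      = lt_init ).
      lo_alv->display( ).
    CATCH cx_salv_msg.
      DESCRIBE TABLE lt_init LINES lv_cnt.
      WRITE: / 'Valid initializations found:', lv_cnt.
  ENDTRY.
ENDIF.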