Thursday, December 22, 2011

TYPES OF ROUTINES

Types of Routines in BW


Routines are used to define complex transformation rules. In most cases, data does not arrive in the exact form the target requires, and sometimes output values must be derived from the incoming data. In such cases we need to write routines at the transformation level.
There are four types of routines available:

  • Characteristic or Field Routine

  • Start Routine

  • End Routine

  • Expert Routine

Which routine to use depends on when the logic needs to be executed. For example, if some logic must be applied before the transformation rules run, then a start routine is the right choice.

Characteristic or Field Routine

It operates on a single record for a single characteristic or key figure. The value is modified in the routine, based on one or more source fields, before it is transferred to the target.
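
A minimal sketch of a field routine body (the METHOD frame and the SOURCE_FIELDS/RESULT parameters are generated by the transformation editor; the /BIC/Z* field names and the 10% deduction are hypothetical examples):

* Field routine sketch: derive one target key figure from one
* source field of the current record. The method frame and the
* generated types come from the transformation; the field names
* below are hypothetical.
METHOD compute_znetamt.
  " Net amount = gross amount minus a hypothetical 10% deduction.
  result = source_fields-/bic/zgross * '0.9'.
ENDMETHOD.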

Start Routine
The start routine is run at the start of the transformation. It has a table in the format of the source structure as its input and output parameter. It is used to perform preliminary calculations and store them in a global data structure or table, which can then be accessed from the other routines. You can modify or delete data in SOURCE_PACKAGE.
Unlike a field routine, which acts on one record at a time, the start routine has all the data of the package available in SOURCE_PACKAGE, which has the structure of the source fields.
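
A minimal sketch of a start routine body (the method frame and the generated source type come from the transformation; the status field and filter value are hypothetical):

METHOD start_routine.
  " Remove records that should never reach the target, before any
  " field mappings run. The filter field and value are hypothetical.
  DELETE source_package WHERE /bic/zstatus = 'X'.
ENDMETHOD.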

End Routine
An end routine is a routine with a table in the target structure format as its input and output parameter. You can use an end routine to post-process data, package by package, after the transformation. The data is stored in RESULT_PACKAGE.
The end routine is processed after the start routine, the mappings, and the field routines, just before the values are transferred to the output. It has the structure of the target, and RESULT_PACKAGE contains the entire data package that finally becomes the output.
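
A minimal sketch of an end routine body (the generated target type _ty_s_tg_1 comes from the transformation; the region field and default value are hypothetical):

METHOD end_routine.
  " Post-process the already-transformed package: every record in
  " RESULT_PACKAGE is in the target structure.
  FIELD-SYMBOLS <result> TYPE _ty_s_tg_1.
  LOOP AT result_package ASSIGNING <result>.
    " Hypothetical example: fill an empty region with a default.
    IF <result>-/bic/zregion IS INITIAL.
      <result>-/bic/zregion = 'UNKNOWN'.
    ENDIF.
  ENDLOOP.
ENDMETHOD.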

Expert Routine
An expert routine is a routine that has access to both the source and the target structure. You can use an expert routine if the standard functions are not sufficient to perform the transformation.
With an expert routine, everything has to be written in code: in effect, a single expert routine performs all the actions of the start routine, the mappings, the field routines, and the end routine.
In an expert routine we read from SOURCE_PACKAGE, which contains all the incoming data, and write the output to RESULT_PACKAGE.
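
A minimal sketch of an expert routine body (the generated source/target types come from the transformation; all /BIC/Z* field names are hypothetical):

METHOD expert_routine.
  " No generated mappings run when an expert routine exists, so we
  " build every output record ourselves from the source package.
  DATA ls_result TYPE _ty_s_tg_1.
  FIELD-SYMBOLS <source> TYPE _ty_s_sc_1.
  LOOP AT source_package ASSIGNING <source>.
    CLEAR ls_result.
    ls_result-/bic/zcustomer = <source>-/bic/zcustno.  " hypothetical mapping
    ls_result-/bic/zamount   = <source>-/bic/zamount.
    APPEND ls_result TO result_package.
  ENDLOOP.
ENDMETHOD.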

Steps To Restart a Stuck Process Chain

 
1) A process chain is stuck at a particular step and we want to restart it.
2) Data loading was successful, but the next step of the chain is not executed.
Reason: the event that should be registered in the back-end database is not set properly, or has gone into a hold state; as a result, the status has to be changed manually.
Solution:

Step 1: At the step where the process chain is stuck,
go to Display Messages >>> Chain and copy the variant, instance and start date.
Step 2: Go to SE16 >>> table RSPCPROCESSLOG.
Enter the details below:
Variant >>> VARIANTE
Instance >>> INSTANCE
Batch date >>> start date
This should return a single entry from the table RSPCPROCESSLOG.
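
The same lookup can also be done in ABAP; a rough sketch, assuming the values copied in step 1 are held in the hypothetical lv_* variables:

* Read the matching entry from RSPCPROCESSLOG. The lv_* variables
* are hypothetical and hold the values copied in step 1.
DATA ls_log TYPE rspcprocesslog.
SELECT SINGLE * FROM rspcprocesslog INTO ls_log
  WHERE variante  = lv_variant
    AND instance  = lv_instance
    AND batchdate = lv_batchdate.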
Step 3: Execute the function module RSPC_PROCESS_FINISH (e.g. in SE37).
Enter the details below, taking each value from the RSPCPROCESSLOG entry:
RSPCPROCESSLOG-INSTANCE >>> I_INSTANCE
RSPCPROCESSLOG-VARIANTE >>> I_VARIANT
RSPCPROCESSLOG-LOG_ID >>> I_LOGID
RSPCPROCESSLOG-TYPE >>> I_TYPE
Enter 'G' for parameter I_STATE and press Execute (F8).

After executing the FM with the parameters above, the stuck process is set to green, the next process in the chain is started, and the chain can run to the end.
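
The same call can also be made from a small test report; a minimal sketch, reusing the ls_log work area from the lookup above:

* Mark the stuck process as finished successfully ('G' = green),
* using the values read from RSPCPROCESSLOG.
CALL FUNCTION 'RSPC_PROCESS_FINISH'
  EXPORTING
    i_logid    = ls_log-log_id
    i_type     = ls_log-type
    i_variant  = ls_log-variante
    i_instance = ls_log-instance
    i_state    = 'G'.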

PARTITIONING

InfoCube Partitioning


So what is partitioning and why do it?
You use partitioning to divide the data in an InfoCube into multiple smaller, independent, but related segments. This separation improves system performance when you analyse data on the InfoProvider, because the system knows it only has to read a particular part of the disk to retrieve the data you are after. In addition, it can read multiple partitions in parallel, speeding up query execution time. Two extra 'catch all' partitions are always created, in addition to the ones you explicitly specify, to hold all of the values that fall below or above the specified range.
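For example, if you partition by 0CALMONTH over the range 01.2010 to 12.2012, the system creates 36 monthly partitions, plus one catch-all partition for records before 01.2010 and one for records after 12.2012, i.e. 38 partitions in total.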

What partitioning options do you have?
Physical Partitioning
Physical partitioning is done at the database level and can only be based on one of the following date-based criteria: Calendar Month (0CALMONTH) or Fiscal Year/Period (0FISCPER).
An increase in performance is only seen for the partitioned InfoCube if the time dimension of the InfoCube is consistent, i.e. if the values of the different time characteristics in each record agree with one another (for example, a record with 0CALMONTH 06.2011 should also carry a 0FISCPER in period 006.2011).


To set up partitioning for an InfoCube, choose Extras -> DB Performance -> Partitioning and specify the value range. Where necessary, limit the maximum number of partitions; the SAP-recommended optimal maximum is 30-40 partitions, so consider this when planning the range split. All changes are transparent to the user, and behind the scenes you still have one logical database table.
In BW 3.5 you had to set up partitions while the InfoCube was empty, but this constraint has disappeared in BI 7.0 (for all database providers except DB2). In 7.0 it is not only possible to partition while the InfoCube contains data, but also to re-partition any existing groupings. Repartitioning can be useful if you have loaded more data into your InfoCube than you had planned for when you partitioned it, if you did not choose a long enough period of time for partitioning, or if some partitions contain little or no data due to archiving over a period of time.
You can access repartitioning in the Data Warehousing Workbench using Administration, or in the context menu of your InfoCube.
Logical Partitioning
Logical partitioning is done at the data level and is achieved by separating related data into separate InfoCubes that are joined in a multi-cube. In this case the related groupings are not restricted to time: although partitioning by year would be a sensible choice, you could also partition by plan and actual data, by region, or by business area. All queries based on the multi-cube automatically use parallel sub-queries as required to retrieve the result set, thus improving query performance.

Conclusions
For anyone involved in planning solutions, logical partitioning will already be part of everyday life for comparing Plan vs Actual data, but other InfoCubes that contain a fair amount of data could also benefit from being split into smaller cubes, for example by year. Not only will this bring performance benefits, it will also make it easier to archive old, unused data.
In BW 3.5 many cubes were partitioned at their point of creation, but maintenance was a real issue and many were left with inadequate ranges over time. Now that this problem has been addressed in BI 7.0, it should be much easier to keep systems running in an optimal state, and this is well worth investigating.

Attribute Change Run


The attribute change run adjusts the master data after it has been loaded: it generates or adjusts the SIDs so that no problems occur when loading transaction data into the data targets. It is used for realignment of the master data.

Whenever master data attributes or hierarchies are loaded, an attribute change run needs to be performed, for the following reasons:

1. When master data is changed or updated, it is loaded into the master data table as version "M" (modified). Only after an attribute change run does the master data become activated, i.e. version "A".
2. Master data attributes and hierarchies are also used in InfoCube aggregates, so an attribute change run needs to be performed to update the aggregates with the latest attribute values.
You need to schedule an attribute change run when you change master data attributes or hierarchies that are used as navigational attributes or in aggregates.
The hierarchy/attribute change run, which activates hierarchy and attribute changes and adjusts the corresponding aggregates, is divided into 4 phases:

1. Finding all affected aggregates.
2. Setting up all affected aggregates again and writing the result to the new aggregate table.
3. Activating the attributes and hierarchies.
4. Renaming the new aggregate table. While the rename is running, it is not possible to execute queries.


Transaction code: RSATTR
Program: RSDDS_ATTR_CHANGERUN
Change run monitor: RSDDS_CHANGERUN_MONITOR

In any SAP system, only one attribute change run can run at a given point in time.
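
As a rough sketch, the change run can also be triggered from a custom report by submitting the program named above (selection-screen values are omitted here; in practice you would schedule it via transaction RSATTR or an attribute change run step in a process chain):

* Rough sketch: start the attribute change run program named above.
* Selection parameters are omitted; normally the run is scheduled
* via transaction RSATTR or a process chain step.
SUBMIT rsdds_attr_changerun AND RETURN.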


Check the following link for more information:
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/17435