Friday, May 24, 2013

Points to remember in Inventory Management

1. There are three SAP-delivered transactional DataSources for stock management:
  • 2LIS_03_BX : Always needed; it carries out the initialization of the stocks.
  • 2LIS_03_BF : Initialize this in the source system if you need historical data; otherwise an empty init is enough to load only future delta records. If the source system is new, no initialization is needed.
  • 2LIS_03_UM : Only needed if revaluations are carried out in the source system. This DataSource is helpful if material prices are adjusted from time to time; otherwise it will not extract any data.
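As a quick aside, the initialization choice for 2LIS_03_BF can be summed up in a small Python sketch (the returned strings are just descriptive labels for this note, not SAP terminology):

def bf_init_mode(source_system_is_new, historical_data_needed):
    """Illustrative initialization choice for 2LIS_03_BF, per the notes above."""
    if source_system_is_new:
        return "no initialization required"
    if historical_data_needed:
        return "init with data transfer (historical data + future deltas)"
    return "empty init (future delta records only)"

print(bf_init_mode(source_system_is_new=False, historical_data_needed=True))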

 2. Checklist for the source system. The following needs to be carried out before initializing the above three DataSources:

  • Table TBE11 : Maintain the entry 'NDI' with the text 'New Dimension Integration' and activate the flag (Note 315880).
  • Table TPS01 : The entry should be as below (Note 315880): PROCS – 01010001, INTERFACE – SAMPLE_PROCESS_01010001, TEXT1 – NDI Exits Active.
  • Table TPS31 : The entry should be as below (Note 315880): PROCS – 01010001, APPLK – NDI, FUNCT – NDI_SET_EXISTS_ACTIVE.
  • Tcode MCB_ : In most cases you need to set the industry sector to 'Standard'. For more information see Note 353042.
  • Tcode BF11 : Set the indicator for the Business Warehouse application entry to active. This entry may need to be transported to the production system (Note 315880).


 3. After running the setup data, check the data for the fields BWVORG, BWAPPLNM and MENGE. If no data is available in these fields, some settings from the above checklist are missing in R/3. Correct the issue and rerun the setup data.

 4. Data staging through a DSO is not allowed for the BX extractor. The data should be loaded directly from the extractor to the cube, and only once. Choose the extraction mode 'Initial Non-cumulative for Non-cumulative values' in the DTP.

 5. A DSO is possible for BF. If you are creating a standard DSO, choose the fields MJAHR, BWCOUNTER, MBLNR and ZEILE as key fields. Some of these fields are not available in the standard DataSource, but the DataSource can be enhanced using the LO Cockpit (LBWE) to add them. Further key fields are possible in addition, depending on the DSO structure. Note 417703 gives more information on this.

 6. Point 5 is also valid for UM. The key fields could be a combination of the fields MJAHR, MBLNR, ZEILE and BUKRS (Note 581778).

 7. The data load to the cube should follow the process below (the marker handling is summarized in the small sketch after this point):
 A. Load the BX data. Compress the request with the stock marker (uncheck the 'No marker update' option).
 B. Load the BF and UM init data. Compress these loads without the stock marker (check the 'No marker update' option).
 C. The future delta loads from BF and UM should be compressed with the stock marker (uncheck the 'No marker update' option).
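For reference, here is a minimal Python lookup that encodes the marker handling from point 7. The datasource/load-type keys are just labels for this sketch; True means the stock marker is updated, i.e. the 'No marker update' box is left unchecked.

MARKER_UPDATE = {
    ("2LIS_03_BX", "init"):  True,   # opening stock sets the marker
    ("2LIS_03_BF", "init"):  False,  # historical movements: compress without marker update
    ("2LIS_03_UM", "init"):  False,
    ("2LIS_03_BF", "delta"): True,   # future deltas must update the marker
    ("2LIS_03_UM", "delta"): True,
}

def no_marker_update_checkbox(datasource, load_type):
    """Value of the 'No marker update' checkbox when compressing this request."""
    return not MARKER_UPDATE[(datasource, load_type)]

print(no_marker_update_checkbox("2LIS_03_BF", "init"))   # True: check the box
print(no_marker_update_checkbox("2LIS_03_BF", "delta"))  # False: leave it unchecked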

 8. If in the future the cube needs to be deleted due to some issue, the reload should also follow the process above (only the init loads of BF and UM should be loaded first, and then the deltas should be processed).

 9. To check the data consistency of a non-cumulative cube, the standard program SAP_REFPOINT_COMPLETE can be used. To check the compression status of the cube, the table RSDCUBE can be referred to: before the compression of the BX request the REFUPDATE field should be blank, and after the compression its value should become 'X'. Check Note 643687 for more information.

 10. After the BX data load to the cube, the data is not visible via LISTCUBE. Only after compression can the data be seen by running a query on the non-cumulative cube.

Thursday, May 16, 2013

Partitioning

Use

You use partitioning to split the total dataset for an InfoProvider into several smaller, physically independent and redundancy-free units. This separation improves system performance when you analyze data or delete data from the InfoProvider.

Prerequisites

You can only partition a dataset using one of the two partitioning criteria 'calendar month' (0CALMONTH) or 'fiscal year/period' (0FISCPER). At least one of the two InfoObjects must be contained in the InfoProvider.

If you want to partition an InfoCube using the fiscal year/period (0FISCPER) characteristic, you have to set the fiscal year variant characteristic to constant.

There are two types of partitioning:
1. Physical partitioning
2. Logical partitioning


1. Physical Partition:

Introduction: As we all know, the SAP BI system has a backend database to store the data records that come into the system through the various DataSources such as R/3, standalone legacy systems, and so on. Whenever a data target is created in the SAP BI/BW system, a portion of the database storage is allocated to it (the data target) in terms of physical space.
This allocation of physical storage differs based on the type of the data target.

For example:
 A DSO is a flat table, and its capacity is ideally limited only by the total capacity of the database.
 An InfoCube is a multi-dimensional structure, and its capacity is based on the number of data load requests it has, or indirectly on the number of partitions created at the database level by the data loads into the respective InfoCube.

As the number of data load requests increases, the number of partitions at the database level also increases. The upper limit on partitions is database dependent.

For example, for a SQL Server database it is 1000, and for DB2 it is approximately 1200, depending on the service packs installed.
Throughout this document I have considered Microsoft's SQL Server database for the explanation, i.e. a database whose partition limit is 1000.

Illustration of database partitioning w.r.t. InfoCube data loads
As mentioned above, database partitioning for an InfoCube happens based on the number of data load requests that the cube has, i.e. each new data load request into the cube creates one more partition at the database level, so the number of partitions grows with the number of data loads carried out to date.

FIG1: Illustration to indicate the dataload request for the Infocube
As the number of data load requests increases, the number of database partitions for the InfoCube also increases, since the two are directly proportional. Once this reaches the limit of the database used, no more data loads can happen for the respective InfoCube (data target), as dictated by the database's partition limit.

Fig2: Illustration example used to explain the database partition limit.

Database partition limit calculation: In the cube considered above, data loads happen from 21st February 2008, as depicted in the figure shown above. Assumption: data loads happen every day, so a year ideally means 365 data loads = 365 data partitions at the database level (a SQL Server database with an upper partition limit of 1000 is considered, and leap days are ignored).
Therefore, up to 18 November 2010 (313 remaining days of 2008 + 365 days of 2009 + 322 days of 2010 = 1000), the data loads will happen successfully for the considered InfoCube; after that, the loads will fail as there will be no free partition to store the records.
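The back-of-envelope arithmetic above can be reproduced with a few lines of Python (a sketch that keeps the blog's simplification of treating every year as 365 days):

PARTITION_LIMIT = 1000            # assumed SQL Server partition limit from the text
loads_2008 = 365 - 52             # daily loads from 21 Feb (day 52) to year end = 313
loads_2009 = 365                  # one full year of daily loads
loads_2010 = PARTITION_LIMIT - loads_2008 - loads_2009
print(loads_2008, loads_2009, loads_2010)   # 313 365 322 -> day 322 of 2010 is 18 November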
Also, SAP's standard table "RSMSSPARTMON" provides details such as the number of partitions created at the database level for the data targets (InfoCubes); these details can be used to get an accurate picture per InfoCube.
How to avoid the consequences of the database partition limit: One method is the logical partitioning of the data targets. This involves backing up the data into other (newly created copy) data targets, deleting the corresponding data from the original data targets, and then compressing the data load requests of the original data targets whose data has been moved to the copies.
This compression activity deletes the data load request details at the database level and hence indirectly reduces the number of partitions previously created for the individual data targets.
Steps to compress the Infocube (data)

Fig: Screenshot indicating the compression process of the InfoCube, which reduces the number of partitions at the database level.

Select the required request ID or range of request IDs for which the compression has to be carried out, and then schedule the process.
This deletes the respective data load request ID(s) and reduces the number of partitions created at the database level for the InfoCube/data target, thus increasing the number of available partitions. (The compression activity can be controlled to the required level by giving individual request numbers or a range of requests, as depicted in the above screenshot.)
Compressing a data target, e.g. an InfoCube, makes deleting data by data load request ID impossible, because the contents of the F fact table are moved into the E fact table. However, this has no impact on the logical partitioning process described above, since the compression is done only for requests whose data has already been copied to the other target and deleted from the original target into which regular data loads continue.

2. Logical Partition:


When you activate the InfoProvider, the system creates the table on the database with the number of partitions corresponding to the value range. You can set the value range yourself.

Choose the partitioning criterion 0CALMONTH and determine the value range
From: 01.1998
To: 12.2003
6 years x 12 months + 2 = 74 partitions are created (2 partitions for values that lie outside the range, meaning < 01.1998 or > 12.2003).

You can also determine the maximum number of partitions created on the database for this table.
Choose the partitioning criterion 0CALMONTH and determine the value range
From: 01.1998
To: 12.2003
Choose 30 as the maximum number of partitions.
Resulting from the value range: 6 years x 12 calendar months + 2 marginal partitions (up to 01.1998, from 12.2003) = 74 single values.
The system groups three months at a time into a partition (so each partition corresponds to exactly one quarter); in this way, 6 years x 4 partitions/year + 2 marginal partitions = 26 partitions are created on the database.
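A small Python sketch of the arithmetic in the two examples above; the grouping rule for the capped case is inferred from the quarter example, so treat it as an assumption rather than the exact system algorithm:

import math

def partition_count(months_in_range, max_partitions=None):
    """Partitions for a 0CALMONTH range: one per month plus 2 marginal partitions,
    grouped into larger buckets if a maximum number of partitions is set."""
    if max_partitions is None or months_in_range + 2 <= max_partitions:
        return months_in_range + 2
    months_per_partition = math.ceil(months_in_range / (max_partitions - 2))
    return math.ceil(months_in_range / months_per_partition) + 2

print(partition_count(6 * 12))        # 74 (no maximum set)
print(partition_count(6 * 12, 30))    # 26 (three months, i.e. one quarter, per partition)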

The performance gain is only achieved for the partitioned InfoProvider if its time characteristics are consistent. This means that, with partitioning by 0CALMONTH, all 0CALx characteristic values of a data record have to refer to the same calendar month.

In the example illustrated in the accompanying graphic, only record 1 is consistent; records 2 and 3 are not, because their time characteristics refer to different periods.
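To make the consistency rule concrete, here is a tiny Python check, under the assumption that a record carries 0CALDAY as a date and 0CALMONTH as a 'YYYYMM' string:

from datetime import date

def is_time_consistent(record):
    """A record is consistent if its 0CALDAY falls in the month given by 0CALMONTH."""
    return record["0CALDAY"].strftime("%Y%m") == record["0CALMONTH"]

print(is_time_consistent({"0CALDAY": date(1998, 1, 15), "0CALMONTH": "199801"}))  # True
print(is_time_consistent({"0CALDAY": date(1998, 2, 15), "0CALMONTH": "199801"}))  # False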

Activities

In InfoProvider maintenance, choose Extras → DB Performance → Partitioning and specify the value range. Where necessary, limit the maximum number of partitions.

Partitioning InfoCubes Using the Characteristic 0FISCPER:

You can partition InfoCubes using two characteristics – calendar month (0CALMONTH) and fiscal year/period (0FISCPER). The special feature of the fiscal year/period characteristic (0FISCPER) being compounded with the fiscal year variant (0FISCVARNT) means that you have to use a special procedure when you partition an InfoCube using 0FISCPER.

Prerequisites

When partitioning using 0FISCPER values, values are calculated within the partitioning interval that you specified in the InfoCube maintenance. To do this, the value for 0FISCVARNT must be known at the time of partitioning; it must be set to constant.

Procedure

       1.      The InfoCube maintenance is displayed. Set the value for the 0FISCVARNT characteristic to constant. Carry out the following steps:
a.       Choose the Time Characteristics tab page.
b.       In the context menu of the dimension folder, choose Object-Specific InfoObject Properties.
c.       Specify a constant for the characteristic 0FISCVARNT. Choose Continue.
       2.      Choose Extras → DB Performance → Partitioning. The Determine Partitioning Conditions dialog box appears. You can now select the 0FISCPER characteristic under Slctn. Choose Continue.
       3.      The Value Range (Partitioning Condition) dialog box appears. Enter the required data.
   Data Flow Summary:

E fact table
 contains consolidated data
 is optimized for reading
 might be huge
 is partitioned by the user
 cannot be partitioned once InfoCube contains data!

F fact table
 contains data on request level
 is optimized for writing / deleting
 should be small
 is partitioned by the system


 

PSA PARTITIONING


PSA partitioning is a range partitioning based on a partition number that is assigned to each record.

It provides faster data load performance:
  • Array inserts are possible
  • Inserts occur into smaller database objects

It makes it easy to maintain the PSA as a temporary data staging area:
  • Deletion of requests from the PSA can be handled as a drop-partition SQL command

Deleting PSA records:
  • Select the request to be deleted; the records from the request in the PSA are flagged for deletion
  • When all records of a partition have been flagged, the PSA partition is dropped



The threshold is not a hard limit; it will normally be exceeded by data loads.

The threshold setting exists as a way of determining whether a new partition is required when records from a new data load are about to be added:
  • Records from one InfoPackage will never span more than one partition
  • A partition can become very large if the data load volume is very large
  • A PSA partition can hold records from several InfoPackages

The sketch below illustrates this threshold behaviour.
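A minimal Python sketch of that behaviour, with made-up record counts purely for illustration:

def write_request(records_in_current_partition, request_size, threshold):
    """Return (partition size after the load, whether a new partition was opened).
    The threshold is only checked before the request is written, and a request
    is never split across partitions, so the threshold can be exceeded."""
    open_new = records_in_current_partition >= threshold
    start = 0 if open_new else records_in_current_partition
    return start + request_size, open_new

print(write_request(25_000, 50_000, 30_000))   # (75000, False) - threshold exceeded
print(write_request(75_000, 20_000, 30_000))   # (20000, True)  - next load opens a new partition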

Repartitioning

Use

Repartitioning can be useful if you have already loaded data to your InfoCube, and:
      You did not partition the InfoCube when you created it.
      You loaded more data into your InfoCube than you had planned when you partitioned it.
      You did not choose a long enough period of time for partitioning.
      Some partitions contain no data or little data due to data archiving over a period of time.

Tuesday, May 14, 2013

BI 7.0 or Netweaver 2004s Features

BI Metadata Search :

It is possible to search BI metadata (such as InfoCubes, InfoObjects, queries and Web templates) using the TREX search engine. This search is integrated into the Metadata Repository, the Data Warehousing Workbench and, to some degree, the object editors. With the simple search, a search for one or all object types is performed in technical names and in texts. During the text search, lower and upper case are ignored, so an object is also found when the case in the text differs from that in the search term. With the advanced search you can also search in attributes; these attributes are specific to every object type. Beyond that, the search can be restricted for all object types by the person who last changed the object and by the time of the change. For example, you can search all queries that were changed in the last month and that include both the term "overview" in the text and the characteristic customer in the definition. Further functions include searching in the delivered (D) version, fuzzy search and the option of linking search terms with "AND" and "OR".

Because the advanced search described above offers more extensive options for searching in metadata, the function "Generation of Documents for Metadata" in the administration of document management (transaction RSODADMIN) was removed. You have to schedule (delta) indexing of metadata as a regular job (transaction RSODADMIN).

Effects on Customizing:
· Installation of the TREX search engine
· Creation of an RFC destination for the TREX search engine
· Entering the RFC destination into table RSODADMIN_INT
· Determining the relevant object types
· Initial indexing of metadata
Remote Activation of DataSources (Developer Functionality) :

1. When activating Business Content in BI, you can activate DataSources remotely from the BI system. This activation is subject to an authorization check. You need role SAP_RO_BCTRA. Authorization object S_RO_BCTRA is checked. The authorization is valid for all DataSources of a source system. When the objects are collected, the system checks the authorizations remotely, and issues a warning if you lack authorization to activate the DataSources.

2. In BI, if you trigger the transfer of the Business Content in the active version, the results of the authorization check are based on the cache. If you lack the necessary authorization for activation, the system issues a warning for the DataSources. BW issues an error for the corresponding source-system-dependent objects (transformations, transfer rules, transfer structure, InfoPackage, process chain, process variant). In this case, you can use Customizing for the extractors to manually transfer the required DataSources in the source system from the Business Content, replicate them in the BI system, and then transfer the corresponding source-system-dependent objects from the Business Content. If you have the necessary authorizations for activation, the DataSources in the source system are transferred to the active version and replicated in the BI system. The source-system-dependent objects are activated in the BI system.

3. Source systems and/or BI systems have to have BI Service API SAP NetWeaver 2004s at least; otherwise remote activation is not supported. In this case, you have to activate the DataSources in the source system manually and then replicate them to the BI system.
Copy Process Chains (Developer Functionality):

You find this function in the Process Chain menu and use it to copy the process chain you have selected, along with its references to process variants, and save it under a new name and description.

InfoObjects in Hierarchies (Data Modeling):

1. Up to Release SAP NetWeaver 2004s, it was not possible to use InfoObjects with a length longer than 32 characters in hierarchies. These types of InfoObjects could not be used as a hierarchy basic characteristic and it was not possible to copy characteristic values for such InfoObjects as foreign characteristic nodes into existing hierarchies. From SAP NetWeaver 2004s, characteristics of any length can be used for hierarchies.
2. To load hierarchies, the PSA transfer method has to be selected (which is always recommended for loading data anyway). With the IDoc transfer method, it continues to be the case that only hierarchies containing characteristic values with a length of 32 characters or less can be loaded.

Parallelized Deletion of Requests in DataStore Objects (Data Management) :

Now you can delete active requests in a DataStore object in parallel. Up to now, the requests were deleted serially within an LUW. This can now be processed by package and in parallel.

Object-Specific Setting of the Runtime Parameters of DataStore Objects (Data Management):

Now you can set the runtime parameters of DataStore objects by object and then transport them into connected systems. The following parameters can be maintained:
· Package size for activation
· Package size for SID determination
· Maximum wait time before a process is designated lost
· Type of processing: Serial, Parallel(batch), Parallel (dialog)
· Number of processes to be used
· Server/server group to be used

Enhanced Monitor for Request Processing in DataStore Objects (Data Management):

1. For the request operations executed on DataStore objects (activation, rollback and so on), there is now a separate, detailed monitor. In previous releases, request-changing operations were displayed in the extraction monitor; when the same operation was executed multiple times, it was very difficult to assign the messages to the respective operations.

2. In order to allow simpler error analysis and to identify optimization potential when configuring the runtime parameters, as of SAP NetWeaver 2004s all messages relevant for DataStore objects are displayed in their own monitor.


Write-Optimized DataStore Object (Data Management):

1. Up to now it was necessary to activate the data loaded into a DataStore object to make it visible to reporting or to be able to update it to further InfoProviders. As of SAP NetWeaver 2004s, a new type of DataStore object is introduced: the write-optimized DataStore object.

2. The objective of the new object type is to save data as efficiently as possible in order to further process it as quickly as possible, without the additional effort of generating SIDs, aggregation and data-record-based delta handling. Data that is loaded into write-optimized DataStore objects is available immediately for further processing. The activation step that was necessary up to now is no longer required.

3. The loaded data is not aggregated. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. During loading, for reasons of efficiency, no SID values can be determined for the loaded characteristics. The data is still available for reporting. However, in comparison to standard DataStore objects, you can expect to lose performance because the necessary SID values have to be determined during query runtime.


Deleting from the Change Log (Data Management):

The Deletion of Requests from the Change Log process type supports the deletion of change log files. You select DataStore objects to determine the selection of requests. The system supports multiple selections. You select objects in a dialog box for this purpose. The process type supports the deletion of requests from any number of change logs.

Using InfoCubes in InfoSets (Data Modeling):

1. You can now include InfoCubes in an InfoSet and use them in a join. InfoCubes are handled logically in InfoSets like DataStore objects. This is also true for time dependencies. In an InfoCube, data that is valid for different dates can be read.

2. For performance reasons you cannot define an InfoCube as the right operand of a left outer join. SAP does not generally support more than two InfoCubes in an InfoSet.
Pseudo Time Dependency of DataStore Objects and InfoCubes in InfoSets (Data Modeling) :

In BI only master data can be defined as a time-dependent data source. Two additional fields/attributes are added to the characteristic. DataStore objects and InfoCubes that are being used as InfoProviders in the InfoSet cannot be defined as time dependent. As of SAP NetWeaver 2004s, you can specify a date or use a time characteristic with DataStore objects and InfoCubes to describe the validity of a record. These InfoProviders are then interpreted as time-dependent data sources.

Left Outer: Include Filter Value in On-Condition (Data Modeling) :

The global properties in InfoSet maintenance have been enhanced by a new setting, Left Outer: Include Filter Value in On-Condition. This indicator controls how a condition on a field of a left outer table is converted in the SQL statement, which affects the query results: if the indicator is set, the condition/restriction is included in the on-condition of the SQL statement, so the condition is evaluated before the join. If the indicator is not set, the condition/restriction is included in the where-condition, so the condition is only evaluated after the join. The indicator is not set by default. The difference is illustrated by the small sketch below.
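As an illustration of why the two placements give different results, here is a minimal Python simulation of a left outer join with the restriction applied either before or after the join. Plain dictionaries stand in for the InfoSet tables, and the data is made up:

left  = [{"id": 1}, {"id": 2}]
right = [{"id": 1, "flag": "A"}, {"id": 2, "flag": "B"}]

def left_outer_join(filter_in_on_condition):
    joined = []
    for l in left:
        matches = [r for r in right
                   if r["id"] == l["id"]
                   and (not filter_in_on_condition or r["flag"] == "A")]
        if matches:
            joined.extend({**l, "flag": r["flag"]} for r in matches)
        else:
            joined.append({**l, "flag": None})        # unmatched left row kept with NULLs
    if not filter_in_on_condition:                    # restriction evaluated after the join
        joined = [row for row in joined if row["flag"] == "A"]
    return joined

print(left_outer_join(True))    # [{'id': 1, 'flag': 'A'}, {'id': 2, 'flag': None}]
print(left_outer_join(False))   # [{'id': 1, 'flag': 'A'}]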
Key Date Derivation from Time Characteristics (Data Modeling) :

Key dates can be derived from the time characteristics 0CALWEEK, 0CALMONTH, 0CALQUARTER, 0CALYEAR, 0FISCPER, 0FISCYEAR: It was previously possible to specify the first, last or a fixed offset for key date derivation. As of SAP NetWeaver 2004s, you can also use a key date derivation type to define the key date.

Repartitioning of InfoCubes and DataStore Objects (Data Management):

With SAP NetWeaver 2004s, the repartitioning of InfoCubes and DataStore objects on the database that are already filled is supported. With partitioning, the runtime for reading and modifying access to InfoCubes and DataStore objects can be decreased. Using repartitioning, non-partitioned InfoCubes and DataStore objects can be partitioned or the partitioning schema for already partitioned InfoCubes and DataStore objects can be adapted.

Remodeling InfoProviders (Data Modeling):


As of SAP NetWeaver 2004s, you can change the structure of InfoCubes into which you have already loaded data, without losing the data. You have the following remodeling options:
For characteristics:
· Insert, or replace characteristics with: a constant, an attribute of an InfoObject within the same dimension, the value of another InfoObject within the same dimension, or a customer exit (for user-specific coding)
· Delete
For key figures:
· Insert: constants or a customer exit (for user-specific coding)
· Replace key figures with: a customer exit (for user-specific coding)
· Delete
SAP NetWeaver 2004s does not support the remodeling of InfoObjects or DataStore objects; this is planned for future releases. Before you start remodeling, make sure that: (A) You have stopped any process chains that run periodically and affect the corresponding InfoProvider. Do not restart these process chains until remodeling is finished.
(B) There is enough available tablespace on the database.

After remodeling, check which BI objects that are connected to the InfoProvider (transformation rules, MultiProviders, queries and so on) have been deactivated. You have to reactivate these objects manually.
Parallel Processing for Aggregates (Performance):

1. The change run, rollup, condensing and checking of multiple aggregates can be executed in parallel. Parallelization takes place across the aggregates. The parallel processes are executed in the background, even when the main process is executed in dialog.

2. This can considerably decrease execution time for these processes. You can determine the degree of parallelization and determine the server on which the processes are to run and with which priority.

3. If no setting is made, a maximum of three processes are executed in parallel. This setting can be adjusted for each individual process type (change run, rollup, condensing of aggregates and checks). When using process chains, the setting can be overridden for every one of the processes listed above. Parallelization of the change run according to SAP Note 534630 is obsolete and is no longer supported.

Multiple Change Runs (Performance):

1. You can start multiple change runs simultaneously. The prerequisite for this is that the lists of the master data and hierarchies to be activated are different and that the changes affect different InfoCubes. After a change run, all affected aggregates are condensed automatically.

2. If a change run terminates, the same change run must be started again. You have to start the change run with the same parameterization (same list of characteristics and hierarchies). SAP Note 583202 is obsolete.

Partitioning Optional for Aggregates (Performance):

1. Up to now, the aggregate fact tables were partitioned if the associated InfoCube was partitioned and the partitioning characteristic was in the aggregate. Now it is possible to suppress partitioning for individual aggregates. If aggregates do not contain much data, very small partitions can result. This affects read performance. Aggregates with very little data should not be partitioned.

2. Aggregates that are not to be partitioned have to be activated and filled again after the associated property has been set.

MOLAP Store (Deleted) (Performance):

Previously you were able to create aggregates either on the basis of a ROLAP store or on the basis of a MOLAP store. The MOLAP store was a platform-specific means of optimizing query performance. It used Microsoft Analysis Services and, for this reason, was only available for the Microsoft SQL Server database platform. Because HPA indexes, available with SAP NetWeaver 2004s, are a platform-independent alternative to ROLAP aggregates with high performance and low administrative costs, the MOLAP store is no longer supported.

Data Transformation (Data Management):

1. A transformation has a graphical user interface and replaces the transfer rules and update rules, in combination with the functionality of the data transfer process (DTP). Transformations are generally used to transform an input format into an output format. A transformation consists of rules; a rule defines how the data content of a target field is determined. Various rule types are available, such as direct transfer, currency translation, unit of measure conversion, routine, and reading from master data.

2. Block transformations can be realized using different data package-based rule types such as start routine, for example. If the output format has key fields, the defined aggregation behavior is taken into account when the transformation is performed in the output format. Using a transformation, every (data) source can be converted into the format of the target by using an individual transformation (one-step procedure). An InfoSource is only required for complex transformations (multistep procedures) that cannot be performed in a one-step procedure.

3. The following functional limitations currently apply:
· You cannot use hierarchies as the source or target of a transformation.
· You cannot use master data as the source of a transformation.
· You cannot use a template to create a transformation.
· No documentation has been created in the metadata repository yet for transformations.
· In the transformation there is no check for referential integrity; the InfoObject transfer routines are not taken into account, and routines cannot be created using the return table.

Quantity Conversion :

As of SAP NetWeaver 2004s you can create quantity conversion types using transaction RSUOM. The business rules for the conversion are established in the quantity conversion type. The conversion type is a combination of different parameters (conversion factors, source and target units of measure) that determine how the conversion is performed. In terms of functionality, quantity conversion is structured similarly to currency translation. Quantity conversion allows you to convert key figures with units that have different units of measure in the source system into a uniform unit of measure in the BI system when you update them into InfoCubes. A simple sketch of the conversion-factor mechanism follows below.
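Purely as an illustration of the conversion-factor idea, here is a small Python sketch; the factors and unit codes below are made up and are not taken from any SAP conversion type:

FACTOR_TO_KG = {"KG": 1.0, "G": 0.001, "TO": 1000.0}   # hypothetical conversion factors

def convert_to_kg(quantity, source_unit):
    """Convert a key figure value from its source unit into the uniform unit KG."""
    return quantity * FACTOR_TO_KG[source_unit]

print(convert_to_kg(500, "G"))   # 0.5
print(convert_to_kg(2, "TO"))    # 2000.0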

Data Transfer Process :

You use the data transfer process (DTP) to transfer data within BI from a persistent object to another object in accordance with certain transformations and filters. In this respect, it replaces the InfoPackage, which only loads data to the entry layer of BI (PSA), and the data mart interface. The data transfer process makes the transfer processes in the data warehousing layer more transparent. Optimized parallel processing improves the performance of the transfer process (the data transfer process determines the processing mode). You can use the data transfer process to separate delta processes for different targets and you can use filter options between the persistent objects on various levels. For example, you can use filters between a DataStore object and an InfoCube. Data transfer processes are used for standard data transfer, for real-time data acquisition, and for accessing data directly. The data transfer process is available as a process type in process chain maintenance and is to be used in process chains.

ETL Error Handling :

The data transfer process supports you in handling data records with errors. The data transfer process also supports error handling for DataStore objects. As was previously the case with InfoPackages, you can determine how the system responds if errors occur. At runtime, the incorrect data records are sorted and can be written to an error stack (request-based database table). After the error has been resolved, you can further update data to the target from the error stack. It is easier to restart failed load processes if the data is written to a temporary store after each processing step. This allows you to determine the processing step in which the error occurred. You can display the data records in the error stack from the monitor for the data transfer process request or in the temporary storage for the processing step (if filled). In data transfer process maintenance, you determine the processing steps that you want to store temporarily.

InfoPackages :

InfoPackages only load the data into the input layer of BI, the Persistent Staging Area (PSA). Further distribution of the data within BI is done by the data transfer processes. The following changes have occurred due to this:
· New tab page: Extraction -- The Extraction tab page includes the settings for adapter and data format that were made for the DataSource. For data transfer from files, the External Data tab page is obsolete; the settings are made in DataSource maintenance.
· Tab page: Processing -- Information on how the data is updated is obsolete because further processing of the data is always controlled by data transfer processes.
· Tab page: Updating -- On the Updating tab page, you can set the update mode to the PSA depending on the settings in the DataSource. In the data transfer process, you now determine how the update from the PSA to other targets is performed. Here you have the option to separate delta transfer for various targets.

For real-time data acquisition with the Service API, you create special InfoPackages in which you determine how the requests are handled by the daemon (for example, after which time interval a request for real-time data acquisition should be closed and a new one opened). For real-time data acquisition with Web services (push), you also create special InfoPackages to set certain parameters for real-time data acquisition, such as sizes and time limits for requests.

PSA :

The persistent staging area (PSA), the entry layer for data in BI, has been changed in SAP NetWeaver 2004s. Previously, the PSA table was part of the transfer structure. You managed the PSA table in the Administrator Workbench in its own object tree. Now you manage the PSA table for the entry layer from the DataSource. The PSA table for the entry layer is generated when you activate the DataSource. In an object tree in the Data Warehousing Workbench, you choose the context menu option Manage to display a DataSource in PSA table management. You can display or delete data here. Alternatively, you can access PSA maintenance from the load process monitor. Therefore, the PSA tree is obsolete.

Real-Time Data Acquisition :

Real-time data acquisition supports tactical decision making. You use real-time data acquisition if you want to transfer data to BI at frequent intervals (every hour or minute) and access this data in reporting frequently or regularly (several times a day, at least). In terms of data acquisition, it supports operational reporting by allowing you to send data to the delta queue or PSA table in real time. You use a daemon to transfer DataStore objects that have been released for reporting to the ODS layer at frequent regular intervals. The data is stored persistently in BI. You can use real-time data acquisition for DataSources in SAP source systems that have been released for real time, and for data that is transferred into BI using the Web service (push). A daemon controls the transfer of data into the PSA table and its further posting into the DataStore object. In BI, InfoPackages are created for real-time data acquisition. These are scheduled using an assigned daemon and are executed at regular intervals. With certain data transfer processes for real-time data acquisition, the daemon takes on the further posting of data to DataStore objects from the PSA. As soon as data is successfully posted to the DataStore object, it is available for reporting. Refresh the query display in order to display the up-to-date data. In the query, a time stamp shows the age of the data. The monitor for real-time data acquisition displays the available daemons and their status. Under the relevant DataSource, the system displays the InfoPackages and data transfer processes with requests that are assigned to each daemon. You can use the monitor to execute various functions for the daemon, DataSource, InfoPackage, data transfer process, and requests.

Archiving Request Administration Data :

You can now archive log and administration data requests. This allows you to improve the performance of the load monitor and the monitor for load processes. It also allows you to free up tablespace on the database. The archiving concept for request administration data is based on the SAP NetWeaver data archiving concept. The archiving object BWREQARCH contains information about which database tables are used for archiving, and which programs you can run (write program, delete program, reload program). You execute these programs in transaction SARA (archive administration for an archiving object). In addition, in the Administration functional area of the Data Warehousing Workbench, in the archive management for requests, you can manage archive runs for requests. You can execute various functions for the archive runs here.

After an upgrade, use BI background management or transaction SE38 to execute report RSSTATMAN_CHECK_CONVERT_DTA and report RSSTATMAN_CHECK_CONVERT_PSA for all objects (InfoProviders and PSA tables). Execute these reports at least once so that the available request information for the existing objects is written to the new table for quick access, and is prepared for archiving. Check that the reports have successfully converted your BI objects. Only perform archiving runs for request administration data after you have executed the reports.

Flexible process path based on multi-value decisions :

The workflow and decision process types support the event Process ends with complex status. When you use this process type, you can control the process chain process on the basis of multi-value decisions. The process does not have to end simply successfully or with errors; for example, the week day can be used to decide that the process was successful and determine how the process chain is processed further. With the workflow option, the user can make this decision. With the decision process type, the final status of the process, and therefore the decision, is determined on the basis of conditions. These conditions are stored as formulas.

Evaluating the output of system commands :

You use this function to decide whether the system command process is successful or has errors. You can do this if the output of the command includes a character string that you defined. This allows you to check, for example, whether a particular file exists in a directory before you load data from it. If the file is not in the directory, the load process can be repeated at predetermined intervals.

Repairing and repeating process chains :

You use this function to repair processes that were terminated. You execute the same instance again, or repeat it (execute a new instance of the process), if this is supported by the process type. You call this function in log view in the context menu of the process that has errors. You can restart a terminated process in the log view of process chain maintenance when this is possible for the process type.

If the process cannot be repaired or repeated after termination, the corresponding entry is missing from the context menu in the log view of process chain maintenance. In this case, you are able to start the subsequent processes. A corresponding entry can be found in the context menu for these subsequent processes.

Executing process chains synchronously :

You use this function to schedule and execute the process in the dialog, instead of in the background. The processes in the chain are processed serially using a dialog process. With synchronous execution, you can debug process chains or simulate a process chain run.

Error handling in process chains :

You use this function in the attribute maintenance of a process chain to classify all the incorrect processes of the chain as successful, with regard to the overall status of the run, if you have scheduled a successor process Upon Errors or Always. This function is relevant if you are using metachains. It allows you to continue processing metachains despite errors in the subchains, if the successor of the subchain is scheduled Upon Success.

Determining the user that executes the process chain :

You use this function in the attribute maintenance of a process chain to determine which user executes the process chain. In the default setting, this is the BI background user.

Display mode in process chain maintenance :

When you access process chain maintenance, the process chain display appears. The process chain is not locked and does not call the transport connection. In the process chain display, you can schedule without locking the process chain.

Checking the number of background processes available for a process chain :

During the check, the system calculates the number of parallel processes according to the structure of the tree. It compares the result with the number of background processes on the selected server (or the total number of all available servers if no server is specified in the attributes of the process chain). If the number of parallel processes is greater than the number of available background processes, the system highlights every level of the process chain where the number of processes is too high, and produces a warning.

Open Hub / Data Transfer Process Integration :

As of SAP NetWeaver 2004s SPS 6, the open hub destination has its own maintenance interface and can be connected to the data transfer process as an independent object. As a result, all data transfer process services for the open hub destination can be used. You can now select an open hub destination as a target in a data transfer process. In this way, the data is transformed as with all other BI objects. In addition to the InfoCube, InfoObject and DataStore object, you can also use the DataSource and InfoSource as a template for the field definitions of the open hub destination. The open hub destination now has its own tree in the Data Warehousing Workbench under Modeling. This tree is structured by InfoAreas.
The open hub service with the InfoSpoke that was provided until now can still be used. We recommend, however, that new objects are defined with the new technology.

BW 7.3 Features


1. DWH enhancements add planning functions, data flows and process chains in the Modeling area.
2. DTPs are automatically integrated into process chains.
3. Migrate DataSources from SAP BW 3.x to 7.3 directly using the DataSource tab.
  (The migration includes a recovery option in case of any mishap.)
4. Enhanced InfoProvider options, i.e.:
    a. Create aggregations
    b. Create HybridProviders
5. InfoCube enhancements:
    a) Navigation attributes
    b) Selection conditions
    c) Property (partitioned)
6. Flexible generic search screen (for example for process chains and open hubs) on the 'Find' tab.
7. Additional InfoCube functions (Create Update Rules, Generate Export DataSource, Remodeling, Repartitioning, Display Log).
8. InfoProvider and transformation settings:
  InfoCube settings
   In Memory: when set, the DSO is optimized for the SAP HANA database; the default is not checked.
   BWA Status: when set, the InfoCube is optimized for the SAP HANA database; by default the InfoCube stores its data in the database.
  Transformation enhancements
   Referential integrity (not new, but can now be set in a transformation rule)
   New rule type (read from DataStore object (DSO))
   Define semantic groups for packages.
9. Transformation rule to fill a field in the transformation target by reading the value from a DSO (similar to reading from master data).
10. An enhanced connection between SAP NetWeaver BW 7.3 and SAP BusinessObjects Data Integrator (source system type "Data Services") enables you to establish connections between SAP NetWeaver BW and non-SAP systems, and to trigger the generation of metadata and data flows.
11. New option on the DTP Extraction tab (Delta Init without Data).
12. DTP parallel extraction:
The system uses parallel extraction to execute the DTP when the following conditions are met (see the small sketch after this list):
·        The source of the DTP supports parallel extraction (e.g. a DataSource).
·        Error handling is deactivated.
·        The list for creating semantic groups in DTPs and transformations is empty.
·        The Parallel Extraction field is selected.
 If the system is currently using the processing type "Parallel Extraction and Processing", you can select this option and change the processing type to "Serial Extraction, Immediate Parallel Processing".
13.  DTP Execute tab
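As a minimal sketch of the parallel-extraction rule from point 12, the four conditions can be restated as a single boolean check:

def uses_parallel_extraction(source_supports_parallel, error_handling_active,
                             semantic_groups_defined, parallel_extraction_selected):
    """True only when all four conditions listed in point 12 hold."""
    return (source_supports_parallel
            and not error_handling_active
            and not semantic_groups_defined
            and parallel_extraction_selected)

print(uses_parallel_extraction(True, False, False, True))   # True
print(uses_parallel_extraction(True, True, False, True))    # False: error handling is active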

ASAP Methodologies


ASAP stands for Accelerated SAP. Its purpose is to help design an SAP implementation in the most efficient manner possible. Its goal is to effectively optimize time, people, quality and other resources, using a proven methodology for implementation. ASAP focuses on tools and training, wrapped up in a five-phase, process-oriented road map for guiding implementation. The road map is composed of five well-known consecutive phases:
• Phase 1 Project Preparation
• Phase 2 Business Blueprint
• Phase 3 Realization
• Phase 4 Final Preparation
• Phase 5 Go-Live and Support
In today's post we will discuss the first phase.


Phase 1 : Project Preparation
Phase 1 initiates with a retrieval of information and resources. It is an important time to assemble the necessary components for the implementation. Some important milestones that need to be accomplished in phase 1 include:
• Obtaining senior-level management/stakeholder support
• Identifying clear project objectives
• Architecting an efficient decision-making process
• Creating an environment suitable for change and re-engineering
• Building a qualified and capable project team

Senior level management support:
One of the most important milestones of phase 1 of ASAP is the full agreement and cooperation of the important company decision-makers - key stakeholders and others. Their backing and support is crucial for a successful implementation.

Clear project objectives:
Be concise in defining what your objectives and expectations are for this venture. Vague or unclear notions of what you hope to obtain with SAP will handicap the implementation process. Also make sure that your expectations are reasonable considering your company's resources. It is essential to have clearly defined ideas, goals and project plans devised before moving forward.

An efficient decision making process:
One obstacle that often stalls implementation is a poorly constructed decision-making process. Before embarking on this venture, the responsible individuals need to be clearly identified. Decide now who is responsible for different decisions along the way. From day one, the implementation decision-makers and project leaders from each area must be aware of the onus placed on them to return good decisions quickly.

Environment suitable for change and re-engineering: Your team must be willing to accept that, along with new SAP software, things are going to change: the business will change, and the information technology enabling the business will change as well. By implementing SAP, you will essentially redesign your current practices to model more efficient or predefined best business practices as espoused by SAP. Resistance to this change will impede the progress of your implementation.

ASAP- Second Phase- Business Blueprint
SAP has defined a business blueprint phase to help extract pertinent information about your company that is necessary for implementation. These blueprints are in the form of questionnaires that are designed to probe for information that uncovers how your company does business. As such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as seen in the following sample questions:

1) What information do you capture on a purchase order?
2) What information is required to complete a purchase order?
Accelerated SAP question and answer database:
The question and answer database (QADB) is a simple although aging tool designed to facilitate the creation and maintenance of your business blueprint. This database stores the questions and the answers and serves as the heart of your blue print. Customers are provided with a customer input template for each application that collects the data. The question and answer format is standard across applications to facilitate easier use by the project team.
Issues database:
Another tool used in the blueprinting phase is the issues database. This database stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can then track the issues in the database, assign them to team members, and update the database accordingly.

ASAP Phase- 3 - Realization:
With the completion of the business blueprint in phase 2, "functional" experts are now ready to begin configuring SAP. The Realization phase is broken into two parts:
1) Your SAP consulting team helps you configure your baseline system, called the baseline configuration.
2) Your implementation project team fine-tunes that system to meet all your business and process requirements as part of the fine tuning configuration.

The initial configuration completed during the baseline configuration is based on the information that you provided in your blueprint document. The remaining approximately 20% of your configuration that was not tackled during the baseline configuration is completed during the fine-tuning configuration. Fine tuning usually deals with the exceptions that are not covered in the baseline configuration. This final bit of tweaking represents the work necessary to fit your special needs.

Configuration Testing:
With the help of your SAP consulting team, you segregate your business processes into cycles of related business flows. The cycles serve as independent units that enable you to test specific parts of the business process. You can also work through configuration using the SAP Implementation Guide (IMG), a tool that assists you in configuring your SAP system in a step-by-step manner.

Knowledge Transfer:
As the configuration phase comes to a close, it becomes necessary for the project team to be self-sufficient in their knowledge of the configuration of your SAP system. Knowledge transfer to the configuration team tasked with system maintenance (that is, maintenance of the business processes after Go-live) needs to be completed at this time. In addition, the end users tasked with actually using the system for day-to-day business purposes must be trained.

ASAP Methodology - Phase 4 - Final Preparation:
As phase 3 merges into phase 4, you should find yourselves not only in the midst of SAP training, but also in the midst of rigorous functional and stress testing. Phase 4 also concentrates on the fine tuning of your configuration before Go-live and, more importantly, on the migration of data from your old system or systems to SAP.
Workload testing (including peak volume, daily load, and other forms of stress testing) and integration or functional testing are conducted to ensure the accuracy of your data and the stability of your SAP system. Because you should have begun testing back in phase 2, you do not have too far to go until Go-live. Now is an important time to perform preventative maintenance checks to ensure optimal performance of your SAP system. At the conclusion of phase 4, take time to plan and document a Go-live strategy. Preparation for Go-live means preparing for your end users' questions as they start actively working on the new SAP system.

ASAP - Phase 5 - Go-live and Support:
The Go-live milestone itself is easy to achieve; a smooth and uneventful Go-live is another matter altogether. Preparation is the key, including attention to what-if scenarios related not only to the individual business processes deployed but also to the functioning of the technology underpinning these business processes, as well as preparation for ongoing support, including maintenance contracts and documented processes and procedures.