Use
If you want to benefit from the features of the new concepts and technology while working with an existing data flow, we recommend that you migrate the data flow. The following sections outline what you need to consider and how to proceed when migrating a data flow.
Constraints
You cannot migrate hierarchy DataSources, DataSources that use the IDoc transfer method, DataSources from BAPI source systems, or export DataSources (namespace 8* or /*/8*) that are used by the data mart interface.
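These constraints amount to a simple eligibility check. The following Python sketch is purely illustrative: the DataSource fields and the is_migratable helper are hypothetical and do not correspond to any SAP object or API.

```python
from dataclasses import dataclass

# Illustrative model only: the fields and values are hypothetical stand-ins,
# not SAP metadata. They mirror the constraints listed above.
@dataclass
class DataSource:
    name: str                 # technical name, e.g. "2LIS_11_VAITM" or "8ZODS_SALES"
    kind: str                 # "transactional", "attribute", "text", or "hierarchy"
    transfer_method: str      # "PSA" or "IDoc"
    source_system_type: str   # e.g. "SAP", "FILE", "BAPI"

def is_migratable(ds: DataSource) -> bool:
    """Return True if the 3.x DataSource may be migrated to the new concept."""
    if ds.kind == "hierarchy":
        return False          # hierarchy DataSources cannot be migrated
    if ds.transfer_method == "IDoc":
        return False          # the IDoc transfer method is not supported
    if ds.source_system_type == "BAPI":
        return False          # DataSources from BAPI source systems are excluded
    # Export DataSources of the data mart interface use namespace 8* or /*/8*.
    if ds.name.startswith("8") or (ds.name.startswith("/") and "/8" in ds.name):
        return False
    return True

print(is_migratable(DataSource("8ZODS_SALES", "transactional", "PSA", "SAP")))   # False
print(is_migratable(DataSource("2LIS_11_VAITM", "transactional", "PSA", "SAP"))) # True
```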
Prerequisites
In the procedure outlined here, we make the following simple assumptions:
● The data flow is to be migrated without making any changes to the transformation logic, and the InfoSource (which is optional in the new data flow) is to be retained.
● In the data flow used, one DataSource is connected to multiple InfoProviders. All objects required in the data flow exist as active versions.
Preliminary Remarks
● Always migrate an entire data flow, that is, always include all the objects that it contains, starting from the InfoProvider down to the DataSource.
● Do not migrate the data flow in a production system. The recommended procedure differs depending on whether your landscape is a two-system landscape (development system and production system) or a three-system landscape (development system, test system, and production system). You generate the new objects in the development system, run tests in the development or test system (depending on your system landscape), and then transport the new objects, as well as the deletions of the 3.x objects that are no longer required, into the production system. Note that the transports must be imported in the same sequence in which they were created (see the transport sketch after these remarks).
● When you model a data flow using the new object types, you use the emulation for the DataSource. This affects in particular the evaluation of settings in the InfoPackage, because in the new data flow the InfoPackage is only used to load data into the PSA. The emulation makes it difficult to work with InfoPackages, because only a subset of the settings made there is evaluated. In the new data flow, the settings that are no longer evaluated in the InfoPackage must be made in the data transfer process. We therefore recommend that you consider these dependencies when you emulate the DataSource, and that you emulate your DataSource in a development or test system only.
For more information, see Emulation, Migration and Restoring DataSources and Using Emulated 3.x DataSources.
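The transport handling described in these remarks can be pictured as two transport requests that travel through the landscape in the order in which they were created. The following Python sketch is a hypothetical illustration only; the request names and the transport_plan helper do not reflect the transport organizer or any SAP API.

```python
def transport_plan(landscape: str) -> list[str]:
    """Illustrative import route for the migration transports (not an SAP API)."""
    # Two transport requests are created in the development system, in this order:
    requests = [
        "request A: new objects (transformations, DTPs, migrated DataSource)",
        "request B: deletions of the 3.x objects that are no longer required",
    ]
    # The route depends on the landscape; imports keep the creation order, A before B.
    if landscape == "two-system":
        route = ["development -> production"]
    elif landscape == "three-system":
        route = ["development -> test", "test -> production"]
    else:
        raise ValueError("expected 'two-system' or 'three-system'")
    return [f"{hop}: import {request}" for hop in route for request in requests]

for line in transport_plan("three-system"):
    print(line)
```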
Procedure
Carry out steps 1-6 and 8-9 in the development system:
1. For each InfoProvider that is supplied with data from the DataSource, generate a transformation from each of the update rules.
In doing this, copy the 3.x InfoSource to a new InfoSource.
2. Generate a transformation from each of the corresponding transfer rules.
When you do this, use the existing InfoSource that was created when the update rules were migrated.
3. Make any necessary adjustments to the transformation.
You need to postprocess a transformation, for example, if you use formulas or inverse routines in your transfer rules. You can also adjust the routines to improve performance.
For more information about steps 1-3, see Migrating Update Rules, 3.x InfoSources and Transfer Rules. A conceptual sketch of this object mapping follows below.
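Conceptually, steps 1-3 replace each set of update rules and the transfer rules with a transformation and reconnect them through a new InfoSource copied from the 3.x InfoSource. The following Python sketch only models this object mapping; the class names, the naming convention for the new InfoSource, and the needs_postprocessing flag are hypothetical and are not SAP objects or APIs.

```python
from dataclasses import dataclass

# Purely illustrative object model of steps 1-3 (not SAP metadata or an API).
@dataclass
class Transformation:
    source: str                          # the DataSource or the new InfoSource
    target: str                          # the new InfoSource or an InfoProvider
    derived_from: str                    # which 3.x rules the logic was taken from
    needs_postprocessing: bool = False   # True if formulas or inverse routines were used

def migrate_rules(datasource, infosource_3x, infoproviders, has_formulas_or_inverse_routines):
    """Sketch of steps 1-3: update and transfer rules become transformations,
    and the 3.x InfoSource is copied to a new InfoSource."""
    new_infosource = infosource_3x + "_NEW"   # hypothetical name for the copied InfoSource
    transformations = []
    # Step 1: one transformation per InfoProvider, generated from its update rules.
    for provider in infoproviders:
        transformations.append(Transformation(new_infosource, provider, "update rules"))
    # Step 2: one transformation generated from the transfer rules, using the new InfoSource.
    transformations.append(Transformation(datasource, new_infosource, "transfer rules"))
    # Step 3: flag transformations that need manual postprocessing.
    for t in transformations:
        t.needs_postprocessing = has_formulas_or_inverse_routines
    return transformations

for t in migrate_rules("ZDS_SALES", "ZIS_SALES", ["ZCUBE_SALES", "ZODS_SALES"], True):
    print(t)
```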
4. Create the data transfer processes for further updating the data from the PSA.
In the new data flow, the data transfer process uses the settings from the InfoPackage that are relevant for updating the data from the PSA. For more information, see InfoPackage -> Data Transfer Process.
a. In InfoPackage maintenance, navigate to the Data Targets tab page.
b. Create a data transfer process for each of the InfoProviders that is supplied with data from the InfoPackage.
c. To do this, under Create/Maintain DTP, choose the pushbutton with the quick info text Create New Data Transfer Process for This Data Target.
d. Check all the settings that the DTP uses in the new data flow in place of the InfoPackage:
■ On the Update tab page, check the settings for error handling.
■ On the Extraction tab page, check the extraction mode.
For more information about data transfer processes, see Creating Data Transfer Processes. A sketch of how the InfoPackage settings map to the DTP follows this step.
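With an emulated DataSource, only the settings that concern loading into the PSA remain relevant in the InfoPackage; everything that concerns updating from the PSA must be specified in the DTP. The sketch below is a hypothetical illustration of this split; the dictionary keys are invented and are not actual InfoPackage or DTP parameters.

```python
# Hypothetical illustration of which 3.x InfoPackage settings remain with the
# InfoPackage and which must be re-specified in the DTP (all keys are invented).
infopackage_3x = {
    "selection": "0CALMONTH = 2011.01",              # still relevant: loading into the PSA
    "processing": "PSA only",                        # still relevant: loading into the PSA
    "error_handling": "update valid records",        # now a DTP setting (Update tab page)
    "update_mode": "delta",                          # now the DTP extraction mode (Extraction tab page)
    "data_targets": ["ZCUBE_SALES", "ZODS_SALES"],   # replaced by one DTP per InfoProvider
}

# Settings that are no longer evaluated in the InfoPackage and belong to the DTP.
dtp_relevant = {"error_handling", "update_mode", "data_targets"}

infopackage_new = {k: v for k, v in infopackage_3x.items() if k not in dtp_relevant}
dtps = [
    {
        "target": target,
        "extraction_mode": infopackage_3x["update_mode"],
        "error_handling": infopackage_3x["error_handling"],
    }
    for target in infopackage_3x["data_targets"]
]

print("InfoPackage (loads into the PSA only):", infopackage_new)
for dtp in dtps:
    print("DTP:", dtp)
```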
5. In the InfoPackage, check on the Data Targets tab page that any InfoProviders that are supplied with data by means of update rules are not selected. This ensures that the InfoProviders are subsequently supplied with data only by the new data flow.
6. Check the process chains that are used in the data flow and make any necessary adjustments (a sketch of the adjustment follows this step).
a. Open the process chain from InfoPackage maintenance by choosing Process Chain Maintenance.
b. Insert the DTPs into the process chain directly after the InfoPackage.
c. Make any adjustments required in the subsequent processes (previously InfoPackages, but now DTPs).
If a process variant previously referred to the object type InfoPackage, you now need to specify the corresponding DTP, for example, when activating the data in the DataStore object.
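Step 6 amounts to re-wiring the chain: the DTPs are inserted directly after the InfoPackage, and any subsequent process variant that referred to the InfoPackage (for example, DataStore object activation) is re-pointed to the corresponding DTP. The following sketch illustrates this on a plain list of steps; the process type names are simplified stand-ins, not real process chain types.

```python
# Illustrative process chain as an ordered list of (process_type, referenced_object).
chain = [
    ("start", "daily_sales_load"),
    ("load_infopackage", "ZPAK_SALES"),
    ("activate_dso", "ZPAK_SALES"),   # 3.x variant still refers to the InfoPackage
]

def adjust_chain(chain, infopackage, dtp_for_dso, other_dtps):
    """Insert the DTPs after the InfoPackage and re-point dependent variants to a DTP."""
    adjusted = []
    for process_type, obj in chain:
        if process_type == "activate_dso" and obj == infopackage:
            # Step 6c: the activation variant now refers to the DTP, not the InfoPackage.
            adjusted.append((process_type, dtp_for_dso))
            continue
        adjusted.append((process_type, obj))
        if process_type == "load_infopackage" and obj == infopackage:
            # Step 6b: the DTPs follow the InfoPackage directly.
            adjusted.append(("execute_dtp", dtp_for_dso))
            adjusted.extend(("execute_dtp", dtp) for dtp in other_dtps)
    return adjusted

for step in adjust_chain(chain, "ZPAK_SALES", "DTP_ZODS_SALES", ["DTP_ZCUBE_SALES"]):
    print(step)
```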
7. Test the data flow.
○ If you have a two-system landscape, perform this test in the development system.
○ If you have a three-system landscape, first transport the affected objects to the test system and test your new data flow there.
8. Migrate the DataSource.
The migration process generates a new DataSource. The 3.x DataSource, its transfer structure, and the mapping are deleted. The PSA and the InfoPackage are reused by the migrated DataSource.
When the DataSource is migrated, the InfoPackage is not migrated in the sense of a new object being created. However, after the migration, only the specifications about how data is loaded into the PSA are used in the InfoPackage. Existing delta processes continue to run; the delta process does not need to be reinitialized.
You can restore a migrated DataSource to a 3.x DataSource only if you export the 3.x objects during the migration (a sketch of this decision follows this step).
For more information, see Emulation, Migration and Restoring DataSources and Migrating 3.x DataSources.
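Whether a migrated DataSource can later be restored to 3.x therefore depends on whether the 3.x objects were exported during migration. The sketch below is a hypothetical model of what the migration keeps, what it deletes, and when a restore remains possible; it does not call any SAP function.

```python
def migrate_datasource(name: str, with_export: bool) -> dict:
    """Conceptual outcome of step 8 for a single 3.x DataSource (illustrative only)."""
    return {
        "new_datasource": name,   # a new DataSource object is generated
        "kept": ["PSA", "InfoPackage", "delta initialization"],         # delta keeps running
        "deleted": ["3.x DataSource", "transfer structure", "mapping"],
        # Only if the 3.x objects were exported during the migration can the
        # DataSource later be restored to a 3.x DataSource.
        "restorable_to_3x": with_export,
    }

print(migrate_datasource("2LIS_11_VAITM", with_export=True))
print(migrate_datasource("2LIS_11_VAITM", with_export=False))
```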
9. To maintain clarity, we recommend that you delete the 3.x InfoSource and the update rules.
10. If you have a three-system landscape, transport the migrated DataSource and the deletion of the 3.x InfoSources and update rules into the test system.
11. Examine the data flow for completeness and check that any objects that are no longer required have been deleted.
○ In a two-system landscape, do this in the development system.
○ In a three-system landscape, do this in the test system.
12. Finally, transport the objects and deletions from the development or test system into your production system.
Result
You have migrated a 3.x data flow and can now benefit from the features of the new concepts and technology.
Migrating Data Flows That Use the Data Mart Interface
You cannot migrate export DataSources that are used when the data mart interface updates data from one InfoProvider to another InfoProvider. However, a data flow that uses export DataSources can still be moved to the new object types, because the export DataSource and its transfer and update rules can be modeled in the new data flow using DTPs and transformations. The data flow can be migrated as outlined above; however, the actual migration of the DataSource and the corresponding steps are omitted.
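In such a scenario, the InfoProvider-to-InfoProvider hop is remodeled rather than migrated: the export DataSource and its rules are replaced by a transformation and a DTP that connect the two InfoProviders directly. The sketch below only contrasts the two models; the object names are invented examples.

```python
# Illustrative comparison of a 3.x data mart flow and its remodeled counterpart.
# Object names are invented; the export DataSource uses the 8* namespace.
flow_3x = [
    ("ZODS_SALES", "export DataSource 8ZODS_SALES"),
    ("export DataSource 8ZODS_SALES", "transfer rules"),
    ("transfer rules", "3.x InfoSource"),
    ("3.x InfoSource", "update rules"),
    ("update rules", "ZCUBE_SALES"),
]

# New model: the export DataSource and its rules are not migrated but dropped;
# a transformation and a DTP connect the source and target InfoProviders directly.
flow_new = [
    ("ZODS_SALES", "transformation"),
    ("transformation", "DTP"),
    ("DTP", "ZCUBE_SALES"),
]

for edge in flow_3x:
    print("3.x flow:", " -> ".join(edge))
for edge in flow_new:
    print("new flow:", " -> ".join(edge))
```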
Source: help.sap.com