Monday, September 17, 2012

CO-PA Data Source

The main entity behind a CO-PA DataSource is the operating concern, which is provided by the CO-PA (functional) consultants.

Two types of operating concerns can be defined. The structure of an operating concern can be checked with transaction KER1/KEAS, or KEA0 (to create and to display).

1. Costing based: used in product-based industries.
2. Account based: used in service-based industries.


The standard R/3 System contains the sample operating concern "S001". You can copy this operating concern to create your own operating concern.
When you activate the data structures of your operating concern, the system creates the following objects in the ABAP Dictionary:



Type        Name            Description
Structure   CE0xxxx         Logical line item structure
Table       CE1xxxx         Actual line item table
Table       CE2xxxx         Plan line item table
Table       CE3xxxx         Segment level
Table       CE4xxxx         Segment table
Table       CE4xxxx_KENC    Realignments
Table       CE4xxxx_ACCT    Account assignments
Table       CE4xxxx_FLAG    Posted characteristics
Structure   CE5xxxx         Logical segment level
Structure   CE7xxxx         Internal help structure for assessments
Structure   CE8xxxx         Internal help structure for assessments
For account-based CO-PA, the data is extracted from:
COEP - actual line items
COEJ - plan line items
COSP/COSS - summary tables
CE4xxxx - segment table
 

In these names, "xxxx" stands for the name of the operating concern; for the sample operating concern S001, for example, the actual line item table is CE1S001.
Creating CO-PA Data Source:


1. After getting the operating concern name from the functional team, go to transaction KEB0 or follow the path below in SBIW.


Data Transfer to the SAP Business Information Warehouse -> Settings for Application-Specific DataSources -> (PI)Profitability Analysis -> Create Transaction Data DataSource.

On the CO-PA DataSource for Transaction Data screen, the name 1_CO_PA%CL%ERK is proposed automatically. Do not change it; you can only append additional characters.
Note: The name of a CO-PA DataSource always starts with the numeral 1. Here %CL stands for your client number and %ERK for your operating concern name, so in client 800 with operating concern S001 the proposed name would be, for example, 1_CO_PA800S001.
Select a field name for partitioning (e.g. company code (BUKRS) for costing-based, profit center (PRCTR) for account-based CO-PA).
Enter =INIT in the command field; you can now select the required fields.
Select all characteristics from the segment level and all value fields.
Choose InfoCatalog from the application toolbar.

a). For costing-based Profitability Analysis, select the value fields and calculated key figures that are to be included in the DataSource. It is useful to include all the value fields of the operating concern. These value fields are already selected but you can deactivate them if necessary. The system checks that the units of measure relating to the value fields are also transferred.

Technical notes:
Along with the selected characteristics and value fields, the fiscal year variant and the record currency are also included in the replication so that the data in SAP BW can be interpreted correctly.


The technical field PALEDGER (currency type) in Profitability Analysis is converted into CURTYPE (currency type) and VALUTYP (valuation) during the transfer to SAP BW.


The plan/actual indicator (PLIKZ) is copied to SAP BW as value type (WRTTP).


2. The Create Object Directory Entry dialog box is displayed requesting the user to enter a development class. Enter a development class in the customer namespace and choose Save or choose Local object.
3. On the DataSource: Customer version Edit screen:
      a. Select all possible checkboxes in column Selection.
      b. Choose Save.
4. In the information dialog box, choose Enter.
5. Then replicate the datasource in BW and generate the data flow.
6. To maintain the deltas, maintain the time stamp in KEB2/KEB5.


CO-PA Datasource Enhancement:
You cannot change a DataSource once created. You can, however, delete it and then create a DataSource with the same name but with different technical properties. You should not upload metadata between these two steps.
In the ECC System:
1) Add the fields to the operating concern, so that the required field becomes visible in the CE1xxxx table and the other related tables (CE2xxxx, CE3xxxx, etc.).
2) Once you have enhanced the operating concern, you are ready to add the field to the CO-PA DataSource. Since CO-PA is a regenerating application, you cannot add the field directly to the DataSource; you have to delete the DataSource and re-create it (KEB0, as above).
3) While re-creating the DataSource, use the same name so that nothing changes on the BW side where the DataSource is assigned to the InfoSource. Simply replicate the new DataSource on the BW side and map the new field in the InfoSource. If you re-create it under a different name, you will need extra build effort to bring the data into BW through the InfoSource all the way up to the InfoCube. I would personally suggest keeping the old DataSource name.
If you are adding fields from the same operating concern, go to KE24, edit the DataSource and add your fields. However, if you are adding fields from outside the operating concern, you need to append the extract structure and populate the fields in the user exit using ABAP code (see OSS note 852443); a minimal sketch follows below.
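
For the user-exit route, a minimal sketch of the usual transaction-data exit is shown below. The DataSource name 1_CO_PA800S001, the extract structure, the appended field ZZREGION and the lookup table ZREGION_MAP are all illustrative assumptions; the real logic depends on your append and source tables.

* Sketch only (include ZXRSAU01 of enhancement RSAP0001,
* i.e. EXIT_SAPLRSAP_001 for transaction data). The DataSource name,
* the extract structure, ZZREGION and ZREGION_MAP are all assumptions.
DATA: l_s_copa TYPE ce1s001.          "extract structure of your CO-PA DataSource (assumed)

CASE i_datasource.
  WHEN '1_CO_PA800S001'.              "assumed DataSource name
    LOOP AT c_t_data INTO l_s_copa.
*     Fill the appended field from a custom mapping table (assumed)
      SELECT SINGLE zzregion FROM zregion_map
             INTO l_s_copa-zzregion
             WHERE kunnr = l_s_copa-kndnr.
      MODIFY c_t_data FROM l_s_copa INDEX sy-tabix.
    ENDLOOP.
ENDCASE.
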
In the BW System: 
1. Check RSA7 in your R/3 system to see whether there is a delta queue entry for the CO-PA DataSource (sometimes there is nothing listed for the DataSource, sometimes there is).
2. In BW, go to SE16 and open the table RSSDLINIT.
3. Find the line(s) corresponding to the problem DataSource.
4. You can check the load status in RSRQ using the RNR from the table.
5. Delete the line(s) in question from the RSSDLINIT table.
6. Now you will be able to open the InfoPackage and re-initialize. Before you run the re-init, however:
7. In the InfoPackage, go to the Scheduler menu > 'Initialization options for the source system' and delete the existing init request (if one is listed).

Friday, September 14, 2012

How to size a BW system

Disk Space Requirements in General 


Disk space requirements depend heavily on the design of InfoCubes and data distribution. The
following hints on estimating disk space do not account for possibly sparse tables or compression the
database might use. There is a QuickSizer available on http://service.sap.com/QuickSizer  that helps
you perform what is described below.
The Business Information Warehouse software including the development workbench requires
approximately 9 GB of disk space.
The required disk space for data depends on:
• Number of InfoCubes
• Number of key figures and dimensions per InfoCube
• Volume of data loaded
• Number of aggregates
• Amount of master data
• Number and size of flat files used for data load, residing on application servers
• The number and size of PSA and ODS tables
• Indexes

Estimating an InfoCube 
When estimating the size of an InfoCube, tables like fact and dimension tables are considered.
However, the size of the fact table is the most important, since in most cases it will be 80-90% of the
total storage requirement for the InfoCube.
When estimating the fact table size consider the effect of compression depending on how many
records with identical dimension keys will be loaded.
The amount of data stored in the PSA and ODS has a significant impact on the disk space required.
If data is kept in the PSA on more than a temporary basis, it is possible that more than 50%
of the total disk space will be allocated for this purpose.

Dimension Tables 
• Identify all dimension tables for this InfoCube.
• The size and number of records need to be estimated for a dimension table record. The size of
one record can be calculated by summing the number of characteristics in the dimension table at
10 bytes each. Also, add 10 bytes for the key of the dimension table.
• Estimate the number of records in the dimension table.
• Adjust the expected number of records in the dimension table by expected growth.
• Multiply the adjusted record count by the expected size of the dimension table record to obtain
the estimated size of the dimension table.
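
As a worked sketch of these steps (the numbers are assumptions for illustration only), a dimension with 3 characteristics, 10,000 expected records and 10% growth comes to roughly 0.4 MB:

* Worked sketch: 3 characteristics in the dimension, 10,000 expected
* records, 10% growth (all numbers are assumptions).
DATA: lv_rec_size TYPE i,
      lv_growth   TYPE p DECIMALS 1 VALUE '1.1',
      lv_records  TYPE i,
      lv_bytes    TYPE i.

lv_rec_size = 3 * 10 + 10.                "3 characteristics x 10 bytes + 10-byte dimension key = 40 bytes
lv_records  = 10000 * lv_growth.          "11,000 records after growth
lv_bytes    = lv_records * lv_rec_size.   "440,000 bytes, i.e. roughly 0.4 MB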


Fact Tables 
• Count the number of key figures the table will contain, assuming a key figure requires 17 bytes.
• Every dimension table requires a foreign key in the fact table, so add 6 bytes for each key. Don't
forget the three standard dimensions.
• Estimate the number of records in the fact table.
• Adjust the expected number of records in the fact table by expected growth.
• Multiply the adjusted record count by the expected size of the fact table record to obtain the
estimated size of the fact table.
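
A worked sketch with assumed numbers: 5 key figures, 4 user-defined dimensions plus the 3 standard dimensions, and 10 million rows after growth give roughly 1.2 GB for the fact table.

* Worked sketch: 5 key figures, 4 user-defined + 3 standard dimensions,
* 10,000,000 rows after growth (all numbers are assumptions).
DATA: lv_rec_size TYPE i,
      lv_rows     TYPE i VALUE 10000000,
      lv_bytes    TYPE p.

lv_rec_size = 5 * 17 + ( 4 + 3 ) * 6.     "85 + 42 = 127 bytes per fact row
lv_bytes    = lv_rec_size * lv_rows.      "1,270,000,000 bytes, i.e. roughly 1.2 GB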

PSA tables 
The record length of a PSA table can be estimated by adding the field lengths of all the fields of the
respective InfoSource and 48 bytes for request-id and other fields being added by the system.
Multiplying the record length by the number of records being held in the PSA table will give you an
estimate on disk space needed per PSA table.
Data in PSA might be deleted regularly depending on your scenario (e.g. use ODS for permanent
storage). Estimate the number of records accordingly.
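
For example (assumed numbers): an InfoSource whose fields add up to 200 bytes gives a PSA record of about 248 bytes; holding 2 million records in the PSA then needs roughly 0.5 GB.

* Worked sketch: InfoSource fields add up to 200 bytes, 2,000,000
* records kept in the PSA (both assumptions).
DATA: lv_rec_size TYPE i,
      lv_bytes    TYPE p.

lv_rec_size = 200 + 48.                   "field lengths + 48 bytes of system fields
lv_bytes    = lv_rec_size * 2000000.      "496,000,000 bytes, i.e. roughly 0.5 GB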

ODS tables 
The record length of an ODS table can be estimated by adding the field lengths of all the fields of the
respective InfoSource.
After activation of data the activation queue is empty, so the space needed for new data can in most
cases be neglected.
For the change log table add 48 bytes for request-id and other fields being added by the system.
Multiplying the record length by the number of records being held in the ODS table will give you an
estimate on disk space needed per ODS table.
The number of records of the change log table depends on the average number of updates and the
delta type used for the ODS object.
The minimum size of the change log table is the same as that of the active data table (assuming that
all records are distinct).
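
A worked sketch under the same assumptions (200 bytes of InfoSource fields, 2 million distinct records): the active table needs about 0.4 GB and the change log at least about 0.5 GB on top of that.

* Worked sketch: ODS object with 200 bytes of InfoSource fields and
* 2,000,000 distinct records (both assumptions).
DATA: lv_active TYPE p,
      lv_chlog  TYPE p.

lv_active = 200 * 2000000.                "active table: ~400,000,000 bytes
lv_chlog  = ( 200 + 48 ) * 2000000.       "change log (+48 bytes system fields): at least ~496,000,000 bytes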


Limitations 
There are no limits to the number of InfoObjects, number of InfoCubes, or the number of reports per
InfoCube.
The system administrator can create 13 dimension tables in addition to the dimensions for time,
packet ID, and unit of measure for each InfoCube.
Number of characteristics in all dimension tables:
Each of the dimension tables can have 248 InfoObjects, computed by 255 – 6 (no access to these
fields) –1 (Dimension ID) to obtain 248. For performance reasons we recommend a maximum of 16
InfoObjects per dimension.
A fact table can have up to 233 key figures, computed by 255 – 6 (no access to these fields) – 16
(dimensions) to obtain 233.


Simplified estimation of disk space (used by the QuickSizer) 
The following assumptions are made in order to simplify the disk sizing:
• compression factor resulting out of identical records is 0% (all records are distinct)
• index space is added with 100%
• space for aggregates is an additional 100%
• compared to the size of the fact table we ignore the size of the dimension tables
• PSA data is deleted after upload or after a certain time.
• No. of master data records is less than 10% of the fact table (could be wrong for Retail
customers)
• We ignore compression algorithm of the different database vendors.
A simplified estimation of disk space for the BW can be obtained by using the following formula:
For each InfoCube:
  Size in bytes =
    [ (n + 3) x 10 bytes + m x 17 bytes ]
    x [ rows of initial load + rows of periodic load x no. of periods ]
  n = number of dimensions
  m = number of key figures
plus, for each ODS object (incl. change log):
  Size in bytes =
    [ n x 10 bytes + m x 17 bytes ]
    x [ rows of initial load + rows of periodic load x no. of periods ]
  n = number of character fields
  m = number of numeric fields
Remarks:
• The change log has the same size as the active ODS table if you load only distinct data. This
is reflected in an uplift (see below).
• For ease of use we have assumed an average length of 10 bytes per character field. If you
use character fields which are significantly longer, you should increase the number of fields in
the QuickSizer accordingly.
To the obtained results you have to add
 + 20% for PSA
 + 100% of total size of the InfoProviders for aggregates / change log (ODS)
 + 100% for indexes
 + 10% for master data
 + twice the size of the biggest fact table - at least 10 GB
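
Putting the formula and the uplifts together for a single assumed InfoCube (9 dimensions, 10 key figures, 10 million rows initial load, 1 million rows per period over 24 periods) gives on the order of 50 GB; the numbers are purely illustrative.

* Worked sketch: one InfoCube with 9 dimensions, 10 key figures,
* 10,000,000 rows initial load and 1,000,000 rows per period over
* 24 periods, plus the uplifts listed above (all numbers assumed).
DATA: lv_row_size TYPE i,
      lv_rows     TYPE p,
      lv_cube     TYPE p,
      lv_uplift   TYPE p DECIMALS 1 VALUE '3.3',   "cube itself + 20% PSA + 100% aggregates + 100% indexes + 10% master data
      lv_total    TYPE p.

lv_row_size = ( 9 + 3 ) * 10 + 10 * 17.           "290 bytes per fact row
lv_rows     = 10000000 + 1000000 * 24.            "34,000,000 rows
lv_cube     = lv_row_size * lv_rows.              "~9.9 * 10^9 bytes (~9 GB)
lv_total    = lv_cube * lv_uplift + 2 * lv_cube.  "plus twice the biggest fact table: ~52 * 10^9 bytes (~50 GB)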

CPU requirements in general 
The CPU requirements of  BW depend heavily on the design of the data model and the queries/user
behavior.
The CPU resource requirements for data staging depend on:
- number of InfoProviders
- volume of data loaded
- number of aggregates
- amount of rollups/change-runs
- number of distinct requests
In order to find the CPU requirements you have to determine the critical path for the data staging.
This path is different from customer to customer and could be different from upload to upload.
As an example, the red path in the picture below describes the critical path including load, rollup,
change-run, index rebuild, gathering statistics. This path has to be measured in order to get the
number of records per hour, which could be applied to the system.


The CPU resource requirements for queries depend on:
- number of InfoProviders
- number of connected users
- number of aggregates
- types of queries
In order to find the CPU requirements, you have to classify your queries in different query classes like
“easy” , “medium” and “heavy” and analyze the different response times in order to get relational
factors. Additionally you have to classify your user behavior in different groups like “Information
Consumers”, “Executive Users” and “Power Users”  (a definition of the properties for these user
groups can be found below) and to examine which user type is using which kind of query.

Simplified CPU sizing for data staging 
The following assumptions are made in order to simplify the CPU sizing for data staging:
• A system is able to apply 750000 records per hour per job
• No special Transformation rule and update rule are used
• The critical path contains an ODS object, an InfoCube and 10 aggregates
• The data applied to the system could be split into several requests (one job per request)
• 350 SAPS are used for one job
The CPU requirements are calculated as follows:
SAPS = #jobs * 350 SAPS / 0.65 (at 65% CPU consumption)
The number of jobs is calculated as follows:
#jobs = {sum (delta upload) / maintenance window} / 750,000
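
A worked sketch (assumed numbers): 15 million delta records and a 4-hour maintenance window give 5 parallel jobs and roughly 2,700 SAPS for data staging.

* Worked sketch: 15,000,000 delta records and a 4-hour maintenance
* window (both assumptions).
DATA: lv_jobs TYPE i,
      lv_util TYPE p DECIMALS 2 VALUE '0.65',
      lv_saps TYPE i.

lv_jobs = 15000000 / 4 / 750000.          "3,750,000 records per hour -> 5 parallel jobs
lv_saps = lv_jobs * 350 / lv_util.        "5 * 350 / 0.65 = roughly 2,700 SAPS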


Simplified CPU sizing for queries 
The number of users working in the system forms the basis for estimating CPU resources of the
database and application servers. Different definitions of “user” are available:
• Information Consumers
• Executive Users
• Power Users

• Information Consumers: access the BW System from time to time, typically seeking
information occasionally using predefined reports, and generate on average 1 navigation
step per hour (averaged over 1 week).
• Executive Users: access the BW System regularly and continuously, typically navigating
within reports supported by aggregates, generating 11 navigation steps per hour (averaged
over 1 week).
• Power Users: work intensively with the BW System, running ad hoc queries which in most
cases are not supported by aggregates, generating 33 navigation steps per hour (averaged
over 1 week).

Remarks: 
• The classification of the users corresponds to the user classification within the early watch
report.
• A default user distribution based on a customer survey is 71% : 27% : 2% (Information Consumers : Executive Users : Power Users).
Additionally we have to define different types of reports being used within BW:
• Easy queries
• Medium queries
• Heavy queries
• "Easy" queries: predefined reports using optimized aggregates. These queries are
weighted by a factor of 1.
• "Medium" queries: slicing and dicing, navigating in reports using various aggregates. These
queries are weighted by a factor of 1.5 (i.e. these queries require 50% more resources than
easy queries).
• "Heavy" queries: ad-hoc reports with unpredictable navigation paths, access of detail data,
full table scans. These queries are weighted by a factor of 5 (i.e. these queries require five
times as many resources as easy queries).

Remark: 
These factors are based on the BW benchmark scenario. In different scenarios using other
reports the factors could be different.

Given the user and query definitions described above, a matrix describes the usage of different query
types by different user groups. The values in the matrix are based on best practice and can be
changed in SAP QuickSizer if required:


Remark: 
The SAP BW benchmark derives a total number of navigation steps as one of the key
performance indicators, allowing for a close correlation of benchmark results and QuickSizer
results.
Finally, the number of navigation steps needs to be converted into SAPS. We use the following
formula for this:
SAPS = (Nav/h * 1.33 * 2.28 * 9   / 60) / 0.65 (at 65% CPU consumption)
The various factors in this formula were derived from several tests. In a comparison of SAP BW benchmark
results with SD benchmark results we have obtained a factor of 2.28 (The definition of SAPS is based on the SD
benchmark and the notion of dialog steps). In average one dialog step produces the same load as 9 navigation
steps.  Finally, we have introduced an adjustment factor of 1.33 by comparing the obtained results with
productive installation figures.
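
A worked sketch, applying the formula exactly as stated above (the user population is an assumption): 500 active users split into the default 71% / 27% / 2% distribution generate about 2,170 navigation steps per hour, which translates into roughly 1,500 SAPS.

* Worked sketch: 500 active users split into the default 71% / 27% / 2%
* distribution (assumption); the query-class weighting is left out here.
DATA: lv_nav  TYPE i,
      lv_f1   TYPE p DECIMALS 2 VALUE '1.33',
      lv_f2   TYPE p DECIMALS 2 VALUE '2.28',
      lv_util TYPE p DECIMALS 2 VALUE '0.65',
      lv_saps TYPE i.

lv_nav  = 355 * 1 + 135 * 11 + 10 * 33.               "about 2,170 navigation steps per hour
lv_saps = lv_nav * lv_f1 * lv_f2 * 9 / 60 / lv_util.  "roughly 1,500 SAPS
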
Memory requirements in general 
The memory requirements of BW depend heavily on the data volume and the number of
connected users.
The memory resource requirements for data staging depend on:
- number of parallel jobs
- number of records loaded within one data package
- Complexity of update and transformation rule
- degree of parallel processes within the RDBMS
- BW buffers configured for the system (table buffers, program buffers, export/import,…)
- RDBMS buffers (data buffer, process buffer)


In order to obtain precise information about the memory requirements, you would have to check the
memory consumption for each distinct load step in your system and determine the maximum memory
consumption for all parallel jobs. Note that the critical path described above is not always the step
which consumes the most memory.
The memory resource requirements for queries depend on
• number of connected users
• keep-alive time for WAS
• data volume of the result set
During the daily load, you would have to monitor how many users are connected to the system. The
memory used by these users can be measured with transaction SM04 and summed up.
Another parameter which impacts memory consumption is the keep-alive time of the Web Application
Server. The WAS keeps a connection open for this time even if the user has closed his browser.

Simplified memory sizing for data staging  
The following assumptions are made in order to simplify the memory sizing for data staging:
• Number of records per data package is 50000
• No special Transformation rule and update rule are used
• The critical path contains an ODS  object, an InfoCube and 10 aggregates
• 300 MB are used for one job
• minimum of BW buffer is 500 MB
• minimum RDBMS buffer is 700 MB
• Max. RDBMS buffer is 10 GB
• 10 MB buffer is used for each million of records of the largest InfoCube
• 120 MB is used for each process connected to the RDBMS
• for parallel processing, 2 x the number of jobs processes are used within the RDBMS
The BW memory requirements are calculated as follows:
Memory BW = #jobs * 300 MB + 500 MB
The number of parallel load jobs is calculated as follows:
#jobs = {sum (delta upload) / maintenance window} / 750,000

This number can also be used as a minimum number of batch processes to be configured.
The RDBMS memory requirements are calculated as follows:
Memory = 700 MB + (rows of largest InfoProvider / 1,000,000) * 10 MB +
#jobs * 120 MB + 2 * #jobs * 120 MB
The number of parallel jobs is calculated as described above.
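
Continuing the staging example (5 parallel jobs, largest InfoProvider with 50 million rows; both assumptions): about 2 GB for the BW instance and about 3 GB for the RDBMS.

* Worked sketch: 5 parallel jobs, largest InfoProvider with 50,000,000
* rows (both assumptions).
DATA: lv_mem_bw TYPE i,
      lv_mem_db TYPE i.

lv_mem_bw = 5 * 300 + 500.                          "2,000 MB for the BW instance
lv_mem_db = 700 + 50 * 10 + 5 * 120 + 2 * 5 * 120.  "3,000 MB for the RDBMS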

Simplified memory sizing for queries  
The following assumptions are made in order to simplify the memory sizing for queries:
• Users are connected via WAS
• Keep alive time is 60 seconds
• 30 MB are used for one job
• minimum of BW buffer is 500 MB
• minimum RDBMS buffer is 700 MB
• Max. RDBMS buffer is 10 GB
• 120 MB is used for each process connected to the RDBMS
• no parallel processing for queries
• One BW process can handle 1 high or 2 medium or 5 low users concurrently
The BW memory requirements are calculated as follows:
Memory = 500 MB + number of connected users * 30 MB
The number of connected users is calculated as follows:
Number of connected users = (#Information Consumers +
  #Executive Users +
  #Power Users)/3600 s * keep-alive time

The RDBMS memory requirements are calculated as follows:
Memory = 700 MB + (rows of largest InfoProvider / 1,000,000) * 10 MB +
number of concurrent users * 120 MB

The number of connected users is calculated as described above.
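
A worked sketch for the same assumed user population (500 users in total, keep-alive time 60 s, largest InfoProvider with 50 million rows): roughly 8 concurrently connected users, about 0.7 GB for the BW instance and a little over 2 GB for the RDBMS.

* Worked sketch: 500 users in total, keep-alive time 60 s, largest
* InfoProvider with 50,000,000 rows (all assumptions).
DATA: lv_users  TYPE i,
      lv_mem_bw TYPE i,
      lv_mem_db TYPE i.

lv_users  = 500 * 60 / 3600.                "about 8 concurrently connected users
lv_mem_bw = 500 + lv_users * 30.            "740 MB for the BW instance
lv_mem_db = 700 + 50 * 10 + lv_users * 120. "2,160 MB for the RDBMS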

Sizing result for 2 –Tier systems 
If the system is to be installed on a central server, the following calculation should be made in order
to obtain the resource consumption figures:
SAPS = max (SAPS data staging ; SAPS query)
Memory = max (memory data staging (RDBMS + BW) ;
memory query (RDBMS + BW))

Sizing result for 3-Tier systems 
If the system is separated into a dedicated RDBMS server and a BW server the distribution of
resources is:
BW:
SAPS = max (50% SAPS data staging ; 80% SAPS query)
Memory = max (memory data staging (BW) ; memory query (BW))
RDBMS:
SAPS = max (50% SAPS data staging ; 20% SAPS query)
Memory = max (memory data staging (RDBMS) ; memory query (RDBMS))
Note that the total amount of SAPS for a 3-tier system could be higher than for a 2-tier system since
CPU resources of a central system can be shared dynamically.
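
Continuing the worked figures from above (roughly 2,700 SAPS for data staging and 1,500 SAPS for queries; both are assumed example values, not recommendations):
2-tier: SAPS = max (2,700 ; 1,500) = 2,700
3-tier: BW: SAPS = max (50% * 2,700 ; 80% * 1,500) = max (1,350 ; 1,200) = 1,350
        RDBMS: SAPS = max (50% * 2,700 ; 20% * 1,500) = max (1,350 ; 300) = 1,350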

Frontend PC requirements 
Please refer to note 321973

Wednesday, September 12, 2012

How to remove the reference characteristics


ISSUE:
The InfoObject, for example ZCST_TST1, has the reference characteristic 0CUSTOMER. The requirement is to remove the reference characteristic 0CUSTOMER.

Steps to remove the reference characteristic:
The screenshot below shows the characteristic ZCST_TST1 with the reference characteristic 0CUSTOMER.


Step 1: Go to SE16 and enter the table name RSDCHABAS.


Step 2: Create a new entry for the ZCST_TST1 characteristic.


Step 3:
Fill the fields CHABASNM = 'ZCST_TST1', OBJVERS = 'M', DATATYP and INTLEN.
The easiest way is to copy the table entry of the 0CUSTOMER characteristic.


Step 4: Go to SE16 and enter the table name RSDCHA.


Step 5: Search for ZCST_TST1.


Step 6:
Change the field CHABASNM in the RSDCHA table from 0CUSTOMER to ZCST_TST1, and change BCHREFFL from 'X' to space.

Save the table.

Step 7:
The characteristic ZCST_TST1 now has no reference characteristic 0CUSTOMER.

Monday, September 3, 2012

How does a DataSource communicate "DELTA" to BW?



What is Delta?
It is a feature of the extractor that refers to the changes (new or modified entries) that occurred in the source system.
How to identify it?
In the ROOSOURCE table, enter the DataSource name and check the field "DELTA".
If the field is left blank, it implies that the DataSource is not delta-capable.
[screenshot]

The field 0RECORDMODE in BW determines how a record is updated in the delta process.
[screenshot]
Now the question is: how is this delta brought to BW?
In one of the following ways:
ABR: After, before and reverse image
AIE: After image
ADD: Additive image

ABR: After, before and reverse image
Example: Logistics
What is it?
Once a new entry is posted or an existing posting is changed on the R/3 side, an after image shows the status after the change, a before image shows the status before the change with a negative sign, and a reverse image also carries a negative sign while marking the record for deletion. The delta packets are serialized.
What update types (for key figures) does it support?
  • Addition
  • Overwrite
Does it support loading to both an InfoCube and an ODS (DSO)?
YES
Technical name of the delta process: ABR
Brief overview:
You will find two types of ABR delta processes in the RODELTAM table, depending on serialization.
[screenshot]
  • ABR with serialization "2" means serialization is required between requests while sending data, but not necessarily at data package level.
  • ABR1 with serialization "1" means no serialization.
[screenshot]
Since it can be used for both an InfoCube and an ODS, let's consider a scenario wherein the loading happens directly to an ODS (advantage: we can track record changes in the change log table).
For an ODS/DSO, the field ROCANCEL, which is part of the DataSource, holds the changes from the R/3 side. ROCANCEL serves the same purpose on the R/3 side that its counterpart 0RECORDMODE does on the BW side; this DataSource field is assigned to the InfoObject 0RECORDMODE in the BW system.
Check the mapping in the transfer rules (applicable to BW 3.5):
[screenshot]
Note: 0STORNO and 0ROCANCEL are one and the same.

In our case, the ODS is set to additive mode, so the DataSource sends both a before and an after image.
If it is set to overwrite, only the after image is sent.
How does it work?
Let's check the new entry in the ODS.
[screenshot]

Note: I have taken the example of an ODS that contains CRM data.
Now, in the source system, the value of CRM gross weight (CRM_GWEIGH) has been changed to 5360.
To reflect this change, the DataSource sends two entries to BW:
One is the before image with a negative sign to nullify the initial value:
[screenshot]
and the other is the after image entry (the modified value):
[screenshot]
Upon activation, the after image goes to the active table.
[screenshot]
  
After image delta process:
Example: FI-AP/AR
What update type (for key figures) does it support?
Overwrite only
Does it support loading to both an InfoCube and an ODS (DSO)?
No, only an ODS/DSO
Technical name of the delta process: AIE
Brief overview:
There are after images with a delta queue (AIM/AIMD) or without one (AIE/AIED).
[screenshot]
Here, serialization is required between requests, because the same key can be transferred a number of times within a request.
[screenshot]
How does it work?
Initially the target (ODS) was loaded as shown:
[screenshot]
The value of CRM gross weight (CRM_GWEIGH) has been changed to 5360 in the source system.
This time, the DataSource sends only one entry, i.e. the after image, which holds the change.
[screenshot]
The final entry after activation in the active table:
[screenshot]
Additive delta process:
Example: LIS DataSources
What update type (for key figures) does it support?
Addition only
Does it support loading to both an InfoCube and an ODS (DSO)?
YES
Technical name of the delta process: ADD
Brief overview:
In the RODELTAM table, we have two types of additive delta processes:
  • ADD without a delta queue and
  • ADDD with a delta queue
[screenshot]
The extractor provides additive deltas that are serialized on a request basis.
This serialization is necessary since the extractor provides each key only once in a request, and changes to the non-key fields would otherwise not be transferred correctly.
How does it work?
Check the initial entry in the ODS.
[screenshot]
The value of CRM gross weight (CRM_GWEIGH) has changed to 5360.
Here, the DataSource sends an entry with the value 1,267 and a plus sign:
[screenshot]
Upon activation, check the final entry in the active table, which is the result of 4,093 + 1,267 = 5,360 KG:
[screenshot]

Calculating the Number of Working Days at Query Level



* Customer exit variable (e.g. coded in include ZXRSRU01): returns the
* number of working days elapsed in the current month, based on
* factory calendar '00'.
if ( i_vnam eq 'ZWRKDDAYS' ).
  data: end_date     type scal-facdate,
        start_date   type scal-facdate,
        prev_wrk_day type sy-datum,
        bgn_of_month type sy-datum,
        num_of_days  type i.

* first day of the current month and yesterday's date
  concatenate sy-datum+0(6) '01' into bgn_of_month.
  prev_wrk_day = sy-datum - 1.

* factory-calendar day number of the previous working day
* (round down to the last working day)
  call function 'DATE_CONVERT_TO_FACTORYDATE'
    exporting
      correct_option      = '-'
      date                = prev_wrk_day
      factory_calendar_id = '00'
    importing
      factorydate         = end_date.

* factory-calendar day number of the first working day of the month
* (round up to the next working day)
  call function 'DATE_CONVERT_TO_FACTORYDATE'
    exporting
      correct_option      = '+'
      date                = bgn_of_month
      factory_calendar_id = '00'
    importing
      factorydate         = start_date.

* difference of the two factory dates = number of working days
  num_of_days = end_date - start_date.

* return the value as a single-value range (e_t_data is assumed to be
* a work area with the same line type as e_t_range, declared in the
* surrounding exit include)
  e_t_data-sign = 'I'.
  e_t_data-opt  = 'EQ'.
  e_t_data-low  = num_of_days.
  append e_t_data to e_t_range.
endif.



https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/13579

Variable with Customer Exit - example

*& ZCALMONTH_DELTAP01P02 - month difference
*&--------------------------------------------------------------
* Customer exit variable: number of months between the values entered
* for the variables ZCALMONTHP01 and ZCALMONTHP02 (floored at 07.2008).
* L_S_RANGE, LOC_VAR_RANGE, L_DATE, H_DATE and E_T_RANGE are assumed to
* be declared in the surrounding exit include.
  WHEN 'ZCALMONTH_DELTAP01P02'.
    DATA: dh  TYPE d,
          dl  TYPE d,
          res TYPE i.
    IF I_STEP EQ 2.                    "called after variable entry
      L_S_RANGE-SIGN = 'I'.
      L_S_RANGE-OPT  = 'EQ'.
*     Lower month (ZCALMONTHP01), floored at July 2008
      READ TABLE I_T_VAR_RANGE INTO LOC_VAR_RANGE
           WITH KEY VNAM = 'ZCALMONTHP01'.
      IF SY-SUBRC = 0.
        CONCATENATE LOC_VAR_RANGE-LOW '01' INTO L_DATE.
        IF L_DATE < '20080701'.
          L_DATE = '20080701'.
        ENDIF.
*       Upper month (ZCALMONTHP02)
        READ TABLE I_T_VAR_RANGE INTO LOC_VAR_RANGE
             WITH KEY VNAM = 'ZCALMONTHP02'.
        IF SY-SUBRC = 0.
          CONCATENATE LOC_VAR_RANGE-LOW '01' INTO H_DATE.
          dh = H_DATE.
          dl = L_DATE.
*         Number of whole months between the two first-of-month dates
          CALL FUNCTION 'FIMA_DAYS_AND_MONTHS_AND_YEARS'
            EXPORTING
              I_DATE_FROM = dl
              I_DATE_TO   = dh
            IMPORTING
              E_MONTHS    = res.
          L_S_RANGE-LOW = res + 1.     "inclusive month count
        ELSE.
          L_S_RANGE-LOW = 1.
        ENDIF.
      ELSE.
        L_S_RANGE-LOW = 1.
      ENDIF.
      APPEND L_S_RANGE TO E_T_RANGE.
    ENDIF.