jgleichmann
Active Contributor
If you have successfully finished my last blog post about data aging, 'General Information about data aging', it is time for the deep dive: how SAP has implemented it and how it works in detail.

As you have already read, partitioning is an elementary part of the data aging process to separate the current data from the historical data. Therefore, range partitioning is used with an additional column called '_DATAAGING':
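
For current data this column stays initial ('00000000'); when a row is aged, the column is filled with the aging date, which places the row in the matching historical range partition. A minimal sketch to look at the distribution (the application log header table BALHDR is just an illustrative example of an aging-enabled table):

-- current rows carry _DATAAGING = '00000000', historical rows the aging date
SELECT "_DATAAGING", COUNT(*) AS CNT
FROM "BALHDR"
GROUP BY "_DATAAGING"
ORDER BY "_DATAAGING";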



A short definition of the two parts:

Current data is the data relevant to the operations of application objects, needed in day-to-day business transactions. The application logic determines when current data turns historical by using its knowledge about the object’s life cycle. The application logic validates the conditions at the object level from a business point of view, based on the status, execution of existence checks, and verification of cross-object dependencies.


 

Historical data is data that is not used for day-to-day business transactions. By default, historical data is not visible to ABAP applications. It is no longer updated from a business point of view. The application logic determines when current data turns historical by using its knowledge about the object’s life cycle. The application logic validates the conditions at object level from a business point of view, based on the status, executing existence checks, and verifying cross-object dependencies.


 

Limitation: there can be only one current partition with a maximum of 2 billion rows, but there can be multiple partitions for the historical part.

If you activate data aging for an object / table, you can only select it via a special syntax. The SAP HANA-specific database shared library (DBSL) in the ABAP server adds a corresponding clause to the SQL statements that are sent to SAP HANA. The ABAP classes CL_ABAP_SESSION_TEMPERATURE and CL_ABAP_STACK_TEMPERATURE enable the data access for the historical data.




Selection


By adding the clause WITH RANGE_RESTRICTION ('CURRENT') to a SQL statement, you restrict the operation to the hot data partition only.
Specifying a date instead restricts the operation to all partitions with data temperatures above the specified value: the clause WITH RANGE_RESTRICTION ('20120701'), for example, tells SAP HANA to search the hot partition and all cold partitions that contain values greater than or equal to '20120701'. Range restriction can be applied to SELECT, UPDATE, UPSERT and DELETE statements and to procedure calls.
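
At the SQL level this could look roughly like the following sketch (table name and date are only illustrative):

-- search only the hot (current) partition
SELECT COUNT(*) FROM "BALHDR" WITH RANGE_RESTRICTION ('CURRENT');

-- search the hot partition plus all cold partitions containing values >= '20120701'
SELECT COUNT(*) FROM "BALHDR" WITH RANGE_RESTRICTION ('20120701');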

RANGE_RESTRICTION Current






RANGE_RESTRICTION Time

 



The query will select the current partition 1 and parts of partition 2. HANA won't load the complete partition 2 into memory! Cold partitions make use of paged attributes: while ordinary columns are loaded entirely into memory upon first access, paged attributes are loaded page-wise. Ideally, only the pages that hold the requested rows are loaded.
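
To verify this behaviour, you can check the load state per partition. A minimal sketch, assuming the monitoring view M_CS_TABLES with its LOADED and MEMORY_SIZE_IN_TOTAL columns (schema and table name are illustrative):

-- the hot partition should be FULL, cold partitions PARTIALLY or NO loaded
SELECT TABLE_NAME, PART_ID, LOADED, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAPABAP1'
  AND TABLE_NAME  = 'BALHDR';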






Parameter


It is possible to configure the amount of memory used by page loadable columns. The parameters are a little bit confusing. The defaults, in megabytes or percent, are:

global.ini:page_loadable_columns_min_size=1047527424
global.ini:page_loadable_columns_limit=1047527424

global.ini:page_loadable_columns_min_size_rel=5
global.ini:page_loadable_columns_limit_rel=10

The first two are set to a default of roughly 999 TB!

The last two (*_rel) set a relative lower and upper threshold for the total memory size of page loadable column resources per service, in percent of the process allocation limit.





When the total size of page loadable column resources per service falls below the minimum of the two threshold values resulting from the corresponding parameters (page_loadable_columns_min*), i.e. the effective lower threshold, the HANA system stops unloading page loadable column resources from memory with first priority based on an LRU strategy and switches to a weighted LRU strategy for all resources.





When the total memory size of page loadable column resources per service exceeds the minimum of the two threshold values resulting from the corresponding parameters (page_loadable_columns_limit*), i.e. the effective upper threshold, the HANA system automatically starts unloading page loadable column resources from memory with first priority based on an LRU strategy.
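
A rough worked example (numbers are illustrative): with a process allocation limit of 500 GB and the defaults above, the effective upper threshold is min(999 TB, 10% of 500 GB) = 50 GB, so the resource manager starts unloading paged resources with first priority once they occupy more than about 50 GB in that service.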

You can set them via the HANA Studio interface or via SQL command (example value: 50 GB):
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'System' ) SET ('memoryobjects', 'page_loadable_columns_min_size') = '51200' WITH RECONFIGURE;
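
To check which values are currently effective, you can query the configuration. A sketch, assuming the monitoring view M_INIFILE_CONTENTS:

-- show the configured values for the page loadable column parameters
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE KEY LIKE 'page_loadable_columns%';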


Partitioning


You can define a partition range for every table. For instance, you can define a partition per year and, if the partitions are getting too big, repartition (splitting only) them from yearly to monthly:



 

But be careful: currently it is not possible to merge partitions with the transaction DAGPTM (tested with release S/4 1610 FP1). So start with a high-level range (year) and split it if needed.
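
Conceptually, the yearly ranges on the _DATAAGING column look roughly like the sketch below. In practice the partitioning is generated by the aging framework / DAGPTM rather than typed in manually, the exact syntax may vary by revision, and the table name is again only illustrative:

ALTER TABLE "BALHDR" PARTITION BY RANGE ("_DATAAGING")
( PARTITION VALUE = '00000000',                  -- current (hot) partition
  PARTITION '20160101' <= VALUES < '20170101',   -- historical partition for 2016
  PARTITION '20170101' <= VALUES < '20180101',   -- historical partition for 2017
  PARTITION OTHERS );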




Known Bugs

Note | Description | Fixed with
2509513 | Indexserver Crash at UnifiedTable::ColumnFragmentPagedNBitReaderIterator::reposition During Table Load of Cold Paged Partition | Revisions: >= 122.12 (SPS12), >= 012.01 (SPS01)
2497016 | Pages Belonging to Cold Partitions Created With Paged Attribute Are Not Unloaded by The Resource Manager if They Are Pinned by an Inverted Index | Revisions: >= 122.10 (SPS12), >= 002.01 (SPS00), >= 012.00 (SPS01)
2440614 | SAP HANA: SQL error for MDX statement with WITH RANGE_RESTRICTION | DBSL: 745 Patch Level 415, 749 Patch Level 210, 750 Patch Level 27, 751 Patch Level 17, 752 Patch Level 7
2128075 | AppLog: Short dump ASSERTION_FAILED in CL_BAL_DB_SEARCH | SAP_BASIS SP: SAP_BASIS 740 SP13, SAP_BASIS 750 SP3

 
13 Comments
ennowulff
Active Contributor
0 Kudos
Hi Jens, thanks for sharing! Just sitting in a hands-on session to explore data aging.

Do you know where to set the residence time for customer (Z-) tables?

 

Thanks

Enno
jgleichmann
Active Contributor
0 Kudos
Hi Enno,

I also attended Richard's hands-on session in Barcelona. It is planned to centralize the residence time.

Currently this should work with TX DAGPTC => Edit partitioning objects => edit partitioning object with new threshold

Regards,

Jens

 
ennowulff
Active Contributor
0 Kudos
Thanks Jens!

btw: I didn't see you... 😞 Maybe I was too data aged... :]
BrigitteReinelt
Advisor
0 Kudos
Hi Jens, Enno,

defining residence times for aging is at the moment done per object, i.e. the aging objects each have their own residence time customizing, described in the documentation and the respective notes that are linked to the central note 2315141 (Collective note for Data Aging Framework). Most of the objects also provide a default residence time, e.g. 15 days for the Basis objects.

If you want to create your own aging objects for Z-tables, you can create a corresponding customizing possibility on your own as part of the development and/or hard-code a default residence time within the aging logic as part of the new object.

In case you are just enhancing existing aging objects with Z-tables, the same residence time applies to the Z-tables as is valid for the corresponding leading object. You can veto single object instances during an aging run, though, by implementing the corresponding enhancement BAdI that is offered by the aging objects that are marked as extendable.

What Jens mentioned with respect to threshold values in transaction DAGPTC is something different and not related to residence times at all: this is a setting, or rather an internal fine-tuning possibility, that we use in combination with SAP S/4HANA Cloud and has no relevance for on-premise systems.

Warm regards,

Biggi.

 

 
jgleichmann
Active Contributor
0 Kudos
Hi Biggi,

thanks for the update. Is there any documentation which is up-to-date for the named aspects?

Regards,

Jens

 
BrigitteReinelt
Advisor

Hi Jens,

try out our development guide for Data Aging:

https://www.sap.com/documents/2016/09/1e768600-8b7c-0010-82c7-eda71af511fa.html

Warm regards,

Biggi.

0 Kudos

Does it require dynamic tiering or sap in? Or does it only require NetWeaver and HANA?

 

jgleichmann
Active Contributor
0 Kudos
Hi Grigory,

Data aging is a functionality linked to NW / S/4 and HANA.

Please have a look into SAP note 2315141

Regards,

Jens

 
former_member265504
Participant
0 Kudos
Hello Jens,
Very informative & helpful blog on data aging. Thank you for sharing this. We are looking to implement HANA data aging in our landscape. Regarding this, I have the below two queries for which I could not find the answer anywhere. It would be a great help if you could help me with the answers to these two queries.

 

1. As data aging involves automatic table-level changes like the addition of the column “_DATAAGING” to the concerned table & other non-manual changes done by data aging in the database, is there any transport request generated for this whole data aging change process? If no such change request is generated during the process, will there not be a table structure inconsistency between different systems in the landscape, considering I had done the data aging for that table in one system & did not do it in the other system (like between DEV & QA)?

2. When I schedule the periodic background job to take care of the future growth of the table, how will that future growth be handled?
For example, I have done a data aging where I have three partitions: 1. for 2017 data (cold), 2. for 2018 data (cold) & a 3rd for 2019 data (hot), & my data restriction says anything older than 1 year should be moved to cold storage. In that case, will a 4th partition be created automatically in 2020, the 2019 data moved to the 4th partition & the 4th partition then moved to cold storage?

 

Thanks,

Rajdeep
former_member615896
Discoverer
0 Kudos


rajdeep_b02, can someone please answer?



 
janlars_goedtke
Active Participant
0 Kudos
Hi,

I think the default values are now in bytes, not megabytes:

 

global.ini:page_loadable_columns_min_size=1047527424B
global.ini:page_loadable_columns_limit=1047527424B

 

best regards

Jan
jens_becher
Explorer
0 Kudos
Hello,

 

I was wondering if parameters like page_loadable_columns_min_size are still used under HANA 2.0 SP04, because there is the new feature Native Storage Extension (NSE), which uses some similar or, in my opinion, identical techniques.

Thanks for feedback or some hints where to get it.

 

Best regards,

 

Jens
jgleichmann
Active Contributor
0 Kudos
Hi,

data aging is an application-level solution. You have to adapt your coding. With NSE this is not needed anymore. It is a pure database solution for data tiering. Please check my NSE blog NSE part I – tech. details Q&A for more details. For new implementations I would rather use NSE.

 

Regards,

Jens