jgleichmann
Active Contributor

last updated: 2023-09-27 15:40 CEST


Some of you may already know the limitation of 2 billion entries in a single table / partition.
Another hint regarding this limitation is alert ID 17 '&lt;table&gt; contains &lt;n&gt; records'; the default threshold of this alert is 300,000,000 rows. The limitation applies to BSoH / S/4HANA as well as BWoH / BW/4HANA => it is a general HANA limitation, but each of them has its own distribution rules. So I will split this blog into two parts: BWoH and BSoH.
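If you want to check upfront which tables or partitions are getting close to this limit, a quick look into the monitoring view M_CS_TABLES (one row per column store table partition) helps - a minimal sketch, the threshold value is only an example:

SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT
FROM "SYS"."M_CS_TABLES"
WHERE RECORD_COUNT > 1000000000   -- example threshold well below the hard 2 billion limit
ORDER BY RECORD_COUNT DESC;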

First of all, you have to know that manual steps are required to create new partitions and distribute the data! I have heard very often that customers think this is an automatic task. If you run a scale-out BW, you should use table redistribution tasks; for single tables you can use this manual cookbook.
There is one exception: dynamic range partitioning, but this, too, has to be implemented manually.
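As a rough sketch only (the exact syntax depends on your HANA revision, and schema / table names here are made up for illustration), enabling dynamic range partitioning on an already range-partitioned table looks roughly like this:

-- hypothetical example: HANA adds a new range partition automatically
-- once the OTHERS partition exceeds the given row threshold
ALTER TABLE "&lt;SAP_SCHEMA&gt;"."&lt;TABLE_NAME&gt;" PARTITION OTHERS DYNAMIC THRESHOLD 100000000;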

Table of contents:

1. Check table distribution rules / table placement
2. Check table grouping
3. Repartitioning plan
3.1. Generate a reorg plan
3.2. Check the generated plan
3.3. Troubleshooting (if no plan was generated)
3.4. Execute the plan
3.5. Check the status of the plan execution

Test Environment:


VMware 6.0
11xCPU: Intel(R) Xeon(R) CPU E7-8880 v3 @ 2.30GHz
22vCPUs
SLES for SAP 12 SP1
HANA 1.0 SPS12 Rev. 122.11

If you face such an issue in a BW system, you first have to check whether you can solve it in the application layer. So check if you can use partitioning options in the backend system with SAP note 2019973.
When this is no longer possible, you have to perform the following manual steps.




1. Check table distribution rules / table placement


Follow the steps in SAP note 1908075 for BWoH / 2334091 for BW/4HANA.
Download the attachment and choose your script according to your HANA revision and topology.
In this example I have chosen the most widespread variant: HANA 1.0 SPS12 scale-up.
Normally this step has already been performed at the time of installation or migration, but some thresholds may have changed over time. So first set the relevant parameters and check the current placement thresholds:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system') 
SET ('table_placement','same_num_partitions') = 'true' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system')
SET ('table_placement','max_rows_per_partition') = '1500000000' WITH RECONFIGURE;

SELECT 
SCHEMA_NAME,GROUP_TYPE,MIN_ROWS_FOR_PARTITIONING,
INITIAL_PARTITIONS,REPARTITIONING_THRESHOLDS,
LOCATION,DYNAMIC_RANGE_PARTITIONING
FROM
"SYS"."TABLE_PLACEMENT";



If all the values match the values from the downloaded SQL script, there is nothing for you to do.
Otherwise, replace $$PLACEHOLDER with the SAP schema and execute the script with a user that has sufficient privileges.
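Just as an illustration (the real statements and values come from the script attached to the note), a single placement rule with $$PLACEHOLDER replaced by the SAP schema - here the made-up schema 'SAPABAP1' - could look like this:

-- values are illustrative; always use the values from the SAP note script
ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPABAP1', GROUP_TYPE => 'sap.bw.dso')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 1500000000, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 1500000000);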




The current rules are also explained in the attached PDF in the same folder as your SQL script:
InfoCube fact tables are ROUNDROBIN partitioned on first level (sap.bw.cube).
DSO tables are HASH partitioned on first level (sap.bw.dso). They have 1 partition on first level regardless of the number of records in the tables - except for tables with more than 1.5 billion records, see remarks below. Tables for InfoCubes and DSOs are located on the master node.
InfoObjects tables (sap.bw.iobj) are not partitioned, except for InfoObjects with high cardinality. Those tables are HASH partitioned on first level. All other InfoObject tables are not partitioned i.e. they do not have a partitioning specification. InfoObject tables are located on the master node.
PSA tables (sap.bw.psa) and errorstack tables (sap.bw.dtp) are HASH partitioned on first level. They have 1 partition on first level regardless of the number of records in the tables - except for tables with more than 1.5 billion records, see remarks below. PSA and errorstack tables are located on the master node.
SAP HANA DataSources (sap.bw.dsrc) can have an insert and an upsert table. The insert table is dynamic range and the upsert table is HASH partitioned on first level.
Temporary BW tables (sap.bw.temp, sap.bw.trex) and OpenHub tables (sap.bw.openhub) are not partitioned i.e. they do not have a partitioning specification. They are located on the master node.

The number of partitions on first level according to the rules above is only set when the table is initially created; the number of partitions on first level is not adapted dynamically.
The number of first level partitions of a table does not exceed the number of nodes that are potential valid locations for that table. This rule is disregarded if a higher number of first level partitions is required to avoid first level partitions with more than 1.5 billion records (global.ini, section [table_placement], max_rows_per_partition = 1500000000).

=> this means a split will not be performed before the table has reached 1.5 billion rows.
=> in a scale-up system all tables are created without partitioning, unless the table was already above 1.5 billion records at the time of the migration - SUM will take care of that.
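If you want to see how a specific table is currently partitioned, you can check the partition specification in the catalog - a minimal sketch, replace schema and table name:

-- PARTITION_SPEC shows the current partitioning specification (empty if the table is not partitioned)
SELECT SCHEMA_NAME, TABLE_NAME, PARTITION_SPEC
FROM "SYS"."TABLES"
WHERE SCHEMA_NAME = '&lt;SAP_SCHEMA&gt;' AND TABLE_NAME = '&lt;TABLE_NAME&gt;';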




 

2. Check table grouping


If you have an old release or your migration was not completed correctly, some tables may be missing their table group type.
select * from "SYS"."TABLE_GROUPS" where TABLE_NAME like '%<TABLE_NAME>%';

When the group type is missing, you have to implement the latest notes for report RSDU_TABLE_CONSISTENCY (currently: 2264053 - RSHDB: SAP HANA Tools 08 for BW) and execute it to classify the tables.
Usage: 2569097 - How to use of report RSDU_TABLE_CONSISTENCY [VIDEO]
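If you want to get a quick overview of BW tables without a group type, a query like the following can help (only a sketch - the LIKE patterns are examples of typical BW table name prefixes, and whether missing classifications show up as NULL values or as missing rows can differ):

-- name patterns and NULL handling are assumptions; adjust to your system
SELECT SCHEMA_NAME, TABLE_NAME, GROUP_TYPE, GROUP_NAME, IS_GROUP_LEAD
FROM "SYS"."TABLE_GROUPS"
WHERE (TABLE_NAME LIKE '/BIC/%' OR TABLE_NAME LIKE '/BI0/%')
  AND (GROUP_TYPE IS NULL OR GROUP_TYPE = '');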





3. Repartitioning plan


With the SQL select statement above you checked the table group type. In the output you will also find the group name.
This is the one we have to use in the reorg plan => the steps can be found at the end of SAP note 2019973.
Note: In case of a scale-out system you can skip this step and execute a table redistribution.

Be sure that you execute these steps in one SQL session as the schema owner or as a user with sufficient privileges for the tables and their content!

3.1. Generate a reorg plan


call reorg_generate(6,'GROUP_NAME=><your-BW-group-name>');

3.2. Check the generated plan


select * from reorg_plan;



If there is no output, you have to check if your table is above the defined threshold (max_rows_per_partition).

I have adjusted the value so that my plan results in 8 partitions (currently the table has only 1 partition).
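For reference, this is roughly how such a test can be done - the value 250000000 is only an example that leads to 8 partitions for my ~1.8 billion row table; do not forget to reset the parameter to its default afterwards:

-- temporarily lower the per-partition row limit so that reorg_generate proposes more partitions
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system')
SET ('table_placement','max_rows_per_partition') = '250000000' WITH RECONFIGURE;

call reorg_generate(6,'GROUP_NAME=>&lt;your-BW-group-name&gt;');
select * from reorg_plan;

-- reset to the default value afterwards
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','system')
SET ('table_placement','max_rows_per_partition') = '1500000000' WITH RECONFIGURE;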

3.3. Troubleshooting (if no plan was generated)



  • check the table placement rules (1908075 - SAP BW on SAP HANA: Table Placement and Landscape Redistribution)

  • Remove the unwanted records from TABLE_PLACEMENT table. 'Note that $$PLACEHOLDER must not be used as SCHEMA_NAME in TABLE_PLACEMENT.' (2738610 - How to truncate or delete rows from sys.table_placement - SAP HANA)

  • Check the IS_GROUP_LEAD parameter in TABLE_GROUPS:


SELECT * FROM TABLE_GROUPS WHERE table_name LIKE '<TABLE_NAME>';


  • If the value of IS_GROUP_LEAD is FALSE, change it to TRUE.


ALTER TABLE "<SCHEMA_NAME>"." <TABLE_NAME>" SET GROUP LEAD;

Details: 3149900 - Table Redistribution Skipping Split of Some Tables

 

3.4. Execute the plan


call reorg_execute(?);

Check the free resources (data + log disk space, CPU and memory) of the system and the configured parallelism.

Adjust the parameters or execute it only in a time frame without high business load.
indexserver.ini => partitioning => split_threads => 16 (default)
indexserver.ini => partitioning => bulk_load_threads => 4 (default)
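If you need to change the parallelism, the parameters can be adjusted like this (example values only - stay within the limits of your hardware and reset them after the reorg if required):

-- example values only; see the parameter notes below for sizing recommendations
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
SET ('partitioning','split_threads') = '32' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
SET ('partitioning','bulk_load_threads') = '8' WITH RECONFIGURE;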

3.5. Check the status of the plan execution


select * from reorg_steps 
where reorg_id = <value returned by reorg_execute before>;
select * from reorg_steps where reorg_id = 1;



Normally reorg_id should be 1. Now you can monitor the process via the statement and the running sessions.



You will see the splitting per column, parallelized as configured via split_threads. The number of 'ZipResultJob' threads correlates with the value of parameter bulk_load_threads.
In my example the splitting operation (steps 1-8) took 50 min for 1.8 billion rows (23 GB of data).
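Besides reorg_steps you can usually also follow the progress of the long-running split operations via the job progress view - a sketch, the exact JOB_NAME text can differ between revisions:

-- the JOB_NAME filter is an assumption; check which job names your revision reports
SELECT HOST, PORT, JOB_NAME, OBJECT_NAME, CURRENT_PROGRESS, MAX_PROGRESS, START_TIME
FROM "SYS"."M_JOB_PROGRESS"
WHERE JOB_NAME LIKE '%artition%';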






Note: Be aware of this limitation and of the tasks needed to split the table in order to solve the issue. If you don't do this, no more inserts are possible once you reach the limit and you will get a lot of errors in the backend.

 
13 Comments
Hi Jens,

Very good Information.

Can you please help us with how to calculate the values of split_threads and bulk_load_threads?

I didn't find much info on these parameters.

Also please share SAP Notes related to these Parameters.
jgleichmann
Active Contributor
Hi Hemanth,

for sure, here are the related notes:

  • 2044468 - FAQ: SAP HANA Partitioning

  • 2222250 - FAQ: SAP HANA Workload Management

  • 2600030 - Parameter Recommendations in SAP HANA Environments

  • 2036111 - Configuration parameters for the SAP HANA system


split_threads should not be set higher than the number of logical CPUs known to the system.

bulk_load_threads should be set to a value which correlates with 25-30% of the value of split_threads.
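As an illustration only (the key names are assumptions and may differ between revisions): read the number of logical CPUs from M_HOST_INFORMATION and derive the values from that, e.g. 40 logical CPUs => split_threads up to 40 and bulk_load_threads around 10-12.

-- key names are assumptions; check which keys your revision exposes
SELECT HOST, KEY, VALUE
FROM "SYS"."M_HOST_INFORMATION"
WHERE KEY IN ('cpu_threads', 'cpu_cores', 'cpu_sockets');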

 

Regards,

Jens

 
Hi Jens,

 

Thank Q for the quick reply.

Waiting for the part 2 of your blog.
parasmali
Discoverer
Hi Jens,

How do I make a plan for this kind of partitioning?

My cube records come from inventory DataSources and the cube holds 10 years of data; every day some millions of records are updated in the cube. The partition planning is not clear to me - can you please elaborate or give some examples? That would be a great help for my scenario, thank you.

 

 
jgleichmann
Active Contributor
Hi Paras,

what do you mean by "make a plan"? There are default rules that the system follows when you execute such a repartitioning (see distribution rules / table placement). These rules and configs can be adjusted to your needs. On this basis the plan will be created automatically.

Please give some details on your scenario:

  • Is your cube already partitioned?

  • How many records does it contain?

  • Why do you want to partition it?


There is no partitioning plan / statement which you have to create on your own like on an ERP system.

Regards,

Jens

Thank you jens.

My cube was already partitioned in March 2017; the method used earlier was round robin with 6 partitions.

The cube has 6 partitions and each partition has 1.73 billion (1,733,897,826) records.

Now 2 billion is the limit per partition for the cube, so I feel I need to repartition again before each partition fills up, otherwise my data loading may fail.

 

I understand from the Basis team that the system will automatically place the records in the partitions when Basis does the partitioning at DB level, and that I have to check cube consistency with RSDU_TABLE_CONSISTENCY.

jgleichmann
Active Contributor

  1. check if partitioning / table placement rules are up-to-date for your scenario

  2. check object via RSDU_TABLE_CONSISTENCY (update it with the latest note fix)

  3. check HANA parameters for partitioning as named in the blog

  4. follow blog starting with 3) Repartitioning plan

  5. generate the plan and check if more than 6 partitions are created

  6. execute the reorg in low-load time


You can also trigger a complete landscape reorg, which will solve the issue if your system is set up correctly. Please read all the notes named in the blog and you will resolve your issue.

If you are still struggling with it, just open a question (Q&A section) with detailed information about your issue and formulate your questions. Currently all your questions are covered by the blog or the named notes.

Hi Jens,

would like to ask your suggestion here,

I have set the dynamic range threshold to 100 million records for the inbound table of a cube-like ADSO.

I see the threshold set to 100 million in "m_cs_partitions" for the inbound table. I have executed the table placement rules valid for our system so that the range settings would remain permanent.

Also, ours is a single node system.

Now when I activate the ADSO, the dynamic range threshold in m_cs_partitions for the inbound table gets reset to zero. This happens only when the inbound table has zero records and I activate the ADSO.

Please let me know if this is expected behavior, as I always activate the data requests and ensure that the inbound table has zero records. I have tried replacing the table placement placeholder with the schema, however the behaviour is still the same.

 

Best Wishes,

Kishore Kumar Challa

jgleichmann
Active Contributor
Hi Kishore,

 

please check if the right partitioning design was applied.
"The actual partitioning specification depends on the data tiering settings of the DataStore object, the table type, and the release of SAP BW/4HANA. The system uses range partitioning at the first level (and hash partitioning at the second level in SAP BW/4HANA 2021 only) for DataStore objects with DTO temperature maintenance at partition level if temperature tiers hot and warm are switched on. For all other DataStore objects, SAP BW/4HANA generates tables with hash partitioning at the first level - and optional user-defined range partitioning at the second level for the active data table resp. dynamic range partitioning by REQTSN for inbound table and changelog."

2985173 - DTO: Data Tiering Optimization and Partitioning


 

Regards,

Jens
liuzipeng
Explorer

 

Thanks for your sharing. Can you describe how to adjust the value so that the plan results in 8 partitions => currently: 1? Thank you.

 

Best Regards,

 

hayden
jgleichmann
Active Contributor
Hi hayden,

it depends on which type of table you are referring to. You can adjust the parameter max_rows_per_partition or the table groups / placement content. By default most tables will be split at 1.5 billion records, which means you have to lower this value if your table only has 800 million records. If you want 8 partitions, lower it to 100 million instead of 1500 million.

Regards,

Jens
liuzipeng
Explorer
Dear Jens,

 

My system scenario looks like this:

1. I use the TABLE_PLACEMENT_BW4HANA "050 = Single node (more than 2 TB)" rule.

2. From the "050 = Single node (more than 2 TB)" rule below, ('table_placement','max_partitions') = '4'.

My table has reached 2 billion records, and the table belongs to GROUP_TYPE => 'sap.bw.dso'.

 

 

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('table_placement','max_partitions_limited_by_locations')='false' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('table_placement','max_partitions')='4' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('table_placement','max_rows_per_partition') = '1500000000' WITH RECONFIGURE;

ALTER SYSTEM ALTER TABLE PLACEMENT
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 0, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 0);

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 0, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 0);

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.dso')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 50000000, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 50000000, DYNAMIC_RANGE_THRESHOLD => 100000000, SAME_PARTITION_COUNT => 'TRUE');

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.iobj')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 1500000000, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 1500000000, SAME_PARTITION_COUNT => 'TRUE');

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.psa')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 1500000000, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 1500000000);

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.dtp')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 1500000000, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 1500000000);

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.dsrc')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 200000000, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 200000000, DYNAMIC_RANGE_THRESHOLD => 200000000, SAME_PARTITION_COUNT => 'TRUE');

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.temp')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 0, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 0);

ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB', GROUP_TYPE => 'sap.bw.openhub')
SET (LOCATION => 'master', MIN_ROWS_FOR_PARTITIONING => 0, INITIAL_PARTITIONS => 1, REPARTITIONING_THRESHOLD => 0);
jgleichmann
Active Contributor
Hi hayden,

 

it is simple, just have a look at the attachment of note 2334091. You should define the number of initial partitions and the repartitioning threshold. max_partitions is defined as 4 in a scale-up system, but this rule is disregarded if the value of parameter max_rows_per_partition = 1500000000 is reached. This I have already demonstrated in this blog. Just adjust this parameter to reach your target number of partitions and reset it afterwards back to the default.

Here is the official statement of SAP (source: Table Placement SAP BW4HANA 2021 with SAP HANA 2.0.pdf):

The number of hash partitions is limited to a maximum of four. This rule is disregarded if a higher number of hash partitions is required to avoid partitions with more than 1.5 billion records (global.ini [table_placement] max_rows_per_partition = 1500000000).


 

Regards,

Jens