Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
VolkerSaggau
Product and Topic Expert
HDI (HANA Deployment Infrastructure) is the environment that helps you create tables and other database objects in an automated way. Specifically, HDI changes tables without the customer having to write ALTER or similar statements. But there are cases where you want to do something with a table that relates to its physical storage. The default privileges might not match your needs, so here is an option.

Careful: the system does not know what you altered.

  • Each new instance of the HDI container may or may not need the ALTER statements again (QA, PROD).

  • Export/import may not “survive” these changes, and you may have to reapply them.

  • This is meant to be used with .hdbmigrationtable.


 

This is a deep change. Be aware that this is not the standard behavior but additional functionality, with the customer being fully responsible for the results!

 

The SAP Business Technology Platform creates an HDI service for you, which comes with the following schemas:

  • <myHDIservice> -> schema that contains the runtime db objects

  • <myHDIservice>#DI -> schema that contains metainformation and the sources

  • <myHDIservice>#OO -> schema for the ObjectOwner


 

  • <123456789>_DT -> schema for each DesignTime user in a binding

  • <123456789>_RT -> schema for each RunTime user in a binding
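If you want to see which of these schemas actually exist for your container, a quick check against the system catalog works (the container name RIGHTS_IN_HDI_HDI_DB_1 is just the sample used later in this blog; adjust the pattern to your own service):

```sql
-- list all schemas that belong to the sample container
SELECT SCHEMA_NAME, SCHEMA_OWNER
  FROM SYS.SCHEMAS
 WHERE SCHEMA_NAME LIKE 'RIGHTS_IN_HDI_HDI_DB_1%';
```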


 

The privileges of the RT user are defined in the default_access_role:

GRANT_CONTAINER_SCHEMA_PRIVILEGES - SAP Help Portal

https://www.npmjs.com/package/@sap/hdi-deploy#the-default_access_role-role

The use of the HDI API in SQL is here: The Default Access Role for HDI Containers - SAP Help Portal

Option 1: Add ALTER to the RT user


 

The customer can override the default role with their own definition. In this sample the “ALTER” privilege was added to the role. The customer must be very sure about what may be changed with this privilege.

In the src section of the DB module:

Create a folder “defaults”.

In the defaults folder, create a file “default_access_role.hdbrole”.

The default of the role is currently defined as:
{
  "role": {
    "name": "default_access_role",
    "schema_privileges": [
      {
        "privileges": [
          "DELETE",
          "CREATE TEMPORARY TABLE",
          "EXECUTE",
          "INSERT",
          "SELECT",
          "SELECT CDS METADATA",
          "UPDATE"
        ]
      }
    ]
  }
}

 

Adding “, ALTER” to the list of privileges will allow the runtime user to alter tables.
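For reference, the complete role file would then read (identical to the default shown above, with only the "ALTER" entry added):

```json
{
  "role": {
    "name": "default_access_role",
    "schema_privileges": [
      {
        "privileges": [
          "ALTER",
          "CREATE TEMPORARY TABLE",
          "DELETE",
          "EXECUTE",
          "INSERT",
          "SELECT",
          "SELECT CDS METADATA",
          "UPDATE"
        ]
      }
    ]
  }
}
```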

       


Compare default_access_role definitions


 

Rights after the change:


Runtime user alters a HDI table


 

The RT user can now successfully alter the table, for example to add a partition.
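As a minimal illustration (the table name EVENTS and the partition boundaries are simply the sample used later in this blog), the RT user could now run something like:

```sql
-- RT user now has ALTER via the changed default_access_role
ALTER TABLE EVENTS
  PARTITION BY RANGE (YEAR(EVENTDATE))
    ( PARTITION 2020 <= VALUES < 2025,
      PARTITION OTHERS PAGE LOADABLE );
```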

Option 2 – Dedicated “alter” user (suggested approach)


 

The better approach is to use a dedicated “ALTERUSER”:

Steps:

  1. Create a dedicated “ALTERUSER”

  2. Grant the “alter” rights to the “ALTERUSER”

  3. Let the ALTERUSER alter the table


 

Create the alter user as DBADMIN:
--DROP USER ALTERUSER;


CREATE USER ALTERUSER
    PASSWORD "Manager123456"
    NO FORCE_FIRST_PASSWORD_CHANGE
    VALID
         FROM NOW
         UNTIL FOREVER -- never do this in production!
    SET USERGROUP
         DEFAULT;

The DT-user grants the rights to the ALTERUSER

Here is a sample with an HDI demonstration case:

Project name: RIGHTS_IN_HDI

HDI Container name: RIGHTS_IN_HDI_HDI_DB_1

Alter role name:     alter_access_role

Alter user name:     ALTERUSER

 

From within SAP Business Application Studio, open the Database Explorer.

In the context menu of the HDI container, choose “SQL (Admin) Console”.


Open SQL console (ADMIN)


Use the statements below to grant the alter_access_role to the ALTERUSER:

SET SCHEMA RIGHTS_IN_HDI_HDI_DB_1#DI;

CREATE LOCAL TEMPORARY COLUMN TABLE #ROLES LIKE _SYS_DI.TT_SCHEMA_ROLES;

INSERT INTO #ROLES ( ROLE_NAME, PRINCIPAL_SCHEMA_NAME, PRINCIPAL_NAME ) VALUES ( 'alter_access_role', '', 'ALTERUSER' );

CALL GRANT_CONTAINER_SCHEMA_ROLES(#ROLES, _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

DROP TABLE #ROLES;
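Should you later need to withdraw the privilege again, the HDI container API provides a matching REVOKE procedure. A sketch, using the same sample names as above:

```sql
-- revoke the alter_access_role from the ALTERUSER again
SET SCHEMA RIGHTS_IN_HDI_HDI_DB_1#DI;

CREATE LOCAL TEMPORARY COLUMN TABLE #ROLES LIKE _SYS_DI.TT_SCHEMA_ROLES;

INSERT INTO #ROLES ( ROLE_NAME, PRINCIPAL_SCHEMA_NAME, PRINCIPAL_NAME ) VALUES ( 'alter_access_role', '', 'ALTERUSER' );

CALL REVOKE_CONTAINER_SCHEMA_ROLES(#ROLES, _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

DROP TABLE #ROLES;
```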



Example of event.hdbmigrationtable

==version=1

column table Events (
  eventid   BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY,
  eventdate TIMESTAMP NOT NULL,
  eventname NVARCHAR(25),
  eventtype INTEGER DEFAULT 0
)
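To keep the design-time definition in sync with such a change, a follow-up version of the same file could carry the partitioning as a migration step. A sketch (following the version/migration pattern of .hdbmigrationtable; column types adjusted to standard HANA types):

```sql
==version=2

column table Events (
  eventid   BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY,
  eventdate TIMESTAMP NOT NULL,
  eventname NVARCHAR(25),
  eventtype INTEGER DEFAULT 0
)
PARTITION BY RANGE (YEAR(eventdate))
  ( PARTITION 2020 <= VALUES < 2025,
    PARTITION OTHERS PAGE LOADABLE )

==migration=2
ALTER TABLE Events
  PARTITION BY RANGE (YEAR(eventdate))
    ( PARTITION 2020 <= VALUES < 2025,
      PARTITION OTHERS PAGE LOADABLE )
```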



With the ALTERUSER
SET SCHEMA RIGHTS_IN_HDI_HDI_DB_1;

-- just to prove we are in the right place
SELECT TOP 10 * FROM EVENTS;

ALTER TABLE EVENTS
  PARTITION BY RANGE (YEAR(EVENTDATE))
    ( PARTITION 2020 <= VALUES < 2025,
      -- looking ahead; PAGE LOADABLE because we want NSE usage
      PARTITION OTHERS PAGE LOADABLE );

SELECT * FROM M_TABLE_PARTITIONS WHERE TABLE_NAME = 'EVENTS';

 

Summary:

HDI performs an automated change of tables each time you change a table's design-time definition. With *.hdbmigrationtable you also have the possibility to change physical parameters, such as partitions, in the runtime definition on that particular database, without the risk of being overwritten by the next version of the design-time object. However, this should be tested thoroughly.

Also, please make sure that only dedicated users have the privilege to alter runtime objects, so that changes do not happen accidentally.

 

 
9 Comments
Option 2 requires that the container has a role named alter_access_role. This may be obvious, but it is not stated in the blog.

This can be done by deploying a file named alter_access_role.hdbrole with contents as below.
{
  "role": {
    "name": "alter_access_role",
    "schema_privileges": [
      {
        "privileges": [ "ALTER" ]
      }
    ]
  }
}
VolkerSaggau
Product and Topic Expert
0 Kudos
Thanks Dinu for your comment. You are right with your finding. Your approach may be a bit tight with regard to privileges, but that is up to your needs.

One could also take the default_access_role.hdbrole and rename or copy it to alter_access_role.hdbrole.
sagarsv
Explorer
0 Kudos
Hello Volker,

Table partitioning in general is (more or less) the same procedure as above (by running SQL) for repository tables. In this blog, by saying "Partition a table in HANA Deployment Infrastructure (HDI)", are you trying to explain how access is managed to perform a table partition on .hdbtable or .hdbmigrationtable in a container?

I have a follow up question, it would be easier to ask if above question is answered:)

regards

Sagar
VolkerSaggau
Product and Topic Expert
Product and Topic Expert
0 Kudos
Hi Sagar,

 

We added partitioning to the .hdbmigrationtable plug-in. There is also a new role in HANA Cloud called PARTITION ADMIN.

Using .hdbmigrationtable is a stable way to achieve consistent change management with partitions.

I am not sure what you mean by repository tables.

 

BR

 

Volker
sagarsv
Explorer
0 Kudos
Hello Volker,

Bit of context: I am speaking in terms of HANA on-premise (HANA 2.0, revision 59.09).

By repository tables I mean tables which are created using HANA Studio.

We use the HDI way of development for some projects (we are on the way to migrating all developments). In HDI development we use .hdbtable by default for all table artifacts. So my follow-up question, linked to this: if I partition a table (.hdbtable) using an SQL ALTER statement (via the Database Explorer) and then redeploy my container containing this .hdbtable, will the partitioning survive, or do we have to redo it again?

 

regards

sagar
VolkerSaggau
Product and Topic Expert
Product and Topic Expert
0 Kudos
Hi Sagar,

.hdbtable: with a new version of the table definition you will create a new table.

With the deploy parameter "try_fast_migration=true" and no change to the partitioning, you should get away without a table copy.

.hdbmigrationtable: now also supports ALTER TABLE ... PARTITION ..., which may give you a smoother change, provided your subsequent changes do not touch the partitioning.

BR. Volker
sagarsv
Explorer
0 Kudos
Thanks Volker,

I shall tryout this solution.

 

regards

sagar
VolkerSaggau
Product and Topic Expert
Product and Topic Expert
0 Kudos

Hi Sagar,

 

here a sample:

== version = 3

column table "GX_EMPLOYEES" (
  "EMPLOYEE_ID"         NVARCHAR(50) NOT NULL,
  "EMPLOYEE_FIRSTNAME"  NVARCHAR(50),
  "EMPLOYEE_LASTNAME"   NVARCHAR(50),
  "EMPLOYEE_ACCOUNT_NO" NVARCHAR(50),
  "EMPLOYEE_SALARY"     DECIMAL(8,2),
  "EMPLOYEE_START_YEAR" INTEGER NOT NULL DEFAULT 1900,
  "EMPLOYEE_GENDER"     NVARCHAR(1),
  "EMPLOYEE_REGION"     NVARCHAR(50),
  "EMPLOYEE_ZIPCODE"    NVARCHAR(50),
  "EMPLOYEE_T-LEVEL"    NVARCHAR(50),
  "EMPLOYEE_EDUCATION"  NVARCHAR(50),
  PRIMARY KEY ("EMPLOYEE_ID")
)
PARTITION BY
  HASH ("EMPLOYEE_ID") PARTITIONS 1 SUBPARTITION BY
  RANGE ("EMPLOYEE_START_YEAR")
  (
    (PARTITION 0 <= VALUES < 2008 PAGE LOADABLE GROUP NAME 'HISTORY'),
    (PARTITION OTHERS)
  )

== migration = 3
ALTER TABLE "GX_EMPLOYEES"
  PARTITION BY
  HASH ("EMPLOYEE_ID") PARTITIONS 1 SUBPARTITION BY
  RANGE ("EMPLOYEE_START_YEAR")
  (
    (PARTITION 0 <= VALUES < 2008 PAGE LOADABLE GROUP NAME 'HISTORY'),
    (PARTITION OTHERS)
  )

Hope this works for you. This also uses NSE. From a business point of view this example does not make much sense, but it shows the concept.

BR

Volker

isuruwarn
Product and Topic Expert
0 Kudos

For anyone using CDS to maintain their database artifacts:

If you call the ALTER TABLE command manually to create or update partitions, these configurations will be overwritten by the HDI deployment the next time the table is modified.

Therefore, you need to use the @sql.append annotation in CDS to maintain your partition configurations, e.g.:

 

@sql.append: ```sql
PARTITION BY RANGE(YEAR(USAGE_DATE))
(
    PARTITION '2020' <= VALUES < '2021' PAGE LOADABLE,
    PARTITION OTHERS DYNAMIC INTERVAL 1 YEAR DISTANCE 1 YEAR PAGE LOADABLE
)
```
@cds.persistence.journal
entity PARTITION_EXAMPLE {
    ID                        : Integer64;
    NAME                      : String(16);
    DESCRIPTION               : String(32);
    USAGE_DATE                : Timestamp not null;
};

 

The @cds.persistence.journal option will automatically add the .hdbmigrationtable file to your gen directory.

When partitioning an existing table, you will also need to add the .hdbtable file to the undeploy.json file in order to start using the new migration-table file. Alternatively, you can use the AUTOUNDEPLOY=TRUE option.

undeploy.json

 

[
    "src/gen/PARTITION_EXAMPLE.hdbtable"
]

 

Thanks to @VolkerSaggau for providing the above solutions.