
Recap:


In the last blog, we started building an integration suite extension application and deployed a bare-minimum version to the Kyma runtime. We exposed a "/hello" endpoint that returned "Hello world!", confirming that the basic plumbing of the application works. If you have not read part 1 yet, the link is below.

https://blogs.sap.com/2021/11/16/integration-suite-extension-part-1-introduction-with-sap-btp-hana-a...

 

Introduction:


In this part, let us provision an integration suite tenant in the SAP BTP trial to use with our extension application and deploy a few i-flows into the tenant. We will also provision a HANA Cloud database, add persistence to our extension application, and add functionality to pull the i-flow details from integration suite into HANA. Let us get started. This is a pretty long one, as we go through a lot of topics and some interesting detours, so I hope you grab a coffee and enjoy this as much as I do. 🙂

 

Previous parts:


If you have not read the previous parts, I suggest going through them to understand the story. Here are the links:


 







Previous blogs:

  • Part 2 – Integration suite extension – Persistence in HANA cloud (this blog)
  • Part 1 – Integration suite extension – Introduction
  • Part 0.1 – Integration suite extension – Message Monitor Overview
  • Part 0.2 – Integration suite extension – Enhanced user defined message search


Architecture Diagram:


 


 

SAP Integration Suite


STEP 1: SAP Integration suite provisioning

Provisioning an integration suite trial tenant is documented in detail in the SAP documentation. I am listing the high-level steps with some important pointers below.

  • In SAP BTP, go to the integration suite subscription to launch the integration suite homepage.


 


 


 

  • Activate the integration suite tenant from "Manage Capabilities". If the steps are successful, you should be able to access the provisioned tenant like below.



 

  • Now we need to complete some post-provisioning steps. A booster is provided in BTP for these steps. Run the "Enable Integration Suite" booster.



 

  • The booster does a few things automatically. One important step is the creation of the service instances for the process integration runtime. It creates instances of "integration-flow" and "api" (below). From the "api" service instance, get the service key and copy the client credentials from it. We will need them later in step 3 for calling the integration suite APIs.



 

STEP 2: Deploy i-flows in the integration tenant

I deployed a couple of i-flows for testing our extension application. These are simple flows without many process steps. It does not really matter which i-flows we use; as long as they are deployed and running, we should be good.

  • i-flow 1: A SOAP to Northwind service OData call



 

  • i-flow 2: A SOAP service that returns Hello World



 

STEP 3: Test the integration suite APIs

  • To test the integration suite APIs, get an OAuth access token using the client credentials from step 1: call the token endpoint with the client credentials. If the setup is correct, the token endpoint responds with an access token.


POST https://681769b2trial.authentication.us10.hana.ondemand.com/oauth/token
Authorization: Basic (client id and secret from the service key)
Body: form-urlencoded
grant_type : client_credentials
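For reference, here is a minimal Go sketch of the same token call using the golang.org/x/oauth2/clientcredentials package (the token URL is the trial tenant's endpoint from above; the credential values are placeholders):

package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	// Credentials come from the "api" service key copied in step 1.
	conf := &clientcredentials.Config{
		ClientID:     "<client id>",
		ClientSecret: "<client secret>",
		TokenURL:     "https://681769b2trial.authentication.us10.hana.ondemand.com/oauth/token",
	}

	// Token() performs the same client-credentials POST shown above
	// and caches the access token until it expires.
	token, err := conf.Token(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(token.AccessToken)
}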

 


 

  • Next, do an API call to get the list of deployed i-flows. Use the access token from above for authorization.


GET https://681769b2trial.it-cpitrial05.cfapps.us10-001.hana.ondemand.com/api/v1/IntegrationRuntimeArtif...
Authorization: Bearer <access_token from above>

 


 

  • In our application, we will use the API that returns the service endpoints of the deployed i-flows.


GET https://681769b2trial.it-cpitrial05.cfapps.us10-001.hana.ondemand.com/api/v1/ServiceEndpoints?$expan...
Authorization: Bearer <access_token from above>
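Putting the two calls together, a hedged Go sketch of this request: conf.Client returns an http.Client that attaches the Bearer token automatically. The $expand parameter used in the blog's call is truncated above, so the sketch queries the bare ServiceEndpoints resource; $format=json is an assumption to get JSON instead of the OData default Atom/XML:

package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	conf := &clientcredentials.Config{
		ClientID:     "<client id>",
		ClientSecret: "<client secret>",
		TokenURL:     "https://681769b2trial.authentication.us10.hana.ondemand.com/oauth/token",
	}

	// conf.Client fetches a token and injects the
	// "Authorization: Bearer <token>" header on every request.
	httpClient := conf.Client(context.Background())

	resp, err := httpClient.Get("https://681769b2trial.it-cpitrial05.cfapps.us10-001.hana.ondemand.com/api/v1/ServiceEndpoints?$format=json")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}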

 


 

SAP HANA Cloud:


Let us provision a HANA Cloud database in SAP BTP. The SAP documentation covers the provisioning; please follow it, as the process is pretty straightforward, so I am not repeating the details here. Once the HANA Cloud database is provisioned, create the database artifacts our extension application needs: a schema, a few tables, a sequence, a database user, and a database role that grants the required access to the user.

STEP 4: Create database artifacts

Open the HANA database explorer and run the following SQL commands to create the database artifacts.

 
-- Create schema

CREATE SCHEMA "INTEGRATION_SUITE";

-- Create tables

CREATE TABLE "INTEGRATION_SUITE"."JOBSCHEDULER" (id integer, active boolean, time integer, duration NVARCHAR(10));

CREATE TABLE "INTEGRATION_SUITE"."JOBSCHEDULES" (id integer, name NVARCHAR(50), time integer, duration NVARCHAR(10), PARAMS NVARCHAR(300));

CREATE TABLE "INTEGRATION_SUITE"."JOBRUNS" (RUNID integer, JOBID integer, STARTTIME LONGDATE, ENDTIME LONGDATE, DURATION NVARCHAR(15), STATUS NVARCHAR(10), LOGURL NVARCHAR(500));

CREATE TABLE "INTEGRATION_SUITE"."IFLOWS" ("ID" bigint NOT NULL primary key GENERATED BY DEFAULT AS IDENTITY, Name NVARCHAR(100), Version NVARCHAR(10), Description NVARCHAR(100), Protocol NVARCHAR(10), Endpoint NVARCHAR(200));


-- Create sequence

CREATE SEQUENCE "INTEGRATION_SUITE"."JOBRUNID" START WITH 1000 MAXVALUE 999999 RESET BY SELECT IFNULL(MAX(RUNID), 0)+1 FROM INTEGRATION_SUITE.JOBRUNS;


-- Create a database service user (HANA expects the password in double quotes)

CREATE USER SU_INTEGRATION_SUITE PASSWORD "SecretPassword";


-- Create role

CREATE ROLE "INTEGRATION_SUITE_ADMIN";


-- Grant all privileges on the schema to the role

GRANT ALL PRIVILEGES ON SCHEMA INTEGRATION_SUITE TO INTEGRATION_SUITE_ADMIN;


-- Grant the role to the user

GRANT INTEGRATION_SUITE_ADMIN TO SU_INTEGRATION_SUITE WITH ADMIN OPTION;

 

Job scheduler architecture:


The idea in the extension application is to run a job periodically that pulls the i-flows from the integration suite tenant into our extension application. Rather than scheduling just one job, I took a small detour and built a simple job scheduler framework that we can use to schedule any kind of background job.

 


 

The job scheduler acts as a clock and periodically checks the scheduler configuration in the table "JOBSCHEDULER" (here configured to every 15 seconds). If there are any outstanding jobs to be scheduled, the scheduler schedules them.

 


 

The jobs to be scheduled are maintained in the table "JOBSCHEDULES". It holds the job name, the frequency at which the job needs to be scheduled, and so on. For example, the EXTRACTINTEGRATIONFLOWS job runs every 10 minutes. The scheduler's work is to make sure these jobs are started at their scheduled intervals.

 


 

Finally, there is a "JOBRUNS" table holding the log of each job run. RUNID uniquely identifies a job run.
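As a sketch of how these tables might map to Go types in the extension application (the field names are my assumptions derived from the columns created in step 4; the repository's actual types may differ):

package data

import "time"

// JobSchedule mirrors a row of INTEGRATION_SUITE.JOBSCHEDULES:
// which job to run and how often.
type JobSchedule struct {
	ID       int
	Name     string // e.g. "EXTRACTINTEGRATIONFLOWS"
	Time     int    // interval value, e.g. 10
	Duration string // interval unit, e.g. "MINUTES"
	Params   string // optional job parameters
}

// JobRun mirrors a row of INTEGRATION_SUITE.JOBRUNS: one log entry
// per executed job, identified by RunID.
type JobRun struct {
	RunID     int
	JobID     int
	StartTime time.Time
	EndTime   time.Time
	Duration  string
	Status    string // e.g. "RUNNING" or "COMPLETE"
	LogURL    string
}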

 


 

Extension application implementation:


STEP 5: Implement job scheduler

The main program jobs-scheduler.go starts the InitializeScheduler function in a goroutine, which runs as a concurrent process. It gets the scheduler configuration using the GetSchedulerConfig function and starts the job scheduler. The job scheduler runs in a loop and checks the job schedules every 15 seconds, based on the scheduler configuration.
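The full implementation is in the repository; a minimal sketch of the pattern looks like this (GetJobSchedules is a hypothetical helper for reading the JOBSCHEDULES table, and cfg.Time is assumed to hold the 15-second interval):

// InitializeScheduler is started from main with `go InitializeScheduler()`.
// It reads the scheduler configuration once and then ticks at the
// configured interval, checking the job schedules on every tick.
func InitializeScheduler() {
	cfg := GetSchedulerConfig() // reads INTEGRATION_SUITE.JOBSCHEDULER

	ticker := time.NewTicker(time.Duration(cfg.Time) * time.Second) // e.g. 15s
	defer ticker.Stop()

	for range ticker.C {
		// GetJobSchedules (hypothetical) reads INTEGRATION_SUITE.JOBSCHEDULES;
		// each schedule is checked concurrently in its own goroutine.
		for _, schedule := range GetJobSchedules() {
			go ScheduleJob(schedule)
		}
	}
}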

 


 


 

STEP 6: Implement job schedules

The function ScheduleJob runs concurrently; it checks the current job runs and schedules the next job. It checks a few things before scheduling a job, such as whether the previous run is still running and whether the job's scheduled time has been reached.
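Roughly, those checks could look like this (a sketch only; LastRun and RunJob are hypothetical stand-ins for the repository's JOBRUNS query and the job execution in step 7):

// ScheduleJob decides whether a job is due and, if so, runs it.
func ScheduleJob(schedule JobSchedule) {
	last := LastRun(schedule.ID) // latest JOBRUNS entry for this job

	// Don't schedule if the previous run is still executing.
	if last.Status == "RUNNING" {
		return
	}

	// Don't schedule if the configured interval has not elapsed yet.
	interval := time.Duration(schedule.Time) * time.Minute
	if time.Since(last.StartTime) < interval {
		return
	}

	RunJob(schedule)
}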

 


 

STEP 7: Implement job execution

The execution of the job takes place in a job function. A typical job skeleton looks like below. As the job starts, it sends a message to the ControlJob function indicating the job start (a kind of pre-processing step) and gets a job run id from the backend. It then executes the job logic. Once execution is finished, it sends a message to the ControlJob function again, indicating that the job is complete (a kind of post-processing step).
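A hedged sketch of that skeleton, using a channel to talk to ControlJob (the JobMessage shape is my assumption; the repository's message types may differ):

// JobMessage is a hypothetical shape for the messages exchanged
// with ControlJob.
type JobMessage struct {
	JobID  int
	RunID  int
	Status string
	Reply  chan int // carries the new run id back on "START"
}

// ExtractIntegrationFlows follows the typical job skeleton.
func ExtractIntegrationFlows(schedule JobSchedule, control chan JobMessage) {
	// Pre-processing: announce the start; ControlJob creates a
	// JOBRUNS entry and replies with the run id.
	reply := make(chan int)
	control <- JobMessage{JobID: schedule.ID, Status: "START", Reply: reply}
	runID := <-reply

	// ... job logic: call integration suite and persist the result ...

	// Post-processing: report completion so ControlJob can close
	// the JOBRUNS entry for this run id.
	control <- JobMessage{JobID: schedule.ID, RunID: runID, Status: "COMPLETE"}
}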

 


 

The implementation to get the integration flows data is below; it calls the integration suite ServiceEndpoints API. For authorization, the OAuth client credentials flow is triggered.
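On the persistence side, with the SAP go-hdb driver (github.com/SAP/go-hdb) the extracted details could be written to the IFLOWS table roughly like this (the Iflow struct and the insert loop are simplified assumptions; the driver and DSN values are the ones from the environment variables in step 8):

package data

import (
	"database/sql"
	"os"

	// Blank import registers the "hdb" driver with database/sql.
	_ "github.com/SAP/go-hdb/driver"
)

// Iflow is a simplified record of one deployed integration flow.
type Iflow struct {
	Name, Version, Description, Protocol, Endpoint string
}

// SaveIflows inserts the extracted i-flow details into IFLOWS.
// The ID column is filled by the identity created in step 4.
func SaveIflows(iflows []Iflow) error {
	db, err := sql.Open(os.Getenv("HANA_SECRET_DRIVER"), os.Getenv("HANA_SECRET_DSN"))
	if err != nil {
		return err
	}
	defer db.Close()

	for _, f := range iflows {
		if _, err := db.Exec(
			`INSERT INTO "INTEGRATION_SUITE"."IFLOWS" (NAME, VERSION, DESCRIPTION, PROTOCOL, ENDPOINT) VALUES (?, ?, ?, ?, ?)`,
			f.Name, f.Version, f.Description, f.Protocol, f.Endpoint); err != nil {
			return err
		}
	}
	return nil
}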

 


 

All database access artifacts are in the data package.

 

Running application locally:


STEP 8: Create environment variables for running application locally

To run the application locally, create a .env file in the root directory with the environment variables below.

 
HANA_SECRET_DRIVER
HANA_SECRET_DSN
CPI_SECRET_CLIENTID
CPI_SECRET_CLIENTSECRET
CPI_SECRET_TOKENENDPOINT
CPI_SECRET_APIENDPOINT
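
For loading the .env file, here is a small sketch assuming the popular github.com/joho/godotenv package (in Kyma the same variable names are populated from the secrets created in steps 9 and 10 instead):

package main

import (
	"log"
	"os"

	"github.com/joho/godotenv"
)

func loadEnv() {
	// Load reads .env from the working directory. In Kyma there is no
	// .env file; the variables come from the kubernetes secrets instead,
	// so a missing file is not fatal.
	if err := godotenv.Load(); err != nil {
		log.Println(".env not found, relying on the environment")
	}

	// Fail fast if the mandatory settings are missing.
	if os.Getenv("HANA_SECRET_DSN") == "" || os.Getenv("CPI_SECRET_CLIENTID") == "" {
		log.Fatal("required environment variables are not set")
	}
}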

 

Deploy application to Kyma:


STEP 9: Deploy Kubernetes secrets for accessing HANA

 
kubectl create secret generic hanacloud --from-literal=driverName='hdb' --from-literal=hdbDsn='hdb://<DBserviceuser>:<DBPassword>@<dbhost>:443?TLSServerName=<dbhost>'

 

This creates the secret we need for accessing HANA.

 


 

STEP 10: Deploy Kubernetes secrets for accessing integration suite

 
kubectl create secret generic cpi --from-literal=cpi_client_id='<client id>' --from-literal=cpi_client_secret='<client secret>' --from-literal=cpi_token_endpoint='https://681769b2trial.authentication.us10.hana.ondemand.com/oauth/token' --from-literal=cpi_api_endpoint='https://681769b2trial.it-cpitrial05.cfapps.us10-001.hana.ondemand.com'

 

This creates the secret we need for accessing integration suite.

 


 

STEP 11: Access secrets as environment variables

The secrets are exposed to the application as environment variables through the deployment descriptor.

 


 

Result:


Our job to extract integration flows runs every 10 minutes and extracts all integration flows (details such as i-flow name, version, protocol, endpoint, etc.) from the integration tenant into HANA.

The EXTRACTINTEGRATIONFLOWS job is scheduled to run every 10 minutes.

 


 

The i-flow details are extracted into the IFLOWS table.

 


 

Based on the job schedule, the job runs every 10 minutes and extracts the i-flow details.

 


 

Code Repository:


The code is organized so that each blog has a branch containing the result up to that blog. If you are following a previous blog, you can check out its branch (for example, blog1). The main branch is always current.

 
git clone https://github.com/ravipativenu/integration-suite-extention.git

 

Summary:


In this part, we added persistence to our integration suite extension application and started enhancing the application logic. We built a job scheduler to run our background jobs and set up our first job to extract i-flow details from integration suite. We also went through the high-level steps of integration suite tenant provisioning.

What's Next:


Next, let us use Azure to store files such as payloads and start testing our integration scenarios. We will also use Azure to store the job logs.

I hope you liked the blog. Please feel free to share your comments and feedback.

 