Technology Blogs by SAP
VishwaGopalkris
Product and Topic Expert
This blog post is part of a series on SAP Datasphere and SAP HANA Cloud CI/CD. Before reviewing the details in this blog post, I recommend reading "SAP Datasphere, SAP HANA Cloud HDI CI/CD Automation Approach" for an overview of the use case scenario, toolset and concepts.

Introduction


This blog post looks into the implementation details of the automation pipelines for SAP Datasphere and SAP HANA Cloud HDI introduced in the earlier blog post "SAP Datasphere, SAP HANA Cloud HDI CI/CD Automation Approach." I'll walk through the code of each pipeline and show the end-to-end automation of the scenario discussed in the overview blog. Review the additional setup prerequisite topics towards the end of this blog before trying this out on your system.

Recap


Before going into the details of each pipeline, let's recap the pipeline flow and the transport landscape setup. Figure (a) depicts the transport landscape. The key point is that both the DEV and QA HDI containers are under the same subaccount (linked Cloud Foundry org) and space. This can be extended to a three-system landscape with DEV, QA and PRD, with production on a separate SAP HANA Cloud tenant or a similar approach.


Figure (a) Transport Landscape setup


Figure (b) outlines the automation flow; two pipelines are linked to two separate Git repos, one for the HDI container and one for the SAP Datasphere artifacts. The flow can start from either the HDI container pipeline or the SAP Datasphere pipeline. If it starts with committing HDI container artifacts via VS Code or SAP Business Application Studio, a webhook triggers the HDI pipeline to build, deploy, validate, and upload the MTA archives to SAP Cloud Transport Management, which then moves the MTA archives through the landscape. If all the earlier steps are successful, it triggers the SAP Datasphere pipeline, which builds, deploys and validates the SAP Datasphere artifacts, deploying them into the QA space.


Figure (b) Automation flow


As mentioned at the bottom of Figure (a), all the artifacts are deployed on the same SAP HANA Cloud database tenant. A separate HDI container service and open SQL schema access are linked to each of the DEV and QA Datasphere spaces. The SAP Datasphere artifacts exposed through the open SQL schema can be accessed in the HDI container via a linked user-provided service, enabling SQL data warehousing. Even though our primary use case did not need accessing or enhancing SAP Datasphere artifacts from the HDI container, bi-directional access is possible, as outlined in many of the earlier SQL data warehousing blogs, for example, "Access the SAP HANA Cloud database underneath SAP Datasphere."

Pipeline 1 - SAP HANA Cloud HDI Container


For pipeline 1, I am using an existing SAP HANA Cloud Git repo leveraging the SAP Cloud Application Programming Model (CAP). Using project "Piper," we provide the instructions for the build server within the CAP project. This is done using two artifacts, the Jenkinsfile and .pipeline/config.yml. You can create these two files manually in your project or generate them using the @sap/cds-dk command line interface (CLI) as below:
cds add pipeline

This is how the Jenkinsfile and .pipeline/config.yml look for the HDI container.

Jenkinsfile:
@Library('piper-lib-os') _
node() {
    stage('prepare') {
        deleteDir()
        checkout scm
        setupCommonPipelineEnvironment script: this,
            verbose: true
    }
    stage('build') {
        mtaBuild script: this,
            mtaBuildTool: 'cloudMbt',
            verbose: true
    }
    stage('deploy') {
        cloudFoundryDeploy script: this,
            deployTool: 'mtaDeployPlugin',
            verbose: true
    }
    stage('Validation') {
        npmExecuteScripts script: this,
            verbose: true
    }
    stage('tmsUpload') {
        tmsUpload script: this
    }
    stage('Trigger_DWC_Pipeline') {
        build 'HDACSM/dwc_cli_ctms/master'
    }
}

.pipeline/config.yml
steps:
  ### Stage Build
  mtaBuild:
    buildTarget: 'CF'
  ### Stage Deploy - CF deploy
  cloudFoundryDeploy:
    deployTool: 'mtaDeployPlugin'
    deployType: 'standard'
    cloudFoundry:
      org: 'CF-ORG-ABC-E2E-DEMOS'
      space: 'ABC-E2E-DEMOS-DWC-SPACE'
      credentialsId: 'CF-CREDENTIALSID'
      database_id: XeeabcdX-abcd-481d-abcd-00b0417Xabcd
  ### Stage Validation
  # Execute npm script 'test' to validate db artifacts.
  npmExecuteScripts:
    buildDescriptorExcludeList:
      - db/package.json
    runScripts:
      - "test"
  ### Stage tmsUpload, trigger cTMS to move through the landscape
  tmsUpload:
    credentialsId: 'BTP-TMS'
    nodeName: 'IMPORT_DEV'
    verbose: 'true'

As shown in the files above, the full coordination and sequencing of the pipeline is done using the Piper library. The details of what happens in the prepare, build, deploy and tmsUpload stages are explained in the blog post "SAP Business Technology Platform – integration aspects in the CI/CD approach"; please refer to the "Automation pipeline and end point" section there. The only additions here are the validation stage and the trigger of pipeline 2. For the validation stage, I used unit testing code similar to what Thomas Jung explains in "SAP HANA Basics For Developers: Part 12 Unit Tests". I adapted the code from the Git repo https://github.com/SAP-samples/hana-opensap-cloud-2020 to use gulp 4.0, since the earlier 3.9 version no longer works with the latest Node.js version. In the validation stage, we run a "SELECT * FROM" query on each of the views and tables in the HDI container to ensure nothing is broken after the changes to the repo.
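The npm "test" script itself is not part of this walkthrough (it follows the gulp-based sample linked above), but to illustrate the idea, here is a minimal sketch of such a smoke test using mocha and @sap/hana-client. The HDI_CREDENTIALS environment variable and all names below are illustrative assumptions, not the project's actual test code.

// test/smoke.test.js - illustrative sketch only.
// Assumes mocha and @sap/hana-client as dev dependencies and the HDI container
// credentials supplied as a JSON string in the HDI_CREDENTIALS environment variable.
const assert = require('assert');
const hana = require('@sap/hana-client');

const creds = JSON.parse(process.env.HDI_CREDENTIALS || '{}');

describe('HDI container smoke test', function () {
    this.timeout(60000);
    let conn;

    before(function () {
        conn = hana.createConnection();
        // Synchronous connect; throws if the container is not reachable.
        conn.connect({
            serverNode: `${creds.host}:${creds.port}`,
            uid: creds.user,
            pwd: creds.password,
            currentSchema: creds.schema,
            encrypt: 'true'
        });
    });

    after(function () {
        conn.disconnect();
    });

    it('can run SELECT * on every view in the container schema', function () {
        // List all views of the container schema, then select from each one
        // to make sure nothing is broken after changes to the repo.
        const views = conn.exec('SELECT VIEW_NAME FROM VIEWS WHERE SCHEMA_NAME = CURRENT_SCHEMA');
        for (const row of views) {
            const rows = conn.exec(`SELECT * FROM "${row.VIEW_NAME}" LIMIT 1`);
            assert.ok(Array.isArray(rows));
        }
    });
});

Wired up as the "test" script in the package.json that npmExecuteScripts picks up, something like this is what the Validation stage would execute.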

Now let's look at the specific elements in the mta.yaml and mtaext files that help realize the landscape shown in Figure (a). The tmsUpload step uploads the solution package to the import buffer of an SAP Cloud Transport Management service (cTMS) node. The MTA extension descriptor (mtaext) is uploaded to the nodes in the SAP Cloud Transport Management transport landscape and is used to apply the QA configuration.

  • Add app-name under the deployer module, and service-name and schema under the com.sap.xs.hdi-container resource parameters. Override the app-name, service-name and schema in the mtaext file.

    • app-name: HDIDEV -> HDIQA

    • service-name: SP_PROJECTDEV_DWC_S1 -> SP_PROJECTQA_DWC_S1

    • schema: SP_PROJECTDEV_DWC_HDI -> SP_PROJECT_QA_DWC_HDI

      • If you remove schema, you will end up with GUIDs as schema names. Functionality-wise this is fine; however, having explicit schema names makes maintenance easier.





  • In the hdbgrants and synonym configuration files, the key added under the SERVICE_REPLACEMENTS group (ups_schema_access) is used rather than the actual UPS service name, so only the UPS service name needs to be overridden in the mtaext file (a sketch of creating these user-provided services with the cf CLI follows this list).

    • service-name: UPS_SQL_SP_PROJECTDEV -> UPS_SQL_SP_PROJECTQA
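The user-provided services referenced above (UPS_SQL_SP_PROJECTDEV for DEV, UPS_SQL_SP_PROJECTQA for QA) have to exist in the Cloud Foundry space before the first deployment. Below is a minimal, illustrative sketch of creating one with the cf CLI; the host, user, password and schema values come from the database user created in SAP Datasphere Space Management, and the placeholder keys are assumptions; the exact credential keys you need depend on how the service is consumed in your hdbgrants and synonym configurations.

# Illustrative only - creates the user-provided service referenced in mta.yaml.
# Replace the placeholder values with those shown for your database user in
# SAP Datasphere Space Management (repeat with the QA values for UPS_SQL_SP_PROJECTQA).
cf create-user-provided-service UPS_SQL_SP_PROJECTDEV -p '{
  "host": "<your-tenant>.hanacloud.ondemand.com",
  "port": "443",
  "user": "SP_PROJECTDEV#<DB_USER_SUFFIX>",
  "password": "<password>",
  "schema": "SP_PROJECTDEV#<DB_USER_SUFFIX>"
}'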




mta.yaml
_schema-version: '3.1'
ID: MVP_SP
version: 0.0.1
modules:
  # -------------------- Deployer (side car) --------------------
  - name: db # deployer
    type: hdb
    path: db
    parameters:
      app-name: HDIDEV
    requires:
      - name: HDI_SPDEV # depends on the HDI container
        properties:
          TARGET_CONTAINER: ~{hdi-container-name}
      - name: DWC.UPS_SQL_SP_PROJECTDEV
        group: SERVICE_REPLACEMENTS
        properties:
          key: ups_schema_access
          service: '~{ups_sql_sp}'
resources:
  # -------------------- HDI container --------------------
  - name: HDI_SPDEV
    type: com.sap.xs.hdi-container
    parameters:
      config:
        database_id: Xee99a70-abcd-481d-abcd-00b0417Xabcd # to deploy against the SAP Datasphere SAP HANA Cloud tenant
        schema: SP_PROJECTDEV_DWC_HDI
      service-name: SP_PROJECTDEV_DWC_S1
    properties:
      hdi-container-name: ${service-name}
  # -------------------- UPS (user-provided service) --------------------
  # To be created first in the CF space in which this HDI shared service gets created.
  # The credentials come from SAP Datasphere Space Management; this service is used to access the Datasphere views.
  - name: DWC.UPS_SQL_SP_PROJECTDEV
    type: org.cloudfoundry.existing-service
    parameters:
      service-name: UPS_SQL_SP_PROJECTDEV
    properties:
      ups_sql_sp: ${service-name}

MTA Extension File .mtaext
_schema-version: "3.1"
ID: MVP_SP.config.first
extends: MVP_SP

modules:
- name: db
type: hdb
path: db
parameters:
app-name: HDIQA
resources:
- name: VI_HDI_SPDEV
type: com.sap.xs.hdi-container
parameters:
service: hana
service-plan: hdi-shared
config:
database_id: Xee99a70-abcd-481d-abcd-00b0417Xabcd
schema: SP_PROJECT_QA_DWC_HDI
service-name: SP_PROJECTQA_DWC_S1
properties:
hdi-container-name: ${service-name}
- name: DWC.UPS_SQL_SP_PROJECTDEV
type: org.cloudfoundry.existing-service
parameters:
service-name: UPS_SQL_SP_PROJECTQA


Pipeline 2 - SAP Datasphere Pipeline


The SAP Datasphere pipeline Jenkinsfile and config.yml are shown below. The prepare step checks out the code from source control and initializes the Piper commonPipelineEnvironment. The build and deploy steps call the Build.js and Deploy.js Node.js files, respectively. The parameters for the build and deploy steps come from config.yml, except for the SAP Datasphere login credential, which is stored as a secret in Jenkins and passed in using the withCredentials step. This masks the credential fields even in the build server logs. As shown in the Dockerfile code below, a custom Docker image is used to ensure all the dependencies are met, and Build.js is called inside the Docker container. Please refer to the comments inside the Dockerfile on how to build the Docker image.

Jenkinsfile:
@Library('piper-lib-os') _

node() {

    stage('prepare') {
        deleteDir()
        checkout scm
        setupCommonPipelineEnvironment script: this,
            verbose: true
    }

    stage('build') {
        withCredentials([
            usernamePassword(credentialsId: "DWC_CredentialsID",
                usernameVariable: 'DWC_USER',
                passwordVariable: 'DWC_PASS')
        ]) {
            dockerExecute(
                script: this,
                dockerImage: 'vishwagi/puppeteer-dwc-node-docker:latest',
                dockerEnvVars: ['DWC_PASS': '$DWC_PASS', 'DWC_USER': '$DWC_USER']
            ) {
                sh 'node Build.js'
            }
        }
    }

    stage('deploy') {
        withCredentials([
            usernamePassword(credentialsId: "DWC_CredentialsID",
                usernameVariable: 'DWC_USER',
                passwordVariable: 'DWC_PASS')
        ]) {
            dockerExecute(
                script: this,
                dockerImage: 'vishwagi/puppeteer-dwc-node-docker:latest',
                dockerEnvVars: ['DWC_PASS': '$DWC_PASS', 'DWC_USER': '$DWC_USER']
            ) {
                sh 'node Deploy.js'
            }
        }
    }

    stage('Validation') {
        npmExecuteScripts script: this,
            verbose: true
    }

}

.pipeline/config.yml
steps:
  ### Stage Build and Deploy: set env variables
  dockerExecute:
    dockerEnvVars:
      DWC_URL: 'https://dwc-ab-abcd.eu10.hcs.cloud.sap/'
      DWC_PASSCODE_URL: 'https://dwc-ab-abcd.authentication.eu10.hana.ondemand.com/passcode'
      HDIDEV: 'SP_PROJECTDEV_DWC_HDI'
      HDIQA: 'SP_PROJECT_QA_DWC_HDI'
      SPACE: 'SP_PROJECTDEV'
      SPACEQA: 'SP_PROJECTQA'
      LABELQA: 'DWC_QA'
      ENTITIES: ''
      SPACE_DEFINITION_FILE: 'SP_PROJECTDEV.json'
      NEW_SPACE_DEFINITION_FILE: 'SP_PROJECTQA.json'

  ### Stage Validation: execute npm script 'test' to validate db artifacts.
  npmExecuteScripts:
    buildDescriptorList:
      - srv/package.json
    runScripts:
      - "test"

Dockerfile.



FROM geekykaran/headless-chrome-node-docker:latest

LABEL version="1.0"
LABEL author="Vishwa Gopalkrishna"

RUN apt update; \
    apt upgrade -y;

RUN npm cache clean -f; \
    npm install n -g; \
    n stable;

ADD package.json package-lock.json /

# The steps below are for enhancing the Docker image;
# otherwise, the image from Docker Hub can be used as is.
# Open a terminal in the same folder as the Dockerfile and run the commands below.
# Command #1: create the package.json file.
# npm init --yes

# Command #2: install the dependencies; these are written to the package.json file.
# npm install @sap/dwc-cli fs-extra puppeteer path
# If you now check package.json and package-lock.json, you should see the dependency list.

RUN npm install

# Command #3: build the image.
# docker build -t vishwagi/puppeteer-dwc-node-docker:latest .

# The version 1.0 image has the packages below.
# ***IMPORTANT: other @sap/dwc-cli versions may need changes to Build.js
# "@sap/dwc-cli": "^2022.14.0",
# "fs-extra": "^10.1.0",
# "path": "^0.12.7",
# "puppeteer": "^15.3.0"

Build.js and Deploy.js are Node.js files wrapped around @sap/dwc-cli commands. Both modules use a headless Chromium browser (puppeteer) for automated passcode retrieval. Please refer to Jascha Kanngiesser's dwc-cli blog post, which explains the passcode retrieval details. With SAP Datasphere's latest version, there is support for OAuth authentication, which should simplify Build.js even further. I'll write a follow-on blog post updating the Build and Deploy JS files with OAuth authentication; keep a lookout for updates here.

Functionality-wise, Build.js downloads the DEV space entities to a file and parses it to translate them into QA space entities, changing the relevant parameters such as the label, the mapped HDI container name, the DB user, etc. Deploy.js then updates or creates the QA space with the appropriate entity changes. The parameters from config.yml and the secrets are retrieved as environment variables.

Build.js
const puppeteer = require("puppeteer");
const exec = require("child_process").exec;
const fs = require('fs-extra');

const SPACE_DEFINITION_FILE = process.env.SPACE_DEFINITION_FILE;
const NEW_SPACE_DEFINITION_FILE = process.env.NEW_SPACE_DEFINITION_FILE;
const SPACE = process.env.SPACE;
const SPACEQA = process.env.SPACEQA;
const LABELQA = process.env.LABELQA;
const ENTITIES = process.env.ENTITIES;
const HDIDEV = process.env.HDIDEV;
const HDIQA = process.env.HDIQA;
const DWC_URL = process.env.DWC_URL;
const DWC_PASSCODE_URL = process.env.DWC_PASSCODE_URL;
const USERNAME = process.env.DWC_USER;
const PASSWORD = process.env.DWC_PASS;

let page;

const getPasscode = async () => {
    console.log('Inside get passcode module');
    await page.waitForSelector('div.island > h1 + h2', { visible: true, timeout: 5000 });
    await page.reload();
    return await page.$eval('h2', el => el.textContent);
}

const execCommand = async (command) => new Promise(async (res, rej) => {
    const passcode = await getPasscode();
    console.log('Passcode OK');
    const cmd = `${command} -H ${DWC_URL} -p ${passcode}`;
    console.log('command for space download', cmd);

    exec(cmd, (error, stdout, stderr) => {
        if (error) {
            console.error(`error: ${error.message}`);
            if (error.code === 1) {
                res({ error, stdout, stderr });
            } else {
                rej({ error, stdout, stderr });
            }
        } else {
            res({ error, stdout, stderr });
        }

        console.log(`stdout:\n${stdout}`);
        console.log(`error:\n${error}`);
        console.log(`stderr:\n${stderr}`);
    });
});

(async () => {
    const browser = await puppeteer.launch({ args: ['--no-sandbox', '--disable-setuid-sandbox'] });
    page = await browser.newPage();
    await page.goto(DWC_PASSCODE_URL);

    await page.waitForSelector('#logOnForm', { visible: true, timeout: 5000 });
    if (await page.$('#logOnForm') !== null) {
        await page.type('#j_username', USERNAME);
        await page.type('#j_password', PASSWORD);
        await page.click('#logOnFormSubmit');
    }

    //--------- READ DEV SPACE ------------------//

    console.log(process.env);
    await execCommand(`dwc cache-init`);
    await execCommand(`dwc spaces read -s ${SPACE} -o ${SPACE_DEFINITION_FILE} -d ${ENTITIES}`);

    //--------- CREATE/UPDATE QA SPACE ------------------//

    const spaceContent = await fs.readFile(SPACE_DEFINITION_FILE, 'utf-8');
    console.log('Read file');
    const replacer = new RegExp(HDIDEV, 'gi');
    const spaceContentQA = spaceContent.replace(replacer, HDIQA);

    // Parse the downloaded space definition file.
    const spaceDefinition = JSON.parse(spaceContentQA);
    // We need to update the space ID as well as the dbuser, as it is specific to the space.
    // First get the current space name and label and derive the dbuser name.
    const dbuser_name = SPACE + '#' + spaceDefinition[SPACE].spaceDefinition.label;
    // Copy the dbuser details into a placeholder for now; the same config is attached to the new dbuser.
    const dbuser_details = spaceDefinition[SPACE].spaceDefinition.dbusers[dbuser_name];

    console.log(dbuser_details);
    console.log(spaceDefinition[SPACE].spaceDefinition.dbusers);

    // Update to the new dbuser name.
    const dbuser_name_new = SPACEQA + '#' + LABELQA;

    // Parse the created JSON, otherwise it would add double escapes later.
    const dbuser_json = JSON.parse(JSON.stringify({ [dbuser_name_new]: dbuser_details }));
    // Update the label and dbuser details with the new ones.
    spaceDefinition[SPACE].spaceDefinition.label = LABELQA;
    spaceDefinition[SPACE].spaceDefinition.dbusers = dbuser_json;

    // Change the root node to the new QA space.
    var json = JSON.stringify({ [SPACEQA]: spaceDefinition[SPACE] });
    // console.log(json);

    // Write the space details to the file to be consumed by Deploy.js later.
    await fs.writeFile(NEW_SPACE_DEFINITION_FILE, json, 'utf-8');

    console.log('MAIN after executing commands');

    await browser.close();
})();

Deploy.js
const puppeteer = require("puppeteer");
const path = require('path');
const exec = require("child_process").exec;

const NEW_SPACE_DEFINITION_FILE = process.env.NEW_SPACE_DEFINITION_FILE;
const DWC_URL = process.env.DWC_URL;
const DWC_PASSCODE_URL = process.env.DWC_PASSCODE_URL;
const USERNAME = process.env.DWC_USER;
const PASSWORD = process.env.DWC_PASS;

let page;

const getPasscode = async () => {
    console.log('Inside get passcode module');
    await page.waitForSelector('div.island > h1 + h2', { visible: true, timeout: 20000 });
    await page.reload();
    return await page.$eval('h2', el => el.textContent);
}

const execCommand = async (command) => new Promise(async (res, rej) => {
    const passcode = await getPasscode();
    console.log('Passcode OK');

    const cmd = `${command} -H ${DWC_URL} -p ${passcode}`;
    console.log('command for space deployment', cmd);

    exec(cmd, (error, stdout, stderr) => {
        if (error) {
            console.error(`error: ${error.message}`);
            if (error.code === 1) {
                res({ error, stdout, stderr });
            } else {
                rej({ error, stdout, stderr });
            }
        } else {
            res({ error, stdout, stderr });
        }

        console.log(`stdout:\n${stdout}`);
        console.log(`error:\n${error}`);
        console.log(`stderr:\n${stderr}`);
    });
});

(async () => {
    const browser = await puppeteer.launch({ args: ['--no-sandbox', '--disable-setuid-sandbox'] });
    page = await browser.newPage();
    await page.goto(DWC_PASSCODE_URL);

    await page.waitForSelector('#logOnForm', { visible: true, timeout: 10000 });
    if (await page.$('#logOnForm') !== null) {
        await page.type('#j_username', USERNAME);
        await page.type('#j_password', PASSWORD);
        await page.click('#logOnFormSubmit');
    }

    // console.log(process.env);
    await execCommand(`dwc cache-init`);

    //--------- CREATE/UPDATE QA SPACE ------------------//
    // The command below creates the Datasphere space from the supplied .json file (-f).

    await execCommand(`dwc spaces create -f ${NEW_SPACE_DEFINITION_FILE}`);
    console.log('MAIN after executing commands');

    await browser.close();
})();

 

I'll add a video here of the code walkthrough and end-to-end demo soon; watch this space.

 

Additional Setup Prerequisite Topics



  1. Service Broker Mapping to enable SQL data warehousing


  2. Bi-directional access SAP Datasphere artifacts  <-> SAP HANA Cloud HDI Container - Tutorial


  3. Project "Piper" Jenkins setup

    • Project “Piper” is one of SAP’s solutions for continuous integration and delivery, as detailed in the solutions overview. Piper is used in this scenario because of the added flexibility it offers in setting up CI/CD automation. Start the CX Server as the build server for your CI/CD pipeline; a minimal bootstrap sketch follows this list. The CX Server is part of project “Piper.” It is a lifecycle-management tool to bootstrap a preconfigured Jenkins instance. Thanks to its lifecycle-management scripts, it uses Docker images and can be used out of the box.
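Below is an illustrative sketch of bootstrapping the CX Server on a Docker-enabled Linux host, based on the project “Piper” CX Server operations guide; please check the current guide, as the image name and commands may change over time.

# Illustrative only - generates the cx-server script and server.cfg in the current directory.
docker run -it --rm -u $(id -u):$(id -g) -v "${PWD}":/cx-server/mount/ \
  ppiper/cx-server-companion:latest init-cx-server

# Start (and later stop) the preconfigured Jenkins instance.
chmod +x ./cx-server
./cx-server start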





Conclusion


Along with the earlier overview blog post, this blog post details the use case scenario, automation flow, challenges and approach for CI/CD automation with SAP Datasphere and SAP HANA Cloud. With SAP Cloud Transport Management, project “Piper” and @sap/dwc-cli, the CI/CD automation can be realized. The SAP Datasphere pipeline is automated using a wrapper around dwc-cli, and a custom Docker image is used to ensure the dependencies are met. Also, both the DEV and QA HDI containers linked to SAP Datasphere can be deployed automatically under the same subaccount (linked Cloud Foundry org) and space.

Let me know your thoughts about the approach, and feel free to share this blog post. If a section is unclear, let me know and I'll add more details. All your feedback and comments are welcome. If you have any questions, please do not hesitate to ask them in the Q&A area as well.
8 Comments
wounky
Participant
0 Kudos

Hi vishwanath.g

thank you for the post. It seems like a nice approach for the mixed CI/CD.

Could you please share some thoughts on:

  • how to apply this scenario to the standalone DWC without the requirement to use the CAP model?
    • is it a prerequisite for the DWC pipeline setup to enable Service Broker Mapping to enable SQL data warehousing even if CAP is not needed?
    • if not - would the idea be as below?
      • create a new Jenkins server as explained in https://ccfenner.github.io/jenkins-library/guidedtour/  & connect it to BTP / Cloud Foundry
      • Run the Jenkinsfile pipeline with .pipeline/config.yml. The pipeline would move the projects between the landscape using Dockerfile service with Build.js & Deploy.js steps.
      • Take the SPACEQA: 'SP_PROJECTQA' as the variable from Jenkins for the user to decide which space he wants to move.
      • Do you think it would be a good idea to extend the build & deploy to update single objects selected from the Space instead of selecting all?
        It seems that there is a great limitation because DWC does not store any changelog of the objects so it's not possible to provide timestamps in the pipeline based on which objects changed in the selected time window would be moved.
  • at which point in pipeline #2 do you update the GIT repo for DWC? or did you mean the Jenkins repo?
  • in the Build.js you update the space IDs and so on because the DEV and QA are on the same tenant, how would you set it up if they were on different tenants? Would you pass all the credentials as secrets and put the target environment as Jenkins variable?

Kind regards,
Sebastian

VishwaGopalkris
Product and Topic Expert
0 Kudos

Hi sebastian.gesiarz ,

Thanks for reaching out; my answers are inline below, prefixed with "VG:".

  • how to apply this scenario to the standalone DWC without the requirement to use the CAP model?
    • is it a prerequisite for the DWC pipeline setup to enable Service Broker Mapping to enable SQL data warehousing even if CAP is not needed? VG: Not required if you are using only DWC pipeline standalone. I am also trying to see if I can integrate CAP, Business Application Studio and DWC in some way; this may be another blog if I make headway.
    • if not - would the idea be as below?
      • create a new Jenkins server as explained in https://ccfenner.github.io/jenkins-library/guidedtour/  & connect it to BTP / Cloud Foundry VG: ok
      • Run the Jenkinsfile pipeline with .pipeline/config.yml. The pipeline would move the projects between the landscape using Dockerfile service with Build.js & Deploy.js steps.VG: ok
      • Take the SPACEQA: 'SP_PROJECTQA' as the variable from Jenkins for the user to decide which space he wants to move. VG: Jenkins variable or variable under piper with config.yml. See which one is more beneficial for you.
      • Do you think it would be a good idea to extend the build & deploy to update single objects selected from the Space instead of selecting all?
        It seems that there is a great limitation because DWC does not store any changelog of the objects so it's not possible to provide timestamps in the pipeline based on which objects changed in the selected time window would be moved. VG: yes single object would be good; I left a variable ENTITIES under .pipeline/config.yml just for that. @dwc-cli supports individual entity updates. 
  • at which point in pipeline #2 do you update the GIT repo for DWC? or did you mean the Jenkins repo? VG: It's a Jenkins repo; since there is no object-level git integration in DWC, this is a kind of workaround approach. After building QA space, a copy of DEV space metadata could be updated to GIT; it would be helpful if you want to quickly develop something from the command line and move through the pipelines or similar use cases.
  • in the Build.js you update the space IDs and so on because the DEV and QA are on the same tenant, how would you set it up if they were on different tenants? Would you pass all the credentials as secrets and put the target environment as Jenkins variable? VG: if DEV and QA are on different tenants, I can retain the same names on the HANA cloud HDI side and override only the 'database id's for the corresponding tenants in cTMS. On the DWC side, one possibility is to retain the same space name, but I assume the users will be different, right? QA and test users with other privileges? Those have to be handled in any case in Build.js.

Best Regards,

Vishwa

wounky
Participant
0 Kudos

Thank you, Vishwa. Everything is clear and the answer is much appreciated.

former_member867525
Discoverer
0 Kudos
Hey Vishwa,

 

Is there a possibility to see your repository(ies) that this blog post is built upon?

I need to see the structure especially of the DWC/Jenkins repo.

Best and thank you very much,

Lars
VishwaGopalkris
Product and Topic Expert
0 Kudos
Hi Lars,

Would it be helpful if I shared an image of the repository structure? Additionally, is there a specific question you have in mind that I can help clarify?  I currently do not have any plans to publish the repository into SAP samples.

Best Regards,

Vishwa
former_member867525
Discoverer
Yes that would help a lot!

I was wondering if I could get our entire space mapped to Git. So not only Hana native objects, but also task chains or data flows or schedules.

I imagine that my space, including all pipelines and objects etc., is my master branch. I pull a feature branch and edit a view and a data flow. Then I make a merge request and after a successful merge the innovations are deployed. Just like a normal Software Development project e.g. in Python.

Would that work? If yes how? I still don't have an idea for this, hence my question if I can look at your repo 🙂

Thanks and best regards,
Lars
VishwaGopalkris
Product and Topic Expert
0 Kudos

Hi Lars,

The idea of mapping an entire space to Git is good; that is where @sap/datasphere-cli should be heading. I started working on something similar for Datasphere exercises with end-to-end scenarios: one space mapped to Git with all entities across all layers (acquisition, harmonization, reporting and consumption). But as of now there are limitations, as support for more and more objects (data flows, analytical models and some more) is still being added. Below is the message you will get if you try to deploy using the CLI.

# The following objects do not support mass deployment:
# - E/R models, Intelligent Lookup
# - Perspectives, Consumption Models, Fact Models
# - Generated objects (such as the time dimensions)
# - Any object with a status of "Design-Time Error" or "Run-Time Error"
# - Any shared object from a Space you aren't a member of
Attached is the screenshot of the Datasphere Jenkins repo. I hope it helps.
Best Regards,
Vishwa

(Screenshot: repository structure of the Datasphere Jenkins repo)

didierheck
Advisor
0 Kudos

Hi,

Is it possible with your approach to just automate a CI/CD pipeline for SAP DSP Replication Flows (path-through) ?

Regards,

Didier