


In this post, we continue our CI/CD journey. We have already explored the main concepts (part 1, including what CI/CD is) and the ideas behind our sample application, along with the design principles for the pipeline when HANA is involved (part 2). Now we will show you what the plumbing looks like for one of the microservice pipelines.


The image above shows the CI/CD flow we will implement. When the app developer commits code, a trigger in Cloud Build picks it up; Cloud Build then builds, tests and deploys our application. Let's focus initially on a single-environment pipeline, since expanding it to multiple environments later is quite simple.


 

Demo app recap


Before we dive into the pipeline, let’s quickly recap what our application looks like:




As Lucia Subatin explained in part 2, the application has four major components:




  1. Frontend: the web interface; gives the user access to the application.

  2. Backend: the translation service.

  3. DAL: the data access layer.

  4. Database: SAP HANA; stores our data.


The diagram above shows how these components interact with each other. For more details, check out part 2 of this series.


 

Where do I even start?


Now you might be thinking: That is great! Where do we start? How do I pick the first microservice to become CI/CD’ed? (yes, I just invented a word… Poetic License FTW!!!)


When first starting a CI/CD pipeline for an existing application, it might be a challenge to know how to begin dissecting the beast. Look for the following traits in a service:




  • Fairly independent, meaning it doesn't have to call many other services within the same application (it's fine for it to be a dependency of other services)

  • Good test coverage across unit, functional and integration tests (yes, they are all important)

  • Relatively low overall complexity


Take another peek at the diagram above: which service do you think is the best candidate to go first? Take a guess!


We are starting with the backend service since it's quite isolated and well tested. When automating deployments, always try to improve your tests to ensure the reliability of your application; your future self will thank you.
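
As an illustration, here is the kind of table-driven Go unit test the pipeline's go test step would run. The normalizeInput helper is a hypothetical example for this sketch, not actual backend code:

package translate

import (
	"strings"
	"testing"
)

// normalizeInput is a hypothetical helper that cleans up user text
// before it is sent to the translation API.
func normalizeInput(s string) string {
	return strings.Join(strings.Fields(s), " ")
}

// TestNormalizeInput is the kind of table-driven test that the
// pipeline's unit test step (go test -v) would execute.
func TestNormalizeInput(t *testing.T) {
	cases := []struct {
		name, in, want string
	}{
		{"collapses inner spaces", "hello   world", "hello world"},
		{"trims edges", "  hi  ", "hi"},
		{"empty input", "", ""},
	}
	for _, c := range cases {
		if got := normalizeInput(c.in); got != c.want {
			t.Errorf("%s: normalizeInput(%q) = %q, want %q", c.name, c.in, got, c.want)
		}
	}
}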


 

Build Containers


The backend depends on the Google Translate API SDK for Golang; this means we would always need to install that SDK with a go get before we can build. We also want to run go test and go vet as part of our test process.
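
For context, here is a minimal, hedged sketch of how a Go service can call that SDK; this is generic cloud.google.com/go/translate usage, not the actual backend code:

package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/translate"
	"golang.org/x/text/language"
)

func main() {
	ctx := context.Background()
	// The client picks up credentials from GOOGLE_APPLICATION_CREDENTIALS.
	client, err := translate.NewClient(ctx)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}
	defer client.Close()

	// Translate one string into Spanish using the v2 Translation API.
	res, err := client.Translate(ctx, []string{"Hello, world"}, language.Spanish, nil)
	if err != nil {
		log.Fatalf("translating: %v", err)
	}
	fmt.Println(res[0].Text)
}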


Pipeline steps in Cloud Build run inside containers; this means we can simplify our build by creating a custom container with all the tools we need. Let’s go ahead and create a build container for the backend and store it in the container registry for later use within the pipeline.


Why bother, you ask? Eventually you will want to run tens or even hundreds of builds a day, so those few seconds get magnified; this drives the need to make builds as fast as possible. A build container allows us to skip the download-and-install step for build dependencies, saving time.


Here is what our backend build container Dockerfile looks like:



# build stage
FROM golang:alpine
RUN apk add --no-cache build-base git bzr gcc
RUN go get -u cloud.google.com/go/translate
ENTRYPOINT [ "/bin/ash" ]
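
To store it in the registry, you can build and push the image with Cloud Build itself; for example (run from the folder containing this Dockerfile, with $PROJECT_ID standing in for your GCP project id):

# build the image remotely and push it to the Container Registry
gcloud builds submit --tag gcr.io/$PROJECT_ID/be-build-env .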

Bonus tip: you may decrease the build time even further by implementing Kaniko Cache.
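
For reference, a hedged sketch of what such a Kaniko-based build step could look like in cloudbuild.yaml (the destination image name and cache TTL are placeholders):

steps:
# Build with Kaniko; unchanged layers are reused from the cache
- id: 'app-hana-be: kaniko build'
  name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - '--destination=gcr.io/$PROJECT_ID/app-hana-be'
  - '--cache=true'
  - '--cache-ttl=6h'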


 

The Pipeline Configuration


Now that we have the environment, let’s check what the pipeline looks like:




For the master branch we will:




  1. Run static tests (go vet)

  2. Run unit tests (go test)

  3. Run functional tests (also go test)

  4. Build the container (docker)

  5. Push the container to the registry (docker)

  6. Deploy the container to the environment (gcloud)

  7. Run integration tests (go test)


The Cloud Build pipeline is configured via a YAML file called cloudbuild.yaml, as follows (gist):





# Licensing CC BY-SA 4.0
steps:
# CI Pipeline: static tests
- id: 'app-hana-be: static tests'
  name: 'gcr.io/$PROJECT_ID/be-build-env'
  waitFor: ['-']
  dir: translate
  args:
  - -c
  - >
    echo "running: go vet -v" &&
    go vet -v

# CI Pipeline: unit + functional tests
- id: 'app-hana-be: unit tests'
  name: 'gcr.io/$PROJECT_ID/be-build-env'
  waitFor: ['app-hana-be: static tests']
  dir: translate
  args:
  - -c
  - >
    echo "running: go test -v" &&
    go test -v

# CI Pipeline: build container
- id: 'app-hana-be: build container'
  name: 'gcr.io/cloud-builders/docker'
  waitFor:
  - 'app-hana-be: static tests'
  - 'app-hana-be: unit tests'
  dir: translate
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app-hana-be', '.']

# CD Pipeline: push container to GCR
- id: 'app-hana-be: push container'
  name: 'gcr.io/cloud-builders/docker'
  waitFor:
  - 'app-hana-be: build container'
  dir: translate
  args: ['push', 'gcr.io/$PROJECT_ID/app-hana-be']

# CD Pipeline: deploy to Cloud Run
- id: 'app-hana-be: deploy container'
  name: 'gcr.io/cloud-builders/gcloud'
  waitFor:
  - 'app-hana-be: push container'
  dir: translate
  args:
  - 'run'
  - 'deploy'
  - 'app-hana-be-cicd'
  - '--image'
  - 'gcr.io/$PROJECT_ID/app-hana-be'
  - '--region'
  - 'us-central1'
  - '--platform'
  - 'managed'
  - '--allow-unauthenticated'

# CI Pipeline: integration tests would go here.



Of course, the details will change when implementing the pipeline for other technologies; however, the overall flow will remain the same. For example, when testing the DAL layer, which uses NodeJS, the command will be npm test instead of go test, and so on.
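
As a hedged sketch (the dal folder name is an assumption for illustration), the equivalent unit test step could use the stock npm builder:

# Sketch: unit tests for the NodeJS DAL, mirroring the Go steps above
- id: 'app-hana-dal: unit tests'
  name: 'gcr.io/cloud-builders/npm'
  dir: dal
  args: ['test']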


Some important considerations here are:




  1. Each microservice has an individual pipeline (cloudbuild.yaml)

  2. Each microservice generates its artefacts independently

  3. Each microservice should test the integration with its dependencies.


Remember this image Lucia explained in part 2?




 

Look at the cloudbuild.yaml closely and you will see a dir: translate on each step; this makes every step run inside the backend microservice's folder. (Restricting the build to changes in that folder is handled by the trigger's file filter, which we cover below.) There are many ways of structuring this, so pick the version that best suits your release and development needs. Also, notice how the cloudbuild.yaml file sits inside the folder of its relevant microservice; a possible layout is sketched below.
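
Concretely, the repository layout could look something like this (all folder names except translate are assumptions for illustration):

repo/
├── translate/              # backend microservice (Go)
│   ├── cloudbuild.yaml     # its own pipeline definition
│   ├── Dockerfile
│   └── ...
├── dal/                    # data access layer (NodeJS)
│   ├── cloudbuild.yaml
│   └── ...
└── db/                     # HANA artefacts (HDI)
    ├── cloudbuild.yaml
    └── ...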


 


Creating the Trigger


It's very straightforward to create a trigger in Cloud Build once we have the YAML file created:




 

Notice that we configured the glob pattern to match only changes applied inside our microservice folder; in the case of our backend, that is translate/**. This prevents the trigger from firing when we commit to other folders in the repo.


Also, see how our regex matches a branch named master; for non-master branches, you can check the “Invert Regex” checkbox. Finally, the Cloud Build configuration file points specifically to our microservice pipeline definition: translate/cloudbuild.yaml.
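
If you prefer to script the trigger instead of clicking through the console, a hedged example with gcloud (the owner and repo names are placeholders):

# create a trigger that fires only for master commits touching translate/
gcloud beta builds triggers create github \
  --repo-owner="<github-org>" \
  --repo-name="<repo-name>" \
  --branch-pattern="^master$" \
  --build-config="translate/cloudbuild.yaml" \
  --included-files="translate/**"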


By now you can pretty much visualize how you would create the pipeline for the other microservices too!


To run your pipeline, commit to your repository in the correct folder; you also have the option to run it manually. The build logs can be viewed by going to the Console and looking at the build execution, like so:
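
For example, from the repository root you can submit the build by hand and then inspect it from the command line:

# run the backend pipeline manually
gcloud builds submit --config=translate/cloudbuild.yaml .

# list recent builds, then stream the log of a specific one
gcloud builds list --limit=5
gcloud builds log <BUILD_ID>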




The pipeline for HANA artefacts


Credit / Source: icanhas.cheezburger.com

The pipeline for deploying the HANA artefacts is mostly the same, but different … but still mostly the same.


As you saw in part 2, this pipeline relies on an HDI container and a special build container with the correct tooling to allow us to connect and deploy to the instance. The actual cloudbuild.yaml file is rather simple:
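
The gist embed did not carry over here; as a hedged sketch, a single-step file along these lines would match the description (the hana-build-env image name, the db folder and the bash entrypoint are all assumptions):

# Sketch only: the actual file lives in the original gist
steps:
# Deploy the HANA artefacts through HDI using the prepared build container
- id: 'app-hana-db: deploy HDI artefacts'
  name: 'gcr.io/$PROJECT_ID/hana-build-env'
  entrypoint: bash
  dir: db
  args:
  - -c
  - >
    npm install &&
    npm start -- --exit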




Notice how it has a single step (for now). It mostly relies on the prepared build environment and tools like hana-cli, with the HDI connection context supplied via the environment. All of that is accomplished by creating a specific build container (much like our backend build container).


This is what the build container Dockerfile looks like (gist):






# Licensing CC BY-SA 4.0
FROM ubuntu

WORKDIR /usr/src/app

RUN apt-get update && \
    apt-get install -y apt-utils && \
    apt-get install -y openjdk-8-jre && \
    apt-get install -y git && \
    apt-get install -y less && \
    apt-get install -y vim && \
    apt-get install -y nodejs && \
    apt-get install -y npm && \
    apt-get install -y ca-certificates-java

RUN update-ca-certificates -f

# Setup JAVA_HOME -- useful for docker commandline
# (ENV persists into every later layer, so no export is needed)
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/

# download from https://tools.hana.ondemand.com/#hanatools
COPY hanaclient-2.4.177-linux-x64.tar.gz .

RUN tar xzf hanaclient-2.4.177-linux-x64.tar.gz
RUN client/hdbinst --path /usr/src/app

# setup Tom's CLI
# https://github.com/SAP-samples/hana-developer-cli-tool-example
RUN git clone https://github.com/SAP-samples/hana-developer-cli-tool-example
# each RUN starts back in WORKDIR, so chain the cd with the install
RUN npm config set @sap:registry=https://npm.sap.com && \
    cd hana-developer-cli-tool-example && \
    npm install && npm link

RUN ln -s /usr/src/app/hdbsql /usr/sbin/hdbsql

WORKDIR /usr/src/app




As a side note, this Dockerfile is not optimized, so you may want to tweak it a bit before using it.


Notice how we set up the container with the required tooling to connect to HANA via HDI. You can build this container in the same way we created our backend build container. This step is important because it essentially gets us ready to connect and deploy our db-things via HDI.


Another important difference is how the npm start command is configured. We added the exit flag so it won't wait for a human to press Ctrl+C; without that, the pipeline would hang and time out.
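
With the standard @sap/hdi-deploy module, that amounts to passing --exit through npm start; a minimal hedged example:

# run the HDI deployer once and terminate instead of waiting for Ctrl+C
npm start -- --exit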




These are the main differences between a regular microservice pipeline and the HANA deployment pipeline using HDI containers.


Now you should be able to build your very own pipeline for your microservices and your HANA artefacts! Enjoy!


 

Lucia Subatin and Fatima Silveira.


 

(Originally posted on Medium.com)