nunomcpereira
As a follow-up to my blog https://blogs.sap.com/2023/02/02/sap-cpi-ci-cd-from-from-zero-to-hero/, I received very valuable feedback that I should give project "Piper" a try.

I wanted to try Piper on Kyma. Since we already have our own pipelines and custom shared libraries, I followed this option hoping to get both Piper and our pipelines working in the same container, which I believe is the most valuable scenario for our team. Also, ending up with a Docker image containing everything needed is a much more portable solution than installing each piece of software one by one on a new server.

The Piper documentation (https://www.project-piper.io/infrastructure/overview/) classifies deployment on Kubernetes as experimental, so make sure to test it properly if you use this for your productive environment. Also a disclaimer: I'm no expert on Kyma or Piper (this was my first exposure to both), so my rough journey may simply be due to lack of knowledge, which is fair. Nevertheless, I think the whole process could be documented better, and I hope this blog draws enough attention that someone contributes a step-by-step procedure to the documentation.

1st Attempt - Using the deprecated helm chart


I checked the official documentation, which points to Helm charts for installing Piper on Kyma. From my understanding, Helm charts are just highly configurable recipes that import all the Kyma/Kubernetes resources via YAML files. You can then override the chart's values at install time, either on the command line or via an extra override file.
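For example, both override styles look roughly like this (the override file name is illustrative):

helm install devops jenkins/jenkins --set controller.tag=latest
helm install devops jenkins/jenkins -f my-overrides.yaml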


Official Piper installation on Kubernetes documentation


If you follow the link for the helm chart, you land on a deprecated GitHub repo. The deprecated helm chart does indeed have a Master.Image value that you can supply, but I wasn't able to determine the URL of this deprecated repo to add it via "helm repo add piperofficial <repourl>". If you know the answer, just let me know in the comments. I didn't focus too much on this since the helm chart was marked as deprecated anyway, so I moved on to the new one.

2nd Attempt - Using the new official helm chart


If you follow the link to the new helm chart, you'll end up at https://github.com/jenkinsci/helm-charts. I was able to add the repo with:
helm repo add jenkins https://charts.jenkins.io

and then install it via:
 .\helm.exe install devops jenkins/jenkins --set namespaceOverride=piper

The execution was successful: I was able to open Jenkins, authenticate, and confirm in the system settings that the Piper shared library was there. The problem: the image is the standard jenkins/jenkins image without the Piper library installed. If, on top of this, I had to manually install all the Jenkins plugins required by Piper plus the Piper library itself, it would be too much effort. So I removed everything and tried again, this time overriding the image and tag of the helm chart to the Piper one:
 .\helm.exe install devops jenkins/jenkins --set namespaceOverride=piper --set controller.image=ppiper/jenkins-master --set controller.tag=latest
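The same overrides can also be kept in a small values file (a sketch, assuming the chart version accepts controller.image and controller.tag as plain values, as the --set flags above imply; the file name is illustrative):

controller:
  image: ppiper/jenkins-master
  tag: latest

.\helm.exe install devops jenkins/jenkins --set namespaceOverride=piper -f piper-values.yaml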

Everything was deployed but the pod didn't start... Why?


Jenkins pod log


The pod logs mention problems with Jenkins plugin dependencies... The last thing I want to deal with is plugin version mismatches. From what I understand, the docker images ppiper/jenkins-master and jenkins/jenkins (which the helm chart was built for) use different Jenkins runtime versions and ship different plugins. That is normal, but it led me to conclude that this approach would not work without a deeper look into solving these dependency/version/compatibility issues. If you've found a simple way to make it work via this approach, please let me know in the comments which steps you followed.
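If you want to reproduce the diagnosis, the pod state and startup log can be inspected like this (the pod name is illustrative, take the real one from the get pods output):

kubectl get pods -n piper
kubectl logs devops-jenkins-0 -n piper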

3rd Attempt - Create it from scratch


I gave up on the official documentation; the only solution that worked for me was to create everything from scratch on Kyma. Below you can find the artifacts I created (I used a Deployment, whose ReplicaSet mounts the Jenkins home from a PersistentVolumeClaim, instead of the StatefulSet approach the helm chart uses).
apiVersion: v1
kind: Namespace
metadata:
  name: piper
  labels:
    app.kubernetes.io/name: piper
    istio-injection: enabled
    kubernetes.io/metadata.name: piper
spec:
  finalizers:
    - kubernetes
status:
  phase: Active
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: piperpvc
  namespace: piper
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: piperapp
  namespace: piper
  labels:
    app: piperapp
spec:
  replicas: 1
  selector:
    matchLabels:
      templatename: piperapp
  template:
    metadata:
      labels:
        templatename: piperapp
        app: piperapp
    spec:
      containers:
        - name: pipercontainer
          image: ppiper/jenkins-master
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          volumeMounts:
            - name: jenkins-volume
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-volume
          persistentVolumeClaim:
            claimName: piperpvc
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
  name: piperappserv
  namespace: piper
  labels:
    app: piperapp
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: piperapp
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
status:
  loadBalancer: {}
---
apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  finalizers:
    - gateway.kyma-project.io/subresources
  labels:
    app: piperapp
  name: piperapirule
  namespace: piper
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  host: piperapihost
  rules:
    - accessStrategies:
        - config: {}
          handler: noop
      methods:
        - GET
        - POST
        - PUT
        - DELETE
        - OPTIONS
        - PATCH
      path: /.*
  service:
    name: piperappserv
    port: 80

To apply it, assuming you have a file named fullyaml.yaml with the contents above, just run:
kubectl apply -f fullyaml.yaml
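You can then watch the rollout and check that the resources came up (all names as defined in the manifests above):

kubectl rollout status deployment/piperapp -n piper
kubectl get pods,svc,apirules -n piper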

I've tested this image and was able to execute Piper commands, as well as force termination of the pods to verify that the state survived their automatic recreation. Although this was done on my trial account, I believe it will work the same way on a non-trial one.
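A note on credentials, since this also comes up in the comments below: the initial admin password is written to the pod log during the first startup. Two ways to retrieve it (the secrets file path assumes the image follows the standard Jenkins setup wizard):

kubectl logs deploy/piperapp -n piper | grep -i -A 2 password
kubectl exec -n piper deploy/piperapp -- cat /var/jenkins_home/secrets/initialAdminPassword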


Jenkins with piper working on Kyma


I was able to run piper integrationArtifactDeploy and it worked without problems (a minimal Jenkinsfile sketch follows the list below). Now, if I wanted to keep using our current shared libraries with the GitHub Actions approach, I would need to either:

  1. Break our shared libraries into small individual pieces that can be executed separately (this would be a lot of work, and the result would be at least 10x more files than I have now). The problem: our shared libraries perform many operations (reading, writing and zipping files, parsing JSONs or delegation tables, creating backups on the OS). I'm not sure how to put all of this together in a GitHub pipeline without it becoming a monster file. It would need to be split in a very smart way to encapsulate many of these low-level commands.

  2. Leave them as they are (doing all the logic inside). This would be easier, since the GitHub pipeline would only reference each of our custom shared libraries (one for backup, one for documentation, one for testing, etc.). The GitHub pipeline file would be cleaner; on the other hand, the potential for reuse could drop (not everyone wants to sync with Crucible, for instance).
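For reference, this is roughly how the Piper step I ran looks in a Jenkinsfile on this setup (a minimal sketch; 'piper-lib-os' is the shared library name pre-configured in the image, and the step parameters are illustrative and must match your CPI tenant and Jenkins credentials):

@Library('piper-lib-os') _

node {
    stage('Deploy integration artifact') {
        // Deploys an integration flow to the CPI tenant configured for this step;
        // the flow ID below is a placeholder.
        integrationArtifactDeploy(
            script: this,
            integrationFlowId: 'MyIntegrationFlow'
        )
    }
}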


Next steps



  • Give the CI/CD BTP service a chance and see how well we can integrate it with our own pipelines in an advanced usage of this service. UPDATE: with this service you can either configure your steps from the UI or have them read from a GitHub pipeline; nevertheless, the execution always runs on a managed internal Piper installation that, AFAIK, we don't have access to. So this option would not allow us to use our own shared libraries.

  • Continue to evaluate the potential benefit of using the official Piper libraries instead of the regular Jenkins pipelines we already have (right now I have serious doubts it would be worth the migration effort).

  • I was thinking it would be pretty cool to provision everything in an automated way: enable the Kyma runtime, fetch the kubeconfig file, and apply a helm chart or a YAML file (such as the one above) in a completely automated process without user intervention. I've checked the BTP setup automator (https://github.com/SAP-samples/btp-setup-automator). In the FAQs you can find a reference to Kyma (https://github.com/SAP-samples/btp-setup-automator/blob/main/docs/FAQ.md) where it is mentioned that, because of kubelogin, you would need a browser open, so most likely you would need to break it into two processes and do this step manually in between (see the sketch below), which is not that great. If you have a success story with this, let me know.
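Concretely, the browser-bound kubelogin step splits the flow into two parts, roughly like this (a sketch; the kubeconfig path is illustrative):

# part 1: btp-setup-automator provisions the subaccount and enables the Kyma runtime
# manual step: download the Kyma kubeconfig (kubelogin opens a browser for authentication)
# part 2: point kubectl at the downloaded kubeconfig and apply the manifests
export KUBECONFIG=~/Downloads/kubeconfig.yaml
kubectl apply -f fullyaml.yaml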


Conclusion


As you can see, my installation journey was not smooth; I'm not sure whether that was due to my lack of knowledge of Kyma, documentation that is not that great, or both. To be fair, I was able to use Piper on premise with the cx server, which is also the recommended approach. But I was thinking that if we invest in migrating to Piper, we should also get rid of the maintenance costs of our on-premise server. I'm not sure if there's a way to use the cx server on Kyma. If you have experience with that, please comment.
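For comparison, the on-premise route I mentioned boils down to two commands (as documented by the Piper project at the time of writing; check the current cx server docs before copying):

docker run -it --rm -u $(id -u):$(id -g) -v "${PWD}":/cx-server/mount/ ppiper/cx-server-companion:latest init-cx-server
./cx-server start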
7 Comments
woongkipark
Really Nice
nunomcpereira
Thanks Park Woongki 🙂
hugofigueiredo
Hello Nuno,

How did you get access to the credentials? I can't find them to log in.

Other than that, great job, very helpful!
nunomcpereira
Hi Hugo,

Thanks, hope this tutorial can help you.

Nuno Pereira
gregorw
Hi Nuno,

I think Hugo is asking how to get the credentials for the Jenkins Admin login. Can you provide any information? I'm also stuck there.

Best Regards
Gregor
PA
Hi Gregor,

The initial password will be created while the container is set up. You will find it in the logs of the created Pod.

Best regards, Peter
MaximFuchs
Hi Peter,

unfortunately I can't find it in the logs; maybe I'm looking in the wrong place. Could you specify where we have to look?

Best regards,

Maxim