Gitlab CI/CD on Openshift with Quay integration

Openshift has pretty cool built-in CI/CD solutions (based on Jenkins or Tekton), but it can just as easily be used with external CI/CD tools. I recently had to create a small POC for a customer who wanted to see how Openshift integrates with Gitlab, and who was also curious how to incorporate container scanning and the Quay registry. I figured I might as well share this setup with the rest of the world, so below you can find the steps to set it up. Let me know what you think in the comments!

You can find the source code here: https://gitlab.com/kdubois/quay-openshift-gitlab-ci

Gitlab CI/CD on Openshift (with Quay container scanning integration)

This project demonstrates how a Python application could be built and deployed to Openshift using Gitlab CI/CD. It also demonstrates how a Quay container registry could be leveraged to store and scan the container image, and how the deployment can be interrupted if any vulnerabilities are discovered in the image.
Gitlab provides a certified operator to deploy a Gitlab runner, which is the recommended way to integrate Gitlab CI/CD with Kubernetes securely.

Steps to reproduce:

Clone the project

Clone this repository into your own Gitlab repository. It contains a very basic Flask app; a .gitlab-ci.yml file that defines the pipeline; a kubefiles folder with the Kubernetes yaml files referenced in the instructions below; a Dockerfile which can be used to build a Python base image without vulnerabilities; and a Dockerfile in the gitlab folder that's used for the runner's container image, which includes utilities such as Skopeo and the Openshift client tools (oc, kubectl, kn).
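To give a sense of the pipeline's overall shape before diving into the setup, here is a minimal sketch of what a .gitlab-ci.yml along these lines could look like. The stage and job names are illustrative assumptions, not the repository's exact pipeline (the push and deploy jobs are omitted here; a push sketch follows in the Quay section below):

# Illustrative sketch only; stage and job names are assumptions.
stages:
  - build
  - push
  - scan
  - deploy

build-image:
  stage: build
  script:
    # trigger the Openshift build defined in kubefiles/img-build.yaml
    - oc start-build python-app -n python-project --follow

check-scan:
  stage: scan
  script:
    # query Quay's scan result; exits non-zero on high/critical findings
    - python scanresult.py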

Configure Gitlab

First, we’re going to create a project for the Gitlab CI/CD tooling. Log in to Openshift and create a namespace/project for it: oc new-project gitlab
In this project, we’ll install an operator that’s going to manage the lifecycle of a Gitlab runner instance. To do this, go to the OperatorHub in your Openshift cluster UI (Administrator > Operators > OperatorHub), search for ‘Gitlab’ and install the Operator. Alternatively you can also install the operator by applying the following yaml: oc apply -f kubefiles/gitlab-operator.yaml
Next, we’re going to deploy the actual Gitlab runner instance, but before we can do that, we need a Gitlab token so the runner can be mapped back to Gitlab.
To create/retrieve a Gitlab token, log in to your Gitlab repository and go to Settings > CI/CD > Runners (expand). Under 'Specific Runners' -> 'Set up a specific Runner manually' you will find the token.
The runner is configured to use the token through a 'secret' that's referenced in the runner config. To create the secret: oc create secret generic runner-token-secret --from-literal runner_registration_token=__YOUR_TOKEN__ -n gitlab
Now we can deploy the runner with oc apply -f kubefiles/gitlab-runner.yaml (you can also do this through Installed Operators > Gitlab > 'Create instance' in the UI).
Go back to Gitlab and verify that the runner has been registered (Settings > CI/CD > Runners > 'Runners activated for this project'). If everything went well you should see the runner in the list. Click the edit icon and check the 'Run untagged jobs' checkbox.
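For reference, the Runner custom resource in kubefiles/gitlab-runner.yaml looks roughly like the sketch below. The field values are assumptions based on the Gitlab runner operator's API, not a verbatim copy of the file:

# Rough sketch of a Runner custom resource; values are assumptions.
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-runner
  namespace: gitlab
spec:
  gitlabUrl: https://gitlab.com    # change this if you self-host Gitlab
  token: runner-token-secret       # the secret created in the previous step
  tags: openshift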

Configure the Build

At this point we can create a project for the application we're going to deploy, e.g. oc new-project python-project
Since the runner is deployed in a different namespace (gitlab) and Openshift isolates namespaces by default, we need to explicitly allow the gitlab runner's service account access to the python-project namespace. This can be done with the following command: oc policy add-role-to-user edit system:serviceaccount:gitlab:default -n python-project (feel free to create a more specific service account; for the sake of this demo we're just going to use the default service account of the gitlab project)
One of the nice things of Openshift is that it allows you to build images directly on the cluster, so you don’t need to have a local container build/run environment and pull secrets etc configured. For this demo, you can just apply the img-build.yaml that’s included in the project, but if you’d like to learn more about Openshift builds, check out the documentation: https://docs.openshift.com/container-platform/4.5/builds/understanding-image-builds.html.
The img-build.yaml tells Openshift to create a container image using the location of this git repository, and a given Python base image. Apply it with oc apply -f kubefiles/img-build.yaml -n python-project
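For those curious what such a build definition contains, a BuildConfig along the lines of kubefiles/img-build.yaml would look roughly like this. Treat it as a sketch; the names, git URL and base image below are assumptions, not the file's exact contents:

# Hypothetical sketch of a BuildConfig like img-build.yaml; values are assumptions.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: python-app
  namespace: python-project
spec:
  source:
    type: Git
    git:
      uri: https://gitlab.com/kdubois/quay-openshift-gitlab-ci.git  # this repo
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: DockerImage
        name: registry.access.redhat.com/ubi8/python-38  # assumed base image
  output:
    to:
      kind: ImageStreamTag
      name: python-app:latest  # requires a matching ImageStream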

Quay

The pipeline is configured to push the built image to a Quay registry. While the built-in Openshift registry does a great job for basic registry functionality, many organizations opt to use an external registry, especially when they have multiple clusters. The external registry can live inside Openshift, but for this use case it's a Quay.io repository. One of the nice things about Quay is that it has a built-in container scanner. Our demo pipeline checks the result of this scan (using the custom scanresult.py script included in this repository) before promoting and deploying the image; if any high or critical vulnerabilities are found, the pipeline is marked as failed instead.
Connect Quay to Gitlab: In Quay.io, log in (or create a free account if you don't have a login yet) and create a new repository 'python-app'. If you don't have a robot account yet, go to User Settings -> Robot Accounts and create one. Then click on the new account, and click on 'Robot Token'. With these credentials, you will need to create two environment variables in Gitlab. In the Gitlab repository, go to Settings -> CI/CD -> Variables and add a new variable 'REG_CREDS', whose value is the robot username and the token, separated by a colon. Create a second variable 'REGISTRY_NAMESPACE' to set the Quay namespace, which should correspond to your Quay username (not the robot username).
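To illustrate how these variables might be consumed, here is a hypothetical sketch of a pipeline job that copies the internally built image to Quay with Skopeo. The job name, stage and image paths are assumptions; only REG_CREDS and REGISTRY_NAMESPACE come from the setup above:

# Hypothetical push job; names and registry paths are assumptions.
push-to-quay:
  stage: push
  script:
    # copy the image from the internal Openshift registry to Quay;
    # REG_CREDS is the robot account's user:token pair created above.
    # Depending on your registry setup you may also need --src-tls-verify=false.
    - >
      skopeo copy
      --src-creds="$(oc whoami):$(oc whoami -t)"
      --dest-creds="$REG_CREDS"
      docker://image-registry.openshift-image-registry.svc:5000/python-project/python-app:latest
      docker://quay.io/$REGISTRY_NAMESPACE/python-app:latest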

Set up Knative/Openshift Serverless

The pipeline is configured to deploy the python app in a 'serverless' way. This is a 'bonus' capability of Openshift, deployable as an operator. To deploy it, go to the OperatorHub in your Openshift instance and install the 'Openshift Serverless' operator. Once it's installed, create a new 'knative-serving' project, and in it go to Installed Operators > Openshift Serverless Operator > Knative Serving > Create Knative Serving. The default options should be fine for this use case.
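If you prefer yaml over the UI, the equivalent custom resource looks roughly like the sketch below (the apiVersion may differ depending on your operator version, so treat this as an assumption):

# Minimal KnativeServing resource with default options.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving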

Kick off the build

Everything should be in place to run the CI/CD pipeline now. In your Gitlab repository, go to Pipelines and click the 'Run Pipeline' button. You'll notice that the pipeline starts running (if it's in 'stuck' status, make sure you checked 'Run untagged jobs' for the runner as described in the 'Configure Gitlab' section above). If there are no vulnerabilities in the base image or code, the build should pass and the application gets deployed using Openshift's Serverless (Knative Serving) capability. This means the application starts up, serves incoming requests, scales down to zero when it sits idle, and scales back up as soon as new requests come in.
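Under the hood, the serverless deployment is described by a Knative Service resource. Below is a rough sketch of what such a manifest looks like; the service name and image path are assumptions:

# Rough sketch of a Knative Service; names and image path are assumptions.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: python-app
  namespace: python-project
spec:
  template:
    spec:
      containers:
        # the image the pipeline pushed to Quay; replace the namespace placeholder
        - image: quay.io/YOUR_QUAY_NAMESPACE/python-app:latest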
That’s it!

Extras

ArgoCD Automated Deploy:

If you’d like to manage the application configuration with ArgoCD:
To install ArgoCD on your cluster:
Install the ArgoCD (community) operator through the UI, or with oc create -f argocd-operator.yaml -n argocd
Deploy ArgoCD instance: oc create -f kubefiles/argocd.yaml -n argocd
Once it's deployed, get the admin password (the secret data is base64-encoded, so pipe it through base64 -d): oc get secret argocd-cluster -n argocd -o jsonpath="{.data.admin\.password}" | base64 -d and log in to the UI with the route that was exposed: oc get route argocd-server -n argocd -o jsonpath="{.spec.host}"
Once logged in to ArgoCD, create a new app and fill out the form with the required information. Note that 'Project' here is ArgoCD terminology, so you can set it to 'default'. For Destination, if you deployed ArgoCD on the same OCP cluster you can use "https://kubernetes.default.svc"; otherwise use your cluster's info. Set the namespace to "python-project".
At this point you can go ahead and sync the project, and it should automatically deploy the application. Feel free to remove the 'deployment' step from the .gitlab-ci.yml, since ArgoCD will now take care of the deployment.
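The same app definition can also be created declaratively instead of through the form. Below is a hypothetical Application manifest matching the values above; the repo path is an assumption:

# Hypothetical ArgoCD Application; the manifest path is an assumption.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: python-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/kdubois/quay-openshift-gitlab-ci.git
    path: kubefiles         # assumed location of the manifests to sync
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: python-project
  syncPolicy:
    automated: {}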

Vulnerability scan

To see the pipeline fail due to vulnerabilities, you can configure kubefiles/img-build.yaml to use a different (older) base image, or you can add a severity level to the threshold_list in scanresult.py on line 34 (adding 'Medium' did the trick at the time of writing).

Burst Requests to the application

Since the application leverages Knative / Openshift Serverless, it scales very rapidly based on incoming requests. To see this in action, there is a 'knburst.sh' script included in this repository. To run it, open a terminal, make sure you're logged in to your Openshift cluster and are using the python-project project, then launch the script and watch what happens to the number of pods of the application. Play around with the memory and cpu limits to see what effect they have on the number of pods that get deployed.

Custom git repo

If you would like to use a custom git repo not hosted at gitlab.com, you will need to change the URL in kubefiles/gitlab-runner.yaml
