CI/CD pipelines with OpenShift Pipelines
OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of Custom Resource Definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions.
The goal of Tekton is to create small building blocks that are reusable, composable, and declarative, all in a cloud-native environment. It uses Steps, Tasks, Pipelines, and Resources to do this, as shown below in Figure 1.
Figure 1: Tekton building blocks.
Tasks
Tasks are the building blocks of a Pipeline and consist of one or more sequentially executed Steps. Tasks are reusable and can be used in multiple Pipelines.
Steps are a series of commands that achieve a specific goal, such as building an image. Every Task runs as a Pod and each Step runs in its own container within the same Pod. Because Steps run within the same Pod, they have access to the same volumes for caching files, ConfigMaps, and Secrets.
The following example shows the apply-manifests Task.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: apply-manifests
spec:
  params:
    - default: k8s
      description: The directory in source that contains yaml manifests
      name: manifest_dir
      type: string
  steps:
    - args:
        - |-
          echo Applying manifests in $(inputs.params.manifest_dir) directory
          oc apply -f $(inputs.params.manifest_dir)
          echo -----------------------------------
      command:
        - /bin/bash
        - -c
      image: quay.io/openshift/origin-cli:latest
      name: apply
      workingDir: /workspace/source
  workspaces:
    - name: source
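Although this article runs the Task from a Pipeline, a Task can also be executed on its own through a TaskRun that binds the source Workspace to an actual volume. The following is a minimal sketch, not part of the original example; the apply-manifests-run name and the source-pvc PersistentVolumeClaim are assumptions for illustration:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: apply-manifests-run      # hypothetical name
spec:
  taskRef:
    name: apply-manifests
  params:
    - name: manifest_dir
      value: k8s
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: source-pvc    # assumes an existing PVC holding the cloned sources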
Pipelines
A Pipeline is a collection of Tasks arranged in a specific order of execution. You can define a CI/CD workflow for your application using Pipelines containing one or more Tasks.
A Pipeline definition consists of a number of fields or attributes, which together enable the Pipeline to accomplish a specific goal. Each Pipeline definition must contain at least one Task, which ingests specific inputs and produces specific outputs. The Pipeline definition can also optionally include Conditions, Workspaces, Parameters, or Resources depending on the application requirements.
The following example shows the build-and-deploy Pipeline, which builds an application image from a Git repository using the buildah ClusterTask:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
    - name: shared-workspace
  params:
    - name: deployment-name
      type: string
      description: name of the deployment to be patched
    - name: git-url
      type: string
      description: url of the git repo for the code of deployment
    - name: git-revision
      type: string
      description: revision to be used from repo of the code for deployment
      default: "release-tech-preview-2"
    - name: IMAGE
      type: string
      description: image to be built from the code
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: $(params.git-url)
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"
        - name: revision
          value: $(params.git-revision)
    - name: build-image
      taskRef:
        name: buildah
        kind: ClusterTask
      params:
        - name: TLSVERIFY
          value: "false"
        - name: IMAGE
          value: $(params.IMAGE)
      workspaces:
        - name: source
          workspace: shared-workspace
      runAfter:
        - fetch-repository
    - name: apply-manifests
      taskRef:
        name: apply-manifests
      workspaces:
        - name: source
          workspace: shared-workspace
      runAfter:
        - build-image
    - name: update-deployment
      taskRef:
        name: update-deployment
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: deployment
          value: $(params.deployment-name)
        - name: IMAGE
          value: $(params.IMAGE)
      runAfter:
        - apply-manifests
Workspace
Workspaces declare shared storage volumes that a Task in a Pipeline needs at runtime. Instead of specifying the actual location of the volumes, Workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. You must provide the specific location details of the volume that is mounted into that Workspace in a TaskRun or a PipelineRun. This separation of volume declaration from runtime storage volumes makes the Tasks reusable, flexible, and independent of the user environment.
With Workspaces, you can:
- Store Task inputs and outputs
- Share data among Tasks
- Use it as a mount point for credentials held in Secrets
- Use it as a mount point for configurations held in ConfigMaps
- Use it as a mount point for common tools shared by an organization
- Create a cache of build artifacts that speeds up jobs
You can specify Workspaces in the TaskRun or PipelineRun (see the sketch after this list) using:
- A read-only ConfigMap or Secret
- An existing PersistentVolumeClaim shared with other Tasks
- A PersistentVolumeClaim from a provided VolumeClaimTemplate
- An emptyDir that is discarded when the TaskRun completes
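As a rough illustration of these options, the following PipelineRun snippet binds three Workspaces in three different ways; the example-run and example-pipeline names, the Workspace names, and the shared-pvc and app-config resources are all hypothetical:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: example-run-          # hypothetical name
spec:
  pipelineRef:
    name: example-pipeline            # hypothetical Pipeline declaring three Workspaces
  workspaces:
    # An existing PersistentVolumeClaim shared with other Tasks
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-pvc
    # A read-only ConfigMap
    - name: app-config
      configMap:
        name: app-config
    # An emptyDir that is discarded when the run completes
    - name: scratch
      emptyDir: {}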
The following example shows a code snippet of the build-and-deploy Pipeline, which declares a shared-workspace Workspace for the build-image and apply-manifests Tasks as defined in the Pipeline.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
    - name: shared-workspace
  params:
    ...
  tasks:
    - name: build-image
      taskRef:
        name: buildah
        kind: ClusterTask
      params:
        - name: TLSVERIFY
          value: "false"
        - name: IMAGE
          value: $(params.IMAGE)
      workspaces:
        - name: source
          workspace: shared-workspace
      runAfter:
        - fetch-repository
    - name: apply-manifests
      taskRef:
        name: apply-manifests
      workspaces:
        - name: source
          workspace: shared-workspace
      runAfter:
        - build-image
Trigger
Use Triggers in conjunction with Pipelines to create a full-fledged CI/CD system in which Kubernetes resources define the entire CI/CD execution. Triggers capture external events and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources.
For example, you define a CI/CD workflow using OpenShift Pipelines for your application. For new changes in the application repository to take effect, a new PipelineRun must start. Triggers automate this process by capturing and processing change events and by starting a PipelineRun that deploys the new image with the latest changes.
Triggers consist of the following main components that work together to form a reusable, decoupled, and self-sustaining CI/CD system:
- EventListeners provide endpoints, or an event sink, that listen for incoming HTTP-based events with a JSON payload. The EventListener performs lightweight event processing on the payload using Event Interceptors, which identify the type of payload and optionally modify it. Currently, Pipeline Triggers support four types of Interceptors: Webhook Interceptors, GitHub Interceptors, GitLab Interceptors, and Common Expression Language (CEL) Interceptors.
- TriggerBindings extract the fields from an event payload and store them as parameters.
- TriggerTemplates specify how to use the parameterized data from the TriggerBindings. A TriggerTemplate defines a resource template that receives input from the TriggerBindings, and then performs a series of actions that result in creation of new PipelineResources and initiation of a new PipelineRun.
EventListeners tie the concepts of TriggerBindings and TriggerTemplates together. The EventListener listens for the incoming event, handles basic filtering using Interceptors, extracts data using TriggerBindings, and then processes this data to create Kubernetes resources using TriggerTemplates.
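A minimal EventListener sketch tying these pieces together might look like the following; the pipeline serviceAccountName and the vote-app TriggerTemplate are assumptions, and older Triggers releases use template.name instead of template.ref:

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: vote-app
spec:
  serviceAccountName: pipeline      # assumes a service account with Triggers permissions
  triggers:
    - bindings:
        - ref: vote-app             # the TriggerBinding shown below
      template:
        ref: vote-app               # assumes a TriggerTemplate of the same name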
The following example shows a code snippet of the vote-app TriggerBinding, which extracts the Git repository information from the received event payload:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: vote-app
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.url)
    - name: git-repo-name
      value: $(body.repository.name)
    - name: git-revision
      value: $(body.head_commit.id)
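The parameters extracted by this TriggerBinding can then be consumed by a TriggerTemplate through $(tt.params.*) references. The following is a rough sketch rather than the full template from the example; the generateName prefix is an assumption, and the deployment-name and IMAGE parameters plus the Workspace binding are omitted for brevity:

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: vote-app
spec:
  params:
    - name: git-repo-url
    - name: git-repo-name
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-deploy-$(tt.params.git-repo-name)-
      spec:
        pipelineRef:
          name: build-and-deploy
        params:
          - name: git-url
            value: $(tt.params.git-repo-url)
          - name: git-revision
            value: $(tt.params.git-revision)
          # deployment-name, IMAGE, and the shared-workspace binding omitted for brevity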
Condition
A Condition is a validation or check that executes before a Task runs in your Pipeline. Conditions are like if statements that perform logical tests and return either True or False. A Task executes only if all of its Conditions return True; if any Condition fails, the Task and all subsequent Tasks are skipped. You can use Conditions in your Pipeline to create complex workflows covering multiple scenarios.
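As a rough sketch of what this looks like, a Condition wraps a container check whose exit code determines the result; the file-exists name, alpine image, and path parameter below are all assumptions for illustration:

apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: file-exists                 # hypothetical name
spec:
  params:
    - name: path
      type: string
  check:
    image: alpine                   # any small image with a shell works
    script: |
      test -f $(params.path)
# A Pipeline task would then reference it, for example:
#   conditions:
#     - conditionRef: file-exists
#       params:
#         - name: path
#           value: ./k8s/deployment.yaml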
Pipeline Builder
Now that we’ve covered some basic details of Tekton, let’s dive into the Pipeline Builder, which is available on the Pipelines page in the Developer perspective of the OpenShift web console. The Pipeline Builder’s interface gives developers a visual representation of their Pipelines that they can easily modify to suit their needs. Tasks can be arranged sequentially or in parallel, as shown below in Figure 2.
Figure 2: Defining the task structure in the Pipeline Builder.
The side panel also makes it easy to edit the settings for each of your tasks. From there, you can add parameters or change the resource placeholders that are mapped to actual resources once you start the Pipeline, as shown below in Figure 3.
Figure 3: Edit individual task settings.
PipelineRuns
Once Pipelines are created, the final step is to execute them. The execution of a Pipeline is called a PipelineRun. OpenShift Pipelines makes it easy to start a Pipeline: find the one you want in the Pipelines list and select Start. A dialog then prompts you for the resources to use with this run, and your selection creates the PipelineRun and starts the execution. You can follow the status of the run either from the Pipelines dashboard or by drilling down into a single Task to see the details of each Step. This visual representation also makes it easier for developers to see where their deployments failed.
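Under the hood, starting a Pipeline from the console simply creates a PipelineRun resource. A rough YAML equivalent for the build-and-deploy Pipeline might look like the following; all parameter values and the PVC name are hypothetical:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  pipelineRef:
    name: build-and-deploy
  params:
    - name: deployment-name
      value: vote-ui                             # hypothetical value
    - name: git-url
      value: https://github.com/example/vote-ui  # hypothetical repository
    - name: IMAGE
      value: image-registry.openshift-image-registry.svc:5000/demo/vote-ui  # hypothetical image reference
  workspaces:
    - name: shared-workspace
      persistentVolumeClaim:
        claimName: source-pvc                    # assumes an existing PVC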
Figure 4: Visualization of a PipelineRun.
Figure 5: PipelineRun details visualization.
Author: Jarosław Stakun, Principal Solutions Architect
Jarosław works as a Principal Solutions Architect at Red Hat and is responsible for delivery and solution selling based on Red Hat OpenShift Container Platform and Red Hat Application Services in Central and Eastern Europe.