Oracle Cloud Infrastructure offers the Functions service, a FaaS offering for serverless functions based on container images built with the Project Fn FDK. In a previous article, I introduced the OCI DevOps Deployment Pipeline for Functions – a free service on OCI that automates the deployment of a Function from its underpinning Container Image. That Deployment Pipeline should really be used in conjunction with a Build Pipeline that takes the source code written by the developers, turns it into a Container Image, stores that image in the OCI Container Image Registry and then triggers the Deployment Pipeline to deploy the latest incarnation of the Function based on the newly built container image. In the article OCI DevOps – Build Pipelines and Code Repositories for CI I introduced the Build Pipeline feature in OCI DevOps. I will now describe and show in some detail how a build pipeline is created for a function – from source code to container image in the registry, followed by a trigger of the deployment pipeline.
The OCI DevOps Project that is the logical container for related DevOps resources is created with:
- a Git source code repository (in this case a mirror of my GitHub repository)
- an External Connection to link the source code repository to the GitHub repository (using a Personal Access Token, stored in an OCI Key Vault)
- an Artifact – a reference to the Container Image to be pushed to the OCI Container Registry (this is the same Artifact that serves as a source for a stage in the Deployment Pipeline)
- the Build Pipeline that implicitly acquires a build server, uses resources from the Code Repository (and/or clones a repository from GitHub), performs the actions required to produce the artifact(s) from the sources, then pushes the container image to the registry and finally triggers a deployment pipeline to redeploy the function on OCI with the latest container image
- a Trigger – that runs the build pipeline when a push event occurs on the main branch of the Code Repository
Let’s take a look at each of these components, then run the build pipeline in its entirety.
Code Repository
OCI DevOps Projects can contain Git code repositories or be associated with external GitLab or GitHub repositories. A third option is to have a Git repository in the OCI DevOps project that is a mirror of an external GitLab or GitHub repository. The DevOps service will periodically synchronize the mirror from the source repository.
In this article, I have created a Code Repository called cloudnative-2021-on-oci-repo. It is defined as a mirror from my GitHub repository https://github.com/lucasjellema/cloudnative-on-oci-2021.
The link to the GitHub repository to mirror from is defined through an External Connection. This connection consists of no more than a type (GitHub or GitLab are currently the options), a name and a reference to a Secret in an OCI Vault. This Secret contains the Personal Access Token that was created for the GitHub account to connect to, following the instructions in the GitHub documentation (see Resources below).
The Secret is defined in an OCI Vault like this:
The External Connection allows the Code Repository mirror mechanism to connect in an efficient way.
The default mirroring interval is 15 minutes. You can override this value to define a more relaxed or aggressive mirroring behavior. Later on we will see how we can define a trigger to kick off the build pipeline based on an event in the code repository. In the case of a mirrored repository, this event only takes place when the synchronization to the mirror repository is complete.
Artifact
The artifact(s) that the build pipeline will produce are to be defined in advance as well. In this case, the artifact is the Container Image for the function that comes out of the build pipeline and the artifact definition includes the location of the artifact in the appropriate artifact repository. In this case, the repo is the Container Image Registry. The artifact definition looks like this:
The fully qualified path is derived from the repository in the Container Image Registry, combined with the region identifier – iad in my case, for Ashburn – which yields iad.ocir.io as the prefix for namespace and repository name. Note the placeholder ${imageVersion}; the value for imageVersion needs to be supplied by all build and deploy pipelines that want to work with this artifact – to make clear which version of the image they want to produce or consume.
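For reference, the resulting fully qualified path for the artifact in this article is:

```
iad.ocir.io/idtwlqf2hanz/cloudnative-2021/functions/tweet_retriever:${imageVersion}
```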
Here is the container image repository for this artifact:
Build Pipeline
The Build Pipeline consists of three stages, for build, publish and trigger.
Managed Build Stage
The managed build stage is where the real work is done. This stage is assigned a build server instance into which the indicated sources from the source repository are automatically loaded.
At the time of writing (November 2021) the build runs on an AMD or Intel x86 host with 1 OCPU and 8 GB memory. The base container runs Oracle Linux 7.9. At this point in time, I cannot choose a custom image or shape for the build server; this is likely to change in the near future. The build server has the OCI CLI pre-installed and pre-configured. This means that your build can use the CLI to interact with OCI Services and Resources. (I wonder whether the build server also has Terraform with the OCI Provider installed; that would be convenient.) It also has the fn command line tool – with which we can build functions into container images – and the Docker tooling.
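A quick way to see exactly what is available is to add a step to the build specification (the format is discussed below) that does nothing but print tool versions. A minimal sketch; the step name is mine:

```yaml
steps:
  - type: Command
    name: "Inspect build server tooling"
    command: |
      # print the versions of the tools pre-installed on the build server
      oci --version
      fn --version
      docker --version
      cat /etc/os-release
```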
The definition of the Managed Build stage has a number of elements:
- name and description
- build server image (just one flavor available right now)
- location of the build specification file (see below); this location is relative to the root of the primary code repository – in this case in directory /functions/tweet-summarizer.
- primary code source – the code repository from which the sources to be built are cloned. This can be an OCI code repository in the DevOps project (as in this example). It can just as easily be an external GitHub or GitLab repository to which an external connection has been defined. Note: the location on the build server of the cloned primary code source is called (by me) tweet_retriever_source. This name is used for the directory into which the repo is cloned
- additional code repositories – other repositories from which the sources should also be available during the build process
The Managed Build Stage is shown in the screenshot:
The actions to be performed in the managed build stage are specified in a yaml file – the build specification file. This file uses a fairly simple format for defining the steps in the build process. For more details on the format of this file, check the documentation here: Build Specification file.
The build specification file is included with the other resources in the Code Repository. During initialization of the build server the designated code repositories are cloned to the build server. This brings the build specification file to the build server. In the definition of the Managed Build Pipeline Stage we have to specify where this file is located. The build server will interpret this file and go through the steps that are defined in it.
The Build Specification file for building the Tweet Retriever function is available in this gist: https://gist.github.com/lucasjellema/5411d40f5948f013a983b8566e829af3
In the initialization phase, environment variables can be defined in the build specification file. A number of environment variables are predefined on the build server (for example OCI_BUILD_RUN_ID, OCI_TRIGGER_SOURCE_URL, OCI_PRIMARY_SOURCE_DIR, OCI_PRIMARY_SOURCE_NAME, OCI_WORKSPACE_DIR). Check the documentation (OCI DevOps Docs – Build Specification file) for details. Additionally, all build pipeline parameters are available during the build as environment variables. Note that we can define “exported variables” whose values carry over to subsequent stages in the pipeline and can for example be used to replace the values of placeholders in artifact references. Environment variables can also be defined based on secrets in an OCI Vault. The value for these variables is retrieved from the vault – based on their OCID – and made available as environment variables for all the steps inside the build specification file.
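A sketch of what the top of such a build specification file can look like, with an env section; the variable names, the timeout value and the secret OCID are placeholders of mine, not values taken from the actual pipeline:

```yaml
version: 0.1
component: build
timeoutInSeconds: 1200
shell: bash
env:
  variables:
    functionDir: "functions/tweet-summarizer"    # plain variable, available in every step
  exportedVariables:
    - BUILT_IMAGE_TAG                            # value set in a step carries over to later pipeline stages
  vaultVariables:
    MY_SECRET: "ocid1.vaultsecret.oc1..example"  # resolved from the OCI Vault at build time
```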
Input Artifacts can be defined in the build specification – to supplement the code repositories whose code drives the build. There are two types of artifacts that can be defined: artifacts from any of the previous Build stages, and artifacts from any external downloadable HTTP URL (the URL must be publicly available; currently, authentication/authorization is not supported).
The build specification ends with the definition of the output artifacts of the build stage – in order to make that output available in subsequent stages of the pipeline. Each output artifact is given a name (to which we can refer in later stages), a type (BINARY or DOCKER_IMAGE) and a reference to a file or a container image in the local registry (which is what I need in the current example).
In my example, a single output artifact is defined in the build specification.
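The sketch below reconstructs that definition; the exact snippet is in the gist linked above, and the local image tag used here is an assumption on my part:

```yaml
outputArtifacts:
  - name: output01
    type: DOCKER_IMAGE
    # fully qualified name of the image in the local registry on the build server
    # (tag assumed for illustration)
    location: iad.ocir.io/idtwlqf2hanz/cloudnative-2021/functions/tweet_retriever:latest
```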
The name output01 is a poor name. However, it is only used as a logical reference in the next stage of the build pipeline. The location is the fully qualified name of the container image in the local registry: the image produced by the earlier steps in the build process.
The actual work done by the build server can be described in a few steps:
- install NVM
- install the NPM dependencies of my Node application (the function I want to build is implemented in NodeJS)
- perform the fn build of the function – producing a container image
- tag the image to make its name match the name used in the location attribute of the outputArtifact definition
The most interesting bit of the build specification file is the steps section.
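The exact steps are in the gist linked above; the sketch below captures their essence. The nvm and Node versions, the image names in the docker tag command and the step name are assumptions of mine:

```yaml
steps:
  - type: Command
    name: "Build Tweet Retriever function container image"
    timeoutInSeconds: 600
    command: |
      # install NVM and a Node runtime for installing the function's dependencies
      curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
      export NVM_DIR="$HOME/.nvm"
      . "$NVM_DIR/nvm.sh"
      nvm install 14
      # install the NPM dependencies of the Node application that implements the function
      cd ${OCI_PRIMARY_SOURCE_DIR}/functions/tweet-summarizer
      npm install
      # build the function into a container image using the Fn CLI
      fn build --verbose
      # retag the image so its name matches the location attribute of the outputArtifact definition
      docker tag tweet_retriever:0.0.1 iad.ocir.io/idtwlqf2hanz/cloudnative-2021/functions/tweet_retriever:latest
```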
The commands in the various steps of the build specification are Linux (bash) shell commands – nothing special about them. Everything written to the terminal – such as all echo statements – becomes available in the logs of the build. See the next example of some of the logs for the managed build step.
Deliver Artifact Stage
The second stage in the pipeline is a Deliver Artifact stage. This type of stage takes output artifacts from a managed build stage and publishes them to an artifact repository – a repository in either the OCI Container Image Registry or the OCI Artifact Registry. In the current case, the output artifact is a container image and it needs to be published to a repository in the OCI Container Image Registry.
In this stage, we select one or more of the artifacts defined earlier – in this case the container image that the function we are constructing is based on. Subsequently, this (destination) artifact is mapped to one of the output artifacts produced by an earlier managed build stage – in this case the output artifact that was named output01 in the managed build stage described above.
This stage will therefore take the container image created by the managed build process in the local container image registry on the build server (label output01) and push it to the OCI container image registry as iad.ocir.io/idtwlqf2hanz/cloudnative-2021/functions/tweet_retriever:${imageVersion}. The placeholder ${imageVersion} is replaced with the value assigned to the build pipeline parameter:
Trigger Deployment Stage
The third stage in this build pipeline is a trigger of the Deployment Pipeline (introduced in this article) that deploys the function based on the newly produced container image.
The configuration of this stage is quite simple and straightforward: choose the deployment pipeline to trigger and indicate whether or not build pipeline parameters should be forwarded to the deployment pipeline. In this case, we need that: the ${imageVersion} placeholder in the artifact reference in the deployment pipeline needs the value from the build pipeline parameter that determined the imageVersion produced by the build. The artifacts to be used by the Deployment Pipeline are displayed but cannot be edited, as they are part of the Deployment Pipeline’s definition.
Policies to allow the Build Pipeline to do its job
A build pipeline needs permissions to do its work. When it runs, it needs to be able to read a code repository, to publish a container image and to trigger a deployment pipeline. A pipeline runs as a resource principal inside OCI and can inherit permissions from a dynamic group that it belongs to and that has been granted these permissions. Dynamic groups and the policies that grant permissions can be defined in a very fine-grained manner – such that each dynamic group consists of only one specific build pipeline and such that a policy grants access only to a specific code repository and a specific artifact. In this example, I have been more relaxed in my demonstration compartment.
The dynamic group simply contains all Build Pipelines in the compartment:
The – quite widely defined – policies for a dynamic group (that consists of all build pipelines in my compartment):
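To give an impression, the matching rule for such a dynamic group and the policy statements could look like the lines below; the group and compartment names are mine and the compartment OCID is a placeholder (check the OCI DevOps IAM Policies link under Resources for the exact resource types and verbs):

```
ALL {resource.type = 'devopsbuildpipeline', resource.compartment.id = 'ocid1.compartment.oc1..exampleocid'}
```

```
Allow dynamic-group build-pipelines-dg to manage devops-family in compartment cloudnative-demo
Allow dynamic-group build-pipelines-dg to manage repos in compartment cloudnative-demo
Allow dynamic-group build-pipelines-dg to read secret-family in compartment cloudnative-demo
```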
When the build pipeline executes, it leverages these permissions.
Note: if I want the build to also access other OCI Resources and Services, I would have to grant additional permissions to this dynamic group.
Code Repositories use a resource principal to access the Secret (with the GitHub Personal Access Token needed to mirror an external Git repository). For this I need a Dynamic Group for a specific Code Repository (or, more widely, for all Code Repositories in my Compartment) and a Policy that allows this Dynamic Group of Code Repositories to read secrets in a specific vault.
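A sketch of what that could look like; again, the names are mine and the resource type should be verified against the IAM policy documentation listed under Resources:

```
ALL {resource.type = 'devopsrepository', resource.compartment.id = 'ocid1.compartment.oc1..exampleocid'}
```

```
Allow dynamic-group code-repositories-dg to read secret-family in compartment cloudnative-demo
```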
Run the Build Pipeline
The Pipeline can be triggered through an event on the code repository, or it can be started manually.
The automatic trigger is defined by linking a code repository event to a build pipeline. In the trigger definition, we can explicitly indicate which branch should be pushed to in order to fire the trigger. At present, we cannot filter on, for example, the folders in which files were changed or the types of files that were changed. That means that for now many more builds may be triggered by code repository events than are really needed. In the future we are likely to be able to filter in a more fine-grained way on whether a build should be triggered given the type of repository change.
The Build Pipeline History provides a lasting overview of what went on:
We can check the logs, the timing and the values of the pipeline parameters used in this run. And we can check on the intermediate deliverables.
The result of the Deliver Artifact Stage can be checked in the Container Registry console for the TweetRetriever repository:
We see that the 0.0.3 version of the image has been pushed to the registry.
The final outcome of the two pipelines is found in the Function resource that has been updated by the Deployment Pipeline with the new 0.0.3 version of the container image:
Resources
OCI DevOps Docs – Build Specification file
GitHub Docs – creating a Personal Access Token – https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token
OCI DevOps IAM Policies – https://docs.oracle.com/en-us/iaas/Content/devops/using/devops_iampolicies.htm
OCI DevOps Developer Live session: https://developer.oracle.com/developer-live/simplifying-modern-app-dev/
Hands-on-Lab: https://apexapps.oracle.com/pls/apex/dbpm/r/livelabs/view-workshop?wid=857
Jonathan Schreiber’s Sample app showing end-to-end config: https://github.com/jonschreiber/oci-devops-node-express
OCI Docs on DevOps: https://docs.oracle.com/en-us/iaas/Content/devops/using/home.htm