Since its initial release back in 2013, Docker has kept growing every day. More images are created and more containers are used all the time, so we need something that helps us complete repetitive tasks quickly and easily. Every second we can save is a plus. That is the purpose of this article: to save you some time so you can focus on new things. In it, I will walk through one process that I find very useful when working with Docker, and we will see how to easily speed up the creation of custom Docker images.
There are several prerequisites for this walkthrough: Docker installed on your machine, a Google Artifact Registry repository, the appropriate Google Cloud accounts set up (authenticated and ready for use), and Bitbucket Pipelines. Some of these resources can cost money, so please review their pricing first.
The process
First, we have to create the folder structure from which we will build and push the Docker images. Create one folder, for example docker_images, and inside it create two subfolders: base_images and extension_images. In the base_images folder we store the base layer images that will be reused by the extension Docker images, so we take a base image, add new features on top of it, and quickly get a new custom Docker image. With this approach we can maintain several base images and easily create new images by adding features on top of those base layers.
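For reference, the layout used in this article ends up looking like this (the image folder names are just examples; use whatever fits your stack):
docker_images/
  base_images/
    linux_dind_openjdk11/
      Dockerfile
  extension_images/
    linux_dind_openjdk11_maven3_gradle7/
      Dockerfile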
We will create one base Docker image and one extension Docker image. First, the base image: create a new subdirectory in the base_images directory with an appropriate name; I will use linux_dind_openjdk11 for mine. Once the subdirectory exists, create a Dockerfile inside it. For this base image the Dockerfile looks like this:
#you can use a different image here if you need a specific one
FROM docker:19.03-dind

USER root

#in ENV we list the packages that we want to be installed
ENV \
    RUNTIME_DEPS="tar unzip curl openjdk11 bash docker-compose"

#with RUN we add the extra package repo and install the requested packages
RUN \
    echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk update && \
    apk add --no-cache $RUNTIME_DEPS
As you can see, the Dockerfile we created has several parts: FROM, USER, ENV and RUN, each doing a specific job. A Dockerfile can contain additional instructions, but in our case these are the only ones we need. With this, our base image is defined. You can find more about the Dockerfile structure on this link: https://docs.docker.com/engine/reference/builder/.
The next step is to build the image and push it to the Google Artifact Registry repository where we store our images, so every member of our company or team can use them.
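If your local Docker is not yet authorized to push to Artifact Registry, gcloud typically needs to be registered as a credential helper first (shown here for the us-central1 location used in this example; adjust it to your registry location):
gcloud auth configure-docker us-central1-docker.pkg.dev
With that in place, building and pushing the base image looks like this: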
docker build -t us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11:1.0.0 .
docker push us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11:1.0.0
Once these two commands complete successfully, our base image is pushed to Google Artifact Registry. In the Google Cloud console you can find it by typing Artifact Registry in the search bar and opening the repository, where the image we just pushed will be listed.
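You can also verify the push from the command line. Assuming the same project and repository names as above, a listing like the following should show the new image:
gcloud artifacts docker images list us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name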
After this step we can continue with our new extension Docker image, using the base image as a starting point. Create a new subfolder in the extension_images folder, following a similar naming pattern, e.g. linux_dind_openjdk11_maven3_gradle7; this folder must also contain a Dockerfile. This image will add new features such as Maven and Gradle. The Dockerfile looks like this:
#the base image that we created previously
FROM us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11:1.0.0

USER root

#the new packages added on top of the base image
ENV \
    RUNTIME_DEPS="maven gradle"

RUN \
    echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
    apk update && \
    apk add --no-cache $RUNTIME_DEPS
When we build and push this Dockerfile, the extension image contains everything from the base image plus the new packages, which means Maven and Gradle are installed together with openjdk11, DinD (Docker in Docker) and Linux.
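Building and pushing the extension image manually works exactly like before, only with the new image name and tag:
docker build -t us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11_maven3_gradle7:1.0.0 .
docker push us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name/linux_dind_openjdk11_maven3_gradle7:1.0.0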
This process works for any variation of Docker image we need: we simply create base images in the appropriate folder and extend them in the extension_images folder, where new images are built on top of those base layers. But as you can see this is a repetitive task, and we can automate parts of it.
For the automation part I use Bitbucket Pipelines, which gives us an easy, fast and secure way to run these actions. You can read more here: https://bitbucket.org/product/features/pipelines.
Bitbucket Pipelines
Bitbucket Pipelines is a CI/CD tool that is itself built on Docker: every build we run is executed inside a Docker container.
In our example we need a Bitbucket repository to store our files, so create one and push the files there. For the pipeline to work we also need a bitbucket-pipelines.yml file in the root of the working directory. The name must be exactly bitbucket-pipelines.yml, otherwise Bitbucket will not recognize it. Inside this file we can automate the steps needed to build and push the images.
The bitbucket-pipelines.yml file has several parts that we have to take care of. The first is the image declaration at the top of the document, which pulls the Cloud SDK image so we can execute gcloud commands in the pipeline.
image: gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Next is the clone section, which tells Bitbucket to clone the repository with full depth, so we get the complete structure of the files and directories.
clone:
  depth: full
Then comes the definitions part, where we define the script that will automate the steps:
definitions:
  scripts:
    - script: &buildDockerImage
        - echo $SERVICE_ACCOUNT_KEY | base64 -d > key.json
        - gcloud auth activate-service-account $SERVICE_ACCOUNT_EMAIL --key-file=key.json
        - gcloud auth configure-docker $DOCKER_REGISTRY_LOCATION --quiet
        - chmod +rx build_push_docker_images_script.sh
        - ./build_push_docker_images_script.sh
Here we can see the name of the script anchor. The first command decodes the service account key: as mentioned, we need a service account already created for this task, and its key is stored as a repository variable. The next two gcloud commands connect us to GCP: the first authenticates the service account and the second configures Docker authentication against the Docker repository on Google.
The last two commands make our custom script build_push_docker_images_script.sh executable and run it. This script goes into every folder of the extension_images directory, builds the new images with their name and tag, and pushes them to the Artifact Registry if everything is correct.
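The variables used here (SERVICE_ACCOUNT_KEY, SERVICE_ACCOUNT_EMAIL, DOCKER_REGISTRY_LOCATION and, in the script below, DOCKER_REGISTRY) are stored as Bitbucket repository variables. Because the pipeline decodes the key with base64 -d, the downloaded service account key has to be base64-encoded before it is saved as a variable; with GNU coreutils that can be done like this:
base64 -w 0 key.json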
The Bash script is a simple one: it loops through every directory and executes a few commands, as shown below:
#!/bin/bash

#resolve the extension_images folder relative to the script location
extension_path="$(dirname "$(realpath "$0")")/extension_images"

#loop through every subfolder, build the image and push it to the registry
for dir in "$extension_path"/*; do
  if [ -d "$dir" ]; then
    cd "$dir"
    folder_name=$(basename "$dir")
    docker build -t "$DOCKER_REGISTRY/$folder_name:1.0.0" .
    docker push "$DOCKER_REGISTRY/$folder_name:1.0.0"
  fi
done
The for loop goes through every folder in the extension_images directory, enters each of them, and executes the docker build and push commands.
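If you want to try the script locally before wiring it into the pipeline, you can provide the registry path yourself; the value below matches the example registry used earlier:
DOCKER_REGISTRY=us-central1-docker.pkg.dev/gcp-team-platform/docker-registry-name ./build_push_docker_images_script.sh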
The last part of the bitbucket-pipelines.yml file is the section where we define how the pipeline is triggered. There are several ways to trigger a pipeline; one of them is by branch, so a push to a specific branch starts the pipeline. In our case that branch is master.
pipelines:
  branches:
    master:
      - step:
          name: Build and Deploy Docker Images
          deployment: Dev
          script: *buildDockerImage
          services:
            - docker
In this code you can see a step definition; in Bitbucket Pipelines every step runs in its own Docker container. The name makes it easy to recognize what the step does, and deployment (Dev) is the environment in which this pipeline, in our case our script, will execute. The last part is the services section, where a separate service container is spun up alongside the step; in our case we have chosen docker, which is what allows the step to run the docker build and push commands.
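Putting all the pieces from above together, the complete bitbucket-pipelines.yml for this example looks like this:
image: gcr.io/google.com/cloudsdktool/cloud-sdk:latest

clone:
  depth: full

definitions:
  scripts:
    - script: &buildDockerImage
        - echo $SERVICE_ACCOUNT_KEY | base64 -d > key.json
        - gcloud auth activate-service-account $SERVICE_ACCOUNT_EMAIL --key-file=key.json
        - gcloud auth configure-docker $DOCKER_REGISTRY_LOCATION --quiet
        - chmod +rx build_push_docker_images_script.sh
        - ./build_push_docker_images_script.sh

pipelines:
  branches:
    master:
      - step:
          name: Build and Deploy Docker Images
          deployment: Dev
          script: *buildDockerImage
          services:
            - docker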
You can find more about Bitbucket Pipeline Triggers on this link https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/.
Summary
In short, these are the steps for creating and automating Docker images, using Bitbucket Pipelines as a fast way to get ready-to-use images:
- Manual creation of the base Docker image: how to create the folder structure and one base image that can later be reused as many times as we need.
- Extension Docker image: reusing the base image to quickly create a new custom image.
- Bitbucket Pipelines: a CI/CD service that gives us fast and reliable pipelines with automated triggers, a great option for this kind of task.
- Shell scripting: automated execution of repetitive commands.
I hope this helped you understand Docker image creation, the base and extension image concept explained briefly here, and the automation with Bitbucket Pipelines.
Note!!!
Some values in this article (project ID, registry name, repository variables and similar) are specific to my setup, so if you run this yourself, make sure to change them to your own values so everything works on your end.