Molecule tester container

2024, Oct 02

For CI, I like to build specialized Docker containers, to avoid running installations as part of the normal pipelines and to lock down the build environment.

Nothing fancy: an Alpine base image, a few apk packages and a few pip packages.

The repo is here.

Kaniko

I decided to build it using kaniko, since, unlike docker build, it does not require a Docker daemon to build images.

The code to do a build in the pipeline from a Dockerfile looks like this.

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.23.2-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"

Using the container supplied by the kaniko project was the easy solution.

Normally I go for installing the program on my workstation, but in a pipeline this approach is simpler. And you can use the same Docker image when building and testing locally.
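Since kaniko itself ships as a container, the same build can be reproduced locally with a plain docker run. This is a sketch for a local smoke test; the --no-push flag (which skips the registry upload) is my addition, not part of the pipeline above:

```shell
# Run the same kaniko executor locally; the project directory is mounted
# as the build context (kaniko's default context dir is /workspace).
# --no-push builds without uploading, which is enough for a smoke test.
docker run --rm \
  -v "$PWD":/workspace \
  gcr.io/kaniko-project/executor:v1.23.2-debug \
  --context /workspace \
  --dockerfile /workspace/Dockerfile \
  --no-push
```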

Alpine and Docker

Since this is my own project, I can decide to use alpine and openrc, and not ubuntu and systemd. This feels good for some reason.

In order to run Docker inside the container (i.e. docker-in-docker), the Docker daemon must run as a service, hence the need for openrc.

And it turns out to be a bit more involved than just apk add openrc. You must configure openrc to handle running inside a Docker environment, otherwise it just hangs at

* Starting <some service> ...

As a practical side note, I tested with both docker and sshd as services, since I know that docker requires --privileged, mounting certain volumes and/or other configuration. sshd is simpler.
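For reference, the tweak that is usually cited for this is telling openrc that it runs inside a container via /etc/rc.conf. The line below is a sketch of that approach, not taken from my final Dockerfile, and newer openrc versions may detect the container environment on their own:

```dockerfile
# Hypothetical rc.conf tweak: tell openrc it is running inside a docker
# container. Verify the option against your openrc version before relying on it.
RUN sed -i 's/^#rc_sys=""/rc_sys="docker"/' /etc/rc.conf
```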

Openrc and docker

In order for openrc to function at startup, it must be the container's first process (PID 1). That means starting /sbin/init via the CMD keyword, as seen below.

FROM alpine:latest

RUN apk add --no-cache --update-cache python3 py3-pip docker openrc
RUN pip3 install --break-system-packages \
    ansible ansible-lint molecule 'molecule-plugins[docker]'

RUN rc-update add docker boot

# remove getty entries so init does not fail on missing ttys and openrc can start
RUN sed -i '/getty/d' /etc/inittab
CMD ["/sbin/init"]

Note that docker requires the container to be privileged (or at least granted a lot of privileges), otherwise you will get

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

errors when running docker commands.
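A quick way to check this locally, assuming the image was built and tagged as molecule-tester:test (a placeholder name):

```shell
# Start the container privileged so the nested docker daemon can run,
# then ask the inner daemon for its status.
docker run --privileged -d --name molecule-tester molecule-tester:test
docker exec molecule-tester docker info
```

Without --privileged, the docker info call fails with the socket error above.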

In GitLab CI

I set up the usual three-stage pipeline:

  • Build: Build image and upload using some test tag
  • Test: Use the new image and run some simple tests
  • Publish: Retag the image. I use Crane for this.
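The publish stage can be sketched roughly like this, using the crane debug image to retag the already-pushed image without pulling its layers. The job layout and the latest tag are illustrative, not copied from my pipeline:

```yaml
publish:
  stage: publish
  image:
    name: gcr.io/go-containerregistry/crane:debug
    entrypoint: [""]
  script:
    # authenticate against the GitLab registry, then add a tag remotely
    - crane auth login "${CI_REGISTRY}" -u "${CI_REGISTRY_USER}" -p "${CI_REGISTRY_PASSWORD}"
    - crane tag "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}" latest
```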

Even though the image worked when tested locally, it failed in GitLab CI with the socket error mentioned above.

The issue is that GitLab replaces the Docker entrypoint with its own code, in order to clone the repo and do some other setup. More about it here.

To make it work, we could jump through some hoops, as described here, but I decided to go for readability.

My solution is to add a before_script with the following lines

  before_script:
    # needed by docker
    - openrc
    - touch /run/openrc/softlevel
    - service docker start

This starts openrc and the docker service. The downside is that I need to repeat it in every CI job that uses the container and its built-in docker.
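Put together, a test job using the freshly built image looks roughly like this; the script commands are placeholders for whatever checks the repo actually runs, not my real test suite:

```yaml
test:
  stage: test
  image: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
  before_script:
    # needed by docker
    - openrc
    - touch /run/openrc/softlevel
    - service docker start
  script:
    - docker info        # verifies the inner daemon is up
    - molecule --version # placeholder for the real molecule tests
```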

The full CI file is here.