
I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:

My GitHub Actions YAML file is shown below. There are two jobs: job0 builds a Docker image from Dockerfile0 and job1 builds one from Dockerfile1.

# .github/workflows/main.yml
name: docker CI

on: push

jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build and Run
      run: docker build . --file Dockerfile0 --tag job0

  job1:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build and Run
      run: docker build . --file Dockerfile1 --tag job1

Dockerfile0 and Dockerfile1 have essentially the same content, except for the argument in the last line:

FROM ubuntu:20.04

ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...

WORKDIR /docker_ci
RUN python3 script.py <arg>

I wonder: can I build a Docker image in the first job, and then have multiple subsequent jobs run commands inside the image built by that first job? That way I wouldn't have to maintain multiple Dockerfiles, and I'd save some image-building time.

I'd prefer to build my image locally from a Dockerfile, so I hope to avoid pulling a container image from Docker Hub.

runs-for-docker-actions looks relevant, but I'm having trouble finding an example of running such an action locally (without publishing it).

You can create your own GitHub container action, with optional inputs. Essentially you would need an action.yml and a Dockerfile (you already have that). You can run it with uses: ./ or something like that, with a with: specification for your argument. – astrochun
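
To illustrate the comment above, here is a minimal sketch of such a local container action. The input name script-arg is my own choice, not something from the original post; the action.yml sits in the repository root next to the Dockerfile:

# action.yml
name: 'Run script'
description: 'Build the Dockerfile and run script.py with an argument'
inputs:
  script-arg:
    description: 'Argument passed to script.py'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.script-arg }}

A workflow can then invoke it without publishing anything, by referencing the repository path:

# .github/workflows/main.yml (excerpt)
steps:
- uses: actions/checkout@v2
- uses: ./              # the local action defined by action.yml
  with:
    script-arg: arg0

Note that uses: ./ still rebuilds the image from the Dockerfile each time the action runs, so this solves the "one Dockerfile, many arguments" problem rather than the "build once, reuse across jobs" problem.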

1 Answer

It definitely sounds like you should not build two different images - neither for CI nor for local development (if that matters).

From the details you have provided, I would consider the following approach:

  1. Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
  2. In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.

For example:

FROM ubuntu

RUN apt-get update && apt-get install -y python3

WORKDIR /app
COPY script.py .

ENTRYPOINT ["python3", "script.py"]

This Dockerfile can be executed with any argument which will be passed on to the script.py entrypoint:

$ docker run --rm -it imagename some arguments

A sample GitHub Actions config might look like this:

jobs:
  jobname:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build the image
      run: docker build --tag job .
    - name: Test 1
      run: docker run --rm job arg1
    - name: Test 2
      run: docker run --rm job arg2

If you insist on separating these into different jobs, then as far as I understand it, your easiest option is still to rebuild the image in each job (but still from a single Dockerfile). Sharing a Docker image built in one job with another job is a more complicated task that I would recommend trying to avoid.
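
For completeness, here is a hedged sketch of that more complicated route: serializing the image with docker save, passing it between jobs as an artifact, and restoring it with docker load. The artifact name and tarball path are my own choices:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build the image
      run: docker build --tag job .
    - name: Serialize the image to a tarball
      run: docker save job --output job.tar
    - uses: actions/upload-artifact@v2
      with:
        name: docker-image
        path: job.tar

  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
    - uses: actions/download-artifact@v2
      with:
        name: docker-image
    - name: Restore the image in this job
      run: docker load --input job.tar
    - name: Test 1
      run: docker run --rm job arg1

This works, but uploading and downloading an image tarball (often hundreds of MB for an Ubuntu base) can easily take longer than just rebuilding from a cached Dockerfile, which is why I recommend the single-job approach above.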