7 votes

I'm trying to set up an automated builder/deployer job using Jenkins Pipelines.

I have set up a Docker container that installs the Node.js and Gulp dependencies:

Dockerfile

# Node 8.9 running on lean version of alpine Linux
FROM node:8.9-alpine

# Commented out setting production since 
# we need devDependencies to build
# ENV NODE_ENV production

# Set working directory to root of container
WORKDIR /

# Copy package.json and install dependencies first
# These can be cached as a step so long as they haven't changed
COPY ["./package.json", "./package-lock.json*", "/"]
RUN npm install --no-audit --silent
RUN npm install node-sass --silent
RUN npm install gulp-cli --silent
RUN npm install [email protected] --silent

# Copy code to root, this is a separate step from
# dependencies as this step will not be cached since code
# will always be different
COPY . .

# Debugging information
RUN ls
RUN pwd
RUN ./node_modules/.bin/gulp --version

The goal of the Dockerfile is to cache the installation of dependencies so build jobs can run faster.
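As a side note, the caching can be sanity-checked locally by building the image twice; as long as package.json and package-lock.json are unchanged, the second build should reuse the npm install layers (the image tag below is just an example):

# Build once, change only application code, then build again;
# the dependency install layers should be reported as cached
docker build -t node-gulp-builder .
docker build -t node-gulp-builder .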

The Jenkinsfile uses the Dockerfile as its agent and then attempts to run an npm build script:

Jenkinsfile

pipeline {
  agent {
    dockerfile true
  }
  stages {
    stage('Compile static assets') {
      steps {
        sh 'node --version'
        sh 'npm --version'
        sh 'pwd'
        sh 'ls'
        sh 'npm run build'
      }
    }
  }
}

When the Jenkins Pipeline runs, the initialization (where the Dockerfile is built and run) does not seem to match the context of the stage steps. Compare the pwd and ls output from each:

Output from the first step, where the container is set up

Step 9/10 : RUN ls
 ---> Running in 74b7483a2467
AO18_core
Jenkinsfile
bin
dev
dev_testing
etc
gulpfile.js
home
lib
media
mnt
node_modules
opt
package-lock.json
package.json
proc
root
run
sbin
srv
sys
tmp
usr
var
Removing intermediate container 74b7483a2467
 ---> e68a07c2bb45
Step 10/10 : RUN pwd
 ---> Running in 60a3a09573bc
/

Output from the Compile static assets stage

[ao_test-jenkins-YCGQYCUVORUBPWSQX4EDIRIKDJ72CXV3G5KXEDIGIY6BIVFNNVWQ] Running shell script
+ pwd
/var/lib/jenkins/workspace/ao_test-jenkins-YCGQYCUVORUBPWSQX4EDIRIKDJ72CXV3G5KXEDIGIY6BIVFNNVWQ
[ao_test-jenkins-YCGQYCUVORUBPWSQX4EDIRIKDJ72CXV3G5KXEDIGIY6BIVFNNVWQ] Running shell script
+ ls
AO18_core
Dockerfile
Jenkinsfile
README.md
dev_testing
docker-compose.debug.yml
docker-compose.yml
gulpfile.js
jsconfig.json
package.json

So there appears to be something about the execution context that I am not clear on. My assumption was that once the Docker container was initialized, the entire pipeline would execute in that context. That does not appear to be the case.

Right now I only have one stage, but I will eventually have multiple stages for linting, testing, and deployment. Hopefully, all of these stages will execute in the same context (see the sketch below).
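For reference, a rough sketch of the multi-stage layout I have in mind, still using the same top-level dockerfile agent (the lint and test script names are placeholders, not scripts that exist in my package.json yet):

pipeline {
  agent {
    dockerfile true
  }
  stages {
    stage('Lint') {
      steps {
        // placeholder script name
        sh 'npm run lint'
      }
    }
    stage('Run tests') {
      steps {
        // placeholder script name
        sh 'npm test'
      }
    }
    stage('Compile static assets') {
      steps {
        sh 'npm run build'
      }
    }
  }
}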


1 Answer

4 votes

Even if you change the WORKDIR in the Dockerfile, Jenkins will use an assigned workspace inside the container, just as it does when you run a build on a normal agent.

You can use the customWorkspace option inside the agent definition to change that:

pipeline {
  agent {
    dockerfile {
      customWorkspace '/test'
      filename 'Dockerfile'
    }
  }
  stages {
    stage('Compile static assets') {
      steps {
        sh 'node --version'
        sh 'npm --version'
        sh 'pwd'
        sh 'ls'
        sh 'npm run build'
      }
    }
  }
}

Also, you can use the Directive Generator in the Pipeline Syntax section of the pipeline job to generate the agent configuration.

More info: https://jenkins.io/doc/book/pipeline/syntax/#common-options