
I'm having trouble constructing my Jenkins Pipeline for what should be a simple scenario.

I'm using Docker Desktop running a Jenkins image (https://hub.docker.com/r/jenkinsci/blueocean/). I have a standard React app that I want to build and test after each commit.

The webhook configuration already works, but the commands I run inside my Jenkinsfile don't behave as I expected.

This is my current Jenkinsfile:

node {
    def app

    stage('Clone repository') {
        checkout scm
    }

    stage('Build image') {
        app = docker.build("getintodevops/hellonode", "-f Dockerfile.dev ./")
    }

    stage('Test image') {
        app.inside {
            sh 'pwd'
            sh 'npm run build'
        }
    }
}

The Dockerfile.dev looks like this:

FROM node:alpine

COPY ./package.json ./
RUN ["npm", "install"]
COPY ./src/ ./src/
COPY ./public/ ./public/

What I expected was that the commands inside the Test image stage would run inside a container using the image I've just built, but the pwd command shows that the commands are executed inside the Jenkins workspace, containing basically what I have committed on GitHub (/src, /public, package.json, etc.), so when npm run build is launched I get an error for the missing node_modules.

Am I missing something? Shouldn't the commands run in the built image, which I've verified has all the necessary folders?

Let me know if other files or configurations I'm using are needed.

P.S. I've also tried using this kind of Jenkinsfile, but I always have the same issue (even though the agent is defined outside the stage):

pipeline {
  agent none

  stages {
    stage('Build') {
      agent{
        dockerfile {
          filename 'Dockerfile.dev'
        }
      }

      steps {
        sh "npm run build"
      }
    }
  }
}

1 Answer


Am I missing something? Shouldn't the commands run on the built image, which i verified has all the necessary folders?

Yes, you are. I'll try to explain a little bit about the part you are missing.

When a Jenkins pipeline runs a stage as a Docker container, it does a couple of things behind the scenes:

  • The workspace gets mounted to the container at runtime (as you have observed).
  • The working directory gets set to this workspace folder (which is why commands show your workspace files in the container).
  • The container is run with the user ID of the Jenkins user.
  • Various other options are passed to the container (such as environment variables).
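Concretely, the docker run Jenkins issues looks roughly like this (the paths, UID, and environment variables here are illustrative, not taken from your setup, and the exact flags vary by Jenkins version):

```shell
# Illustrative sketch of how the Docker Pipeline plugin starts a stage container.
docker run -t -d \
  -u 1000:1000 \
  -w /var/jenkins_home/workspace/my-job \
  -v /var/jenkins_home/workspace/my-job:/var/jenkins_home/workspace/my-job \
  -e BUILD_NUMBER=42 -e JOB_NAME=my-job \
  getintodevops/hellonode \
  cat
# -u    : the Jenkins user's UID:GID, not the image's default user
# -w    : working directory forced to the workspace (why pwd showed it)
# -v    : the workspace bind-mounted over the same path in the container
# cat   : keeps the container alive; sh steps are exec'd into it
```

This is why pwd printed your workspace and why the node_modules baked into the image at / are invisible from the working directory.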

Knowing this, there are two possible solutions (each with its pros and cons):

Option 1

You can ignore the node_modules and code baked into the container and essentially just use it as a Node.js environment. Your Jenkins pipeline would then be responsible for running npm install and the other test commands, which keeps node_modules in your workspace. You would also still have a built container image that could potentially be deployed somewhere.

This also works around issues where node_modules contains binaries that differ between the Jenkins Docker image (which I believe is based on Debian, though they do have an Alpine variant) and your node:alpine build image.
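A minimal sketch of Option 1 against your scripted Jenkinsfile (stage names kept from your example; only the Test image stage changes):

```groovy
node {
    def app

    stage('Clone repository') {
        checkout scm
    }

    stage('Build image') {
        app = docker.build("getintodevops/hellonode", "-f Dockerfile.dev ./")
    }

    stage('Test image') {
        app.inside {
            // The image is only used as a Node.js toolchain here:
            // install dependencies into the mounted workspace, then build there.
            sh 'npm install'
            sh 'npm run build'
        }
    }
}
```

The workspace stays the source of truth, so node_modules survives between steps and the missing-node_modules error goes away.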

Option 2

If you don't want the Jenkins workspace to be the source of truth, then you need to change into the location of the node_modules in the container. Since your Dockerfile sets no WORKDIR, this appears to just be /. You could modify your steps to run inside a dir block like:

pipeline {
  agent none

  stages {
    stage('Build') {
      agent {
        dockerfile {
          filename 'Dockerfile.dev'
        }
      }

      steps {
        dir("/") {
          sh "npm run build"
        }
      }
    }
  }
}

Option 2 has potential permission issues, since your Docker image builds and creates files as root while the Jenkins pipeline runs as the Jenkins user.
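One way to make Option 2 less awkward (a suggestion, not something from your current setup) is to give the image an explicit WORKDIR so the app and its node_modules don't live at /:

```dockerfile
FROM node:alpine

# Install into a dedicated directory instead of the container root.
WORKDIR /app

COPY ./package.json ./
RUN ["npm", "install"]
COPY ./src/ ./src/
COPY ./public/ ./public/
```

With this, the pipeline's dir("/") step would become dir("/app"). The root-vs-Jenkins-user ownership issue can still bite, but at least the files are in a predictable, purpose-built directory.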