I recently made the switch from scripted to declarative pipelines with the kubernetes agent. Up until July '18, declarative pipelines didn't have the full ability to specify kubernetes pods. However, with the addition of the yamlFile step, you can now read your pod template from a YAML file in your repo.
This lets you use, e.g., VS Code's great Kubernetes plugin to validate your pod template, then read it into your Jenkinsfile and use the containers in steps as you please.
pipeline {
    agent {
        kubernetes {
            label 'jenkins-pod'
            yamlFile 'jenkinsPodTemplate.yml'
        }
    }
    stages {
        stage('Checkout code and parse Jenkinsfile.json') {
            steps {
                container('jnlp') {
                    script {
                        inputFile = readFile('Jenkinsfile.json')
                        config = new groovy.json.JsonSlurperClassic().parseText(inputFile)
                        containerTag = env.BRANCH_NAME + '-' + env.GIT_COMMIT.substring(0, 7)
                        println "pipeline config ==> ${config}"
                    } // script
                } // container('jnlp')
            } // steps
        } // stage
    } // stages
} // pipeline
As shown above, you can still use script blocks inside the declarative pipeline. Here is an example pod template (jenkinsPodTemplate.yml) with custom jnlp and docker containers.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-pod
spec:
  containers:
    - name: jnlp
      image: jenkins/jnlp-slave:3.23-1
      imagePullPolicy: IfNotPresent
      tty: true
    - name: rsync
      image: mrsixw/concourse-rsync-resource
      imagePullPolicy: IfNotPresent
      tty: true
      volumeMounts:
        - name: nfs
          mountPath: /dags
    - name: docker
      image: docker:17.03
      imagePullPolicy: IfNotPresent
      command:
        - cat
      tty: true
      volumeMounts:
        - name: docker
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker
      hostPath:
        path: /var/run/docker.sock
    - name: nfs
      nfs:
        server: 10.154.0.3
        path: /airflow/dags
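With that template loaded, later stages can run their steps in whichever container they need. As a rough sketch (the stage name, image name, and registry below are just placeholders, not from my actual pipeline), a build stage using the docker container and the containerTag computed earlier could look like this:

stage('Build image') {
    steps {
        container('docker') {
            // The docker CLI talks to the node's daemon through the hostPath-mounted /var/run/docker.sock
            sh "docker build -t my-registry/my-app:${containerTag} ."
            sh "docker push my-registry/my-app:${containerTag}"
        }
    }
}

Because docker.sock is mounted from the host, the docker container drives the node's own Docker daemon directly, so no docker-in-docker setup is needed.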