I'd like to trigger a number of declarative pipeline jobs from a scripted pipeline, and handle any failures with individual try/catch blocks nested within a parent try/catch:
node {
    def err = false
    try {
        stage('build image') {
            try {
                // this job is a declarative pipeline
                build job: 'build-docker-image'
            } catch (e) {
                echo "failure at build-docker-image"
                throw e
            }
        }
        stage('deploy image') {
            try {
                // this job is a declarative pipeline
                build job: 'deploy-docker-image'
            } catch (e) {
                echo "failure at deploy-docker-image"
                throw e
            }
        }
    } catch (e) {
        err = true
        echo "caught error ${e}"
    }
    if (!err) {
        echo "build and deploy ran successfully"
    }
}
This code behaves inconsistently. If the downstream job fails for syntactical reasons, the error is caught by the child try/catch, which echoes its failure message, then rethrows it to the parent, which also catches it and echoes the error itself. But if the downstream job fails for less explicit reasons, e.g. the image isn't built correctly, the parent try/catch still catches the error and behaves as before, yet the child try/catch does not catch the error and never echoes its failure message.
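In case it's relevant, I've also considered avoiding exceptions altogether by passing propagate: false to the build step, so it returns the downstream run instead of throwing, and then inspecting the result explicitly (an untested sketch, assuming the standard build step's propagate parameter and the result property of the returned run):

    node {
        stage('build image') {
            // propagate: false makes build() return the downstream run
            // instead of throwing when that job fails
            def run = build job: 'build-docker-image', propagate: false
            if (run.result != 'SUCCESS') {
                echo "failure at build-docker-image: ${run.result}"
                // fail this build explicitly with the downstream status
                error "build-docker-image ended with ${run.result}"
            }
        }
    }

I'm not sure whether this sidesteps the discrepancy or just hides it, though.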
Why the discrepancy? Are there errors raised by a failed declarative pipeline job that a try/catch block cannot catch? Is it bad practice to mix scripted and declarative pipelines? I'd be grateful for any advice or insight. Thank you.