0 votes

We are standing up a CI pipeline using Jenkins, with SonarQube for static analysis. We have set up quality gates and we now fail builds when the gates are not met. However, when we fail a build, the code is still recorded in SonarQube, so if a developer tries to promote twice, the second build will 'pass'.

Example: the gate is "no new critical issues".

The developer checks in code with 1 new critical issue. The build fails on static analysis (the rule is flagged as a blocker in SonarQube).

The developer checks in the code again (no code changes). The static analysis passes because the critical issue is no longer 'new'.

Is there a way to revert to the previous version on a failure, or better yet, to run the analysis against the most recent non-failing run?

Notes: Version - SonarQube 5.1.2
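For context, the step that fails the build amounts to something like the sketch below (the server URL and project key are placeholders, and the exact shape of the old /api/resources response can vary slightly between 5.x versions):

```python
import json
import sys
import urllib.request

SONAR_URL = "http://sonarqube.myco.com:9000"  # placeholder server URL
PROJECT_KEY = "my:project"                    # placeholder project key

def quality_gate_status():
    # Old (5.x-era) resources web service; add authentication if the
    # instance does not allow anonymous reads.
    url = (f"{SONAR_URL}/api/resources?resource={PROJECT_KEY}"
           "&metrics=alert_status&format=json")
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # The gate status (OK / WARN / ERROR) is normally carried by the
    # alert_status measure; field names may differ across versions.
    for measure in payload[0].get("msr", []):
        if measure.get("key") == "alert_status":
            return measure.get("data") or measure.get("frmt_val") or "UNKNOWN"
    return "UNKNOWN"

if __name__ == "__main__":
    status = quality_gate_status()
    print(f"Quality gate status: {status}")
    sys.exit(0 if status == "OK" else 1)  # non-zero exit fails the Jenkins build
```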


1 Answer

-1 votes

You've asked how to keep committed code from being reflected in the SonarQube platform.

I don't recommend trying to suppress the analysis of committed code because then your quality measures don't reflect the state of your code base. Imagine that someone is making a decision about whether HEAD is releasable based on what they see in the SonarQube UI. If you're keeping "bad" code from showing up, then... what's the point of analyzing at all? Just use a photo editor to construct the "perfect" dashboard, and serve that gif whenever someone hits http://sonarqube.myco.com:9000.

Okay, that may be an extreme reaction, but you get my point. The point of SonarQube analysis is to show you what's right and wrong with your code. Having a commit show up in the SonarQube UI should not be an honor you "earn" by committing "worthy" code. It should be the baseline expectation. Like following the team's coding conventions and using an SCM.

But this rant doesn't address your problem, which is that your current quality gate is based on the last analysis (last_analysis). On that time scale, "new critical issues" is an ephemeral measure, and you're losing the ability to detect new critical issues because they're "new" one minute and "old" the next. For that reason, I advise you to change your time scale. Instead of looking at "new" versus the last analysis, compare to the last version, the last month (over 30 days), or the last year (over 365 days). Then you'll always be able to tell what's "new" versus the accepted baseline.
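(In 5.x that baseline is the differential period under the server's general settings, sonar.timemachine.period1, if I remember the property name right.) To see why the moving baseline lets the second run through while a fixed baseline keeps failing, here's a toy sketch in plain Python, no SonarQube API involved, with made-up analyses and issue keys:

```python
# Toy illustration of how the choice of baseline changes the gate result.
# Hypothetical analyses: analysis 1 introduces one critical issue,
# analysis 2 re-runs on the exact same code.
analyses = [
    {"id": 1, "critical_issues": ["ISSUE-42"]},  # issue introduced here
    {"id": 2, "critical_issues": ["ISSUE-42"]},  # same code, re-analysed
]

baseline_issues = set()  # issues present at the last version / start of the window

previous = set(baseline_issues)
for analysis in analyses:
    current = set(analysis["critical_issues"])

    # Gate based on "new since the previous analysis": only the delta counts.
    new_vs_last_analysis = current - previous
    # Gate based on "new since last version / 30 days": delta against a fixed baseline.
    new_vs_baseline = current - baseline_issues

    print(f"analysis {analysis['id']}: "
          f"vs last analysis -> {'FAIL' if new_vs_last_analysis else 'PASS'}, "
          f"vs baseline -> {'FAIL' if new_vs_baseline else 'PASS'}")

    previous = current  # the "previous analysis" baseline moves on every run
```

The second line of output is your scenario: versus the last analysis the gate passes, but versus the fixed baseline it keeps failing until the issue is actually resolved.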