15
votes

We already have a continuous integration process going where we build, run unit tests, do static code analysis, and generate documentation. However, we would like to expand this to include automatic performance testing. In this case, we are working on a .NET web application.

We have done some performance testing with JMeter (outside the CI process), but I don't know whether it is the best tool to include in a CI process. Is Selenium an option? WAPT Pro?

At which levels should we test performance? Should we have a set of "performance unit tests"? Should we run JMeter (or something similar) against a production-like environment and fail the build if any request takes > 1 second? Wouldn't something like this have too high a variance?

So, do you guys include automatic performance testing as part of your CI? What do you test, and which tools do you use? What has your experience been like?

6
Have you had a look at the JMeter Maven plugin? It's not .NET, but being Maven-based it plugs into CI servers like Jenkins quite well. github.com/Ronnie76er/jmeter-maven-plugin - Ardesco

6 Answers

10
votes

First off, JMeter is a good choice for inclusion in CI because it can be run from the command line and you can pass in variables when you do this. I would recommend it for this task.
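For example, here is a minimal sketch of driving JMeter from a CI step via Python; the plan file, results file, and property names are placeholders for your own:

```python
import subprocess

# Run a JMeter test plan in non-GUI mode from a CI step.
# "checkout.jmx", "results.jtl" and the property names are placeholders.
result = subprocess.run([
    "jmeter",
    "-n",                  # non-GUI mode, required for unattended runs
    "-t", "checkout.jmx",  # the test plan to execute
    "-l", "results.jtl",   # where to write the sample results
    "-Jthreads=10",        # -J sets a property the plan can read via
    "-Jrampup=30",         #   ${__P(threads)} / ${__P(rampup)}
])

# A non-zero exit code means JMeter itself failed to start or run;
# evaluating response times against thresholds is a separate step.
if result.returncode != 0:
    raise SystemExit("JMeter run failed")
```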

In general though, integrating perf. testing into CI is difficult. You've already listed many of the reasons why this is, so you're already halfway there because you understand the limitations. And that's the rub: it IS possible to have perf. tests in CI, but only to a limited extent.

I think that a good approach follows some of these principles:

You can't run full load (or soak or capacity) tests in CI; it's not practical. The results are subjective and need human interpretation, and the tests take time to run. But you can run a simpler, cut-down set of tests that measure response times for requests, and then evaluate those response times either:

  • Against an NFR or expected range - i.e. responses should take less than 1 second.
  • Against the previous results - i.e. they should not deviate by more than 10% from the last build (a minimal sketch of both checks follows this list).
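A minimal Python sketch of such a gate, assuming JMeter wrote CSV-format results to a file called results.jtl and that the 90th percentile is the number you gate on; the file names and thresholds are illustrative:

```python
import csv
import json
import os
import statistics

NFR_MS = 1000          # "should be less than 1 sec"
MAX_DEVIATION = 0.10   # "should not deviate more than 10% from the last build"

# Pull the elapsed times (in ms) out of a JMeter CSV results file.
with open("results.jtl", newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

p90 = statistics.quantiles(elapsed, n=10)[8]  # 90th percentile

# Check 1: against the absolute NFR.
if p90 > NFR_MS:
    raise SystemExit(f"FAIL: p90 of {p90:.0f} ms exceeds the {NFR_MS} ms NFR")

# Check 2: against the previous build, if a baseline exists.
if os.path.exists("baseline.json"):
    with open("baseline.json") as f:
        previous = json.load(f)["p90"]
    if p90 > previous * (1 + MAX_DEVIATION):
        raise SystemExit(
            f"FAIL: p90 of {p90:.0f} ms is more than 10% above "
            f"the previous build's {previous:.0f} ms"
        )

# Record this build's numbers as the baseline for the next run.
with open("baseline.json", "w") as f:
    json.dump({"p90": p90}, f)
print(f"PASS: p90 of {p90:.0f} ms")
```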

You can also run automated load / perf. tests - at full volume - outside of the build process: 'semi-CI'. So maybe you could automate a test to run overnight and then check the results in the morning?

Iterate. Just start doing it and getting results, and fine-tune the tests and how you interpret them over time. Keep it simple and focus on the areas that appear to be useful. Don't launch with a fanfare; keep it quiet until you have confidence in the process, and only then start failing builds and telling people about it - initially, you're likely to get lots of false negatives.

Instrument your results. Do this. A lot. CI is all about failing early, so if you take that as your end objective, then the best way to achieve it is to run tests early and often. The problem with that is you risk getting buried in data, so an effective method to crunch the data and present the relevant information helps considerably.
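One simple way to do that crunching, sketched here under the same assumption of a CSV-format results.jtl, is a per-label digest that reduces a build's thousands of samples to one line per request type:

```python
import csv
from collections import defaultdict
from statistics import median, quantiles

# Group elapsed times (ms) by request label.
by_label = defaultdict(list)
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        by_label[row["label"]].append(int(row["elapsed"]))

print(f"{'label':<30} {'count':>6} {'median':>8} {'p95':>8}")
for label, times in sorted(by_label.items()):
    # quantiles() needs at least two samples; fall back for singletons.
    p95 = quantiles(times, n=20)[18] if len(times) > 1 else times[0]
    print(f"{label:<30} {len(times):>6} {median(times):>8.0f} {p95:>8.0f}")
```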

You can't automate the whole process down to Red Flag Green Flag - but you should try to get as far down that path as possible.

Finally, there was a very good talk given by the lead Perf. tester at Google that covers this subject. It's a bit old now but the principles still stand. Plus, in a few weeks I'm going to a meetup where Channel4, a British media company, will be talking about how they approached this - maybe you can ask for some slides.

2
votes

> You can't run full load (or soak or capacity) tests in CI, it's not practical.

After the TISQA conference here in the States this week, I'm more inclined to say that we should confidently be automating more and more of the full, complex load testing as part of CI.

You might even consider having a separate CI instance running in the larger load testing lab, configured with more realistic infrastructure to support meaningful test results. The load testing process itself is not unlike a separate software development process (design, construct, deploy, execute, analyze, repeat). Most performance tools now support more elegant and robust integrations with CI solutions, including SOASTA, LoadRunner/PC, JMeter, Neotys, BlazeMeter, and Flood.io.

But here are three things to watch out for, similar to Oliver's comments:

  • There are a lot more nuances to performance results... not just a clear PASS or FAIL.
  • Don't forget script maintenance to keep tests in sync with app changes.
  • Synchronizing/scaling your load testing lab with production might also be automated.

If you wish, review some of the slides from my own TISQA presentation here. That might be a start on how to use CI + performance across the entire lifecycle. For example, why not have a CI instance that just "watches the configuration" as it changes in PROD and syncs those changes back to your load test environment?

1
votes

Neither JMeter nor Selenium is a CI tool. JMeter is a performance testing tool; Selenium is a tool for automated functional testing. So, to have performance testing integrated into the build process, you can use JMeter with any of the CI servers: Jenkins, Bamboo, etc.

AFAIK, there are two common solutions for using JMeter with Jenkins nowadays:

  1. Use Jenkins/Hudson with the JMeter plugin, which allows you to start a performance testing task after the build process finishes. In this case you need an appropriate number of load generators with JMeter configured on them.

  2. Another way is to use a JMeter testing cloud. Such a service provides a Jenkins plugin, which allows you to start a remote test after building the application. In this case you don't need to worry about configuring test servers.

P.S. While I work for BlazeMeter, I have tried to provide objective information. Hope it's helpful.

0
votes

In your question you asked: is Selenium an option?

If you are running from CI using either an internal grid of computers or the public cloud, then you should consider performance testing using Selenium WebDriver with a headless browser driver (a rough sketch follows the list of benefits below).

On a small Amazon VM (AMI) I get around 25 simulated virtual users using this approach. So, if your needs are on the order of 500 VUs, then I would investigate this, as the benefits include:

  • No more 'correlating' for URL rewriting etc., as the headless browser handles this automatically.
  • Your functional tests are repurposed as performance tests, so there is one tool to become an expert in and no rework, just re-purposing.
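A rough sketch of the idea, using the Python bindings and headless Chrome (any headless driver would do; the URL is a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # no display needed on a CI agent

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://your-app.example.com/")  # placeholder URL

    # Use the browser's Navigation Timing data for a realistic
    # page-load measurement (redirects, resources, rendering).
    load_ms = driver.execute_script(
        "var t = window.performance.timing;"
        "return t.loadEventEnd - t.navigationStart;"
    )
    print(f"Page load took {load_ms} ms")
finally:
    driver.quit()
```

Scaling this to hundreds of virtual users is then a matter of running many such processes in parallel across your grid or cloud instances.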

0
votes

You are not the only person looking at integrating performance testing with continuous integration. In general, non-functional testing used to be ignored, or left to the very end of the software delivery process, by a lot of organisations. I can see a positive change in attitude and more interest in automatic verification of non-functional requirements in CI/CD. This includes performance, accessibility and security, to different extents.

You've mentioned using Selenium for performance testing. I know some people (try to) do that, and I even saw how unsuccessful one such attempt was. I perfectly understand why people consider doing it, but I'd suggest staying away from it unless you have a very good reason not to. In general, it's harder to achieve than one may think. Selenium is a great tool to include in CI for GUI testing purposes, but its incorporation into performance testing is somewhat troublesome.

There is now a new tool which can help you integrate JMeter with the CI server of your choice, with some dedicated features for TeamCity and Jenkins:

https://github.com/automatictester/lightning

Feature requests are welcome.

0
votes

If performance is an essential part of your application and you care (or want to care) about it from the beginning and continuously, I'd aim to keep it as part of the integration and deployment pipeline - so YES.

There are many tools in the .NET world (and beyond) which can help you deliver this and set it up seamlessly in your favourite CI/CD software, e.g.:

  • k6.io (https://k6.io/ - previously known as LoadImpact) - allows you to perform performance checks outside of your environment and report the results back to the pipeline. Easy to configure and integrate, and great when it comes to more "clever" testing scenarios such as stress tests, load tests etc.
  • sitespeed.io (https://www.sitespeed.io/) - my 2nd favourite; a fun-to-use and easy-to-configure tool to track FE performance and tests (e.g. done with Selenium)
  • Locust (https://locust.io/) - for load testing on your own environments (a minimal locustfile sketch follows this list). There is a great repo with an ARM template to create your own "farm" of servers on Azure: https://github.com/ORBA/azure-locust
  • Dynatrace (https://www.dynatrace.com/) - a fully-fledged APM (Application Performance Monitoring/Management) tool with a ton of features and possibilities
  • Roslyn Analyzers (FxCopAnalyzers), StyleCopAnalyzers, EditorConfig and other ways to detect common (also performance-related!) issues in your code even before it's pushed to the build and deployment pipelines
  • Lighthouse reports - can also serve as a "pointer" to the most common issues and be included e.g. as PR comments or notifications during the process (there are many GitHub Actions or Azure DevOps packages doing this)
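To give a feel for Locust, here is a minimal locustfile sketch; the endpoints, weights, and payload are placeholders:

```python
from locust import HttpUser, between, task

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between tasks.
    wait_time = between(1, 5)

    @task(3)  # weight: browsing happens 3x as often as checkout
    def browse(self):
        self.client.get("/products")  # placeholder endpoint

    @task(1)
    def checkout(self):
        self.client.post("/cart/checkout", json={"sku": "demo-1"})
```

You would run it headless from a pipeline step with something like `locust -f locustfile.py --headless -u 100 -r 10 --host https://your-app.example.com` and gate the build on its statistics.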

So, yes - all of the above and many more can be added as steps in your pipelines. In our setup, we currently have a whole stage between the Staging and UAT environments where we do audits: static code analysis, performance tests (FE & BE), security scans and penetration tests (OWASP ZAP), and more. If the tests don't match our thresholds or expectations - we obviously don't want to introduce unwanted degradations - we stop there and go back to refactor and fix the issues before reaching UAT & Production. Hope it'll help you and maybe someone else.


I've also gathered some of my findings in my recent talk (slides below), which has turned into a series of blog posts on this topic; the first is already published:

  1. Slide deck from my talk on "Modern Web Performance Testing": https://slides.com/zajkowskimarcin/modern-web-performance-testing/
  2. First blog from the series on the same topic: https://wearecogworks.com/blog/the-importance-of-modern-web-performance-testing-part-1