Tuesday, April 19, 2016

Automated Performance Testing

Every company I've worked at previously has done performance testing using OpenSTA, JMeter, VS Test or LoadRunner.  We ran our tests at the end of the project and prayed that everything worked.

At my last company we started doing RUM-style performance testing, reusing our Selenium tests to generate HAR files.  This gave us some interesting results, and we ran these tests during development, but we still left the larger load tests until the end of the project.
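
As a rough illustration of that approach, here is a minimal sketch of routing a Selenium test through a proxy to capture a HAR file.  The post doesn't name the capture tool, so BrowserMob Proxy, the paths and the URL here are assumptions rather than our actual setup.

from browsermobproxy import Server
from selenium import webdriver
import json

# Start the capture proxy (path to the BrowserMob Proxy binary is an assumption)
server = Server("/path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

# Point the browser used by the existing Selenium tests at the proxy
options = webdriver.ChromeOptions()
options.add_argument("--proxy-server={}".format(proxy.proxy))
driver = webdriver.Chrome(options=options)

# Record a HAR around the test steps, then write it out for later analysis
proxy.new_har("homepage")
driver.get("https://example.com")  # stand-in for a real test step
with open("homepage.har", "w") as f:
    json.dump(proxy.har, f)

driver.quit()
server.stop()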

At my current job we face a period of over three months at the end of each project, called stabilization, during which we run performance testing among other activities.  One of our goals has been to shorten this period before each release, and to do that my team has been working on moving our performance testing into the development phase.

So we automated the deployment of each build (using Chef and Octopus Deploy) and the start of the performance tests.  This was scheduled to run each night, with results available in the morning.  We would then run additional performance tests depending on what the project needed.  This worked OK, but it required us to store the results from each performance test along with the environment data, and to have these checked after each run.
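
The nightly step itself was nothing fancy.  A sketch of it might look like the following, where the octo call is standard Octopus CLI usage but the project, environment, version and the run-perf-tests wrapper are made-up placeholders, not our actual configuration.

import datetime
import subprocess

# Label tonight's run so results and environment data can be tied back to it
label = datetime.date.today().strftime("nightly-%Y%m%d")

# Deploy the build to the performance environment via the Octopus CLI
# (project, environment and version here are placeholders)
subprocess.run(["octo", "deploy-release",
                "--project", "MyApp",
                "--deployTo", "PerfTest",
                "--version", "1.2.3"], check=True)

# Kick off the performance suite; run-perf-tests is a hypothetical wrapper script
subprocess.run(["run-perf-tests", "--label", label], check=True)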

I've talked before about Repauto, our reporting tool for automation.  We extended it to store performance test results.  Rather than keeping the results from each run of our performance testing tool, we installed Prometheus (an open source monitoring tool, comparable to Splunk or ELK).  Prometheus pulls metrics from the servers and stores them for us to query.
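
Because the metrics live in Prometheus rather than in tool-specific result files, anything can query them back out over its HTTP API.  A small sketch, where the server address and metric name are assumptions:

import time
import requests

PROMETHEUS = "http://prometheus.example.com:9090"  # assumed server address

end = time.time()
start = end - 3600  # the last hour of the run

resp = requests.get(
    PROMETHEUS + "/api/v1/query_range",
    params={
        "query": "avg(rate(http_request_duration_seconds_sum[5m]))",  # example metric
        "start": start,
        "end": end,
        "step": "30s",
    },
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["values"][:3])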

In Repauto each performance run has a summary with the most important stats of the run, including the build, the environments used and the test details.  We also have two tabs: one graphs the data stored in Prometheus using Grafana, and the other details all of the environment stats, which we pull out using Chef.  It looks a little like this.
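
For the environment tab, the stats can be pulled out of Chef with knife.  The sketch below assumes knife is configured on the box running the job; the node name and the attribute printed are illustrative only.

import json
import subprocess

def node_attributes(node_name):
    """Fetch a node's full attribute set via `knife node show`."""
    out = subprocess.run(
        ["knife", "node", "show", node_name, "--long", "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)

attrs = node_attributes("perf-web-01")         # hypothetical node name
print(attrs["automatic"]["platform_version"])  # e.g. the OS version on the box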


For each project we have created a set of alerts so that if performance degrades we can quickly look into the run and start assessing what has changed.  We can also compare runs, looking for any differences between the environments (patches applied, etc.), which we previously had no archive of.  So now we have nightly performance runs and constant, consistent monitoring.  We will see soon whether this helps shorten our stabilization cycle.
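
The checks behind those alerts boil down to comparing tonight's headline numbers with the previous run's.  A hedged sketch of that kind of comparison, where the metric name, window and threshold are all assumptions:

import time
import requests

PROMETHEUS = "http://prometheus.example.com:9090"  # assumed server address
THRESHOLD = 1.10  # flag anything more than 10% slower than the previous run

def avg_response_time(at_time):
    """Average of an assumed response-time gauge over the hour ending at at_time."""
    resp = requests.get(
        PROMETHEUS + "/api/v1/query",
        params={"query": "avg(avg_over_time(app_response_time_seconds[1h]))",
                "time": at_time},
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else None

now = time.time()
tonight = avg_response_time(now)
last_night = avg_response_time(now - 24 * 3600)

if tonight and last_night and tonight > last_night * THRESHOLD:
    print("Possible regression: {:.3f}s vs {:.3f}s last night".format(tonight, last_night))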
