Many organisations understand that end-user satisfaction depends largely on the speed and stability of an application. That is why a performance test is performed before an application – or an update – goes live. In practice, we often see that the application passes the test successfully, but still goes down as soon as the number of users increases in a production setting. How strange, because the performance test was successful, right?
A performance test is no guarantee of success
With a proper performance test, it is experimentally determined whether the combination of application, configuration, and environment will work properly in a production environment. The test needs to demonstrate that the input of the various disciplines involved in developing the application together forms an appropriate solution. The design takes into account specific risks and extreme but realistic situations. It is therefore important to also cover these risks in the design of the performance test. If you don’t, you run the risk of the application going down in production, despite a successful performance test.
What test for what risk?
Not all risks apply to all applications. An application that is used by only a few people in a company has to be fast, but will not be overloaded. A simple load test will be sufficient in this case. For a national website that is used in emergency situations, very different risks apply. High numbers of users must be served concurrently, and scalability will probably be an important feature for this application. In short, different applications face different risks and requirements. The type of performance test should fit the risks that need to be covered. A request for ‘a performance test’ should therefore always be answered with a counter-question: what risks does this application face? This way a tailored performance test can be performed.
Some of the risks we encounter in practice:
- Speed of the software
- Robustness and stability
- Scalability and fallback
- Resistance to overload
- Memory leakage
- Session management and concurrency
A few examples
The question: robustness and stability
The application has to be available 99% of the time. The application and its architecture should ensure stable operation, even during maintenance. It must be possible to switch application or database nodes on and off without great impact on users.
The answer: failover testing
To test whether the application meets these requirements, response times are measured and metrics that monitor the connections between components are put in place. Then, during a “regular” load test, components are switched on and off. Analysis of the test results and metrics shows whether the combination of application, infrastructure, and configuration behaves as expected.
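The mechanics of such a failover test can be sketched in a few lines of Python. The sketch below is purely illustrative: a simulated request function stands in for real HTTP traffic, a node is “switched off” partway through a steady load run, and median response times are compared per phase. All names and numbers are assumptions for the example, not part of any real test tool.

```python
import random
import statistics

def fake_request(node_down: bool) -> float:
    """Simulated request latency in seconds. In a real failover test this
    would be an actual HTTP call; here, latency rises when a node is down."""
    base = 0.25 if node_down else 0.08
    return base + random.uniform(0, 0.02)

def run_failover_test(total_requests: int = 200,
                      outage_start: int = 80,
                      outage_end: int = 140) -> dict:
    """Drive steady load, 'switch off' a node mid-test, record latencies."""
    latencies = {"normal": [], "during_outage": []}
    for i in range(total_requests):
        node_down = outage_start <= i < outage_end
        phase = "during_outage" if node_down else "normal"
        latencies[phase].append(fake_request(node_down))
    # Median per phase shows the impact of losing a node.
    return {phase: statistics.median(vals) for phase, vals in latencies.items()}

medians = run_failover_test()
```

In a real test, the interesting question is not just that latency rises during the outage, but whether it stays within the agreed limits and recovers afterwards.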
The question: speed
An application for which speed is important is exposed to a load pattern that represents the peak load level expected in the production environment. The application must be able to handle this traffic with acceptable response times for a prolonged period of time.
The answer: load test and endurance test
During the test, response times per user action are measured and compared to the speed requirements imposed on the application. In addition, an endurance test is performed to determine whether the application remains stable during a prolonged load. Should response times not meet the requirements, or should unexpected problems occur, a bottleneck analysis is performed using diagnostic tools. This analysis will show what measures should be taken to speed up the application or to solve the problems.
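As a concrete illustration of comparing measured response times against a requirement, here is a small Python sketch using the nearest-rank 95th percentile. The SLA threshold, action names, and timing samples are all hypothetical; in a real load test, the samples would come from the test tool’s measurements.

```python
import math

# Hypothetical requirement: 95% of responses within 800 ms.
SLA_P95_MS = 800

def p95(samples_ms):
    """95th percentile of response times (nearest-rank method, 1-based)."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# Illustrative measurements per user action (ms), not real data.
measured = {
    "login":  [310, 290, 450, 320, 700, 330, 360, 310, 500, 340],
    "search": [620, 850, 640, 700, 910, 660, 680, 650, 720, 690],
}

# One pass/fail verdict per user action, as a load test report would give.
verdicts = {action: p95(times) <= SLA_P95_MS
            for action, times in measured.items()}
```

Here "search" would fail the hypothetical SLA, which is exactly the kind of result that triggers the bottleneck analysis described above.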
The question: concurrency
Some websites are used only for calamities, such as fire or flooding. We often see these alarm websites go down under high numbers of users. The requirement therefore is that the application can handle many concurrent users.
The answer: stress test
For this kind of application a special type of stress test is performed, with simple click paths but extreme numbers of relatively short visits. At the same time, response times and memory consumption on the web server are measured. In this way, not only the speed of the application is determined, but also the capability of the application and hardware to handle and release large numbers of user sessions.
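A bare-bones version of this idea can be sketched with threads: fire many short sessions at once and verify that they all complete. This is an assumption-laden sketch (`time.sleep` stands in for real page requests), not a substitute for a proper stress-test tool, but it shows the shape of the test.

```python
import threading
import time

def short_session(results: dict, lock: threading.Lock) -> None:
    """One brief visit: open a session, load a page, leave.
    time.sleep simulates the page request of a real visit."""
    time.sleep(0.01)
    with lock:
        results["completed"] += 1

def stress_test(concurrent_users: int = 200) -> dict:
    """Start many short sessions concurrently and count completions."""
    results = {"completed": 0}
    lock = threading.Lock()
    threads = [threading.Thread(target=short_session, args=(results, lock))
               for _ in range(concurrent_users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    results["duration_s"] = time.perf_counter() - start
    return results

outcome = stress_test()
```

A real stress test would ramp the number of concurrent users up until the application or hardware shows distress, while watching response times and memory on the server side.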
As far as I am concerned, there is no such thing as “a performance test”. Performance tests are designed to generate the load patterns and anomalies that can reasonably be expected in specific production situations. Production situations vary by type of application, use, or environment. For every single case you need to determine which test is applicable. For some applications, multiple tests are even needed to adequately cover the risks. My advice: don’t let the performance test become just a checkmark on the release list. Get advice on the test that fits the use and objectives of your application!
During his career, Marcel has helped organisations such as the Dutch Railways, Prorail, ASR and various municipalities improve their IT services. Speed, capacity, scalability and stability of software are the keywords here. Marcel started his career as a developer in 1996, moved on to performance testing, load modelling and performance troubleshooting, and in recent years has focused on automating performance testing in agile development projects. Within the organisation, Marcel is known for his Dynatrace expertise.