At Ymor, we regularly speak to test managers who feel that the costs of setting up a performance test environment are too high, and who therefore skip performance testing altogether. As a result, we still encounter many applications with performance problems: event websites that go down under peak load, ticket websites that slow down during a rush, or office-automation applications that suddenly fail to reach the desired speed. So what stops IT departments from testing an application before it goes live? In this blog I will list the three reasons we hear most often.

 

Reason 1: “Performance testing in a test environment makes no sense”

In an ideal situation, a performance test environment is available with exactly the same capacity and architecture as the production environment. In practice, however, we rarely see a ‘production-like’ test environment. Test environments cost money and are therefore equipped as simply as possible, with the aim of making the application function properly.

So is there any use in performance testing during the test phase of application development? There certainly is! The speed of an application, for example, does not depend on the number of servers running side by side; response times can therefore be measured well on a single configuration. Even slow servers in a test environment have an advantage: if the application performs fast enough there, you can guarantee that it will be even faster on the faster hardware of the production environment. Furthermore, the limits discovered in a test environment provide useful knowledge for the design of the production environment, for example for the configuration of connection pooling and the way in which the capacity of an application can be scaled. It is therefore useful to perform performance tests even on an ‘inferior’ test environment. Our advice is to use a test environment anyway, regardless of whether it is ‘production-like’.
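To illustrate why a single, modest configuration is enough to measure response times, the sketch below runs a small concurrent load test and reports median and 95th-percentile timings. It is a hypothetical example (not code from this blog): `handle_request` is a stand-in for a call to the application under test, which in a real test would be an HTTP request.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for the application under test; a real load test
    # would issue an HTTP request here instead of sleeping.
    time.sleep(0.01)

def run_load_test(num_requests=50, concurrency=5):
    """Fire requests concurrently and record each response time."""
    def timed_call(_):
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = list(pool.map(timed_call, range(num_requests)))

    return {
        "median_s": statistics.median(timings),
        "p95_s": sorted(timings)[int(0.95 * len(timings)) - 1],
        "max_s": max(timings),
    }

results = run_load_test()
print(results)
```

The absolute numbers will be slower on inferior test hardware, but the shape of the results, such as a p95 that drifts far above the median as concurrency rises, is exactly the kind of signal a single test machine can already give you.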

 

Reason 2: “There is insufficient test data”

Test environments often contain no more than a handful of test cases. The fill level of the database is then not representative of the situation in production. Does performance testing in such an empty environment make sense? Yes, it does!

The most complex performance problems are caused by applications handling their data inefficiently. This category of problems can be detected and analyzed properly in a test environment with little test data. It is important to solve these problems as early as possible in the development process, before more functionality becomes dependent on them. Does that mean filling the database is not needed at all for performance testing? It still has value, but the kind of performance problems that are sensitive to data volumes are comparatively easy to detect and solve. We prefer to do this in the test phase, but monitoring and solving them in the production environment can also be a sensible choice.
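As a sketch of how data-handling inefficiency shows up even in a nearly empty database, the hypothetical example below (using Python's built-in sqlite3, not code from this blog) counts the queries issued by a per-row “N+1” access pattern versus a single JOIN. With only ten rows, the query count already betrays the pattern that would cripple the application in production.

```python
import sqlite3

# Deliberately tiny dataset, like a typical test database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
""")
conn.execute("INSERT INTO customers VALUES (1, 'acme')")
conn.executemany("INSERT INTO orders VALUES (?, 1)",
                 [(i,) for i in range(10)])

query_count = 0
def count_queries(_stmt):
    # Trace callback: fires once per executed SQL statement.
    global query_count
    query_count += 1
conn.set_trace_callback(count_queries)

# Inefficient pattern: one extra query per order (the "N+1" problem).
query_count = 0
orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()
for _id, customer_id in orders:
    conn.execute("SELECT name FROM customers WHERE id = ?", (customer_id,))
n_plus_one_queries = query_count

# Efficient pattern: a single JOIN, regardless of data volume.
query_count = 0
conn.execute("""SELECT o.id, c.name FROM orders o
                JOIN customers c ON c.id = o.customer_id""").fetchall()
join_queries = query_count

print(n_plus_one_queries, join_queries)
```

The inefficient variant issues eleven queries against one for the JOIN, and that ratio grows linearly with the data. This is why such problems can be caught in the test phase long before a production-sized dataset exists.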

Our advice: performance tests are preferably performed on a realistically filled database, but for finding the most serious performance problems, the presence of large numbers of database rows is not important. Therefore, always perform a performance test before going live.

 

Reason 3: “Performance tests take too much time”

This is never really a good argument. The preparation time needed for a good performance test can be considerable, but it is easy to control. It is mainly the (in)experience of the person who builds the performance tests, i.e. the learning curve, that increases preparation time. The coverage and complexity of the test can also cause long lead times. Finally, performance testing often takes place on top of all work already in progress, which means the test lacks focus and therefore takes longer than necessary. Our advice: have a performance test set up by an experienced performance tester and then hand it over to your own people. Think in advance about the risks that apply to your application(s) and take these into account when composing the performance tests. Let us advise you, and be flexible in the considerations needed to arrive at a good set of starting points for the performance tests.

 

What do you choose?

Big problems are small problems magnified. Small problems can be found by performance testing in a limited-scale test environment with very little data, and application behavior with large datasets can easily be simulated in a test environment as well. The choice, of course, is up to you: do you get 90% of the performance problems out of the application and leave the unexpected 10% for production, or do you transfer the full 100% of the risk to production? In the end, it’s a trade-off between interests, risks and costs.

Should you have any questions about the subject, or want to discuss your test situation, please do not hesitate to contact us!

 

Marcel Wigman

Performance Architect

During his career, Marcel has helped many organizations such as the NS, ProRail, ASR and various municipalities to improve their IT services. Speed, capacity, scalability and stability of the software are the key to success. Marcel started his career in 1996 as a developer, followed by activities such as performance testing, load testing, performance troubleshooting and in recent years automated performance testing in Agile development projects. Within Ymor, Marcel is known for his expertise in Dynatrace, troubleshooting and automated testing.
