Let’s say you’re in the kitchen and you want to do some cooking. You make tomato soup in a pot, pour it into a bowl, and eat it. Now you want some ice cream to top off the meal, but you’ve only got one bowl. You probably wouldn’t scoop the ice cream straight into the bowl that just held soup; you’d completely clean out the bowl first. The same principle applies to automated testing.

In this analogy, the bowl is our testing environment, and the soup and the ice cream are our test objects. We want to completely clean the old objects (the soup) out of our environment (the bowl). But you might be thinking: why do we want to remove our old objects? If we’re using a test environment that no end user will ever see, does it really matter if we leave behind a whole bunch of leftover data?


Why it’s vital to clean up your test environment

Last week I was writing a positive-path test intended to create ‘view’ objects in our system using every available value for creating these ‘views.’ In doing so, I ended up creating several thousand of these objects with each test execution. After running the test a few times, I noticed I was no longer able to access the part of our system’s UI surrounding that feature. When I looked into what was happening, I found that all the guilty objects I had created were slowing the system to a stop. The methods in our repo that I was using had never been set up to remove these ‘view’ objects. Writing and running a quick script to delete all the objects was simple, but we shouldn’t need to do that after every test run; that cleanup should be automated and included in our test execution.
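
For illustration, a one-off cleanup script like the one I ran might look something like the sketch below. Everything here is hypothetical: the base URL, the GET /views and DELETE /views/{id} endpoints, and the "id" field all stand in for whatever your system actually exposes.

```python
import requests

# Hypothetical base URL for the test environment's REST API.
BASE_URL = "https://test-env.example.com/api"

def delete_leftover_views():
    # Assumes GET /views returns a JSON list of views, each with an "id" field,
    # and DELETE /views/{id} removes one view; adjust to match your real API.
    views = requests.get(f"{BASE_URL}/views").json()
    for view in views:
        requests.delete(f"{BASE_URL}/views/{view['id']}")
    print(f"Deleted {len(views)} leftover views")

if __name__ == "__main__":
    delete_leftover_views()
```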

When we automate the process of creating objects, we need to keep track of everything we create. Whether we save the whole object or just a unique identifier depends on the system under test (SUT), but we need something to link us back to all our newly created objects. The method I was using saved the ID values of the objects, but it didn’t do anything with those values after they were saved. It’s important not only to save a reference to each object but also to delete those objects when we’re done. We want to keep our environment as pristine as possible; there should be no evidence that we were in the system at all once our tests have finished executing.
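
As a sketch of that pattern, here is roughly how it could look with pytest fixtures. The ApiClient class and its create_view/delete_view methods are stand-ins for whatever client your suite actually uses, not a real API:

```python
import pytest

class ApiClient:
    """Stand-in for whatever client your suite uses to talk to the SUT."""
    def create_view(self, name):
        ...  # e.g. POST /views, returning the new view's ID
    def delete_view(self, view_id):
        ...  # e.g. DELETE /views/{view_id}

@pytest.fixture(scope="session")
def api_client():
    return ApiClient()

@pytest.fixture
def created_views(api_client):
    """Collect the ID of every view a test creates, then delete them all."""
    ids = []
    yield ids
    # Teardown runs even when the test fails, so nothing is left behind.
    for view_id in ids:
        api_client.delete_view(view_id)

def test_create_view(api_client, created_views):
    view_id = api_client.create_view(name="smoke-test-view")
    created_views.append(view_id)
    # ... assertions about the new view go here ...
```

The key detail is that the deletion loop sits after the yield, so it runs whether the test passes or fails.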

Removing all our created objects matters, but knowing when to remove those objects is just as important. The right moment typically depends on the tests being executed and the data we need for successful test execution. Sometimes we create data for a single test scenario, sometimes for a series of scenarios, and sometimes for the entire test suite. This is highly dependent on the context of the test, but it’s a good thing to keep in mind when writing teardown methods.
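
In pytest, for instance, that decision maps onto fixture scopes. A minimal sketch, reusing the hypothetical api_client fixture from above:

```python
import pytest

# Created before and deleted after every individual test.
@pytest.fixture(scope="function")
def per_test_view(api_client):
    view_id = api_client.create_view(name="per-test-view")
    yield view_id
    api_client.delete_view(view_id)

# Created once and deleted only after the last test in the suite finishes.
@pytest.fixture(scope="session")
def suite_wide_view(api_client):
    view_id = api_client.create_view(name="suite-wide-view")
    yield view_id
    api_client.delete_view(view_id)
```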

 

The problem with leftover data

When it comes to tearing down a test, we’re not only concerned about slowing down the system; extraneous objects left over from previous test runs can cause a flurry of other issues. In many cases, duplicated objects can interfere with test results, and some features don’t allow duplicates at all, causing tests to fail unexpectedly. Filling our system with thousands of leftover objects also clutters our environment, which can drastically impact manual testers and muddy any troubleshooting needed for future tests. Imagine a drawer packed with random stuff: it becomes a struggle to find exactly what we’re looking for through all the mess. We want our drawer of objects to be as empty as possible, so we’re led directly to the objects we seek. Most importantly, we want our environment to be exactly the same each time a test runs, so we have as precise an idea as possible of what’s happening during test execution, whether it fails or succeeds. We want our tests to run in a consistent, controlled manner on every execution.
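
To make the duplicate-data problem concrete, here is a hypothetical test (reusing the fixtures sketched earlier) that can pass on its first run and then fail on every run after that if teardown is skipped:

```python
def test_create_monthly_report_view(api_client, created_views):
    # If a previous run left a "monthly-report" view behind and the system
    # rejects duplicate names, this create call fails before we ever reach
    # the assertion, even though the feature under test works fine.
    view_id = api_client.create_view(name="monthly-report")
    created_views.append(view_id)
    assert view_id is not None
```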

 

The importance of consistency in automated testing

With so many possible issues arising from leftover data, properly disposing of any objects we create can be vital to the efficacy of our tests. Equally important is resetting any settings that were adjusted during the test back to their original state. Consistency is central to automated testing: when a test suddenly fails, we want to know why, and starting every test from the same fresh state makes changes to the system far easier to spot. If our environment is constantly being filled with data, it becomes harder to determine the root cause of a failure, especially if the system is being overwhelmed and slowed down dramatically. The teardown phase is just as important as any other step of test execution, and though it can easily be forgotten, we must always remember to clean up after ourselves!
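
Restoring settings can follow the same yield-based pattern as deleting objects. The get_setting and set_setting calls below are hypothetical stand-ins for however your system exposes its configuration:

```python
import pytest

@pytest.fixture
def raised_view_limit(api_client):
    """Temporarily raise a system setting, then restore the original value."""
    original = api_client.get_setting("view_limit")  # hypothetical call
    api_client.set_setting("view_limit", 10_000)     # hypothetical call
    yield
    # Restore even if the test failed, so the next run starts from the same state.
    api_client.set_setting("view_limit", original)
```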

Interested in reading more about test automation? Check out our post on The Path to Resilient Test Automation and take a listen to this episode of the PLATO Panel Talks podcast that dives into QA testers’ experiences working in test automation.

 

If you are thinking about how a test automation solution could bring value to your team, reach out! We’d love to work with you to help you build a lasting and valuable solution.

Chris Wagner is a Software Tester with a focus on test automation. Chris works on a variety of quality assurance-related projects. He graduated with a Bachelor of Computer Science from UNB in 2019 and has worked on numerous test automation projects, automating API requests and verifying that the correct responses are returned. Outside of work, Chris is an avid competitive curler in New Brunswick, spending most of his free time competing and improving his skills.