The way we think about application architecture has changed over the years.
Traditionally, we thought of an application as a monolith, with everything living in a single code base and running as a single entity in a standalone container. Testers would input values into a front end, usually a graphical user interface (GUI) or command line interface (CLI), and then observe the change on the other side of the application. This was truly black-box testing, with testers only having access to inputs and outputs. What happened inside the application at run time was a mystery even to developers, and testers were primed to look only at the outcomes and verify them against the test oracles. If we wanted to see how an application was interacting with the underlying system, we relied on Task Manager or some other monitoring tool that looked at the overall process to give us the memory or CPU usage.
Then, in 2005, Peter Rodgers developed the idea of breaking web applications up into smaller parts by taking the services out of the monolith and spreading them around. While working at HP Labs, he had been researching the concept, trying to find a way to make large systems more resilient to change.
Rodgers found that hosting the services separately made systems more reliable and redundant. This approach also allowed the services to be developed by independent teams, using whatever tech stack they wanted, as long as one service spoke a language another service could understand. Deploying these services across a distributed network allowed developers to dynamically increase resources when needed and to monitor how individual services reacted to increased demand or recovered after a failure.
Microservices adoption
By 2012, the term microservices had been coined, and from then on, the way we saw applications changed; testers were no longer living in a world of simple input and output testing. While some testers heard the term microservices and started to worry about how it would affect their testing, others embraced it and saw a great opportunity to look at what was under the hood of an application.
The idea of microservices gave us a new way to understand what could be tested and how it could be done. No longer limited to simple input and output end-to-end testing, testers now had internal access to the application: multiple endpoints to test, the health and resource usage of individual services, the communication between containers, and all the data contained within.
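As a small illustration of that access, here is a minimal sketch in Python of a tester polling a service's health endpoint directly, rather than inferring its state from the front end. The URL and response fields are assumptions made up for the example, not part of any particular framework.

```python
import requests

# Hypothetical base URL for one service in the system under test.
ORDER_SERVICE_URL = "http://localhost:8081"


def check_service_health(base_url: str) -> None:
    """Poll the service's health endpoint and report its status."""
    response = requests.get(f"{base_url}/health", timeout=5)
    response.raise_for_status()

    health = response.json()  # e.g. {"status": "UP", "db": "UP"}
    print(f"{base_url} reports status: {health.get('status')}")


if __name__ == "__main__":
    check_service_health(ORDER_SERVICE_URL)
```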
In the ever-expanding world of microservices, the number of endpoints we can test has grown from a single API in a monolithic application to hundreds in a complex system like Amazon or Netflix. Each one of these paths needs to be tested and verified against a well-defined, agreed-upon contract.
The contract, a document that describes the way a microservice communicates with other microservices, can be simple to understand and easy to test. But when you have hundreds of these microservices communicating at once, a testing team can quickly feel overwhelmed. Not only does the number of API connection points continue to grow as new services are brought online, but existing ones are also updated to communicate new information.
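To make the idea concrete, here is a minimal sketch of what verifying one endpoint against its contract might look like in Python with pytest. The endpoint, field names, and types are hypothetical stand-ins for whatever the producing and consuming teams have actually agreed on.

```python
import requests

# Hypothetical contract: the order service agrees to return these
# fields, with these types, for GET /orders/{id}.
ORDER_CONTRACT = {
    "id": str,
    "customer_id": str,
    "total": float,
    "status": str,
}


def test_order_endpoint_honours_contract():
    response = requests.get("http://localhost:8081/orders/123", timeout=5)
    assert response.status_code == 200

    body = response.json()
    for field, expected_type in ORDER_CONTRACT.items():
        assert field in body, f"missing field: {field}"
        assert isinstance(body[field], expected_type), (
            f"{field} should be a {expected_type.__name__}"
        )
```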
Testing from end to end
The upside to all these small services running across a complex system is that updates are no longer system-wide but isolated to a single service or, at worst, a cluster of closely associated services. Testing from end to end is no longer such a considerable endeavor; instead, we can target testing efforts at the specific areas we know have changed.
Testing teams no longer have to run massive test plans to verify that a single, small change in the code hasn't disrupted the functionality of the overall system. Now, they can make the test plans smaller, each geared towards a single service and covering the entirety of that service's contract. Using tools and tricks such as mocks (smart test objects) and stubs (inert test objects) to interact with the service under test, you can quickly and efficiently verify whether the service is fulfilling its contract.
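As a rough sketch of what that can look like, the test below uses Python's unittest.mock to stand in for a downstream dependency while exercising a service in isolation. The OrderService and its pricing client are invented for the example; the point is that the real pricing microservice never has to be running for this test to pass.

```python
from unittest.mock import Mock


# Invented service under test: it depends on a pricing client that,
# in production, would call another microservice over HTTP.
class OrderService:
    def __init__(self, pricing_client):
        self.pricing_client = pricing_client

    def total_for(self, items):
        prices = [self.pricing_client.price_of(item) for item in items]
        return round(sum(prices), 2)


def test_total_uses_prices_from_pricing_service():
    # Stub behaviour: return canned prices instead of real ones.
    pricing_stub = Mock()
    pricing_stub.price_of.side_effect = [9.99, 5.01]

    service = OrderService(pricing_client=pricing_stub)

    assert service.total_for(["book", "pen"]) == 15.0
    # Mock-style check: verify how the dependency was called.
    assert pricing_stub.price_of.call_count == 2
```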
With shorter software development cycles, the speed at which new services can be delivered increases, making manual testing difficult in some cases. The pace of new services coming from several development teams at once can easily overwhelm a testing team, especially since development teams no longer need to rely on each other to integrate their services.
Microservices and automation
In this situation, automation will help the testing team greatly. Since contracts should be detailed and easy to understand, we can use automation to check that our expected results haven't changed since the last iteration. Any change to a contract would mean that only a small number of automated test cases have to be updated. That gives testers more time to delve into other aspects of testing, such as creating new automation, reviewing new contracts, preparing for delivery, or even some good old-fashioned exploratory testing.
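One sketch of what such a regression check could look like: the agreed contract lives in version control as a simple JSON file, and an automated test compares the live response against it. The file path, URL, and field names here are assumptions made for the example.

```python
import json

import requests

# Hypothetical locations; in practice the expected contract would be the
# agreed document checked into version control alongside the tests.
EXPECTED_CONTRACT_FILE = "contracts/order_service.json"
ORDERS_URL = "http://localhost:8081/orders/123"


def test_response_shape_matches_last_agreed_contract():
    with open(EXPECTED_CONTRACT_FILE) as f:
        expected_fields = set(json.load(f)["fields"])

    body = requests.get(ORDERS_URL, timeout=5).json()

    # If a team changes what the service communicates, this check flags
    # it, and only the affected contract file and tests need updating.
    assert set(body.keys()) == expected_fields
```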
There is no denying that the architecture of applications has changed. With that change comes new challenges and new opportunities. As testers, we need to look at how we can use these changes to further improve the quality of systems we’re testing and not get hung up on how the changes might impede us. The age of the microservice is here.
If you are thinking about how a microservice testing solution could bring value to your team, reach out! We’d love to work with you to help you build a lasting and valuable solution.