Containers are a specialized form of virtualization, but their increasing and varied usage makes them a topic in their own right. Container technology has existed in some form since the early 1980s, but with the spreading adoption of DevOps methodologies, its popularity has taken off. Here we give an overview and look at some of the ways people are using containers to make their work easier and cleaner.

Containers are a way of virtualizing an operating system such that an application, and anything it depends on, can be packaged in a compact, fast-to-deploy form. A container shares the host's operating system kernel and bundles only the userspace components the application needs, so you do not need a full instance of an operating system alongside your application. This makes containers very lightweight.
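A quick way to see this kernel sharing in practice, assuming Docker is installed, is to run a minimal image and ask it for its kernel version. The container reports the host's kernel release, because it has no kernel of its own:

    # Run a throwaway Alpine Linux container and print the kernel release.
    # The output matches the host's own `uname -r`, showing that the
    # container shares the host kernel rather than booting its own.
    docker run --rm alpine uname -r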

Before containerization became popular, the standard was virtual machines (VMs). A VM is a full instance of an operating system and can be configured as desired. However, VMs are large and resource intensive: running a set of them on a single host machine carries a lot of overhead and can slow the system down considerably. Containers, on the other hand, require far fewer resources and start quickly, because there is no operating system to boot, making it easy to deploy multiple containers on a single machine.

A number of companies provide container platforms, including Microsoft, Google, and Amazon, but the most popular currently is Docker. Containers have been around for some time, but in the past they were difficult to use. Docker changed that by simplifying how containers are built and run, and developers adopted them quickly as a result; Docker being open source also helped its popularity. A container image, which packages an application and its dependencies, is specified in a Dockerfile; a running instance of an image is a container. Once an image has been built, it can be shared and run anywhere. More complex, multi-container applications can be created using Docker Compose: a single file specifies a set of services for an application, and a single command starts all of those services.
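As a minimal sketch, assuming a hypothetical Python web service (the base image, file names, and port are illustrative, not prescriptive), a Dockerfile might look like this:

    # Dockerfile for a hypothetical Python web service.
    # Start from an official slim Python base image.
    FROM python:3.11-slim
    WORKDIR /app
    # Copy the dependency list first so installs are cached between builds.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    # Copy the application source into the image.
    COPY . .
    # Document the service's port and set the startup command.
    EXPOSE 8000
    CMD ["python", "app.py"]

Building and running it is then two commands: docker build -t myapp . followed by docker run -p 8000:8000 myapp. For the multi-container case, a Compose file along these lines (service and image names again hypothetical) declares the services together:

    # docker-compose.yml: a web service plus the database it depends on.
    services:
      web:
        build: .                   # build the image from the Dockerfile above
        ports:
          - "8000:8000"            # map host port 8000 to the container's 8000
        depends_on:
          - db                     # start the database before the web service
      db:
        image: postgres:16         # official PostgreSQL image
        environment:
          POSTGRES_PASSWORD: example   # demo-only credential

A single docker compose up then starts both services.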

Part of the appeal of containers is that they can be used to modularize software applications as microservices. Each development team can then work on its own microservice and run it in concert with all the other containerized microservices, allowing the team to focus on its own area. This also ties in nicely with the concepts of Behaviour-Driven Development, which emphasizes scenario-based testing of software and is thus commonly end-to-end in nature. A developer can easily do a quick end-to-end test of their code by running it against a set of containers for each of the other software components. This is a great way of building quality into an application.
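A sketch of what that quick end-to-end check might look like, assuming a Compose file like the one above and a hypothetical scenario-test script:

    # Start the other components' containers in the background.
    docker compose up -d
    # Run the scenario-based tests against the running services;
    # the test script name and URL here are hypothetical.
    python run_scenarios.py --base-url http://localhost:8000
    # Tear the containers down when the tests finish.
    docker compose down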

For testers, one example of container use is keeping the previous stable version of the software in one container, so that the version under development can be compared against it. Containers can even be used in bug reporting: when a bug is found, the system can be captured in its current state and configuration as an image and attached to the bug report. From there, the developer investigating the issue can spin up a container from that image and debug.
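A minimal sketch of both ideas, with hypothetical image names, tags, and ports: the stable release and the build under test can run side by side on different host ports, and docker commit plus docker save can capture a misbehaving container for a bug report.

    # Run the previous stable release and the candidate build side by side.
    docker run -d --name stable -p 8080:8000 myapp:1.4
    docker run -d --name candidate -p 8081:8000 myapp:1.5-rc
    # When a bug appears, snapshot the misbehaving container's filesystem
    # state as a new image, then export that image to a file that can be
    # attached to the bug report.
    docker commit candidate myapp:bug-1234
    docker save -o bug-1234.tar myapp:bug-1234

Note that docker commit captures the container's filesystem and configuration, not its running memory state, which is usually enough for a developer to reproduce the environment in which the bug occurred.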

An interesting aspect of containers is that people keep discovering new uses for them, some of them quite innovative. Containers are being used to distribute bioinformatics and proteomics software in a community-based effort called BioContainers. NASA's Land Information System, a high-performance framework for hydrology modeling and data assimilation, is being distributed using Docker, removing the configuration barriers that such a complex piece of software would otherwise present. Perhaps the best feature of containers, however, is how they simplify system configuration: it is now easy to run any platform, with its own configuration, on any infrastructure.

Containers are revolutionizing the way software is developed and tested. Testers should definitely familiarize themselves with the technology: it appears to be here to stay, and it can make our work more efficient and reproducible. There are multiple tutorials available, and the software is open source. If you have not already tried it, we recommend taking it out for a spin and learning what it can do.

Jim Peers is currently a QA Manager at OXD, a former Test Manager on the Integrated Renewal Program at the University of British Columbia, and a former QA Practitioner with PLATO Testing.

https://www.linkedin.com/in/jim-peers-70977a6/, @jrdpeers

Categories: Virtualization