On Monday, January 15, 2018, PLATO’s Director of Quality, Mike Hrycyk, brought together a group of PLATO testers to form an Automation Panel.

This Panel transcription is the second in a series of three. Part 2: Record and Playback and a Tester’s Path to Automation includes our testers’ views on the role of record and playback in the Automation world and the evolution of record and playback tools. Furthermore, since our Panel comes from an array of testing backgrounds, we talk about the paths that current and aspiring testers can take to become an Automation tester.

Before reading this transcription, be sure to read Part 1: What is the Role of Automation in the Testing World?, published on February 5, 2018, and check back next week to read the third, and final, part of this Automation Panel.


Mike Hrycyk,  VP, British Columbia: Currently the VP, British Columbia, Mike has been in the world of quality since he first did user acceptance testing 18+ years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today.

Nathaniel (Nat) Couture, VP of Professional Services: Nat has over 15 years of experience in leadership roles in the technology sector. He possesses a solid foundation of technical skills and capabilities in product development including embedded systems, software development and quality assurance.

John McLaughlin, Senior Manager: John is a Senior Automation Consultant with more than 12 years of experience in software testing and test Automation. Coming from a background in Computer Science, John has designed frameworks and automated testing strategies for a number of software projects while providing training on test Automation and its various toolsets.

Jim Peers, QA Practitioner: As a QA Practitioner, Jim comes with the background of more than 16 years of experience in software development and testing in multiple industry verticals. After working in the scientific research realm for a number of years, Jim moved into software development, holding roles as tester, test architect, developer, team lead, project manager, and product manager.

Benoit (Ben) Poudrier, Senior Automation Specialist: Ben is an Automation Specialist who has performed manual, functional, automated and performance testing, in addition to building regression testing suites for several clients. He has trained many people on Ranorex Automation and has designed and implemented various Automation frameworks while working at PQA.


Mike: Do you find you have particular attitudes towards the concept of record and playback? Does it have a place in this world?

John: It depends on the lifetime of what it is you’re recording. Is it the simple case of, “I’m going to record it today and throw it away tomorrow”? In those types of scenarios, I see record and playback being useful because then you can work in tandem with the manual team and quickly put something together for them for the day. But, if it’s the long game, and we’re talking something that’s going to run over, and over, and over, over time, I typically wouldn’t rely on record and playback very heavily.

Mike: There are certain tools these days, like Coded UI, where the main focus tends to be record first, modify the script after. Have you seen an evolution? Is it better these days? Is that a path to excellence, or is it kind of the same answer?

John: I think it’s definitely gotten better. We’ll use Ranorex as an example. I used QTP for about seven years and then switched over to Ranorex, and the Ranorex repository alone was unlike a lot of repositories I’ve seen from other tools: it was available right from the start. With a tool like Ranorex, you can do a recording, have a fair bet that it’s going to play back, and it’s relatively easy to modify and update. A lot of the time, you just have to update a little path here or there to tweak it and have it sustain itself over time. I suppose, back in the day when I first started record and playback, the repository wasn’t as good as it is now. It’s likely that the effectiveness of record and playback is something that grows and gets better over time. I still think, though, that you’re never going to be able to record something and then just rely on it to work. A lot of those recordings are static recordings of static data that’s there at the time that you record. The likelihood that that data is the same over time is very small. It’s going to be a constant cycle of maintaining those recorded scripts, if that’s the route you want to go.

Jim: There are some other, newer tools that do visual testing, in which they’re comparing screenshots. The thought behind this is that it performs hundreds of validations all at once. They’re looking quite powerful, but part of the problem with the record and playback tools is that they’re not fast. You can’t integrate them into a CI pipeline, where you need that fast feedback, because they can’t run at that speed. But I’m quite interested in seeing how things like Applitools Eyes come along, and how they will progress in record and playback style testing.
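
The visual-testing idea Jim mentions can be sketched very simply. This is not how Applitools Eyes actually works; it is a toy illustration, with nested lists standing in for screenshot pixel grids:

```python
# Toy illustration of visual comparison: find the "pixels" that differ
# between a baseline screenshot and a current one. Real tools compare
# rendered images with perceptual logic; here the grids are invented data.

def diff_pixels(img_a, img_b):
    """Return (x, y) coordinates where two same-sized pixel grids differ."""
    return [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(img_a, img_b))
        for x, (pa, pb) in enumerate(zip(row_a, row_b))
        if pa != pb
    ]

baseline = [[0, 0, 0], [0, 1, 0]]
current  = [[0, 0, 0], [0, 1, 1]]
changed = diff_pixels(baseline, current)  # one changed pixel at (2, 1)
```

A single comparison like this effectively validates every pixel at once, which is why these tools are described as doing hundreds of validations in one step.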

Nat: Coming from a technical background, and trying to look at it as objectively as I can, I think there’s a limitation to the tools that are designed for layman users. That doesn’t mean that there isn’t a use for that type of tooling. In the early days, I remember having complex forms in a piece of software that we were testing, and it would take us about 20 or 30 minutes to fill out the form. In order to speed it up, we used a tool called AutoIt. It was a very primitive tool, but we used it for a simplistic recording of the data elements we were putting into the form. The next time we came to that form, we could just click and put in data sets 1, 2, and 3, and it would just automatically fill out the form. It was a very rudimentary test tool; it was a form of Automation and it wasn’t very sophisticated, but it saved us a ton of time. So, I think there are some uses for that type of tool, but I think that if you want a sophisticated automated solution, most of the time you’re going to need to have a sophisticated set of development work done. That requires a more extensive skillset like a computer science degree, or a programming background of some kind. I don’t think we’ll ever get to a point where a technical background won’t be useful.
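
The data-set replay Nat describes can be sketched in a few lines. The field names and data sets below are hypothetical, and this is not AutoIt’s API; the point is just the idea of recording data once and replaying it on demand:

```python
# Sketch of data-driven form filling in the spirit of Nat's AutoIt example.
# A real record-and-playback tool would type each value into a UI control;
# this just builds the list of fill actions for a chosen data set.

DATA_SETS = {
    1: {"name": "Alice", "email": "alice@example.com", "plan": "basic"},
    2: {"name": "Bob", "email": "bob@example.com", "plan": "pro"},
}

def fill_form(data_set_id):
    """Return the (field, value) pairs a replay tool would enter."""
    values = DATA_SETS[data_set_id]
    return [(field, value) for field, value in values.items()]

actions = fill_form(1)  # the recorded data set, ready to replay
```

Even something this rudimentary turns a 20-minute manual form fill into a single click, which is exactly the value Nat is pointing at.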

Mike: That’s an awesome segue into my next question, because Nat just laid down the statement that you have to go to school to become an Automater. There are a million manual testers out there, and we stated that there will still be space for manual testers in the future. But I think it’s obvious that the balance is going to shift; the ratio of Automaters to manual testers is going to grow. I think that’s a truism, and so some of the manual testers out there have to decide whether or not Automation is for them. Most of those manual testers are heavy into technology and they understand a lot of things, but they’re not coders. Are we just going to say, “No, you can’t make it yet”? That seems ridiculous, given that maybe around one third of coders are self-taught. Is the correct path to do some record and playback, which leads to scripts, and once you understand those you’ll see how they all fit together? Or is it better to go to school, learn Java, and then learn how to be an Automater? What are the paths here?

Jim: I don’t think you need to go to school for that. I think that you can learn a lot from the YouTube videos that are out there; from channels put up by somebody like EvilTester, who talks about how to do more technical kinds of testing. You don’t necessarily learn how to code, but if you can learn how to work with Postman or the development tools that the browsers have, you can do quite a bit. I don’t think you have to go back to school. I think you can just get in there, drill down, and learn a bit more.

John: A lot of testers have a creative bone – a very creative mind. So, find a creative mind that can break test cases down into components, or objects with capabilities and attributes. You could teach that person the layers of an application, and then how we can connect to the layers of the application to manipulate it over and over again. It takes someone who can break something down into its parts and components and tiny little pieces; the coding stuff is kind of secondary to me. Like Jim said, there are a million resources online now that can teach you how to code this stuff. The biggest challenge is for folks to break things down into their components, learn how they interact with each other, and make them interact with each other in a reliable way. That’s the biggest challenge that I see for someone that doesn’t have programming knowledge, a CS degree, or any type of technical background.

Mike: I’m going to give your response extra weight, John, because you are currently teaching classes of junior testers how to automate. As a follow-up question, I would like to know whether you are doing what you just said with our juniors, or are we jumping in and just starting to teach them how to Automate?

John: We are now in Week 3, and one of the things I wanted to do with this class was to cover some object-oriented programming, and the concept of objects inheriting things from other objects. I wanted to get that out of the way early, without even talking about C#, Java, and so on, because to me, if I can get my team thinking about things like objects and treating them like objects, they’re going to understand the concepts of coding. Then, when we get to tools like QTP, Ranorex, or Selenium, they’ll be able to understand what it is they’re actually trying to do, and they’ll be able to put it together in a more friendly and reusable way.
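
The inheritance idea John is teaching can be shown in a handful of lines. This is an illustrative sketch, not material from his class; the page-object names here are invented, though the pattern is the one commonly used with tools like Selenium:

```python
# Objects inheriting things from other objects, shown as a tiny page-object
# hierarchy: LoginPage gets open() for free from BasePage and only adds
# what is specific to the login screen. All names here are invented.

class BasePage:
    def __init__(self, url):
        self.url = url

    def open(self):
        return f"navigating to {self.url}"

class LoginPage(BasePage):
    # Inherits __init__ and open() from BasePage; adds its own behaviour.
    def login(self, user):
        return f"logging in as {user}"

page = LoginPage("https://example.com/login")
```

Once a tester sees that `LoginPage` can reuse `open()` without rewriting it, the repository and object concepts in tools like Ranorex or Selenium become much less mysterious.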

Mike: Nat, since you started this line of questioning, I want to ask whether you really believe that manual testers don’t belong?

Nat: No, I don’t believe that at all actually. I just view that manual testing skillset as something very different, but still just as important – if not more important. A good tester understands what needs to be looked at in the software to determine whether it’s good or bad. This applies regardless of what level you’re testing at; either at the unit level, or acceptance, where you’re mimicking the business process. I think a good tester understands what needs to be verified, whereas an Automater comes at it with a different skillset. They look at everything with that one tool. Their hammer is the coding that they have in their toolbox. They look at it questioning which tool would be best. So, they’re not as focused on the actual test that they’re automating. That’s usually a deficiency in Automaters who strictly look at it as a coding problem. On the opposite side, you’ve got the testers who understand what to test, but often don’t have the coding background to make that happen in an automated way. I think both skills are needed. In some cases you have someone who can do both: they understand the coding side that is required to Automate the test, and they have enough knowledge of what should be tested within the software. More often than not, you don’t have that double knowledge in the same person, and I think both are needed in order to get the outcome that you want. It results in fewer defects and improved quality in the end product. I believe that self-taught is just as good as going to school, so I think people can learn to code if they desire, but I don’t believe that everyone has the aptitude to learn how to code. Even so, I think it’s okay.

Jim: One more point. My favourite hobbyhorse is Behaviour Driven Development (BDD). That has got the developers much more involved in writing Automation than they were before. They become much more interested in it because they write these tests that are a level above unit tests, more like integration tests; they’re part of proving a behaviour that you write down – BDD scenarios for example. In my experience, the people I have worked with on this really embrace the idea of Automation. In doing that, they also embrace the idea of testing. These are developers who were not necessarily interested in testing before, and now they think of quality as part of something that they’re building into their code. I think it’s a really great development when I see this happen.
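
A BDD scenario of the kind Jim describes is usually written in Given/When/Then form and bound to step code. The sketch below expresses that shape in plain Python; the shopping-cart example is invented, and real BDD tools such as Cucumber or behave bind these steps to text scenarios rather than calling them directly:

```python
# A Given / When / Then scenario sketched as plain Python step functions.
# In a real BDD tool these would be bound to a written scenario; the cart
# behaviour here is an invented example of "proving a behaviour you write down".

def given_an_empty_cart():
    return []

def when_item_is_added(cart, item):
    cart.append(item)
    return cart

def then_cart_contains(cart, item):
    return item in cart

cart = given_an_empty_cart()
cart = when_item_is_added(cart, "book")
assert then_cart_contains(cart, "book")
```

Because each step reads as a sentence about behaviour rather than about UI mechanics, developers writing these tests end up thinking about what the feature should do, which is the shift in mindset Jim is pointing to.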

Benoît Poudrier is an Automation Specialist at PLATO Testing. Ben has been immersed in programming and QA for over 10 years. He has worked in various roles throughout his career and his experience working with a broad range of clients has provided Ben with a wide array of skills. He has performed manual, functional, automated and performance testing, and he has built regression testing suites for several clients. In addition to training many people on Ranorex automation, he has designed and implemented various automation frameworks while working at PLATO. Ben has spent over a year learning about security testing on his own through various online courses and through extensive research on a wide range of security-oriented topics.


Jim Peers is currently a QA Manager at OXD, a former Test Manager at the Integrated Renewal Program at the University of British Columbia, and a QA Practitioner alumnus of PLATO Testing.

https://www.linkedin.com/in/jim-peers-70977a6/, @jrdpeers

John McLaughlin is a Senior Automation Consultant with more than 13 years of experience in software testing and test Automation. Coming from a background in Computer Science, John has designed frameworks and automated testing strategies for a number of software projects while providing training on test Automation and its various toolsets.

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 20+ years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the VP of Service Delivery, West for PLATO Testing, but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks and at a railroad. Intermittently, he blogs about quality at http://www.qaisdoes.com.

Twitter: @qaisdoes
LinkedIn: https://www.linkedin.com/in/mikehrycyk/

Nathaniel (Nat) Couture, BSc MSc., has over 15 years of experience in leadership roles in the technology sector. He possesses a solid foundation of technical skills and capabilities in product development including embedded systems, software development and quality assurance. With a decade of experience as CTO in two successful small ITC companies in New Brunswick, Canada, Nat has proven himself as a solid leader, capable of getting the job done when it counts by focusing on what’s important. He’s a very effective communicator who has mastered the art of simplicity. Nat has served as an expert witness on matters of software quality. He is always driving to adopt and create technologies that incrementally reduce manual labor and at the same time improve the quality of the end product.

LinkedIn: https://www.linkedin.com/in/natcouture/

Twitter: @natcouture

Categories: Test Automation