Automation is evolving—are you keeping up? In this episode of PLATO Panel Talks, host Mike Hrycyk is joined by Matt Heusser (Author & Lead Consultant, Excelon Development) and John McLaughlin (Test Automation Specialist, PLATO) to explore the future of next-generation automation testing tools. Together, they discuss everything from the role of AI and machine learning in test automation to how low-code and no-code automation is changing the game, and how to overcome common challenges in automation adoption. With real-world insights from industry experts, this episode is a must-listen for QA professionals, developers, and tech leaders looking to stay ahead in an era of rapid innovation.

Episode Transcript: 

Mike Hrycyk (00:00):

Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today we’re going to talk about the next generation of testing tools, whether that be automation, performance, or something else. It’s really about what’s out there and what people are starting to use, so that you have a viewpoint into where it’s at. Of course, for PLATO, this is a really important topic because one of the things our clients look to us to be is experts in all of these different areas and the different tools. So, it’s a very interesting topic for us. Now I’m going to turn this over to our panel of experts and let them introduce themselves. Over to you, Matt.

Matt Heusser (00:34):

Hello, my name’s Matt Heusser. I’m the lead consultant at Excelon Development. It’s my tiny little company in West Michigan. I started out as a programmer and project manager and QA lead before I went independent in 2011. Probably known best for my writing. Been on the board of directors for the Association for Software Testing. Last year, Michael Larsen and I published Software Testing Strategies: A Guide for the 2020s, which we think takes a different approach to software testing, which is more cognitive and intellectual. How do we figure out what we’re going to examine the software with? Once we know that, what do we actually know? What claims can we make? How do we visualize it so people could see it? How long does it take us to do what? How much variation should we have between runs so we increase our coverage? How do we measure coverage? We talk about all that. It’s not your classic, here’s how to create a master test plan and decompose your master test plan, and we don’t really think about testing in that way. That’s the two-minute introduction to Matt. Glad to be here.

Mike Hrycyk (01:43):

Thanks, Matt. Thanks for coming out. And with that description of your book – it was always on my TBR list, and it’s now higher. John, tell us about yourself, please.

John McLaughlin (01:53):

Hi, I’m John McLaughlin in New Brunswick, Canada. I’ve been in software testing coming up on 19 years real soon. Most, if not all, of that time has been in the test automation space or some form of using technology for testing purposes, whether it’s just a standard coding language or different ways of looking at manipulating data and so on. I’m interested to be in this discussion. I’ve known of Matt for a number of years now and am looking forward to hearing some of his thoughts and ideas as we move along here.

Mike Hrycyk (02:31):

It’s always good to be thought of as a celebrity, right, Matt?

Matt Heusser (02:34):

There’s a first time for everything!

Mike Hrycyk (02:37):

Alright, so let’s start the conversation with a bit of level setting. A lot of my questions are geared towards automation, but we’re not locked into that in any way. When we talk about the next generation of testing tools, I don’t necessarily mean bleeding edge. I don’t necessarily mean some guy built it in his basement and he’s just looking for his first beta tester. I’m comparing it to Selenium – we’ve all been around long enough to remember Selenium from when it was new and it was the great new thing, but for me, that’s now part of the legacy. It’s part of the mainstream, the traditional. It’s the tool that, if people are using automation, most of them are using. So, let’s push those tools to the back and not make the conversation about them – unless something about them is super new and just coming out. Let’s talk about other tools that we’re seeing on the rise – maybe they have a good base, but they’re just at the point where somewhere like IBM is going to think about starting to use them. So let’s talk about that next generation and tools that people can talk their bosses into bringing into their organization. Is that something that we can all agree on? I’ll start with you, Matt.

Matt Heusser (03:45):

Sounds like a fine scope for me. Sure.

Mike Hrycyk (03:47):

You’re not still thinking about Selenium as the next new thing?

Matt Heusser (03:50):

I think I saw Jason Huggins at Google’s Test Automation Conference – he was already employed at Google, and it was already the popular tool in 2007. So yeah, it’s a workhorse with a particular niche, but do you fit in that niche? Let’s talk about that.

Mike Hrycyk (04:07):

John, any disagreement with what I’ve said?

John McLaughlin (04:11):

Yeah, it all sounds good. I mean, the frequency with which I hear the word Selenium brought up in this space has, at least for the last little while, solidified its place as a mainstream tool. It continues to be one that I hear about as regularly and widely used, but there are others on the horizon that get some nods as well, and I’m sure we’ll talk about those as we go on.

Mike Hrycyk (04:37):

And there’s sort of a phenomenon that I think exists in humanity, but especially in technology – as soon as a lot of people are using something, they don’t want to use it anymore. It’s like, everyone uses that. That’s not interesting. You’re not new anymore. And I really feel that Selenium’s there, and there are other tools that have reached that point, but it’s like, yeah, everyone uses that. Let’s find something interesting. And so, that’s what we’re looking for. This conversation is about: let’s find something that’s interesting. A second piece of level setting that I think we should talk about is that we could spend this whole time talking about AI, because when we talk about next-gen, AI is the first thing everyone is thinking about. It can be part of the conversation, but I don’t want us to isolate on that. So, the first question I was going to ask is: are you seeing any new and interesting tools that actually aren’t talking about AI as being core and part of it?

Matt Heusser (05:28):

So, I guess it depends on your definition of interesting, and on your definition of new. What I’d say for niches: if you’re doing C# or Java, that’s already sort of late-majority kind of stuff, and Selenium fits in just fine. If you’re doing Python, I like Playwright. Playwright, maybe Puppeteer. If you are writing the front-end code yourself, maybe Cypress – Playwright’s kind of new, but none of those are really bleeding edge. Another thing I think is interesting: you’ve got to look at what your defects are. If your defects are that all the links are there, you can click ’em, everything works, but it looks wrong, it looks bad, you blow the CSS and everything is off to the side or something – then Applitools can do visual inspections. Those are the three or four that, if I was doing tooling and automation and wanted to do the next thing after Selenium, I’d be looking into. And that doesn’t really speak to commercial tools. Implicit in your question was kind of an open source-y feel, was what I took from it.
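
As a concrete illustration of the kind of browser-level check Matt is describing, here is a minimal sketch using Playwright’s Python API. The URL, selectors, and expected heading are hypothetical placeholders, not anything from the episode.

```python
# Minimal Playwright (Python) sketch of a browser-level check.
# The URL, selectors, and expected heading are hypothetical placeholders.
from playwright.sync_api import sync_playwright

def check_login_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/login")   # hypothetical app URL
        page.fill("#username", "demo-user")      # hypothetical field ids
        page.fill("#password", "demo-pass")
        page.click("button[type=submit]")
        # Playwright auto-waits for the element before reading its text.
        assert page.inner_text("h1") == "Dashboard"
        browser.close()

if __name__ == "__main__":
    check_login_flow()
```

The same flow could be written against Selenium or Cypress; the sketch is only meant to show how compact the Playwright version tends to be.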

Mike Hrycyk (06:43):

I think so. I mean, there’s no reason we have to avoid commercial tools. Tosca constantly tries to bill itself as the next new thing, and it’s a commercial tool, and it does some interesting things. The big challenge for a lot of people we talk to is Tosca’s pricing. But I like everything you just said. Most of those, I think, talk about AI as part of their solution now. And some of that’s marketing, and some of that is true. Applitools has been talking about AI as part of their visual testing for at least a few years, and I believe that what they have is, to some extent, at least machine learning. Are any of those tools avoiding that conversation?

Matt Heusser (07:22):

The marketing I try to stay out of. Right now, in the marketing-o-sphere, it’s AI everywhere, and I think that as testers we should be called to be skeptical of those claims. There’s a company that I like – oh, API testing. So Postman; I think API testing is going to become increasingly important. And SmartBear Software – they have an entire collection of tools that I think are actually valuable for testing for testing’s sake. And, of course, the marketing department is going to put some AI on top of it right now. But I think there’s a lot of value in all kinds of things like BrowserStack-type tools, load testing tools – and I have no commercial relationship with them at this point. I have in the past, but it’s been years. But: send every single possible mobile device to this website at this URL, come back and show me what it looks like, render a thumbnail, and I can scroll through and look at it. Do a visual comparison and see which of those are the most different, for me to look at to see if my responsive design is working correctly. Now, with responsive design, maybe that’s not the problem it used to be. They’ve got a whole host of tools that sort of stitch together that I like, and I think they have not gone over the top with “we’re going to invest all our R&D into AI right now.”

Mike Hrycyk (08:45):

That’s good to hear. John, same question to you, if you can still remember what I asked.

John McLaughlin (08:50):

I know the general premise is AI, and in full disclosure, I’m not a huge fan of AI – or, let’s say, the enthusiasm around AI; I think it’s misguided. AI can mean a whole lot of different things, but some folks interpret it as one very specific thing that can answer a whole lot of questions when really it cannot. I think in terms of testing a piece of software that’s meant to be used by real people, real-world users, there’s no replacement for real people’s thoughts and the brain power, I guess, that goes into thinking about how to exercise those things. Now, using AI as, I’ll say, a calculator – as a tool, not a literal calculator, but a tool to help brainstorm ideas for, maybe, permutations of test scenarios or different combinations of data that you can run through a test – using it as a tool is fine in that regard. But I think relying on it too heavily to be the main selling point of a tool is setting some unrealistic expectations for people that are going out and paying a price tag for these tools. Then they get them, we’ll say, home and actually have to use them for real.

Mike Hrycyk (10:09):

Matt, your answer reminded me of a little story. So, I’ve got a buddy – I started working with him, I don’t know, 25 years ago, something like that. A long time. He has, for that entire duration, been a Microsoft fanboy. He’s a developer and an architect, and he’s just loved every solution Microsoft ever came out with. He’s a consultant; he’s going out and pushing and selling and so on – which is not a judgment; it’s just setting up who he is. And he’s starting a brand new endeavour, working on building this framework that’s going to help private consultants do their thing. And it’s cool. He’s been talking to me and ideating around it – we’ve had lots of discussions. And he was talking about how he was going to do something, and every third word from him is Copilot and how amazing it is. And so, that’s interesting, and that’s who he’s always been. So, that’s kind of what I expected. And I mentioned Playwright to him, and he had never heard of Playwright, and it just reminded me – and it’s really pertinent to this conversation, I think – that we live in a space where we hear about the new stuff, and we hear about the new stuff a lot, to the point where we stop talking about it or thinking about it as new stuff. To us, it’s like, well, everyone knows Playwright because everyone talks about Playwright. And guess what? Not everyone does. To a lot of people, it’s a new tool.

And that’s why, when I set the level at the start, I said we don’t necessarily have to have this conversation be bleeding edge, because some of the things we’re going to talk about today are going to be new to people – like, there are people out there who’ve never heard of Cypress. But I’ve been hearing about Cypress for five years; possibly it’s older than that, I don’t know. That’s just when I started hearing about it, and lots of people talk about it. And that’s just sort of a level set for our conversation that I think is interesting to me. I was just like, what? You’ve never heard of it? And he thanked me. And I’m like, oh, I’m not usually the guy telling you about new things. So, I thought that was interesting. And as a tester, it’s always good to have your perceptions about what you’re thinking about shaken, because we always have to expand our brains and think about the way different users think. Either of you want to comment on that? Go ahead.

Matt Heusser (12:08):

I would just say that it’s interesting to see our little sub-communities. So, SAP has its own little sub-community with its own little conference. The Microsoft universe has its much larger sort of universe. And if we go back in time a little bit, they were talking about Microsoft Test. Well, if you didn’t use Visual Studio, it had no relevance for you at all. So, the people that do SAP testing – there are a couple of really nice keyword-driven frameworks they use, because their UI is standardized and everything is kind of the same. It’s these logical forms you update with a little bit of business logic to save back to a database, and they have standardized workflows, and they work really well to do component testing when you’re trying to do upgrades. Which is a thing they’re doing all the time: we’re going to upgrade to the newest version of 2.4 or whatever, or we’re going to migrate our stuff into the cloud, which has been their push for a while – get off your stuff completely and get on our stuff. Well then, your custom things, how do they work with the newest version? They have really nice keyword-driven frameworks that I really wouldn’t even know about if I hadn’t had a little bit of hooks in a lot of communities.

Mike Hrycyk (13:20):

And when you think about some of your more traditional toolsets like Oracle’s test tools and Tricentis and so on, you might not have heard about new stuff coming from them because it’s not part of what you’re doing – but really, what they’ve focused on is making sure that they have modules that help test specific things like Salesforce and SAP, and they’re dropping a lot of research dollars into making sure that those are robust and work, because the marketplace around those is big. And so, again, that comes to be sort of a niche. I mean, the recommendation to our listeners is: if you’re working in a space that is using a big standardized tool, most of the toolsets out there probably have something focusing on it – not the brand new little bleeding edge stuff, but a lot of the other stuff does. So look into that if you want.

Matt Heusser (14:06):

I think that’s fair. It’s fair to say that the more boring, old-school tool that you use, if it’s been around a long time – Tosca is a good example, Selenium is a good example – is going to have those hooks. If you’re using any sort of strange custom UI, they’re going to have the hooks, and you’re going to be able to test VR with Facebook Meta. The broader, larger, heavier toolsets are the ones more likely to have the extensions that you need.

Mike Hrycyk (14:37):

Alright, so let’s generalize a little bit. What are the high-level characteristics we’re seeing in new tools? If you’re researching a new tool, or if you’re selling your new tool, what are the things that you expect? The two examples I thought of: if you have a new tool, you have to make sure that it’ll integrate with your more popular DevOps integrations, your CI/CD pipelines; or no-code, which is a big term that’s out there. We can talk about those, or you can talk about new things. So, when you’re seeing new tools, what are the – let’s call ’em buzzwords – what are you seeing? We’ll start with you this time, John.

John McLaughlin (15:10):

Sure. Yeah, so you mentioned DevOps integration with other systems that projects use – whether it’s, say, CI/CD pipelines or Git repositories, any kind of version control repository – that’s been standard for years now. I think, since maybe COVID-19 times, when the population of remote workers boomed, making things flexible and easy for collaboration beyond a single person working on a solution is probably another big selling point. I know Tosca has come up a couple of times, and they have a whole structure around their solution to enable distributed teams, I guess, to contribute to automation projects. When it comes to something like Selenium or Playwright or any of the open source code-based tools, that’s almost ingrained in the nature of the way, we’ll say, software-coded styles of applications work. But with the commercial tools that sell themselves, you can almost bet that you’ll see some points about capabilities to collaborate across teams.

Mike Hrycyk (16:19):

Awesome. Matt?

Matt Heusser (16:21):

Yeah, and I’m going to again maybe be a little cutting edge on this, if I remember the question correctly. The Silicon Valley companies that we’ve been working with – and not everybody has their problems. If you don’t make software, if you make a truck and your job is to build enough software to enable it, you’re going to have different constraints on your business. But what I see – and I think John nailed it – is that we want to check our tests into version control right alongside the code, have them in the same programming language or a very similar programming language, and get them to run quickly within a tight CI/CD window, so it’s not “I leave for the day, you can run the tests.” So, do we have to federate the tests and only run a subset, or do we run them in parallel? All those sorts of questions. And I’ve seen more success with: we’re going to break the thing into components, and they’re all going to run their little subset of tests that can run in a very tight period of time. So, we push to version control; we run all the tests, and if the tests fail, the programmer who made the change is notified, and that programmer has to change it.

Now, it only works for some kinds of software, but if we can get that software running in a Docker instance for that developer, they can run all the tests before they even commit their code – or before they push it. Maybe they have a side branch, and they push that code into a mainline working branch, and that gets tested for the day before it goes into a master branch. I like one single branch. I like a no-branching strategy. But as the software gets more and more complex and you get more and more levels, some companies have found it helpful to move in something like that direction.

So, if you think about that, that’s a really tight cycle: write my code, run my tests myself, push it into version control, run the tests again, get the feedback, fix it. You still have a tester role. There still can be people writing these tools and these tests, but it’s a very tight feedback loop. So, the question is: how slow is our feedback loop – from the developer writing code, to a human testing and exploring it, to writing test automation, to finding a bug – and how long does it take to fix it? How long is that window? In some cases, it’s a week or two to actually get one story really tested, really actually working, and ready to merge in. And once we do that, then there’s going to be a bunch of problems with the merge; new people have written new stuff. So, what I see across all our client base is a tendency to compress that timeline. In the same way that, 10 years ago, we were getting testing closer to development and tightening that, I see it as tightening the code, compile, integrate, and test cycle. Did I answer your question? I just blathered for 10 minutes.
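
One common way to get the kind of federated, per-commit runs Matt describes is to tag a fast subset that fits the tight CI/CD window and defer the long scenarios to a later stage. This is only a sketch under the assumption of a pytest-based suite; the marker names, URL, and tests are illustrative, not from the episode.

```python
# Sketch: a fast "smoke" subset that fits a tight per-commit CI/CD window,
# with slower scenarios deferred to a nightly or pre-merge stage.
# Assumes a pytest-based suite; names and URL are illustrative only.
import pytest
import requests

@pytest.mark.smoke
def test_health_endpoint_is_up():
    # Cheap check suitable for the per-commit pipeline stage.
    response = requests.get("https://example.com/api/health", timeout=5)
    assert response.status_code == 200

@pytest.mark.slow
def test_full_checkout_flow():
    # Longer end-to-end scenario, run outside the tight window.
    ...

# The per-commit pipeline step would run only the fast subset:
#     pytest -m smoke
# while a nightly or pre-merge job runs everything:
#     pytest
```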

Mike Hrycyk (19:03):

Mostly. I did have a comment to come back with, and I think you said something that’s interesting to me. So, I’ve worked in places that are trying to release multiple times a day, right? Going to production often, and the tightness of the cycle is really super crucial for that. A lot of people have highlighted the notion that that’s where it’s core. But I think that fast feedback is positive in many more ways than that. And it’s really about context. It’s really about getting that developer the feedback on what they’ve done, how it relates to others, and what they’re doing, as fast as possible, so that they can fix it while it’s still in context. Because developers hate caring about that problem two days later, and they’re going to drag their heels, and then it’s going to be three days, and that’s going to impact other people. So, the tighter you can make that loop, the faster and more effective you are.

Matt Heusser (19:53):

Once you’re out to two days, they’re working on something else. And so, you send ’em your email, or walk over to their cubicle, or hail ’em on Slack, or whatever it is that you do, and then they’ve got to argue about how to reproduce it and argue about how to fix it, and then they’ll work on it when they get around to it, and then you’ll get a new build, but you’re working on something else. So, you work on it when you get around to it. And that’s how we get those really wide loops. What I’m seeing is that tightened feedback loops increase human performance. So, what we need is tools that can support that tightened feedback loop. And I think John nailed it: is it integrated with version control? Can it be edited by anyone on the technical team? Some record-and-playback tools that are licensed per seat are really for testers only, and that limits your effectiveness. And can it fit in a CI/CD window? Those three things, I think, tend to skyrocket performance. That’s not for everybody, but I’ve seen that as one method of improving maturity.

Mike Hrycyk (20:49):

Something else you made me think about is when you threw Docker in with the tools. The way we’ve done containerization is smart enough that the tools don’t have to be optimized for it. So, that was a thought that I had, right? They’re just going to work because of the way we’ve built containerization. So, I’m like, oh, that’s something we don’t have to talk about – even though I just did.

Matt Heusser (21:07):

Well, let me think about that. If your tests are written in a Windows-based tool, where are your tests running? You can set up Docker to be a web server somewhere in the cloud, but where are those tests running if you’re using a Windows tool? It gets a little tricky.

Mike Hrycyk (21:24):

Yeah, the other thing that I thought of coming out of that – and this is probably another characteristic – is the capability of reporting on results. But what I’m seeing with a lot of tools these days is that that kind of stuff is separating out. You integrate with TestNG and that does your result reporting, or you let Jenkins do your result reporting, but probably you’re still integrating with something else. And so the tools themselves aren’t spending a lot of time building out their reporting capabilities; they’re spending the time to say, yes, I can send my reports to another tool. And that just adds another thing to our list of potential tools that we could be watching.

Matt Heusser (21:57):

Yeah, absolutely. Yep.

Mike Hrycyk (21:58):

And so, that segues into my next question. Name three tools that you’re watching right now because they’re interesting. John, you tell me three you’re watching.

John McLaughlin (22:07):

The first one – number one – has already been mentioned a couple of times, and it’s Playwright. I’ve played with it a bit myself just to explore and experiment, and it is quite fast and efficient at what it does, and it has a lot of the flexibility that you would want in an automated test tool. The second one that I’ve been watching for a while now is WebdriverIO. It’s also a JavaScript-based framework that runs on Node.js. I think they’re up to version nine now, but if you watch the changelog in the Git repo, people are working on it constantly and advancing to the next useful step. So, the capabilities of that tool appear to be endless, or seem to be quite large. And then the third one, even though it has been around forever, is Selenium – in the sense of how people take new, or newish, coding frameworks and then apply them in the Selenium context. An example: my current project dealt a lot with the Java Spring Boot framework. Spring Boot has a whole lot of efficiencies built into it that keep those applications lean and clean and very concise. So, if you take that and translate it to the Selenium space, you get an automated Selenium testing solution that’s very clean, compact, organized, and scalable, just by using, say, another Java application framework but applying it to the testing space. That’s one example. And over the years, you have seen people apply different approaches to Selenium, and it’s always interesting to see what people come up with next.
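
John’s example is Java and Spring Boot, but the underlying idea – keep the Selenium plumbing behind a small, well-organized layer so the tests themselves stay lean – looks roughly like this in Python. This is only an analogous sketch; the URL, locators, and page names are hypothetical placeholders.

```python
# Rough Python analogue of the "organized Selenium" idea John describes:
# the Selenium details live in a small page object, so tests stay compact.
# URL, locators, and names are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")  # hypothetical URL
        return self

    def sign_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

def test_sign_in():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().sign_in("demo-user", "demo-pass")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```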

Mike Hrycyk (23:43):

I was not expecting Selenium to be on the list. So, good job, because you convinced me. Question – probably a gap in my own knowledge. Playwright, in my mind, is something that is focused generally on Microsoft-derived solutions. Is that not true? Is it good for wider testing?

John McLaughlin (23:56):

Web-based applications. So, it runs against anything in the browser. It can do API testing too. I’m not positive about mobile – it may or may not do mobile – but I think its bread and butter is probably web-based UI testing.

Matt Heusser (24:11):

I tend to agree. I think that Selenium is an open-source community, so there’s a whole bunch of little weird Selenium projects – WebDriver being the name of the browser driver – and it’s interesting to see what that group comes up with. In terms of Playwright, I’ve mostly heard of Playwright in Python, actually, but it does come from the Microsoft community, and it is an attempt for them to take over browser driving, and I think that’s fine. It uses sockets as the connection, which is faster, and if that really matters to you in hitting tight CI/CD windows, it’s interesting.

I like Postman. The only issue is Postman has a couple of idiosyncrasies. When you save, its native format is binary or something, so you have to save it down to text to get it into version control so that you can diff it [use a tool to compare versions for differences]. So, when you want to put it in version control, you have to export and import it every time, and it takes a little bit of time to load. Which is then a problem if two people are modifying the same file at the same time. You can solve that at the professional level – if you buy a subscription, suddenly those problems are more manageable or even go away. So, I think API testing is going to become increasingly important over the next two years. If I had to answer “what is your one bet, Matt” on whatever day we recorded this early in 2025, I would say it’s that API testing is going to become more relevant to testers, or those testers are going to become less relevant to the organizations they work for. And if you want a little tool that’s all kinds of powerful, Hexawise is similar to Tosca in that it uses common tropes to come up with your test ideas based on inputs. It can optimize for the most powerful, fewer ideas, or give me more ideas, and then you can put those options into a CSV file or something. You feed that into your automation, and then it can run through all the combinations, maybe overnight or something like that. And it can look for errors and optimizations that you might miss if you’re just recording a few scenarios. But all it’s going to do is generate the test ideas – here’s a bunch of test ideas – and you pay for it per seat per month. It’s pretty cheap. So, those are the things that I’m looking at and interested in. What do they have that’s new? What’s going on?
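
The combination-driven pattern Matt describes – a generator tool writes test combinations to a CSV, and the automation runs through every row, maybe overnight – might look something like this sketch. The file name, column names, and endpoint are hypothetical placeholders.

```python
# Sketch of the combination-driven pattern Matt describes: a generator tool
# writes combinations to a CSV, and the automation replays each row against
# the system. File name, columns, and endpoint are hypothetical.
import csv
import requests

def run_generated_combinations(path="combinations.csv"):
    failures = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Each row is one generated combination of inputs.
            response = requests.post(
                "https://example.com/api/quote",   # hypothetical endpoint
                json={
                    "region": row["region"],
                    "plan": row["plan"],
                    "quantity": int(row["quantity"]),
                },
                timeout=10,
            )
            if response.status_code != 200:
                failures.append((row, response.status_code))
    return failures

if __name__ == "__main__":
    for row, status in run_generated_combinations():
        print(f"FAIL {status}: {row}")
```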

Mike Hrycyk (26:28):

Cool. We’ve talked about APIs a few times here, and it’s worth talking about. What I haven’t seen is a ton of new tooling – I haven’t heard a lot of new stuff – although finding out that some of the tools you’ve been watching are doing API testing is interesting, John. But the interesting anecdote that I have from that, at least interesting to me, is that we run a testing meetup here in Vancouver called YVR Testing. I had a young man reach out to me before our last talk, which was just last Wednesday, and he said, I’ve written a new API tool, and I’m looking for beta testers. Can I introduce it at your next meetup and see if I can get people interested in doing the beta testing? And I thought that was really cool – he just didn’t love the solutions he had, so he went and wrote his own.

And so, he’s got this tool – QAPIR is what he’s calling it, Q-A-P-I-R – so you can keep your eyes open for that. And it’s a no-code API testing tool. He wants to get it so that it’s no code, so it’s accessible to more testers – and maybe potentially the business. But Matt, I think that may help the old guard that is still somehow resistant to having any interaction with automation or API testing – having solutions that will get them in. And I think it’s a gateway tool, right? As soon as you start doing it, you realize this stuff isn’t as terrifying as you think it is. It’s really something that makes sense; it fits with the same logic patterns you’ve been using for the last X number of years, and it’s a path to something.

Matt Heusser (27:48):

Well, we’ve had real problems with the things that weren’t designed for testability on the API side. People that are just doing standard REST APIs with standard authentication are fine. But when you get to the bigger authentication schemes – or to create an ID you do a bunch of stuff that isn’t documented anywhere, that’s processed through a bunch of JavaScript to a bunch of APIs – you have to reverse engineer that to come up with the test, to create an account, to do the work. That can be some heavy lifting. But straightforward API testing with standard auth is beautiful. You can test so many more combinations so much faster.
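
For the easy case Matt contrasts with the hard ones – a standard REST API with standard token auth – a check can be as small as this sketch. The endpoints, field names, and credentials are hypothetical placeholders.

```python
# Sketch of the easy case Matt describes: a standard REST API with standard
# token authentication. Endpoints, fields, and credentials are hypothetical.
import requests

BASE_URL = "https://example.com/api"

def get_token(username, password):
    response = requests.post(
        f"{BASE_URL}/auth/token",
        json={"username": username, "password": password},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def test_orders_require_auth():
    token = get_token("demo-user", "demo-pass")
    authed = requests.get(
        f"{BASE_URL}/orders",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    assert authed.status_code == 200
    # The same endpoint without credentials should be rejected.
    assert requests.get(f"{BASE_URL}/orders", timeout=10).status_code in (401, 403)
```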

Mike Hrycyk (28:22):

Alright, we’ve only got time for a couple more questions, so I’m going to focus. For me – and we sort of touched on this a little bit – the idea of one tool to do it all, UFT or whatever, I think, is on its way out. Some people are still doing that, but I’m seeing more and more that it’s a mosaic of tools. It’s these three, four, five tools that are integrating together to get you the results that you want. Do you see this as a valuable approach? Start with you this time, Matt?

Matt Heusser (28:48):

Oh, absolutely. So, I’m an old Unix hacker. We used to talk about the Swiss Army chainsaw, which is an entire set of small tools that you combine to do something bigger. And what we’ll find is that as you start to look around, you see: well, I want to mock out the APIs so that I can test them, but I want to run them end to end, but I want to test the GUI, but I want to generate test data – and how are we going to model load? And what about accessibility? There’s a whole pile of risks. So, you want to sort of assemble your jigsaw puzzle. Tools can help with that.

Mike Hrycyk (29:22):

Let me extend the question a little bit. Is that mosaic one tester understanding each tool and everyone working together collaboratively? Or is it super testers that have enough intelligence and experience to own it all – like, Matt, I would have no problem with you being the owner of five different tools and making sure they’re all being used well. But as a standard, do you envision that being multiple people with expertise, or super testers that can do it all?

Matt Heusser (29:45):

Well, I’ll tell you a story that might be helpful. I was consulting in New York a few years ago, and we had this wonderful jive session after hours – they brought in beer, and it was kind of a testing special interest group, and people that weren’t necessarily testers were invited. It was Pete Whalen and I. And one of the workers there did a fantastic talk, but I realized he just kept talking about how on every project he worked on he used a different tool. They were mostly open source. Then he would move on to the next project, and that poor maintenance programmer that came after would have to try to figure it out. And I’d been on some of those. What happens is, eventually, you get someone doing maintenance who doesn’t know how to run this weird tool that isn’t really documented well, and they just fix it and test it manually, and they throw it all away. So, the idea of one super tester – they might leave a bunch of stuff behind them that is undocumented, unexplained, forgotten. And unless you’re doing marketing websites for Mountain Dew that are going to go up for six months and just fade away and never be used again, you’re writing software that’s going to run and process transactions all the time. You do want to have a reasonably small number of tools that do what they need to do and that are well understood by more than one person, I would think. It depends on your context, right? Maybe your organization is six people, but if you’re bigger than that, you probably want to spread it out.

John McLaughlin (31:13):

Yeah, I would agree with everything Matt has mentioned. Starting with the mosaic of tools: it would take a whole lot of effort – more effort than it’s worth – to have one tool that does everything. But if you have a tool that does something very well, plus another tool that does something very well, plus another tool that does something very well, the combination of the three is much more beneficial than the hacking and maintenance that has to go along with developing and maintaining a one-solution answer to a problem.

As to the one person doing, say, the orchestrating of things, I think it does go back to the maintenance aspect. In the modern world of projects, with how fast they move, not very many people have time to organize their work in a way that’s easy and understandable by those that come after them. So, if you start putting layers of tools on top of that, it becomes a spider web of things for somebody to unwind later when, say, that work has to transition to somebody else. I think there is probably a space where it would be advantageous to have an architect-type person that can see the pieces that would be helpful to pull together, but have individuals within those spaces with the abilities and the knowledge to pull it all together. One person could know all of these different tools and technologies that you’d want to orchestrate together, but there are most likely gaps – things they don’t know that somebody who focuses on a specific tool might be stronger in – and filling those would benefit the team and the project as a whole, I guess, to be a little more effective.

Mike Hrycyk (32:57):

I think the thing that we can all agree on is that a mosaic is good, but the tools that you’re putting into your mosaic have to be able to integrate into your build system together, so that you’re not looking at different ways of them working, and they also have to integrate reports so you can get consolidated reporting that gives you a good idea of what your quality is like at any point in time. Alright, last question. Put on your futuristic cap – two years, five years, whatever you think is a reasonable timeframe – what’s the most popular tool in use? And it doesn’t have to be a name; you can describe it, but maybe it’s one that’s out there now. We’ll start with you, Matt.

Matt Heusser (33:30):

Well, I would like to see API testing continue to grow, because I think for the front end, the thing to test it is probably a human. Testing the front end with tools is slow, and you either have to go with “something changed! Oh no!” – and what programmers do is create change, so that’s not good – or “I click this one button, and this number at the bottom is right, so everything’s good.” Doing anything with the GUI that’s in between, that has more nuance, takes a lot of work. So, if we can explore the GUI as humans and then have automation that runs through a whole bunch of combinations at the API level – having those two things, combining the human and the machine, I think, is very powerful. And this book [read: Software Testing Strategies: A Guide for the 2020s] is on the New York Times bestseller list. That is my prediction for two years. I think you said in a perfect world, right? So, one can dream.

Mike Hrycyk (34:21):

I’ll tell you what – what if our entire readership or listenership buys 10 copies?

Matt Heusser (34:28):

It’s a start! It’s a start.

Mike Hrycyk (34:29):

It’s a start. If they tell all their friends and they tell 10 friends and they tell 10 friends, then maybe we can get there. Alright, John, most popular tool in two, five, whatever years from now.

John McLaughlin (34:39):

So, two years – given the amount of positive things that I have heard about it, I would put my bet on, or back, Playwright. When I first heard about it, I was skeptical, I guess, that it would burst the Selenium bubble, but it hasn’t lost any steam since I first heard about it to now. And two years isn’t really that far down the line. So, I think they’re only going to get better from here on out. So, I would say a structure or a solution like the one Playwright has put together is probably going to be the most popular one, not too far down the line.

Mike Hrycyk (35:13):

I would like to thank our panel, Matt and John, for coming out today; we had a really good discussion about next-gen tools. I feel like we really just had the tip of the pie in this conversation. We could do this conversation every 12 months and it would be entirely different – and not just because of a different list of tools; there are lots of different little things to talk about. So, maybe we’ll think about doing something like that. I’m going to call out your book one more time, Matt. It’s Software Testing Strategies: A Guide for the 2020s by Matt Heusser and Michael Larsen, who are both great testers, and they have their own podcast. Matt, what’s your podcast name?

Matt Heusser (35:46):

The Testing Show sponsored by Qualitest.

Mike Hrycyk (35:49):

Awesome. So if you go out and look for that, guys, it’s a great listen.

If you have anything you’d like to add to our conversation, we’d love to hear your feedback, comments, and questions. You can find us at @PLATOTesting on LinkedIn, Facebook, and on our website. You can find links to all of our social media and websites in the episode description. We’ll also throw in a link to the book. If anyone out there wants to join in on one of our podcast chats or has a topic they’d like us to cover, please reach out through those channels.

If you are enjoying our conversations about everything software testing, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you next time.