PLATO Panel Talks is back for 2023, and our host Mike Hrycyk is sitting down with two returning panellists and experts in Test Automation, Jonathan Duncan, Chief Delivery Officer at PLATO Tech, and Sean Wilson, Senior Director of Engineering at Wind River, to ask "is Test Automation a silver bullet?" While the panel may debate whether automation is a silver bullet for your testing needs, they also share how they set automation up for success on a project and the essential role that Test Automation plays in software testing today and in the future.

Episode Transcript:

Mike Hrycyk (00:03):
Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today we’re going to talk about automation and how it is, or isn’t, a silver bullet with our panel of testing experts. I like this topic. It came up in a conversation the other day because we still have plenty of clients that think automation can solve all of their problems. So I thought it’d be good to have a conversation and bring some of that out into the light. I’m pretty excited about it. I don’t think we’ve done a podcast about this before. We’ve maybe blogged about it in the past, but it’s going to be great to have a conversation. So today I’ve invited two brilliant gentlemen to come and talk about this with me. The first is Sean Wilson. Sean, why don’t you tell us about yourself?

Sean Wilson (00:41):
Sure. My name’s Sean Wilson. I’m the Senior Director of Engineering at Wind River. My team is working on a test automation framework component. I have been working in quality assurance since I started in software longer ago than I care to remember, but I like it.

Mike Hrycyk (01:01):
Yeah, I think you even have a year or two on me. Oh, okay. Jonathan, please tell us about yourself.

Jonathan Duncan (01:07):
Yeah, so Jonathan Duncan, with PLATO, as is Mike. I’m the Chief Delivery Officer. I’ve got about 27 years now, I guess, in IT, with varying pieces in development, testing, and management. And it’s really development and testing where I think the crossroads of automation is. But we’re probably going to get into some of that as to why it is and also why it isn’t. Thanks for having me.

Mike Hrycyk (01:31):
Awesome. Thanks both of you for coming. Okay. Well, let’s jump right in. I realize that the silver bullet metaphor may not work for everyone. So what do you think we mean by automation as a silver bullet? And we’ll start with you this time, Jon.

Jonathan Duncan (01:45):
Yeah, so I think it’s – and we’re probably all guilty of it. I actually do believe that technology, whether it was back in my development days or in the testing days, can do almost anything we want it to – developing things and automating things, it can all be done. The question is really about time and money, and everybody always forgets to tell customers, or forgets it themselves, that: “Oh yeah, this is not going to be a one-shot deal.” And I think that’s the piece that tool vendors and service vendors like ourselves need to make sure the customers are aware of. So automation can save you time and money. It’s a matter of being selective and making sure that we’re not over-promising. It’s unlikely that automating everything is ever the right answer. It always requires humans somewhere to test things and make sure that you’re really covering what you need to cover.

Mike Hrycyk (02:39):
Okay. Sean, anything to add to that?

Sean Wilson (02:41):
Yeah, I guess the thing that I’d add to that is I was working recently in the gaming world over at Ubisoft, and one of the things we talked about a lot in terms of automation is I can use automated tests to tell me if something works, but I can’t use automated tests to tell me if it sucks. And where you’re working on software that has a human component – where humans are going to use it and interact with it – you can’t just use an automated test to answer the question of, will my customers like this? And I think it’s really important to consider that that will always be a part of testing – making sure that this is the thing that will make our customers happy, not just that it works.

Mike Hrycyk (03:21):

Yeah. I mean, from my perspective – not that it matters as much as my esteemed guests’ – the silver bullet thing is that we have clients who come to us with this belief that automation can free up all of their QA jobs, or can make everything perfect every time, or that you’ll implement it and it will just work forever, right? And that’s the idea of the silver bullet: you’ve got one shot and that will do it all. And that’s what’s scary. So that sort of leads me into my next question. In order to solve this, we maybe need to understand how we get into this position. So how do you think – and I understand this is really just drawing on your experience; I probably don’t have studies on it – how do you think an organization gets to this idea that a silver bullet should exist? Let’s start with you this time, Sean.

Sean Wilson (04:08):

Sure. Well, I think the core of this problem when we talk about automation is that the word automation isn’t understood. When you talk to different people, it means very different things. So automation doesn’t just mean automated testing. And often when we think about it, that’s what we’re thinking. You know, like, automated tests are gonna solve all my time and quality problems. Hmm, not necessarily. I think that automation means, you know, taking any sort of manual process that I’m doing towards a quality initiative and saying, can I automate that? Can I make it easier so that I’m taking humans out of the loop for things that humans don’t need to be in the loop for? I think that is part of the way that we get into it, because we conflate too many things when we say automation will solve problems. And I think the other part of it – and this is something that Jon alluded to earlier – is that it’s very easy for people to oversell the idea that when automation is in, it will just work. It won’t have maintenance, it won’t have updates, it won’t have, you know, changes or things that go on with it later. So they’ll do it, they’ll write it once, it’ll be done. Everything is wonderful. And when people sell automation collectively – meaning a large thing with that sort of add-on to it, oh, it just works and you’ll never have to change it – we get into this thought where business leaders will be saying: “Oh, I can just invest some money here and this all becomes perfect.”

Mike Hrycyk (05:28):

So you mean buyer beware works with software vendors too?

Sean Wilson (05:32):

<Laugh> It probably should. Yeah.

Mike Hrycyk (05:35):

Jon, anything to add there?

Jonathan Duncan (05:38):

No, I think that sums it up well. And it’s not all – what people are selling about automation isn’t a lie, right? So automation will free up humans from doing checks. ’Cause that’s really what, the way we phrase it, test automation or automated tests are at this point – automated checks. I think in the future, maybe we’ll get to the point where humans will be confident that a machine can actually go and investigate something similar to them and come up with a suite of tests. I just don’t think we’re there yet. But I was reminded by a colleague that 20 years ago, I wouldn’t have ever believed that my car could tell me where to drive, let alone could drive me on its own. So who knows what the future will be. But right now, I think it’s about freeing up humans to do what humans do well, which is think and challenge what’s out there, and allow the computers to go and do what they do well, which is checking for ones and zeros – doing those checks.

Sean Wilson (06:39):

If I could add to that just a half second: I built a thing when we were at Ubisoft specifically around using computers to help determine what to test and how to test it. In the industry, we’re getting a lot closer to being able to use computers to help us understand that better. It’s a completely separate topic, though, that we can talk about later, but it’s super interesting. There’s a lot of really cool things happening right now.

Mike Hrycyk (07:03):

Cool. We’ll have to come back to that someday when we also maybe layer AI into that. That would be great. So my next question sort of comes out of what you were talking about there, Jon, which is around the idea of coverage and the mix. Just a real quick definition of coverage, ’cause most of you out there probably know it, but in case you don’t: test coverage is the idea that, of all of the different requirements, features, functions, etc. that you have to look at, you have a test that covers each of them. In automation, it’s the same thing – it’s really hard to track automatically, but we’re not here to talk about that today. The question I have is: if you state that you have 80% automation coverage, does that mean that you don’t have to manually test 80% of your application? Can some executive read that and go: “Oh, I need 80% fewer testers?” Sean, we’ll start with you.

Sean Wilson (07:52):

So, no. The idea sounds great – the idea that, you know, if I’m taking pure code coverage, 80% of the code has been tested by an automated system, I now need only two testers instead of 10 – but that doesn’t work. One, because we never complete a hundred percent of the testing on time. There are always things where more testing could be done to provide more value, certainly moving earlier into the cycle. But often, just because you’ve covered a path – a code path – with an automated script doesn’t mean that you’ve covered the user interface of that as well. And humans that are interacting with the system are providing very different qualitative information about the system under test than an automated script is. Again, automated scripts are looking at it and saying yes or no; they’re not giving you that sort of quality evaluation as well. So no, I don’t think that you can apply that and hope for good results.

Mike Hrycyk (08:47):

Well, and I think part of it is also predicated on the concept of 80%. Usually when you’re talking to people, 80% is based on the testing pyramid, which means a big chunk of it is down in the unit testing. And I used to teach people that unit tests are great – if there’s a failure, you know that something might be wrong – but a pass doesn’t tell you that everything will work, because a single unit test will generally not tell you whether something will work at the user interface level.

Sean Wilson (09:15):

Yeah. I think for the most part, quality assurance folks have to be happy that engineers are creating unit tests, but they can’t actually rely on that to tell them that the application will work.

Mike Hrycyk (09:26):

No. Alright, Jon, your two cents?

Jonathan Duncan (09:29):

Yeah. So there’s a scenario where I’m going to say yes, it could reduce the total amount of human effort – if you were actually testing all of it prior to that. And I think that’s really what happens: over the life cycle of something that’s been built, I needed 10 people to build it and test it end-to-end. Over time, as we maintained it, the developers continued to add features in, but nobody continued to add testers in, right? So that’s the piece that automation is picking up. It’s picking up those checks so that I can actually still feel comfortable that the 10 testers who are still testing it can get through the rest of it. It’s really picking up almost technical-debt-type stuff, in my view. It could reduce the amount of human effort required if you were actually spending the amount of human effort that you should have been prior to implementing it.

Mike Hrycyk (10:25):

Well, I think that something you just said there makes me happy and unhappy. So, when you first build your application, you do some really good planning and you say, I’m going to do automation around all of this stuff, and you get to your 80% coverage, and that’s all great. And then, even as you build extra features and new things, you do manual testing to make sure they work. The challenge for me is that you come out of that first build of automation with this level of confidence, this level of happiness, that when it runs, it’s going to prove everything works. So let’s say you’re five iterations down the road; you’ve tested each feature as you went and everything is happy. If you’re still relying on the confidence level that you had from the original automation build, and you haven’t actually maintained it and integrated your changes, your confidence level should no longer be as high as it was. ’Cause you’re not proving the same thing anymore. You’re proving something for version one, not version five. Does that make sense?

Sean Wilson (11:25):

Yeah. You no longer have 80% either, because you’ve added more features in those additional iterations. And if you haven’t kept up your automation to include 80% of the new features as well, you’re definitely far behind.
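
To put illustrative numbers on that point: if the original build had 100 features and automation covered 80 of them, and five iterations later 25 new features have shipped with no new automation, coverage has quietly dropped to 80 out of 125 – about 64% – even though nothing about the original suite changed.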

Mike Hrycyk (11:39):

Yeah. Do these answers that we’ve just given change depending on who does your automation? You touched on this briefly, Sean – i.e., if all your automation is being done by devs versus all your automation being done by test engineers. And maybe the side topic that we’ll get to someday relates really well back to that. But do you have an opinion there, Sean?

Sean Wilson (11:59):

Yeah, absolutely. It shouldn’t, but ultimately it will. We have requirements, for example, on my engineering teams, which have to have 80% unit test coverage across the board for what they’re building. So that’s the minimum – you can’t check it in unless you’ve got it; it won’t get into the build. But that doesn’t do anything for what we’re talking about with testing. When we’re talking about automated test coverage, we ignore unit tests entirely. We’re only looking at test coverage of the application at the compiled level – the post-compilation, post-build stage. Ultimately, the numbers are completely independent of each other. So it depends a little bit in that case. If I’ve got a developer writing end-user test automation, which we do, then they’re both talking about the same number. But if we’re talking about devs who are building the code and doing the primary thing of building up their unit tests and some underlying function tests, no, those numbers don’t match, and they will be completely different.
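
To make the kind of check-in gate Sean describes concrete, here is a minimal sketch of a pre-merge coverage check, assuming a Python project with pytest and the pytest-cov plugin; the package name, test directory, and 80% threshold are illustrative, not any particular team’s actual setup.

```python
# Hypothetical pre-merge gate: reject the change if unit tests fail or
# unit-test coverage drops below 80%. "myapp" and "tests/unit" are placeholders.
import subprocess
import sys

result = subprocess.run(
    ["pytest", "--cov=myapp", "--cov-fail-under=80", "tests/unit"],
    check=False,
)

if result.returncode != 0:
    # Either a test failed or coverage fell below the threshold; block the check-in.
    print("Unit-test gate failed: tests red or coverage below 80%.")
    sys.exit(1)

print("Unit-test gate passed.")
```

Note that, as Sean says, a gate like this measures unit-test coverage only; it says nothing about coverage of the compiled, deployed application.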

Mike Hrycyk (13:03):

Awesome. Jonathan, do our answers to that question change depending on whether devs or professional test engineers are doing your automation?

Jonathan Duncan (13:14):

So yeah, I think it shouldn’t change the answer, but those two groups of individuals look at problems in a different fashion, right? And this isn’t anything against either one of those two groups – so I hope we don’t get blasted with too much negative traffic – but one is looking for the path to make something work, and the other’s looking for the path to make it fail, right? That’s their role. They want to validate that it works, but they ultimately want to find some problems and find out where it could fail. I think you need both. And I think in that pyramid that you talked about earlier, that’s where it varies as to who should be the core of building out that set of tests. It doesn’t mean that the other group shouldn’t be involved in each one of the layers, but I think there’s a responsibility and accountability on each group at the different levels.

Mike Hrycyk (14:09):

Yeah, I think that’s fair. Personally, when we tie it back to our overarching topic of is it a silver bullet, I think that devs get you faster to on-paper completion, but test engineers get you closer to something that resembles a silver bullet – something that actually gives you the confidence that you should have. That’s probably a debate for a different day. Let’s shift a little and talk about DevOps. Automation and DevOps go hand in hand. They’re married together; you can’t have one without the other. Or you could probably have automation without DevOps, but why would you? DevOps works best the more you can integrate things into it, just in general. We sort of know all that, so – and I’m probably talking about software automation here, Sean, but you can talk about other things if you like – how does manual testing fit into a DevOps-forward universe? Start with you this time, Jon.

Jonathan Duncan (15:14):

Yeah, there’s a whole bunch of stuff in that DevOps space around automation, right? If I think of that group of people and look at them in the post-production world, automation’s critical to everything there. Whether it’s the automation of the infrastructure on its own, or – I remember the first time I was on 7/24 support, or 24/7 support, whichever way you want to call it – things would come in at three o’clock in the morning, and I’d have to make an update. Well, oh my gosh, I could have gone back to sleep a whole lot better if I at least had an automated set of smoke tests, so that I knew that, yeah, it was all gonna at least hold together until I could get the entire team on it first thing in the morning. So that would’ve helped me. But then also knowing what the manual pieces are that I’ve gotta go and test, right? ’Cause I’m going to make a change, or somebody on my team has made a change – what do I have to test before I put this back out? It’s a scary spot that things are getting released so quickly. And in order to release things quickly, we need those automated tests – automated checks – alongside what we can do manually, to increase that confidence level.
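
For readers who want to picture the middle-of-the-night safety net Jonathan is describing, here is a minimal post-deploy smoke check sketch, assuming a web service with HTTP endpoints; the base URL, paths, and expected status codes are all hypothetical.

```python
# A handful of fast, automated post-deploy checks: enough signal to know the
# system still holds together after a 3 a.m. hotfix.
import sys
import requests

BASE_URL = "https://staging.example.com"  # hypothetical environment

SMOKE_CHECKS = [
    ("/health", 200),              # service is up
    ("/api/orders?limit=1", 200),  # a core read path still returns data
    ("/login", 200),               # the login page still renders
]

failures = []
for path, expected in SMOKE_CHECKS:
    try:
        response = requests.get(BASE_URL + path, timeout=5)
        if response.status_code != expected:
            failures.append(f"{path}: expected {expected}, got {response.status_code}")
    except requests.RequestException as exc:
        failures.append(f"{path}: request failed ({exc})")

if failures:
    print("Smoke check failures:\n" + "\n".join(failures))
    sys.exit(1)

print("All smoke checks passed.")
```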

Mike Hrycyk (16:23):

Okay. Cool. Sean?

Sean Wilson (16:25):

Yeah, I can build on that. One of the things that we’re building right now is specifically to fit this, and it’s just what Jon was alluding to: you need to know what to run. So if a change has happened, it’s still a manual tester who is coming in and saying, before you get this build out of your CI pipeline or your CD pipeline, before you give it to me to test manually, please run these tests. It’s having those manual people who understand quality, who understand change and what has to be tested based on change, who are creating those plans for the automated tests. And we’re working on a system right now to explicitly build this functionality out, so that test leads can come in and say, these are the tests and the test plans that I want you to run at this stage of the development pipeline. So when you’re in a continuous integration system and you’re doing a build and you’re deploying and you’re running some tests, run at least these tests. Run at least these when you get into the proper continuous test build pipeline, right? There are varying ways that we need manual testers who have that knowledge and experience to tell us what to test, so that when we’re using automation in the DevOps pipeline, we’re using the right automation. There’s nothing worse than going through a multi-hour build process for a large application and coming out with something that a human tester can identify as garbage in five minutes. But it happens all the time, because we’re not running the right automation as part of our pipelines and as part of our deployments. So the manual side fits in with knowledge and experience.
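
As a sketch of the stage-by-stage test planning Sean describes, the snippet below maps pipeline stages to pytest marker expressions so a test lead can declare which suites run where; the stage names, markers, and test layout are assumptions for illustration, not the system his team is building.

```python
# Sketch: a test lead declares which suites run at which pipeline stage, so the
# cheap, high-signal tests run first and the long tail runs later.
import subprocess
import sys

STAGE_PLANS = {
    "post-build":  "smoke",                 # minutes: catch obvious breakage
    "post-deploy": "smoke or integration",  # longer: exercise deployed services
    "nightly":     "regression",            # hours: the full sweep
}

def run_stage(stage: str) -> int:
    # pytest -m selects only the tests tagged with the given markers.
    marker_expr = STAGE_PLANS[stage]
    return subprocess.run(["pytest", "-m", marker_expr, "tests"], check=False).returncode

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "post-build"
    sys.exit(run_stage(stage))
```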

Mike Hrycyk (17:55):

And yeah, you’ve just given me a thought to expand on that. We’ve all thought about this, and there are lots of manual testers out there who ask, what’s my role going to be in the future? Do I have to learn to code? Do I have to learn to script? Part of the short answer is, yeah, you probably should. But there still is room for people who are purely manual testers. Here’s my minimum of what you need to do as a good manual tester in a modern environment. You have to provide value to the automators so they know what to automate and when to automate it. But even more importantly, I think, you have to understand, after the automation is built, what value the automation brings. Because I’ve had manual testers in the past who said: “great, we’ve got automation, I can do anything now,” who then go back and still manually test everything – which negates the entire value of having automated testing. What they need to do is understand the value that they’re going to get from the automation, so that it can direct and sculpt the tests that they’re going to run, and you do get the additional value of the time that’s been freed up, right? So you can demonstrate that value. And I think that’s an important point that doesn’t get said in that way often enough. That’s a starting point. If you’re a manual tester, your first starting point is understanding what the automation can do for you and what it is doing for you. Any thoughts on that before we move on?

Jonathan Duncan (19:15):

Yeah, and maybe this is directed directly towards Sean, or maybe it’s just open-ended. But testing’s always talked about requirements traceability, right? If I take that one step further – we’ve got code repositories and the linkages in there, right? There is a path where we could start to build out a “here’s the suite of tests that I need to run, because I know that these are the five pieces of code that changed.” Let me go get just those really quickly, because I know that that’s where I most likely was impacted, as well as kicking over reports from manual testers. Do you have any thoughts on that, Sean? Or maybe it’s something you guys are already doing too.

Sean Wilson (19:52):

Yeah, so that is actually – the human algorithm in determining what to test post-build is the first place to start. We are working on some AI solutions as well, but that first step is having a human come in and say, if these are the things that changed, then these are the tests that I want you to run. And it comes from – I mean, for most of my career when I was testing, I would take a look at what change was coming and which developers were writing it, and based on that information, I would figure out what tests to run first to maximize my chance of finding out that there was garbage before I went and did the big, long test passes that were boring, right? It comes from that sort of gut-feel knowledge that you pick up along the way. And you can create a system that allows people to take that knowledge and put it into place so that we can build automation around it – it is the human algorithm; you just have to make it possible to put that into the software. So if there’s been a change in feature A, B, C, and that’s part of my build, then I want you to run these five tests first. If those tests pass, fantastic, then run the whole bigger, longer thing after them, then give it to me for manual testing. There are little things that you can do to run the right tests first and earliest to make that happen. The other thing that you mentioned, requirements traceability, is another thing that happens quite a lot. You see this more in the certified world, like safety-certified software, where there has to be requirements traceability. That’s a great place to be for a test team, because you have a direct link between the code that went into the requirement and the tests that were created as part of matching that requirement – you’ve got that traceability – but it’s only in that safety-certified world. And we don’t bring enough of that professionalism, I think, to all of the rest of the software that we build for the world.
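
The “human algorithm” Sean and Jonathan describe can be encoded as data once a tester writes it down. Here is a minimal sketch, assuming a git repository and pytest; the path-to-suite mapping, branch name, and directory layout are illustrative, not anything from either team’s actual tooling.

```python
# Toy change-based test selection: a tester-maintained map from changed areas of
# the code to the tests most likely to catch a problem there, run as a first pass.
import subprocess

CHANGE_TO_TESTS = {
    "src/payments/": ["tests/payments/", "tests/smoke/test_checkout.py"],
    "src/auth/":     ["tests/auth/", "tests/smoke/test_login.py"],
    "src/ui/":       ["tests/ui_smoke/"],
}

def changed_files() -> list[str]:
    # Diff the current branch against main; assumes a git checkout with a "main" branch.
    out = subprocess.run(
        ["git", "diff", "--name-only", "main...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def select_tests(files: list[str]) -> list[str]:
    selected: list[str] = []
    for prefix, tests in CHANGE_TO_TESTS.items():
        if any(f.startswith(prefix) for f in files):
            selected.extend(t for t in tests if t not in selected)
    return selected or ["tests/smoke/"]  # nothing matched: fall back to the smoke suite

if __name__ == "__main__":
    subprocess.run(["pytest", *select_tests(changed_files())], check=False)
```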

Jonathan Duncan (21:33):

Maybe that’s where the actual silver bullet is, though – back in some of these pieces that we left behind, that only exist in certain areas. Maybe the real silver bullet for improving the likelihood of quality is actually taking a bit of a step back and saying, okay, let’s put some of these mechanisms in place at the beginning, rather than trying to figure it out and hoping at the end that it’s all going to be okay.

Mike Hrycyk (21:58):

Maybe an argument for a future date, and this might get us hate mail, but one of the things that – agile brings us so many things, but one of the things that agile doesn’t focus on or sort of steps back from is the idea of traceability around requirements. It’s just a step of rigor that’s just not core. I don’t like releasing software based on hope. <Laugh>, I mean, I’ve done a lot of it in my career. I don’t like it.

Sean Wilson (22:26):

I mean, we’re all laughing at the same thing because, yeah, you’re right. And the number of times we’ve all had to do it is far too many. But there is a difference, I think, in some software, right? If I’m working with a client and we’re working on automotive software for the brakes in cars, hey, you know what, maybe there’s more rigor going into that than the software that’s going into, you know, the entertainment pack – the thing that’s loading your MP3s – and that’s okay, right? I think there’s a threshold for quality and a threshold for safety where maybe we have a slightly different layer. But Jon, you’re right, if we want to have greater predictability in quality, we have to bring some of that back into play.

Mike Hrycyk (23:06):

I just had a thought about what you just said. You know what, I’m gonna disagree. So yes, brakes, hyper important, but when your MP3 connection doesn’t work very well and you’re driving, people spend far too much time troubleshooting it and not watching the road <laugh>.

Sean Wilson (23:21):

That is very true. Very true. Yes.

Mike Hrycyk (23:24):

Awesome. Okay, that was a good conversation, and there was some stuff in there that I’m definitely going to dig into and think some more about. So thanks for that. This one’s going to step back a little bit and talk about the enterprise automation solutions out there. Everyone knows QTP or UFT – the systems that promise to do everything. To automate all the things: they can do the API, they can do the GUI, they can check the database, and so on. They sell themselves as a package that can do all the things and meet all the needs, which sounds very silver bullet-y to me. So that’s why I’m bringing it up here. In today’s day and age, with what we’re looking at and what we’re doing, where do they fall short? Is it all just marketing speak? I’m going to start with the person I know has solid and loud opinions about this. Go ahead, Jonathan.

Jonathan Duncan (24:11):

Ha-ha. So I’ve probably waffled over this over the years, but I definitely think they do a great job at marketing. As I look at it, open source stuff is cheaper – it’s free – and those other solutions are not. But if, as a software builder, I say I’m committed to test automation, it doesn’t take very much of a tool set reducing the effort to build a test before I can equate the value of that tool back to, wow, I just saved a whole lot of money. My investment there shouldn’t be the cost of the tool; my investment is still in the knowledge of the people that are building things within the tool. So I think you need to look at that before you say, oh, that tool doesn’t do everything that they say it will. Nothing on the internet does everything that it says it will. I think the tool companies do well at getting my attention, but I can attest to many of them being able to save an immense amount of time – that me, who hasn’t written code in a long time, can just jump into one and get valid tests built that are repeatable, that aren’t just record and playback. Because you should run when you hear that; those are likely going to cost you more in the long run. So yeah, I’m not sure if that’s what you thought my opinion was going to be on that today, Mike, but that’s sort of where my head’s at these days.

Mike Hrycyk (25:39):

It’s far more 50/50 than I was expecting. Okay. Sean, how about you?

Sean Wilson (25:44):

So for the most part, I find that to be marketing speak – I think in our email I used a different word than marketing speak. If you’re starting a brand new company, if you don’t have any code written, then I think maybe you could get some sort of real benefit out of a one-ring-to-rule-them-all package, right? Because you’re going to start your code from the ground up using their tool exactly the way they want you to. Where it stops working is where you already have code, you’ve already got tests that you’ve written along the way or manual testing that’s been done, and you haven’t built your tool to work specifically for this product. You’re either going to have to go back and rewrite a whole bunch of things, or you’re just not going to use some parts of it. And if you don’t use all of the parts of it, then you don’t get the value that they claim. Because the only way to get the real value of all of those big things is to do everything the way they want, and nobody does that.

Mike Hrycyk (26:40):

Yeah, and I mean, to be clear, the cost of one of these packages is not just the package, which is usually pretty dear – most of them are not cheap – but it’s also the investment in people to understand how to use it properly, the investment in standards so that everyone’s using it the same way, and the investment of time in building out your solution so it’s doing the things that you want it to do, right? But to be fair, those last three investments are there for any solution you build, I guess. I just think the big ones maybe take a bigger knowledge investment to use all of their things properly.

Sean Wilson (27:15):

And if you have existing software – back in the day, when we were bringing Rational into a company, that meant going to some of the development teams and saying: “Hey, I need you to add this object to every piece of code, because this is how we’re going to get information about what’s happening inside the code so that we can run automation.” I was laughed out of the room. Some teams were very happy to do it; other teams were not. And because we couldn’t get consistency in that, we couldn’t get the full value of the Rational tool product.

Jonathan Duncan (27:42):

Yeah. And I’m definitely not a big fan of “let’s go instrument the code,” ’cause to me that adds one more layer of risk: oh, I’ve modified it again, now I’ve technically gotta go back through and test it all. But I guess – and I think you’re right – for those swapping over that have already got something built, I’d argue, why are you trying this again? Has your commitment level changed, right? Because if you maybe went down a path and you built, I don’t know, let’s say automated tests for 15% of your code with tool A, and now you’re investigating tool B, you should probably just stick with tool A and not make that change in investment, because you’re likely going to set it on the shelf again at some point. So I think those folks need to look at it from a higher level and say, am I really committed to this? The path down automation is not for the faint of heart; you really need to be in it for more than the initial development. That’s probably the biggest lesson for everybody to pay attention to.

Sean Wilson (28:42):

That question of commitment is huge. I think that’s an excellent thing to bring up, Jon. And I think if more of the leaders who think that automation is a silver bullet were asking themselves that question – “am I really committed to doing this?” – there would be less of this problem to discuss <laugh>.

Mike Hrycyk (28:57):

So you say that, Sean, but think about the other side – think about the agile side and the advice that people give on automation. There are so many companies – and maybe this is more of an in-the-past thing, but I think it still happens – so many companies that talk about automation, talk about automation, talk about automation, <laugh>, investigate, investigate, talk about automation, and they do that for years, right? And then you get people who are advocates for agile who say: stop talking about your tool, run a pilot, find what works, and then iterate and iterate and iterate, right? So there are two really different concepts that need to be balanced out of that, which is: you have to do. If you don’t do anything, you’re not getting any value, right? So you need some commitment, but you also shouldn’t have commitment paralysis because you don’t have enough commitment. See what I’m saying?

Sean Wilson (29:50):

I completely agree. You have to commit to wanting to do automation, though. It’s that first choice: I am going to commit the resources to create the automated tests. I’m building a test automation framework. I had to specifically take some of my engineering budget and say, some of this budget is going to go to creating automated tests. That means that I will have fewer features, because the budget I had to build the product is the same budget that I’ve got, and I’ve taken one of my developer slots away and said, this is gonna be somebody who’s going to create automated tests. But that’s a choice. I had to commit to that before we got down this path. And I think that is the point of commitment that we need: somebody committing to investigate, put in time, and put in money. And then, yeah, don’t let perfect be the enemy of good, right? Don’t wait and get into an analysis paralysis situation. Start something and make it better as you go. But you have to be committed to doing it. You have to have the money on the table to do it.

Mike Hrycyk (30:46):

Okay, we’ve only got a couple minutes left. So I have the most important – well, not the most important question of the talk, but the most important wrap-up question – to get your thoughts on. How do you talk a leader who has heard that automation is a silver bullet into a more balanced approach to the solution? I mean, it happens, right? Someone stands up and says, we’ll do automation, it’ll fix everything. How do you talk to them in a way that comes out the other end with a solution that’s going to work?

Jonathan Duncan (31:14):

I generally would go down the path of – alright, so they’ve heard something from someone that says, this is what you have to do. Maybe they heard it from a startup that’s got a whole keen group of folks, and they’re agile, they’re just iterating through and building automation as they go, and they’re really good at it. Maybe it’s somebody else that invested millions. But I suspect that they didn’t hear all of the pieces around it; they only heard the positive. So I think I’d go down the path of just talking with them: okay, well, why is it you want to do this? What is it that you heard? And allowing them to come up with: “Oh, I don’t know if I’m really that committed – that I want to put a team of five people on to get me to where I’m supposed to be, and then have somebody around for the life of it, maintaining it.” Allow them to get there. And it’s not to unsell them on it. It’s so they can understand that you can’t do this and just drop it, that you need to be in it for longer than that initial build-out. But I’d also talk to ’em about what they’re doing with their testing team, right? Because I think there are probably pieces in there where they’re taking risks right now – and the leader probably isn’t even aware of the risk that their team is taking on that leader’s behalf, right? So talking about that: as you continue to build bigger, more complicated software and continue to add onto it, you’ve either gotta increase the size of your test team, or you need to add automation in, or you just need to be willing to take the risk of the stuff that’s not getting tested, because it can’t all get done if you don’t increase your spend somewhere.

Mike Hrycyk (32:51):

Taking risks on your leader’s behalf might be the new phrase that causes nightmares and ulcers.

Jonathan Duncan (32:57):

I bet you we’re getting some phone calls now though.

Mike Hrycyk (33:00):

<Laugh>. Alright, Sean, same question.

Sean Wilson (33:03):

That was a great answer. I don’t know how to add to that. I think it’s funny, because I’m a huge believer in automation. I’m a huge believer in creating automated tests and using them to support my test teams as I build things out. I’m a huge believer in automating the processes and the things that we do. It’s the only way forward; it’s the only agile approach to software that we can take. So you don’t want to have the leader pull back and say: “Oh, I can’t afford that. No, no, no, no, that’s horrible. I don’t wanna do it.” So you have to balance that reality check – that slap to the face of, hey, let’s honestly take a look at what this is really going to mean – with the benefits that can come from actually investing in it too. What you said, Jon, about understanding the risks that are being taken, whether you are aware of them or not, is spot on. Getting that in front of people who can then understand: you’re not doing this now. You are not actually testing a hundred percent. There are thousands of test cases that you’re not covering, because you just don’t have that much test time in your cycle. And this is the benefit that you’re going to get. You’re going to get comfort, you’re going to get a belief that you’re doing the right thing. More importantly, you’re going to get an acceptance that fewer things will come up in the public eye than would’ve before.

Jonathan Duncan (34:10):

Yeah. And I’ve just got one thing to add, ’cause it may have sounded like I was a disbeliever in that, but I don’t think there is another path than automation, unless you’re going to ignore things. We can create tests and have a computer automate those checks for us – we can automate the checks. I can’t create enough humans to pick up the technical debt that we’ve already got; to this point, we’re already behind. So I think automation is the path, and humans are still going to be required. So all of those manual folks out there, there’s no lack of work. We’re just going to try to make your life a little bit easier with test automation.

Sean Wilson (34:44):

That’s well said. At the first QA conference I went to, Cem Kaner was on, and he said that, you know, to be a good tester, you have to be somebody who’s comfortable looking at someone and telling them their baby is ugly. He was talking about how a tester has to go and talk to a developer about the code they’ve written. And I think as we bump up our levels and get into positions where we have to go talk to leaders and executives about automation, we have to be willing to say the same thing. We have to be willing to tell them that their baby is ugly. The thing that they’re delivering maybe has a little bit more risk in it than they think it does right now. And you have to be comfortable with that discomforting conversation.

Mike Hrycyk (35:21):

You know, that puts into perspective those junior and intermediate years where I took great glee in showing up at a developer’s desk and telling them there were problems for the third or fourth time – and that it was kind of why I loved what I did <laugh>. So apparently I was stomping all over their dreams and their baby.

Sean Wilson (35:39):

I am very careful when I’m having this conversation with teams to say, you know, test is finding the bugs that development put in the code. Because saying it that way forces people to think about where the bugs come from. It’s not the test teams that create the bugs. They’re not doing it.

Mike Hrycyk (35:56):

No, that’s easy to be vilified. Okay. But we’re over time now, so I’m going to have to wrap it up. So thanks, guys. This has been a great conversation. I think there are five to seven other podcasts that can be dug out of the things that we’ve said that would be interesting, so I’m sure those of you out there in listener land will hear more of this. And if there’s one in particular that does intrigue you, you should reach out to us on our social or through the platform on which you grab the podcast and tell us that. You can also add to the conversation through those channels, and we’ll be there to converse with you if you like. As you are all aware, we’ve changed the name to PLATO Panel Talks, and you can now go and visit us at www.platotech.com. We have changed our branding to be all PLATO because we think that our social mission is important, and we like to have that as the focus. We’re also starting to branch out a little bit beyond testing into other IT areas, and I think in the next year we’re going to start seeing some content that will broaden your sphere of listening. So we’ll do that, and then take your feedback and see how that should grow. I would like to thank you for listening, and we will see you with brand new topics and brand new conversations.

Jonathan Duncan is the Chief Delivery Officer at PLATO, based in Fredericton, NB. Jonathan has over two decades of wide-ranging experience across all facets of the software development cycle. He has experience in a variety of industries that stretch from the public sector to start-ups in satellite communications and everything in between. Having worked in organizations from both the development and testing standpoints gives Jonathan the ability to see problems from all angles, allowing complete solutions to be delivered.

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 21 years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the VP of Service Delivery, West for PLATO Testing, but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks and at a railroad. Intermittently, he blogs about quality at http://www.qaisdoes.com.

Twitter: @qaisdoes
LinkedIn: https://www.linkedin.com/in/mikehrycyk/

Sean Wilson started as a manual tester on a financial treasury application sometime in the last millennium. His career took him on a winding journey through automated testing, development, project management, quality team leadership, and agile evangelism before he abandoned mainstream software and went where he could play games and get paid for it, serving as the Worldwide QA/QC Development Director for Ubisoft. Now, as Senior Director of Engineering at Wind River, Sean is focused on evolving the approach to Quality Assurance through a better application of technology. He still justifies playing Assassin’s Creed late into the night under the pretense of “looking for excellent automation opportunities.”

LinkedIn: https://www.linkedin.com/in/jseanwilson/