In today’s fast-paced software development world, writing test cases is more important than ever. But what does writing test cases look like for the modern tester? How will AI and automation impact the way we write and execute test cases? Mike Hrycyk is joined by Kevin Morgan (Senior Manager, PLATO) and Ryan Hobbs (QA & Test Manager, BC Ferries), who bring their extensive experience to these critical questions for software testers – and to how the answers are shaping who writes test cases and how. Don’t miss this insightful discussion on the future of test case writing.

Episode Transcript:

Mike Hrycyk (00:04):

Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today we’re here to talk about test cases and, specifically, what test cases look like in the new world of Agile and AI and all of those fancy things. Of course, test cases are the heart and soul of many, many of the things that PLATO does, and all of our testers are dealing with them on some level every single day. So we feel like it’s an interesting topic. Now I’m going to turn it over to our panel of experts and let them introduce themselves. So we’ll start with you, Kevin.

Kevin Morgan (00:37):

My name’s Kevin Morgan. I’ve been with PLATO for approximately six years. Prior to that, I was an independent consultant for 25 years, working in testing and test management in various industry verticals, including insurance, banking, and retail.

Mike Hrycyk (00:54):

Great. Welcome Kevin. Thanks for coming today. Ryan, over to you.

Ryan Hobbs (00:58):

Hi, my name is Ryan Hobbs. I am currently a Quality Assurance and Test Manager at BC Ferries. I’ve been in the test business for somewhere in the 20-25 year range. My experience spans power monitoring and control, hardware and software, lab information science software – so cancer discovery lab machine sample tracking – and now marine passenger ferries.

Mike Hrycyk (01:26):

Thanks guys. That’s great. Okay, let’s get into the conversation at hand. So, as with a lot of these things I’d like to level set a little bit. So, guys, tell me, what do you think the real purpose of a test case is? And let’s start with you, Ryan.

Ryan Hobbs (01:42):

For me, the purpose of a test case is to provide consistent guidance to those people who are both testing and producing the software or the work. It lays the groundwork for complete knowledge across the development team as to what we’re going to be looking at.

Mike Hrycyk (02:01):

Kevin?

Kevin Morgan (02:02):

Well, I think it’s morphed over the years to a certain extent. Originally, the purpose of a test case was to validate the business requirement – what the business expressed that it needs in order to do its business. Nowadays, I think the test case has become more about business value in the Agile approach, where we talk about what the value in the user story is and then how we write a test case to validate that business value. So, I think there’s been a fundamental change in what we talk about when we talk about test cases.

Mike Hrycyk (02:33):

And I think there has been a change. You both sort of avoided – or missed – something that I think is important in test cases. So, one of the things that I always look at when doing testing is a risk-based approach. You don’t have time to test everything, and so you’re focusing. What test cases allow you to do is build your testing around what needs to be tested and do the thought and investment in making those decisions once. So, every time you’re doing the testing, you’re not thinking, okay, what do I need to test? Where do I need to focus? What are the things that need to be tested when you start looking at boundary conditions and negative test cases? And that’s kind of a drift from what you were talking about, Kevin, which is sort of the flip side, right? It’s not specifically the requirement as written but the other things that might be impacted. So, it’s about investing the time to understand that testing once and then leveraging that investment throughout the rest of your testing cycles. Any comment there, Ryan?

Ryan Hobbs (03:34):

For me, a lot of that risk-based planning happens at the test plan level. So, when I work with project managers, development teams, and Agile teams, a lot of the time I get them into the mindset of writing and generating a test plan, which helps guide their overall risk assessment of the application and has them walk down various paths that could turn out to bite them in the end. Once that process is done, it then morphs into a series of test cases supporting that thinking, that pre-work, towards the test case. But absolutely, test cases, as much as anything, are there – for me – to jog people’s memory, because the thinking that happens during the creation of the test case is very important. Then the execution that follows is just carrying out that planning.

Mike Hrycyk (04:23):

I think we were agreeing there. You invest upfront –

Ryan Hobbs (04:26):

–Yes.

Mike Hrycyk (04:27):

– so that you have the test cases so you don’t have to redo the work. Kevin, was there any follow-up on that for you?

Kevin Morgan (04:32):

Just from the risk-based side, I think one of the things that brings value to a test case is understanding the impact. And so, from a risk-based perspective, I’m going through that right now with the client that I’m working with, where we’re writing user stories and test cases for their project, and I’m trying to instill in them thinking about the impact: what happens if this test case goes wrong? Is it a major impact? Is it a minor impact? Because now we’re, of course, in that place where we’re trying to determine, from a risk basis, if we can’t do all the testing, how do we prioritize? And so, I think, to a certain extent, the test case does have to be risk-weighted, but you also have to think about what the impact of that particular test case is if it fails. Getting clients to actually think in that mindset is one of the bigger challenges because a lot of the time, the clients are thinking, well, we have to test everything, we have to boil the ocean – and you can’t boil the ocean. We don’t have time. In the last 20 years, I haven’t had enough time to do all the testing that I would’ve liked to do on any one project. So I think it’s critically important that we understand the impact and also the risk.

Mike Hrycyk (05:38):

So, we sort of got Kevin’s perspective on how a test case has evolved in the last 15 years. But Ryan, do you have a sense of 15 years ago were test cases the same as they are today?

Ryan Hobbs (05:50):

For me, the largest change in the last 15 years has been my shift in industry. 15 years ago, I was dealing with restrictive test cases – very proscriptive, if you will – test cases that were highly specific to an industry and to the outcomes of a scientific response. So they were very, very detailed. Over the years, we’ve managed to migrate to a slightly higher level in the areas or the software industry that I work in, and you’re able to guide the process a little bit more from an Agile, user-story-type perspective in how people build and run those test cases. So, they’re slightly higher level, slightly more persona-based, slightly easier for people with experience in the industry or the area to understand and follow. Less restrictive. So, for me, it’s been that transition from very called-out steps with high levels of detail – long test cases – through to a persona-based test case approach.

Mike Hrycyk (06:54):

Excellent. Now you’ve segued nicely into my next one. So, a point that Ryan had made earlier when we were discussing the questions: prescriptive is a word that is thrown around a lot with respect to test cases, and Ryan questioned whether “a lot” was an accurate term for the use of prescriptive – and now that you’ve used it in your response to the first question, I’m going to say yes. But anyway, prescriptive is a word that is used to describe test cases from time to time. So let’s just talk about what a prescriptive test case is, and then let’s move into how valid prescriptive test cases are today. And I think we’ve already started down that path with you, Ryan, so I’m going to go to Kevin first and then we’ll come back to you.

Kevin Morgan (07:32):

Sure. So, to me, a prescriptive test case is something that spells out not only what the test case must achieve but also, in great detail, how: what are the navigation steps? What is each of the expected results? For instance, even in a navigation step you may say, log onto the system – well, do I really need an expected result? In a prescriptive test case, I would expect that it would say: log into the system; your expected result is that you’re taken to your homepage and that you have a certain number of tabs available to you to do your job. I still think that prescriptive test cases are useful.

The clients that I have are generally clients that haven’t done a lot of testing in the past, and so, one of the things that I tell them – and you get clients that are keen on testing and you get clients that are less keen on testing, let’s put it that way – I tell the ones that are less keen that if you want to get out of doing testing on a regular basis, write very prescriptive test cases. And the reason for that is if you write a prescriptive test case, somebody else can execute it. If you don’t write a prescriptive test case, then you need someone with subject matter expertise to execute that test case. I think the other advantage to prescriptive test cases for a lot of clients is that they can then bring in a company like PLATO, where we can do execution for them using prescriptive test cases. If they don’t have prescriptive test cases, again, it makes it difficult to bring in external resources and do mass execution if you have something like an upgrade or a business transformation.

Mike Hrycyk (09:10):

So that’s the plus in a prescriptive test case, and I do see value in that. But going back to one of the things Ryan said – he’s seeing a move towards persona-based testing – one of the drawbacks I see in what you’ve just described, Kevin, is maintenance. People don’t talk a lot about maintenance of test cases, but if you have 30 steps in a test case, really the goal is to prove at the end that the endpoint is right, and the navigation path to get there changes over time. Without maintenance on those test cases, which has a cost, that test case becomes less and less useful. Whereas if it’s persona-based, I think the lifespan without maintenance becomes more reasonable. Does that just come down to balance and knowing what your situation is?

Kevin Morgan (09:56):

I think it does come down to balance to a certain extent, but I also think that there have been changes in the testing tools over the years where you can actually use shared steps. So instead of writing prescriptive test cases with all of those 30 steps, you can write 26 shared steps and insert them into a test case and say, use these shared steps, so that you have a centralized place where you can make the changes to the shared steps that will impact the other test cases. So I think you can still do it in a prescriptive manner; you just have to make sure that there’s planning in order to do that.
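To make the shared-steps idea concrete, here is a minimal sketch (in Python, purely for illustration) of navigation steps authored once and referenced from many test cases; the step text, IDs, and data structures are hypothetical, not any specific test management tool’s format. Tools such as Azure DevOps offer the same concept natively as shared steps.

```python
# A minimal sketch of the "shared steps" idea: navigation steps are authored once
# and referenced by ID from many test cases, so a UI change is fixed in one place.
# The step text and IDs here are hypothetical examples, not from any real tool.

SHARED_STEPS = {
    "login": [
        ("Log in to the system", "You are taken to your homepage with the expected tabs"),
        ("Open the main menu", "Menu lists the modules your role can access"),
    ],
}

TEST_CASES = {
    "TC-101 Submit a new claim": [
        {"shared": "login"},  # reference to the shared steps, not a copy
        ("Open the Claims module", "Claims list is displayed"),
        ("Create a claim for $125.00", "Claim is saved and a confirmation banner appears"),
    ],
}

def expand(test_case):
    """Resolve shared-step references into a flat (action, expected result) list."""
    steps = []
    for step in test_case:
        if isinstance(step, dict) and "shared" in step:
            steps.extend(SHARED_STEPS[step["shared"]])
        else:
            steps.append(step)
    return steps

for name, case in TEST_CASES.items():
    print(name)
    for i, (action, expected) in enumerate(expand(case), start=1):
        print(f"  {i}. {action} -> expect: {expected}")
```

A change to the login flow is then made once in the shared block and every referencing test case picks it up, which is exactly the maintenance saving Kevin describes.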

Mike Hrycyk (10:30):

Interesting. Ryan, any input into this?

Ryan Hobbs (10:33):

Probably the place I would like to start is just making sure that we’re specific on terms. So, there is a difference between prescriptive test cases and proscriptive test cases. Prescriptive test cases are generally what a lot of people in the test industry have been writing for years, where you call out a step – then you say log in – and you’re fairly stepwise in the approach. Whereas proscriptive testing, which I’ve had experience in, like I said, in the life sciences field or even security-based work, is very strict and focuses on compliance and rules rather than trying to help people get through a workflow, for example. This is used, like I said, often in security testing and compliance testing, but also when you have the opportunity to do more extensive white box testing, where you have internal knowledge of the system and you want to be incredibly specific in a test case so that you exercise specific branches of the code or parts of a conditional that you might not hit if you’re more generally guiding someone through a workflow or a series of screens in an application, for example. So prescriptive, absolutely more common for me. Proscriptive, less common but very important when necessary. Hopefully, that makes sense.
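As a small illustration of the white box case Ryan describes, here is a sketch of tests written with knowledge of the code so that each branch of a conditional is exercised; the shipping rules are invented purely for the example.

```python
# A minimal white-box sketch: the tests are written with knowledge of the code's
# branches so every path of the conditional is exercised, including the boundary.
# The shipping rules below are invented for illustration.

def shipping_fee(order_total, is_member):
    if order_total >= 100:
        return 0.0            # branch 1: free shipping at or above $100
    if is_member:
        return 4.99           # branch 2: flat member rate
    return 9.99               # branch 3: standard rate

def test_free_shipping_branch():
    assert shipping_fee(150, is_member=False) == 0.0

def test_member_rate_branch():
    assert shipping_fee(50, is_member=True) == 4.99

def test_standard_rate_branch():
    assert shipping_fee(50, is_member=False) == 9.99

def test_boundary_exactly_100():
    # Boundary value on the first conditional, easy to miss from a workflow-level test.
    assert shipping_fee(100, is_member=False) == 0.0
```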

Mike Hrycyk (12:00):

Alright, Agile’s not new anymore. Agile’s been around for a while. We might even call it mature at this point. I don’t know if we would call it standardized because no one follows it in the same way, but one of the kind of commonly understood precepts of Agile was documentation light and process light and high velocity. In your opinion, does Agile work best without test cases? Do test cases have a role in an Agile project, Ryan?

Ryan Hobbs (12:31):

I believe that Agile still benefits from test cases. I don’t believe, in many of the Agile projects I’ve worked on, that they go as in-depth in test cases. A lot of the time that’s primarily due to the restricted time period for any given development iteration. You’re working in a two or three-week cycle; you’ve got a little bit of time to analyze the work that was planned, write your test cases or test approach, and then execute on that. There’s still a great deal of benefit to test cases in Agile purely from the perspective of building confidence in stakeholders, right? It’s difficult to convince people, especially those from a more traditional development background, that you’ve got sufficient coverage without some form of documentation, and often the easiest approach for that is test cases. So, I believe there’s still some value in Agile, for sure.

Kevin Morgan (13:29):

Wow. How to answer this? Most of the work that I do is on enterprise transformations, and quite frankly, while the word Agile is bandied around in those instances, it’s very seldom, in my opinion, actually practiced. We very seldom have developers and testers embedded together on a project, working together in lockstep like we would in a normal Agile environment. So, in my case anyway, test cases still have to be used at this point. Yeah, I can’t imagine a project working with the clients that I work with without formalized test cases. Even with the user stories that we generate, we develop a series of test cases underneath them in order to validate the business use and the business need. So I think there’s still a place for test cases, obviously, in the Agile environment.

Mike Hrycyk (14:25):

So, this next question sort of spins off of that, and in my opinion, there’s only one right answer, and I’m sure you’re both going to give it. I feel like this question gets asked a lot by product owners, executives, and so on. So, do you really need test cases? Don’t acceptance criteria just fill that need? People spend a lot of time on acceptance criteria now, whereas they didn’t in the past. And Kevin nodded, but Ryan had to bite back a laugh. So I’m going to make Ryan answer first.

Ryan Hobbs (14:59):

This is a fun one. Where to start? I have yet to see a concise set of acceptance criteria that sufficiently covers the complexity of any application I’ve ever worked on. Test cases, specifically, are a wonderful way to narrow in on exactly what the application under test is designed to do. Acceptance criteria are often written as a few sentences – if you’re lucky, slightly more – that describe the intent of the output as much as anything, and you miss things. It’s difficult to be specific. Acceptance criteria: my car must be able to drive me to the store. That’s nice, but do you want an automatic? Do you want a manual? Do you want a radio? Windshield wipers – they’re handy. Do they work? Well, maybe, but it meets the acceptance criteria of driving me to the grocery store. For me, test cases are just that next level down. You need that to verify the application.

Kevin Morgan (15:58):

Yeah, and I agree. I mean, I think when we talk about acceptance criteria and user stories, they are at a higher level. I think, as Ryan said, they express the business need, but the expected results in a test case express how you’re meeting the business need. And I think the other thing that acceptance criteria don’t do – or most of the time, don’t do – is express anything about negative results and how negative results are going to be managed, whereas with expected results in a test case, that’s explicitly what we’re saying. To give a good example: with the client I’m working with right now, we are writing expected results. Well, they’re looking at writing the expected results and saying, okay, what should we see? And I’m like, that’s great. Okay, so we’ll write the test case that way. Let’s walk through – and as we’re walking through the steps to generate the test case, so that we can emulate that and write more test cases, immediately we find that the system is doing something else that isn’t defined in the user story. So there’s an unexpected result: the acceptance criteria would’ve passed, but the system is doing something else as well that would make the test case fail – something that would cause the system to go awry. And that’s where I don’t think acceptance criteria are refined enough for us to actually accept systems based on them alone.

Mike Hrycyk (17:27):

I mean, acceptance criteria almost by definition define the happy path, and if everything only followed the happy path, well, testers would let developers do the testing. That’s what developers care about – I designed it to do this; look, it does that – and their job is done. Alright, who should write test cases then, Kevin?

Kevin Morgan (17:45):

Not me <laughter>. So, this is an interesting question because, again, I’m working with a client right now where very specific domain knowledge is required in order to write a lot of their test cases. So, test cases can be written by people with domain knowledge; they can also be written by people who don’t have as much domain knowledge, but then the expected results and the steps need to be validated by somebody with domain knowledge. So, they can be written by somebody without domain knowledge, but when it comes to execution and validation of either the test cases themselves or the expected results, it really has to be somebody with business domain knowledge.

Mike Hrycyk (18:34):

So, that’s really fair. But what about the flip side? When you have someone with domain knowledge writing your test cases, you have someone who doesn’t really understand the intent and the how of a test case writing a test case. So, isn’t it really about both sides inspecting the work of the other and making it better?

Kevin Morgan (18:52):

Well, to a certain extent. I mean, again, with the clients that I’ve been dealing with, it’s a matter of mentoring and training. So, when we come in as an organization, PLATO, and we’re working with a client, what we are doing is mentoring and teaching them how to write test cases and how to maintain test cases. Because one of the big things that’s important for most of the clients that I’m working with is not only that we deliver a project, but that we also deliver test artifacts that allow them to support the product in the future. And I think there’s a big, big difference between those two things. You can write test cases so that you can deliver the project and meet the requirements, but if you’re not careful about writing those test cases, they may not be reusable. Simple example: if you inject an employee ID number into your test case, it’ll pass for the project, but what if that employee quits in the future and you’ve got a whole bunch of characteristics about that employee that are part of your test case that you haven’t documented, because you’re using that particular employee ID? So, I think you have to be careful again about who’s writing them and how you write them. To me, it’s a bit of an art.
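A minimal sketch of Kevin’s reusability point, assuming a pytest-style setup: the test documents the characteristics it needs and the setup provisions matching data, rather than hardcoding a specific employee ID. The tiny in-memory directory here stands in for whatever system of record a real environment would use.

```python
# Instead of "use employee 10482", the test states its preconditions as
# characteristics and the fixture provisions matching data. The Directory class
# is an invented stand-in for the real system of record.

import itertools
import pytest

class Directory:
    """Stand-in for the real test environment's employee data source."""
    _ids = itertools.count(1)

    def __init__(self):
        self.employees = {}

    def create_employee(self, role, province):
        emp_id = next(self._ids)
        self.employees[emp_id] = {"role": role, "province": province, "active": True}
        return emp_id

    def can_approve_timesheets(self, emp_id):
        emp = self.employees.get(emp_id)
        return bool(emp and emp["active"] and emp["role"] == "manager")

@pytest.fixture
def directory():
    return Directory()

@pytest.fixture
def bc_manager(directory):
    # Precondition expressed as characteristics, not as a hardcoded employee ID.
    return directory.create_employee(role="manager", province="BC")

def test_manager_can_approve_timesheets(directory, bc_manager):
    assert directory.can_approve_timesheets(bc_manager)
```

If the specific employee later quits, the test still provisions an equivalent one, which is what keeps the artifact reusable after the project ends.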

Mike Hrycyk (20:04):

I think the point that you make is good, Kevin, and when you think about it, the key part that you’re trying to pull out of the domain user is that domain knowledge. And so, if you have a professional tester doing execution on the test cases and they get a test case that wasn’t professionally created but has the domain knowledge, they can execute that test quite successfully, and if they’re a good tester and they’ve been empowered by their test lead, they will then fix the test case. So, that’s a feedback cycle that really makes that work well. The reverse is not true. If you have a tester build the test case without domain knowledge and then hand it to a domain user to run the test, they won’t know what’s going on, and they won’t know how to fix it, and they’ll just be lost.

Kevin Morgan (20:46):

Or they’ll just test whatever they think needs to be tested and ignore the test case.

Mike Hrycyk (20:50):

Well yeah, that’s going to happen a lot of the time anyways. Okay, Ryan, who should write test cases in your universe?

Ryan Hobbs (20:57):

In my universe, it varies. A lot of the test cases I’m hearing discussed so far are, I’d say, mid to late in the development cycle. For me, the initial test cases are unit tests written by the developers, so there are early test cases written there, and those are very specific to the code itself. Then you move forward into more general testing, if you will – initial testing through to early integration testing – that’s written by a combination of either BAs or subject matter experts along with the testers. So, the BAs or SMEs paint the big picture and are there to answer the detailed questions, and then the testers translate that vision into a series of steps that hopefully can be repeated.

Finishing off that cycle is when we get into more of a user acceptance test phase. For me, it’s always been very important to have the specific users who will end up using the application once it goes live write the test cases, almost in the absence of knowledge of the application as it’s being built, right? They’re doing the verification of their workflow in the solution that we’re providing. So, there’s a little bit of a shift there as well. Those test cases often aren’t as professional. They’re not as detailed and orderly, and sometimes not as repeatable, as the test cases overseen by the testers, but they approach it from a more practical standpoint, from the end user. And I’ve found over the years that can catch so many things that even the subject matter experts or the business analysts or the testers themselves didn’t understand was a requirement. “Tell me what you do” and “show me what you do” are often two different things, and when a BA or an SME or an expert is helping out in an earlier phase, they’re often working off the “tell me what you do,” and it’s the “show me what you do” that catches them up – and that’s hopefully what the UAT testers get. So it varies.

Mike Hrycyk (23:09):

This raises a point that I like to make. I created a term a while ago, and it’s testing with your eyes open. One of the risks of test cases – of having a thousand test cases – is that you get trapped in just doing the test cases. But the most important skill testers still provide is that when they test, they’re looking at everything. Sure, they go through the test steps for a test case, but there’s a stat out there that someone pulled that says somewhere between 65 and 85% of defects found are not specific to the test case being run. They were things noticed outside of that test case that weren’t correct. Ryan, you had something to say?

Ryan Hobbs (23:51):

Yes. This ties into something I wrote down earlier around test cases versus exploratory testing. Exploratory testing was building momentum a few years ago, and I don’t hear as much about it anymore, but exploratory testing is a lot of what you’re hinting at there. We have restrictive test case flows – like we hand over a thousand test cases to a pool of testers. They can step through those, and that’s great. But it’s the exploratory work – the, I’m going off the beaten path, that looked interesting, I’m going to click that button – that then results in another bug. Where I think the industry was going – and to my detriment I haven’t kept up with it – was software applications that would help write the test cases based on your exploratory testing. So they would set people free, essentially record what they did, and turn that into a test case that could be repeated later if a bug was found. Because if you find a bug while exploratory testing, your steps to reproduce are incredibly difficult, because you then have to try and remember the path you took to get there. There were some tools that helped with that. But a hundred percent agree – a lot of the bugs that people find happen because they probably lost focus, realistically, on the original test case and started to wander, and it’s that wandering where you can find some really interesting things.

Kevin Morgan (25:17):

I think that’s actually the value of a professional tester. In projects that I work on, we have a term called ELMO – enough, let’s move on – for when people start going too far down a rabbit hole, and that’s just in discussions. But I think the same approach kind of applies to professional testers: we know when to go down a rabbit hole and when not to, or how far to go down a rabbit hole. I’ve worked with clients where they had wonderful people, subject matter experts, but the subject matter expert would go 35 steps into a test case and then just go completely off-path – something where the likelihood of it happening in production would be in the zero percentile – and then would write a bug. And it was like, okay, but is it even realistic that somebody’s going to go and do that same set of 35 steps and then go off and do this other thing? So, I think there’s a balance there too about exploratory testing and how far you go. As somebody I worked with at one point put it, we can’t program for people who are not intelligent, shall we say. We have to have certain limits within the boundaries of what we develop.

Mike Hrycyk (26:30):

I don’t know, it feels like a lot of the development we do has to protect against idiots <laughter>, and a lot of what testing is, is how badly can things go when you set someone free on this thing. It reminds me of an old comic strip, Family Circus, which was never funny, but there’s this one panel where a mom is asking a kid, I just sent you to the bathroom to get me this – how did it take you so long? And then it has another panel, and it shows this dotted line of the route they took, and they kept getting distracted, and it goes in circles and spirals, and they visited the tree and then the doghouse, and then that – right? Because they got distracted on their way, and that’s a bit what exploratory testing can be like. So I can see where the tooling of that would really help.

But the same thing, Kevin – a professional tester helps you there. If it takes 52-odd steps to get to a bug, it might not actually take 52-odd steps. It might just take that one wrong turn in the middle. And so, part of a professional tester’s role is to stop and try to figure out what’s the critical misstep you have to take to cause this, right? You probably don’t have to do all 52 of those things in order to get there. If you do, then it’s one of those miracle bugs that people write down and remember and keep track of.

Okay, we’ve talked a bit about some of these things and we’re coming up towards the end. One last little silo of discussion, what’s the relationship between the test cases that we’ve been talking about and automation?

Ryan Hobbs (27:56):

Automation. That’s the be-all and end-all, isn’t it? For me and my experience, I’ve spent a lot of time at the technical level of automation – code-based, API-based – so that’s slightly different, but it works off of test cases in order to understand the flow of information through a system. You’re not doing it from the front-end, but it impacts your ability to do it from the back-end. Then there’s the approach – and I’ve had people do this for years – where you take a collection of, we’ll just pick a number, your top 10 test cases based on throughput in your production system. So you’ve determined that out of your thousand test cases, 80% of the people on your website essentially follow these 10 test cases – these 10 paths through the system. Absolutely, I would love to have all of those automated. But there’s also the time when test cases just don’t make sense to automate, where either it’s low risk or there’s a low likelihood of it happening, and I’m not going to invest time in paying someone to create, modify, maintain, and enhance a test case that, like Kevin was saying, sometimes has a likelihood of 0%, right? It’s that unicorn bug – that unicorn test case. There’s definitely a high correlation, in a lot of the areas that I work in specifically, where test cases can equal automation, but I don’t believe that all test cases should be automated. There are just diminishing returns.

Mike Hrycyk (29:32):

If a test case is automatable, should test cases be written in a way that an automator could pick them up and just build the script?

Ryan Hobbs (29:43):

If it doesn’t negatively impact the human’s ability to execute the test case, then I see no harm in adapting a test case’s structure to make it easier for an automation resource to understand and code up or record that solution. The balance is how much we want to potentially trade a test case’s understandability for the squishy human against the machine. And right now, humans are really good at finding bugs through test cases. Automation finds bugs once – well, probably not even once, but once on average – and then only when the system breaks later.
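One way to strike the balance Ryan describes is a keyword-driven format, where each step reads as plain language for the human and maps to a function for the machine. The sketch below is a made-up, minimal example of that idea; Robot Framework and Gherkin-based tools work on a similar principle.

```python
# A minimal keyword-driven sketch: the test case stays readable as plain steps,
# and a keyword table lets a harness execute the same steps. All names and the
# FakeBrowser stand-in are invented for illustration.

class FakeBrowser:
    """Stand-in for a real UI driver; records actions and fakes the page text."""
    def __init__(self):
        self.actions = []
        self.page_text = "Welcome back"

def open_page(b, url):           b.actions.append(("goto", url))
def enter_text(b, field, text):  b.actions.append(("fill", field, text))
def click(b, label):             b.actions.append(("click", label))
def verify_text(b, expected):    assert expected in b.page_text

KEYWORDS = {
    "open page": open_page,
    "enter text": enter_text,
    "click": click,
    "verify text": verify_text,
}

# The same lines a manual tester would follow, step by step.
TEST_CASE = [
    ("open page", "/login"),
    ("enter text", "username", "test.user"),
    ("enter text", "password", "example-password"),
    ("click", "Sign in"),
    ("verify text", "Welcome back"),
]

def run(test_case):
    browser = FakeBrowser()
    for keyword, *args in test_case:
        KEYWORDS[keyword](browser, *args)
    return browser

if __name__ == "__main__":
    print(run(TEST_CASE).actions)
```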

Mike Hrycyk (30:24):

Automation scripts do not test with their eyes open. They test with a very narrow binocular vision. Kevin?

Kevin Morgan (30:31):

Yeah, so I mean, I think there are a couple of things. The initial testing of a new application or a change to an application – I think we’ve all talked about that – benefits from human interaction, based on the fact that, again, a developer can make sure that something meets the requirement, but there may be an extra undocumented feature that gets injected into the code that the automation may not see. So I think you can automate everything, but to me, again, as Ryan mentioned, there’s not a lot of value in it. I think, especially from a critical systems perspective, there are things that should be automated. Banking systems – we should automate login and run login scripts to validate that the system is up and accessible; those are the types of things where we want to make sure the critical function is there. And so, I think it comes full circle to the first bit of our discussion, which is about risk-based analysis and understanding that you only automate the high-impact, high-risk areas of a system. Things that, as Ryan mentioned, are low-value don’t bring a lot of value in automation. They also have a tremendous cost to maintain if you do have quarterly upgrades or things like that: if your scripts start to break, the maintenance alone to find the errors and fix the scripts can be prohibitive unless you’re doing it in an object-oriented type of framework. So I think, again, the relationship between test cases and automation comes down to more than just the test cases themselves; it’s also your risk management approach and the criticality of the test cases that you’re looking at.
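As a sketch of the maintenance point Kevin raises, here is a page-object style check of a critical login path: the selectors and navigation live in one class, so a UI change is fixed in one place rather than in every script. It assumes the Playwright Python package; the URL, selectors, and credentials are hypothetical.

```python
# Page-object sketch of an "is the critical function up?" login check.
# Assumes Playwright (pip install playwright); selectors and URL are made up.

from playwright.sync_api import sync_playwright

class LoginPage:
    """All knowledge of the login screen lives here, so maintenance happens once."""
    URL = "https://example.test/login"  # hypothetical environment

    def __init__(self, page):
        self.page = page

    def open(self):
        self.page.goto(self.URL)

    def sign_in(self, user, password):
        self.page.fill("#username", user)          # selectors maintained in one place
        self.page.fill("#password", password)
        self.page.click("button[type=submit]")

    def banner(self):
        return self.page.inner_text(".welcome-banner")

def check_login_available():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        login = LoginPage(page)
        login.open()
        login.sign_in("monitor.user", "example-password")
        ok = "Welcome" in login.banner()
        browser.close()
        return ok

if __name__ == "__main__":
    print("login check passed:", check_login_available())
```

When a quarterly upgrade moves a field or renames a button, only the LoginPage class changes, which is the object-oriented framework benefit Kevin alludes to.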

Mike Hrycyk (32:15):

So, my wrap-up question is going to fit into the new internet rule – maybe it’s internet rule number 99? – which is that you can’t have a podcast in tech these days without mentioning AI. So I see two paths where AI could potentially come in, and what I would like is your personal predictions around this, or maybe even a third path. Path number one that I see is, if you have a really big set of prescriptive test cases with a lot of steps, maybe we could build an AI tool that could consume that and generate the automation that you’re looking for and maintain it over its life. The second path, which is quite different and more along the lines of sort of a ChatGPT thing, is that you feed a bunch of requirements into the AI and it builds you a set of test scripts. So, using your own capability of looking into the future, where is AI going to help? Is it going to be of any value? And we’ll start with you this time, Kevin.

Kevin Morgan (33:07):

Wow, that’s a big question. Honestly, I’ve already used ChatGPT quite a bit in testing, but it’s really more on the documentation side, and also to be a little bit exploratory about the scoping side of writing test documentation – potentially for clients whose domain I don’t necessarily have a great understanding of, where I’ll get insight from ChatGPT or another AI engine about the things that I need to test. As far as generating test cases, I could see where you could get to the point of loading the prescriptive test cases in and it generating some form of code-based automated test cases for you, and there could be great value in that, depending on the platform that you’re looking at. What was the other one that you were saying?
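A rough sketch of that first path, assuming the OpenAI Python client: a prescriptive test case is handed to an LLM with a request for a draft automation script. The model name is a placeholder, and the output would be a starting point for a tester to review, not something to trust as-is.

```python
# Sketch: feed a prescriptive test case to an LLM and ask for a draft automation
# script. Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; "gpt-4o" is a placeholder model name.

from openai import OpenAI

PRESCRIPTIVE_TEST_CASE = """
TC-101 Submit a new claim
1. Log in to the system. Expected: homepage with Claims tab visible.
2. Open the Claims module. Expected: claims list is displayed.
3. Create a claim for $125.00 and submit. Expected: confirmation banner appears.
"""

def draft_automation(test_case_text):
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You convert prescriptive manual test cases into "
                        "Playwright Python scripts. Flag any step that is ambiguous."},
            {"role": "user", "content": test_case_text},
        ],
    )
    # The draft still needs human review before it is trusted or run.
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_automation(PRESCRIPTIVE_TEST_CASE))
```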

Mike Hrycyk (33:56):

You feed it requirements, and it feeds you test cases.

Kevin Morgan (33:59):

Yeah, that one I see as fraught with a lot more challenge, quite frankly. And the reason for that is that after 40 or 50 years of software development, we’re still terrible at writing requirements. Ambiguity in requirements is still probably the primary driver of defects in systems; we’re not writing our requirements in a way that is comprehensive and unambiguous. So, I can’t see that an AI engine is going to remove the ambiguity in those requirements, because it takes things verbatim the way that they’re written and then produces something from those verbatim inputs. The old adage of garbage in, garbage out applies just as much to AI as it does to anything else.

Mike Hrycyk (34:41):

Just think of the developer that you’ve known who thinks the least. That’s kind of what AI is set to do. They take a requirement and just do it.

Kevin Morgan (34:49):

Well, and I laugh about it because, and Ryan, you’ll probably have experienced this as well, when we started in software development and writing test cases and doing testing and things like that, a lot of the developers that we worked with were not only developers, but they had intimate business domain knowledge. So, back then, a requirement was a sentence, and they’d write 5,000 lines of code, and it would work. Unless the AI engine has that intimate business knowledge, I don’t see how that idea works, right?

Mike Hrycyk (35:20):

Ryan, predict the future!

Ryan Hobbs (35:29):

Hmm. The future of AI – hold on, I’ll type that into ChatGPT. <laughter> For me, there are a few nuances and interesting things with AI. AI itself is a very cool tool that is evolving immensely quickly, right? You look back a year, and the change in the ability of AI engines – whether it’s ChatGPT or one of the plethora of other ones – to adapt and update and modify their thinking patterns is amazing. And I’ve played around a bit with AI because I wanted to see what it could do. So I’ve pointed it, for example, at a website and asked it to generate test cases, and it’ll look at the website, and it’ll start generating test cases. They’re very rudimentary test cases. They’re very basic. So, on its own, it has difficulty. If you look at it more from a programmatic perspective – feeding it information to enhance its ability to review applications or websites to generate test cases – that’s a step that will push it probably to at least 80% of a human a lot of the time. It’s getting very good. The trick is that’s more from a regression standpoint, right? That’s a pre-existing application or website that you can have it mine.

When you’re looking at things like Kevin was saying – passing it requirements, passing it documentation – that gets a little bit trickier. And the trouble with the AI engines, at least as they’re currently built, is that they invent things in how they interpret. So you can ask them a question, and as part of the answer, they will insert non-truths into it. It doesn’t know that it’s doing it. But when it comes to testing and writing test cases, they’re inadvertently writing negative test cases thinking they’re positive, or they’re missing swaths of an application because they didn’t interpret that comma the same way that the person who wrote the statement intended it. I believe in the coming – I was going to say five years, but probably less than five at the rate it’s expanding, say three years – there will be sufficient AI advancement that testers may be at a point where they are confirming the output more than writing the output. They’re going to impart their skills and knowledge, working with the subject matter experts in the business, tweaking the AI’s learning process for their specific domain, so that the test cases, be they manual or automated, start getting shockingly more realistic – that, I think, is probably what’s going to end up happening. And that may be as soon as three years, could be five, but definitely not beyond five. It’s being adopted quite quickly in a lot of different areas. So, I think it’ll expand quite a bit.

Mike Hrycyk (38:05):

Are we all set to retire in five years?

Ryan Hobbs (38:07):

Oh, I hope so.

Kevin Morgan (38:10):

So Ryan, it’s interesting that you’ve noted that some of the AI engines will return untruths. One of the things that I asked ChatGPT at one point – because I had a fella that I went to hockey school with, and his name was Bruce Cowick, and he was a hockey player for the Philadelphia Flyers who won a Stanley Cup, but he never played a regular season NHL game before he won the Stanley Cup. And so I asked ChatGPT if there were any hockey players that had won a Stanley Cup before they had played a regular season NHL game. At first, ChatGPT came back and said, no, that’s not possible. And then I said, well, how about Bruce Cowick? And then it came back and said, oh yes, well, Bruce Cowick did, and gave some details. The details were completely fabricated. For one, Bruce Cowick was never a goaltender. They didn’t get the right team, they didn’t get the right era, and yet they had all of this information that was, as far as I was concerned, completely made up. So, I kept probing and asking questions to try and figure out, where did you even come up with this? And that, to me, is the danger of AI. People have a higher level of trust in it than they should at this point. And I think for us as software professionals, it’s even more concerning, because we need to be the gatekeepers of the truthfulness of this AI when it comes to developing in the future.

Mike Hrycyk (39:31):

Okay, we’re well over time now, but thanks for that bit at the end there. I think that’s very interesting. So, thank you to our panel for joining us for a really great conversation about test cases. I think it was a really good check-in on something that’s core to our industry. Thank you to our listeners for tuning in once again. If you have anything you’d like to add to our conversation, we’d love to hear your feedback, comments, and questions. You can find us at @PLATOTesting on LinkedIn, Facebook, X, or on our website. You can find links to all of our social media and our website in the episode description. If anyone out there wants to join in on one of our podcast chats or has a topic they would like us to delve into, please reach out and give us your suggestions. If you’re enjoying our conversations about everything software testing, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next time.