QA teams new to automation: Get started on the right foot! Automation experts Millan Kaul (Engineering Manager (QA), Kablamo Canada) and John McLaughlin (Senior Automation Architect, PLATO) join our host, Mike Hrycyk, this month to explain the 5 Ws (and an H!) of automation testing, helping you decide when, where, and how to automate for maximum impact. Learn how to free up your team and improve software quality. Take a listen for insights on building a strong automation practice!

Transcript:

Mike Hrycyk (00:02):

Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today, we’re going to talk about automation with our panel of testing experts. We’ve talked off and on about automation, and we often pick a specific topic, but for this episode, we came up with a fun gimmick that we think will be interesting, and we’re calling it the Five Ws of Automation with a bonus H. So: what, why, when, where, who, and then we added how. This is going to be a little bit more introductory in automation, with some depth, and we’ve pulled together some questions around that. For this conversation, we brought back John McLaughlin, who is the Senior Automation Architect here at PLATO. You’ve heard him before if you’ve been through our back catalogue. And I’d like to introduce someone new to the conversation, Millan Kaul. I met Millan earlier this year at a YVR Testing meetup. As some of you may know, PLATO is a sponsor of a meetup that I host regularly here in Vancouver to talk about all things testing. Millan came out, got involved really quickly, and even spoke for us this year. He’s quite involved in the testing community and very passionate about it. So I’m going to let Millan do some further introduction of himself, and then we’ll let John go.

Millan Kaul (01:06):

Thank you, Mike. Hello, I’m Millan. Over the last 15 years or so, I have worked across more than three continents, delivering high-quality products using automation in small, medium, and enterprise-scale businesses. Currently, in 2024, I’m working at Kablamo Canada as a part of their Engineering Leadership Group.

Mike Hrycyk (01:28):

Alright, John?

John McLaughlin (01:30):

John McLaughlin here. I am, by title, a Senior Manager at PLATO. I’m based in New Brunswick. My primary focus is, and has always been, various forms of automation, since I started testing in 2006 – 18 years ago this past April. I’ve worked on many projects, large and small, over the years, with a whole bunch of different technologies. My very first experience was with QuickTest Pro, which is now UFT, doing GUI automation. I spent a number of years after that doing API testing with SoapUI and Postman, and custom API testing with C#, a whole lot of Selenium projects with Java, JavaScript, and C#, and a number of process automation-type tasks using Python in between all those years, too. During projects, I like to get into the mix of the technology of the project that I’m working on. Currently, our project is Spring Boot-based Java applications, and my current drive, I guess, is to build little Spring Boot applications to help the testing process along the way – both to build comfort and familiarity with how the Spring Boot world works, and also to hopefully contribute to our testing process here.

Mike Hrycyk (02:47):

Thanks John. I always love when we do these podcasts because I always write something down that I’m going to have to go read more about after. And now Spring Boot is on my list.

I was thinking about this when we came up with the idea. I think that most people have in their brain a cadence of how you say who, what, where, when, and why, because it sounds wrong when you do it in a different order, at least in my mind. And if I’m wrong, you can tell me about that out there in listener land. But I changed the order somewhat for the questions because they made more sense that way. Often in this podcast we start with defining our terms, and I think that’s the best way to start here, too. Where I’ve taken these questions is more at the basic end of automation, but I think there are some good dialogues to be had there. So, the question is: what is automation? Why are we talking about it at all? We’ll start with you, John.

John McLaughlin (03:37):

So, I think of it in a bigger-picture kind of sense: beyond, say, test automation specifically, automation in general is using some form of technology to make a series of repeatable tasks easier to reproduce again and again and again without consuming person-time. The motivation and the payoff behind that is that I find people – meaning manual testers, in the traditional sense – are better thinkers than a coded script that can do the same tasks over and over again. So, automation, to me, is the process of using technology to help smart people exercise applications in ways that are a little more challenging than time would previously have allowed them to.

Mike Hrycyk (04:22):

Alright, Millan?

Millan Kaul (04:25):

Yeah, I like how John mentioned the functional testing experience and the experts – yes. The idea is to make their life easy by automating things. That would be my definition of automation too. For anything which is repeatable, you just write some kind of script, in any language, using any tool or plugin, to simplify your repeatable tasks. But then again, automation can be something which saves your time, something you want to run at scale. So, there might be various definitions of it, but I would still stick to the definition that it’s just a script – writing a script to make your life easy.

Mike Hrycyk (04:58):

And I agree, although I guess we have to be careful sometimes with terms because there’s process automation out there, and there’s software that people are using for process automation that helps someone in their everyday task, which is different than software testing automation. But we’re in a conversation about software testing. So, I don’t think that’s a big problem here.

But one of the things that I’ve noticed, in discussions with clients, is that most of the time when people say test automation, they seem to mean GUI automation – automating the stepping through of the process. And to some extent, that excludes API automation, and I don’t know why. But I think that our discussion here will encompass all of it. And then the other thought is automating setup processes: setting up your data so that you can do a test is just as valuable as automating an end-to-end test. Thoughts around that, Millan?

Millan Kaul (05:48):

Yeah, totally get that. And I second that, Mike. And I’m sure I’ll get a chance to speak about the experience and complexity, as well as the importance, of API testing, performance testing, and the various types of automated testing we should be doing. GUI is definitely there at the top of the test pyramid. But then there are various lower levels of automation testing which hold their own importance.

Mike Hrycyk (06:09):

You can’t do performance testing without automation, but we don’t think of it as an automation skill. We think of it as a performance skill. John, anything to add?

John McLaughlin (06:18):

So, with GUI automation, you can visually see something, and people like visual things, so it looks fancy and neat. But when you understand what’s happening, I don’t think the value is as high for GUI automation as it is for other types of automation. There is value there, of course, for GUI automation, but you can do a whole lot more verification of workflow and business processes through an API or other kinds of backing-layer data testing than you could with, say, Selenium through the UI. So, it’s a little hard. I know UI is popular, mostly because people can see it doing something, and that means something to a certain group of people. But I think when you understand it, really there’s more value in the nitty-gritty application logic automation versus making the screen move around.

Mike Hrycyk (07:11):

Okay, next question. This is our why. Why automate? And we’ve talked about it a little bit, but maybe we’ll get a more targeted answer for it. So, John, why automate?

John McLaughlin (07:19):

So again, as I mentioned in the ‘what is automation’ question: in the scope of a project, and in the agile world where everything’s moving faster, applications grow at a pretty quick rate, and people can only test them and keep up with verifying functionality so much and so fast. If you include automation, it shifts that scale of work from a smart human tester to an automated script that validates and verifies it. Ultimately, then, it expands the total test coverage of your application, and your confidence that when you hit your final stage in moving to production, there’s less likelihood of surprises on the other side.

Mike Hrycyk (08:03):

Millan, do you want to add to that? Disagree with it?

Millan Kaul (08:06):

I wouldn’t disagree, but I have a different perspective about why to automate. Automation is all in-house. It is everything we do before going to production. So, automation is for our engineering teams, development teams, and businesses. The goal is to find defects – and not let them recur – by catching them early, before they go to the end user. So, I would say automate to deliver high-quality products. And why automate? To deliver with speed and efficiency. That is what I usually say in most of my conversations: that should be the goal of automation, not just running something on a VM or a remote pipeline. If you have that simple goal set for your team and business, I think it completely makes sense to achieve it. And then you can literally break it down into what technology you want to use, and how much investment you want to put into front-end automation, back-end automation, performance, and all those test-generation scenarios. But yeah, I think automation definitely gives you speed and efficiency compared to even an efficient functional tester doing it throughout the day.

Mike Hrycyk (09:07):

The next question – the “when” question – doesn’t always have agreement. It’s a question that gets a lot of discussion and I don’t know if we’re going to pull in the same direction. But often those disagreements come from PMs and developers. So, we’re going to start with you, Millan. When is the right time to automate?

Millan Kaul (09:26):

I would say every time. I mean, even before you write the code, after you write the code, while developers are writing the code. I think all three phases are right, and yes, people do have takes on it, but as a quality engineer, or QA, or any of these titles, you have solid influence over the outcome it gives you. So, for example, you can even write automation – and I have done it in many projects – before the backend is developed, before anything is really hitting the API for the first time, because even developers usually get an API spec, for example, a Swagger/OpenAPI specification or something similar. And that is their requirement, right? The user needs which they have to develop against. Automation engineers can write automation based on that same spec using mocks. You can still hit the same endpoint using a mock and expect 500, 200 – whatever negative or positive scenarios you have to test. And as soon as the dev work is deployed, you just remove the mock and run the actual test. So, if your dev work is done in the morning, then on the same day, in the evening, you are running your automated tests against it. That kind of ties back to the speed and efficiency thing I was talking about in the “why to automate”. But then there might be more scenarios – sometimes you have to wait for the design to be complete. You cannot automate the frontend for a mobile app or a web app if the design team, or whoever is doing UI/UX, is not yet done. But we can still build the basic framework and expected tooling for the frontend automation while designs are being developed, because any automation framework, like any development framework, needs utilities, functions, and reusable methods. There’s some very common logic we always need. So, you can still continue automation before the frontend code is written as well.
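To make that concrete, here is a minimal sketch of the mock-first pattern Millan describes, in Python with the requests and responses libraries (the service URL, endpoint, and payload are hypothetical, not from the episode). The tests are written straight from the spec while the backend is still being built; once the real service is live, you drop the stubs and the same assertions run against the deployed endpoint.

```python
# pip install requests responses pytest
import os

import requests
import responses

# Hypothetical service; point API_BASE_URL at the real deployment later.
BASE_URL = os.getenv("API_BASE_URL", "https://api.example.test")


def get_user(user_id):
    # The call under test, written straight from the Swagger/OpenAPI spec.
    return requests.get(f"{BASE_URL}/v1/users/{user_id}", timeout=5)


@responses.activate
def test_get_user_ok():
    # Stub the endpoint exactly as the spec describes it; remove this stub
    # (and the decorator) once the real backend is deployed.
    responses.add(
        responses.GET,
        f"{BASE_URL}/v1/users/42",
        json={"id": 42, "name": "Ada"},
        status=200,
    )
    resp = get_user(42)
    assert resp.status_code == 200
    assert resp.json()["id"] == 42


@responses.activate
def test_get_user_server_error():
    # Negative scenario from the spec: the contract says a 500 is possible.
    responses.add(responses.GET, f"{BASE_URL}/v1/users/42", status=500)
    assert get_user(42).status_code == 500
```

Minus the stubs and decorators, the same tests run unchanged against the live service, which is what lets the team run automation the same day the dev work lands.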

But I think where the whole thing lies is – I don’t know if I’m totally correct, but 90% of frontends have a backend. There is always a backend to a frontend effort; either it’s a plugged-in backend, or it’s an in-house backend team. So again, going back to the test pyramid: I think we should focus on the unit and integration layers, and on the end-to-end layer through API testing, and that you can start as early as you can. Nobody should be stopping you or asking you to wait until the development is done. That’s my take on it.

Mike Hrycyk (11:26):

Okay. I might have a follow up question, but I’m going to get John to put in his two cents first.

John McLaughlin (11:31):

Yeah, I think I’d be pretty close to agreeing with what was said. My opinion would be – it didn’t used to be this way when I started with QTP and the fancy GUI automation; it wouldn’t have been this back then – but today my opinion would be: right away. And again, people who think GUI automation is the best form of things probably wouldn’t understand how you can start automating things right away. But there’s so much you can do to help a software project right out of the gate if you pay attention to the patterns that are happening: developing little helper scripts for test design – that being manual scenario test design – implementing data sets, or building data sets for future use. There are those mock APIs that were mentioned a few minutes ago, if you have that type of specification. So, when you get good at it, and you can recognize when things are already pretty repeatable and kind of established and following a pattern, you can, in my opinion, start automating from day one. It wouldn’t be those fancy GUI automation tests that folks like, but it would still be contributing value to a project.

Mike Hrycyk (12:46):

We’re going to talk more about the GUI stuff in the ‘where’. So, two questions came out of what you fellas said. One of them is that for a lot of automators who are running agile, it’s really common that you find people automating the features they’ve done one sprint behind. I hear that over and over in a lot of places. And I think that comes from a place of stability: you’re going to manually test the stuff, it’s going to get delivered towards the end of the sprint, the testers are going to test to make sure it’s actually doing what it’s supposed to do, and then there’s not enough time left within the sprint. So doing work early where you can helps with that. But what do you guys think about one-sprint-delay automation?

Millan Kaul (13:28):

Yeah, I think that’s a traditional mindset, I would say. So, we just have to change our mindset – the approach is still the same. Like I mentioned, you can start writing automation as early as you can. I remember one of the projects I was working on, and my team was like, oh, we’ll always be one sprint behind, and during the sprint, we have to do all this functional manual testing, always. And then I actually took them to the API spec and sat with the developer and asked, are you using this API spec? They said yes. And I asked my team, why are we not using this API spec? Why are we waiting for a developer to write it in a fashion which you can hit? In that project, we did deliver minus-one-sprint for three sprints. But after that, we never delivered outside the sprint – which means that in the same sprint, the development was done and the automation was done.

I think it all depends on how you set clear goals and map them to the expectations of the business. And the interesting thing is, developers were contributing to our automation, because we were using the same Java Spring Boot combination, and we were hitting some complex Kafka event streaming, which was tough even for developers to read. So, I’m talking about frontend events: let’s say you do a login on a frontend screen, and it generates some events for, let’s say, auditing, and those events were supposed to be API tested. Are they getting generated? Are they in the right format? Do they have the expected template or schema? If you change the mindset, you can get more people on board with you, including developers, and you can deliver every time, on sprint. And I usually keep that goal for my teams: to deliver during the sprint. Otherwise, it is really hectic and energy-draining for QA teams, because sprints are very busy. It’s not just automation – you have to do functional testing and reporting, and you might find bugs, and those usually have to get functionally retested. Yeah, I mean, minus-one-sprint unfortunately is still a mindset, I believe, but with an extra bit of effort, things can be delivered and done within the sprint as well.
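A sketch of the kind of event check Millan is describing, again in Python (the event shape and schema here are hypothetical): in a real suite you would drive the login, consume the resulting event from Kafka (for example with kafka-python’s KafkaConsumer), and then validate it against the schema agreed with the developers.

```python
# pip install jsonschema
from jsonschema import ValidationError, validate

# Hypothetical schema agreed with the developers for a login audit event.
LOGIN_AUDIT_SCHEMA = {
    "type": "object",
    "required": ["eventType", "userId", "timestamp"],
    "properties": {
        "eventType": {"const": "USER_LOGIN"},
        "userId": {"type": "string"},
        "timestamp": {"type": "string", "format": "date-time"},
    },
}


def test_login_audit_event_matches_schema():
    # In a real test this event would be consumed from the audit topic
    # after driving a login through the API or UI; inlined here as a sketch.
    event = {
        "eventType": "USER_LOGIN",
        "userId": "u-42",
        "timestamp": "2024-05-01T09:30:00Z",
    }
    try:
        validate(instance=event, schema=LOGIN_AUDIT_SCHEMA)
    except ValidationError as err:
        raise AssertionError(f"Audit event failed schema check: {err.message}")
```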

Mike Hrycyk (15:21):

So, something you said connects to the other question I was going to ask – and I don’t know if you’ve already made this connection. The idea that you can write your tests from the spec, with stubs, and make sure the tests are working and doing what you think, and then once the backend is developed you can remove the mock and test the real thing, to me sounds like a natural progression of the idea of test-driven development (TDD). TDD is usually done where the developer writes the test, writes the code, runs the test. In your idea, it seems like the tester writes the test and the developer writes the code. But the synergy for me sounds more like: don’t have the tester remove the mock; have the test checked in and available, and make sure the developer’s aware, so they write their code, take the stub off themselves, and run the test. Within seconds they have validation of what they’ve written. So, the context for being able to fix it is as tight as you could possibly imagine, while having the benefit of not having exactly the same person write the test as writes the code. Have you taken it to that next generation of your idea?

Millan Kaul (16:21):

Yeah, I think you hit the nail on the head. It is for the teams who are not doing TDD – let’s say they don’t have the capacity to do that, or they don’t have the right framework. Even developers face challenges in their day-to-day life, and not every project starts from scratch. Somebody might have started it, and people just join it. So, if they’re lacking that setup in their development framework, that’s where the testing team comes in handy and is helpful to them. And you’re right, as soon as you unplug the mock – whichever mocking framework you use, and there are very similar frameworks in JavaScript – you can literally replace it. Another beauty of this is that there are things you’d really want to test, such as the server being down, which you would never otherwise be able to test. And in real life, if you go back to all the famous cloud service providers and check their status pages, they have been down for hours and –

Mike Hrycyk (17:10):

You can’t just ask Amazon to shut themselves off for 20 minutes?

Millan Kaul (17:14):

I would love to, if that’s an option. But then the beauty is we have got the mocking frameworks, and I do use them a lot. And the interesting thing is these are really hard scenarios for engineering teams and development teams to test unless they use things to mock them. I do use them a lot to mock a 500 and see what error the API would show to the front end – would the front end show the right error message? I think that’s a real value add. So, there are many, many benefits from changing this mindset. But yeah, you’re right. I mean, if development teams are doing TDD, that’s definitely good. But the challenge I have seen with those teams is they lack the beauty of exploratory testing. I’m sure we’ll touch on that topic somewhere as well, but that’s where the QA role, or the test engineer role, comes in: they need somebody to advocate for quality.
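As a sketch of that “server down” scenario (names hypothetical, again using the responses library): the stub can raise a connection error instead of returning a body, so you can assert that the client turns an outage into a friendly message rather than an unhandled exception.

```python
# pip install requests responses pytest
import requests
import responses

FRIENDLY_ERROR = "Service is temporarily unavailable"


def fetch_profile_name(url):
    # Client-side wrapper under test: it should degrade gracefully on
    # outages and 500s instead of blowing up (hypothetical behaviour).
    try:
        resp = requests.get(url, timeout=5)
        if resp.status_code >= 500:
            return FRIENDLY_ERROR
        return resp.json()["name"]
    except requests.ConnectionError:
        return FRIENDLY_ERROR


@responses.activate
def test_outage_shows_friendly_message():
    url = "https://api.example.test/v1/profile"  # hypothetical endpoint
    # Simulate the provider being down: the stub raises instead of responding.
    responses.add(responses.GET, url, body=requests.ConnectionError("down"))
    assert fetch_profile_name(url) == FRIENDLY_ERROR
```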

Mike Hrycyk (18:03):

Agreed. Alright, John, we’ve talked a lot. Do you have any thoughts across all that?

John McLaughlin (18:08):

Well, the only thought I could add is this: I think one of the first things Millan mentioned was that it’s maybe kind of an antiquated idea to do the sprint-behind pattern, which again maybe speaks back to your observation, Mike, that folks are interested in GUI automation – because naturally, if a GUI is being developed, how can you automate a GUI that’s not fully developed? And that would make sense. But there are so many other things you can do in the testing process to be effective and helpful, either to the testing progress in sprint or in planning, getting ready for that second sprint where you can go back and automate those GUI features that are ready for you. The API level is a different conversation, too. I think Millan hit a lot of the points there: if you have the definition, you can mock a whole lot of things and get things ready for when the code is real, and it can be actioned on again when it’s ready for full testing by the automation team. In general, there are so many things beyond traditional test case automation that a smart automation tester can do, so to say you have nothing to do until sprint two is probably handcuffing your project a little bit, in my opinion.

Mike Hrycyk (19:24):

So that segues brilliantly into the ‘where’ question: where do you focus your automation? Originally, in my brain, I think this was about the test pyramid and where your automation sits within it, but I think there’s a broad scope for this question, so I’ll take anything you want to talk about. We’ll stick with you, John.

John McLaughlin (19:44):

So, where do you focus? This is a hard answer because it’s kind of general, but again, it’s about recognizing those patterns – the patterns that are repetitive, that are consuming people’s time, when those smart people could use that energy and time on better, more challenging places in the application. So where do you focus your automation? In general, I would think you want to focus on the repetitive tasks that are going to help your team accelerate and accomplish the goals of any given sprint, if you’re in that pattern, a little more seamlessly than if they didn’t have the help of a tool to guide them.

Millan Kaul (20:24):

Yeah, I would like to get a bit specific about where we should focus automation. It totally depends on context. The test pyramid is right; we know very clearly where to focus – do as much as you can at the bottom. But if you think about where to automate from 10,000 feet above, I would say you can decide based on three types of criteria. If it’s a new project, I would say start early and pretty much try to automate everything that is going to be repeated more than two times. I think that’s a very basic foundational idea, and you can read a lot about it in the Google developer pages as well. If it’s an existing project – you are pretty much joining in the middle of it, it was all functionally tested, and the idea is to automate that – I would say stick with the business, product owners, and any project managers, and try to find out the most critical scenarios or user journeys. Let’s say the login page and signup page – for a business, those are the most important and basic pages – and even the payments page. Just list out all those scenarios, have that as a goal for automation, and start from there.

And the third category that usually comes to my mind: historically, some projects, especially at financial institutions, have mainframes – they still use mainframes behind the scenes – and that is super stable, so you don’t need to automate that. But if your backend is a mainframe, for example, and you’re adding new fancy features on the front end, or adding more API gateways or APIs in front of it, I would say make sure you test the new stuff so that you don’t break the old stuff. You don’t have to automate everything; just focus on what would give you the best return on investment and value from automation. So if you can categorize your project as new, existing, or historic, that would give you context on where to automate.

Mike Hrycyk (22:07):

I agree with everything you both said. I’m going to tackle the question from two different directions. When I said the testing pyramid, I find that expense is a good thing to think about, and expense comes in two different ways. One is time expense, and I don’t mean time to implement the automation so much – I mean the time it takes to maintain the automation, right? GUI automation is expensive to maintain because the front-end interface changes more often. And we’re starting to see tools that are getting more powerful with AI, so that for a certain spectrum of changes the tool can just take care of it, or at least accelerate the fixing. But that maintenance cost is expensive. Whereas with API automation – APIs don’t change nearly as much, because services don’t change nearly as much, and when they do change, fixing it is cheaper. So, before I get to the second aspect I was going to look at, are there any thoughts from either of you around that? Or is it just, yeah, that’s obvious, Mike?

Millan Kaul (23:08):

No, I think you actually said it right. It’s not that APIs don’t change – there’s very clear versioning of APIs. So, in all the URLs, it’s v1, then v2, and it goes from there. APIs do upgrade; there are major updates and minor upgrades in an API, and that’s why it’s very easy to identify and fix them. But in terms of UI, it’s tricky because, as you said, designs might change, and the company might just rebrand their whole website or bring in new logos, and the first thing that could do is break your front-end automation. And yes, that’s super costly, and like you said, that’s why it’s at the tip of the test pyramid – so that you don’t invest a lot of time in front-end UI or GUI automation.

Mike Hrycyk (23:45):

John?

John McLaughlin (23:47):

So yeah, APIs are less likely to change over time. They’re the core business logic, so they’re more stable. It takes a lot of change control process to go and change an API, versus the UI, where some creative team could analyze the placement or grouping of certain controls and decide they want them laid out a little differently. So, maintenance is, again, a little more frequent and harder to do – I guess not necessarily harder to do, but it comes a lot faster and a lot more frequently. The excitement around AI is one that I find interesting because, again, I think it’s similar to the impression that GUI testing seems to have on certain people. AI is another one of those trendy words. Mind you, AI is cool at what it can do, but it’s still very new. I would not reliably trust that AI is doing the right thing all the time when it comes to things like self-healing tests or any type of AI mechanism that gets included in a test automation suite. It will be interesting to see the day when that happens, but I don’t think it’s now, and I think people’s excitement about AI needs to be tempered a little bit – let it actually percolate for a while and see how it does.

Mike Hrycyk (25:06):

Awesome. Alright, so the other aspect of expense that I think about with automation is time to run. That’s a big thing: if you are releasing to production multiple times a day, the runtime of your automation suite becomes really, really important. Because the idea is that you run your automation at build, it takes X number of minutes, and then you can push through to production. And the longer your automation takes, the further out of context the developers are from being able to act on it, right? So, keeping it tight is important. GUI automation is super expensive in terms of time to run. If you say, “Hey, I’ve got 500 GUI scripts”, now you’re talking about hours and hours and hours, no matter how much heavy parallelization can help you. Whereas 500 API scripts can run in under 30 seconds – it can be just significantly faster. So, looking at that, understanding it, and prioritizing your GUI scripts so that you can run at a reasonable clip is important. Open to both of you – any thoughts around that?

John McLaughlin (26:04):

So you hit pretty much all the important points, I think, Mike. GUI automation is the harder one to do faster. Grid computing technologies – distributed nodes running multiple tests – make it a little bit easier, but including a mechanism like that increases the complexity of your test suites quite a bit, and then it becomes a little more challenging to debug when things go wrong. And it can put a question mark around the results you’re seeing if things are complicated by the grid structure. APIs, again, are naturally faster because they’re just little hits to your back-end logic. That goes back to probably the biggest bang for anybody’s buck when it comes to automated testing: the API layer that goes directly through your business logic, where, in a lot of cases, the things your money depends on live. They all travel through the application logic that’s tied to your API layer. Your UI comes and goes – it is what people see and what people use – but what keeps your business afloat is probably your API layer. So being able to run that one at a fairly quick rate is certainly beneficial, and a strength of the API side of automation.

Mike Hrycyk (27:23):

So, the thing that I look at with automation – where to automate – is a lot around the ROI. What’s the return on the investment of what we’re doing? Because there’s a big cost to automation: it takes time to write it, and it also takes time to maintain it. The first factor I like to look at is the priority of what I’m automating. There are high-priority flows in any application – where the money comes from, where the value comes from – so make sure you’re thinking about that as you focus. And logins are really important, but it’s the basic login that’s important; the 27,000 different permutations of different passwords, that’s not important today or tomorrow. Then complexity: things that, every time you test them, are going to take your testers 17 hours to get set up before they can run the test – automate that as soon as you can. And that stretches out to the idea of automating as much data creation as you can, because it’s going to be repetitious, and it’s not worthwhile for someone to be using a GUI to do those steps before they can run their testing. So, you might not automate the actual tests themselves, but automate the data setup.

And the other one, kind of obvious: when you want to test multiple different parameters, get that automated, but use data-fed automation for it, so that you don’t have to hand-write everything – you’re just pulling from data files. And really think about whether you are automating in the right places, because there are lots of reasons not to automate everything, and that investment, and the maintenance of it thereafter, is important.
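A minimal sketch of those last two ideas together – automated data setup plus data-fed tests – in Python with pytest (the user, rule, and rows are hypothetical; in practice the rows would be pulled from a CSV or JSON data file and the fixture would create data through your real API):

```python
# pip install pytest
import pytest


@pytest.fixture
def seeded_user():
    # Data setup of the kind Mike describes automating. In a real suite this
    # would create the user via an API call and delete it again on teardown.
    yield {"username": "valid-user", "password": "Correct#Pass1"}


# In practice these rows live in a data file; inlined to keep the sketch
# self-contained. One test body covers every permutation.
PASSWORD_CASES = [
    ("Correct#Pass1", True),
    ("wrong-password", False),
    ("", False),
]


def check_login(user, password):
    # Stand-in for the real login call (an API hit or a page object).
    return password == user["password"]


@pytest.mark.parametrize("password,expected", PASSWORD_CASES)
def test_login_permutations(seeded_user, password, expected):
    assert check_login(seeded_user, password) is expected
```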

Coming into the end stretch, I think we’re going to – well, I don’t know, but we’ll ask the question, see what we get. Who is the right person to do the automation?

Millan Kaul (29:07):

I think everybody does automation. Any software you’re using in production is automated software – if you do online shopping, grocery shopping, purchasing a dress. But then DevOps, they build automated pipelines; SREs, they build dashboards to hit health endpoints and check the observability of the environment; developers provide their unit and contract testing. So, most of these are obviously doing things which should not be manual. But yes, in terms of QA – all this evolution of QA titles, such as quality engineers, developers in test, and that kind of thing – I think QA should be involved in writing tests and automated tests. Those should be the end-to-end tests, like we discussed a couple of times in this podcast: integration, API testing, and web. So, they should be the right people to do it, but I would say everybody should be involved in the roadmap and planning journey of the test automation, so that everyone knows how much business value is there and we can decide who does the back-end automation and the front-end automation. Yeah, I mean, the rough answer is everyone – but QA, we are doing it to make your life easier.

Mike Hrycyk (30:16):

Great. Everyone. Hey John, how do you answer that question?

John McLaughlin (30:20):

Yeah, so testers, in general. Everybody who’s a tester is quite good at recognizing patterns – probably to a fault, and to the annoyance of their families, maybe – but we’re good at recognizing patterns. The biggest hurdle, I suppose, is having the confidence to approach the problem with a different tool, so that you don’t have to repeat that same pattern again and again and again. A large pocket of testers I know maybe don’t have an interest, or are intimidated by a language like Java or C#, and don’t want to go down that path, but they have lots of ideas on how something could be automated, or how something could be made a little less tedious and redundant to repeat over and over again. I think there was a time, maybe, when there was a big divide between what a manual tester is and what an automated tester is – automation people do automation, and manual people do manual testing – but in general, if people get over that and keep eyes on the bigger picture, anybody on a software team who cares about what they’re doing and is mindful of those patterns can contribute to the test automation process.

Mike Hrycyk (31:42):

I agree. And I’m not afraid of the “everybody” idea either. When we start talking about Cucumber and other plain-language paths, or record and playback, there’s value for everyone on a project. The important thing I want to keep in the conversation, always, is that we make the right testing decisions. Even in an agile team where developers are doing a lot of the testing, I still want the QA-trained mind helping make sure that we’re making the right testing decisions.

Millan Kaul (32:11):

On that one, I have one thing to share. I was working for a company where we were following that structure, and we had these tribes across multiple teams. And what we realized is that half of the teams were really good at writing automation – I’m talking about the developers. They were even writing GUI tests, API tests, and everything. What they were lacking was the knowledge and the patterns of testing – what John was mentioning. And half of the teams were like, we don’t know how to automate at all. So, we ended up with the structure of a quality team with quality advocates and quality engineers. Teams who needed somebody to write code would get quality engineers, who write code most of the time and build automation frameworks. And a few teams really needed quality advocates, who have the knowledge of automation but who mostly run the team’s exploratory testing sessions, test data setup, and all that. I didn’t mean to interrupt you, but that was something interesting that came to mind, and how we solved that problem.

Mike Hrycyk (33:03):

Yeah, no, I agree, and that’s one of the basic tenets of agile working, right? Your tester on the team is the quality advocate – not the only person doing testing, but the quality advocate who understands testing and helps other people understand it. Okay, our wrap-up question, our last one, our bonus H as it were: how do you get started? This is a question that lots of people have answered in lots of different ways, but starting with you, Millan – a company has no automation. How do they get started?

Millan Kaul (33:33):

Yeah, just go to my GitHub account and check out my repositories – but no, honestly, start with knowing your own skills. If I’m one of the first few testers in the company writing automation and testing, don’t fall into the trap of these fancy tools. Tools might change every two years. I would say go with a language you’re familiar with – it can be Java, any of these. Start with that. There’s a lot of open-source work done; check out a boilerplate project and then write a basic, simple API test – like a single test that hits google.com, for example. Building a front end? Just spin up your mobile app or your webpage and try to log in. Start simple, start from what you know, and as you grow and learn, your automation framework will naturally become more complex based on the business and the context you’re trying to automate. Those would be my first few steps if you have no automation. And of course, it’s an ongoing, continuous learning experience where you might change your decisions – you might change your language or framework. That goes for any of the frameworks, like the Spring Boot John was mentioning, right? Frameworks and technologies do deprecate, and we need to continuously maintain them as well.
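Millan’s “start simple” first test can be as small as this sketch in Python – one dependency, one assertion:

```python
# pip install requests pytest
import requests


def test_first_ever_check():
    # The simplest possible starting point: hit a public URL and assert
    # that the service answers successfully.
    resp = requests.get("https://www.google.com", timeout=10)
    assert resp.status_code == 200
```

From there, the same shape grows into real API tests against your own application.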

Mike Hrycyk (34:45):

Great. John.

John McLaughlin (34:47):

Again, I think it’s a practice and a mindfulness: pay attention to repetitive things in your day-to-day. Pay attention to the technologies that are used on your projects, and then you’ll probably ask yourself the question, how could I make it so that I do this one task, and it then does these thousand tasks automatically for me in one shot? The language aspect of it is scary for some people, but there are lots of resources around to pick a small starter kind of project with a language that’s fairly steady and easy to use. Python is a good language for folks to learn. It’s not statically typed, it’s a little more relaxed on some of the syntax rules, and it’s very powerful in what it can do – a very good language to learn, with lots of resources on picking up Python. And then just don’t try to solve landing on the moon in one go. Do small little increments: automate one tiny little piece, then expand on it and automate the next tiny little piece. Eventually, before you know it, you’ll be building that big automation ship that can land on the moon anytime you want to.

Mike Hrycyk (35:56):

Yeah, I think my answer is: show value early. So, first thing, get started. People can spend so long training, exploring ideas, building a framework from scratch – no, get started, show some value. Find tests that you can then demo to people so that they go, oh, I see why that would be good. Even if your first one is setting up data, that demonstrates there’s value in what you’re doing. Another way to do it fast is a smoke test: demonstrate that you can validate that a build is testable. That’s going to save time. Do that early, and don’t be concerned on day one about having the perfect framework. Agile, when it works, is about being able to refactor – get started and show value. Don’t go too long before you’re building a proper framework; there’s a lot of economy of scale in development from having the right reusable objects in your framework, but you don’t need that on day one. And if you do things right-ish, then what you’ve done will scale into a framework.
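A build smoke test of the kind Mike suggests can start this small (the base URL and paths are hypothetical; point APP_BASE_URL at your own build): a handful of fast checks that tell you whether a build is even worth testing.

```python
# pip install requests pytest
import os

import pytest
import requests

# Hypothetical staging build; override with your own environment.
BASE_URL = os.getenv("APP_BASE_URL", "https://staging.example.test")

# The few endpoints that must answer before deeper testing makes sense.
SMOKE_PATHS = ["/health", "/login", "/api/v1/status"]


@pytest.mark.parametrize("path", SMOKE_PATHS)
def test_build_is_testable(path):
    resp = requests.get(f"{BASE_URL}{path}", timeout=10)
    assert resp.status_code == 200, f"{path} failed - build is not testable"
```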

Alright, any closing remarks? Any rebuttal on what I just said since I’m the least technical person in the call?

Millan Kaul (37:05):

No, I think you’re right. Start somewhere where you have the highest business value and an easy win. And you actually said one important thing which I missed: showcase your work. If you’re just doing automation for months and months, and nobody knows about it, and you’re just happy about it, that won’t help you get buy-in from your senior management team. You really need to showcase it to get the feedback. That’s really important. If they see something in action, they will definitely tell you, why don’t you invest time here? Most automation frameworks and teams grow from this kind of initiative.

Mike Hrycyk (37:36):

Not just feedback – engagement. If you can get the developers interested in what you’re doing, they’re going to throw things at you off the side of your desk, or they’re going to make things better, or they’re going to incorporate it into their own process – you might not have to work to get it into the build. They’ll be like, wow, I could use that, and they’ll pull it into the build process. So: showcase, be passionate, be excited, to build that engagement from other people.

Well, thank you to John and Millan, our panel, for coming out today. I think this was a really interesting and good discussion about automation. When I was pulling this together, I drafted a lot of different questions, and I wanted it to be tight. But what I would like to throw out to our listening public are a few of the other honourable-mention questions. If you want to start a conversation around those, come up to us on our social channels or wherever you get the podcast, and we can start a conversation about them. So, some of the honourable mentions: What do you automate, and how do you make those choices? Where’s the right place for your repository? We had a really good conversation on that at our last YVR Testing meetup, and it hadn’t really come up before. What is the right target coverage amount that you should be going for? When should you not automate? And how do you show your ROI – not just have an ROI, but demonstrate it? I think those are great questions that we definitely don’t have time for today, but if you want to start a conversation across one of our social channels, that would be great, and we will participate in it with you.

So, you can find more great content and discussion on automation and more @PLATOTesting on LinkedIn, Instagram, or on our website. You can find links to all of our social media and our website in the episode description. Speaking of which, if you’re enjoying PLATO Panel Talks, we’d be incredibly grateful if you could take a moment to rate and review us on your favourite podcast platform. Your feedback helps us grow and reach more testers. I also want to remind everyone that you can go look up Quality with Millan to hear more of his thoughts. Thanks again for joining us, and we’ll see you next time on PLATO Panel Talks.