This month, host Mike Hrycyk sits down with QA (Quality Assurance) experts Snehal Lohar (Slalom) and Richard Bird (PLATO) to talk all things test management tools. Whether you’re still using spreadsheets or navigating your way through ADO, qTest, TestRail, or Jira add-ons, this conversation covers why having the right tool matters – and how to make it work for your team. If you’re evaluating tools or just want to get more out of the one you already have, this episode’s for you.


Can’t use the player?

Listen to this episode on Spotify

Episode transcript: 

Mike Hrycyk (00:01): 

Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host Mike Hrycyk, and today we’re going to talk about test management tools and how they can be leveraged for better success in your projects. Of course, I’ve used lots of test management tools in my past, from FogBugz to Bugzilla to Jira to so many others, and they’re all a little bit the same, and they’re all a little bit different, and they each have their own specialties. And a lot of you out there are using test management tools and might want to be able to up your game just a little bit. So, I’ve brought together two experts in this area, two people who’ve been using test management tools a lot in their careers so far, and I’ll let them introduce themselves. 

Snehal Lohar (00:37): 

Hi, I’m Snehal Lohar. In my current role, I’m Director of Quality Engineering at Slalom, and my current focus is obviously anything and everything about testing. We do a whole lot of testing-related activities across all kinds of environments and all kinds of industry domains. Having been in the industry for over two decades, yes, I’ve used a lot of test management tools, and I’m really excited to talk about it today. 

Mike Hrycyk (01:04): 

Perfect. Richard, tell us about yourself. 

Richard Bird (01:07): 

Hi there. Good day. Yeah, first of all, thanks for including me in this. I’m a Senior QM Manager with PLATO, and I’ve been with the company since 2013 and have worked in a variety of software industry verticals over the last 30-plus years. Originally from South Africa, I spent time in the UK, but I have been in Canada for the last 20 years. 

Mike Hrycyk (01:28): 

Great, thanks for joining. So, test management tools – some of our listeners might not know what they are, or some might be using them and not have heard that term before. So, let’s go ahead and define them a little bit. In my questions, I’ve said TMT a lot, which I don’t think I’ve heard a lot of people use, but it might come up in this conversation. So, TMT is a test management tool, if I accidentally slip that in. So, we’ll start with you, Snehal. What is a test management tool? What are the essential functions? 

Snehal Lohar (01:57): 

Sure. I like to see the test management tool as the central nervous system of any test-related activities. This is where the whole centralization happens – the planning, organizing, executing and tracking of test-related activities. When I say test-related activities, obviously, test cases. So, test case management is an integral part of any test management tool, where we’re writing test cases, we’re also executing them, we’re creating different suites, and we’re tracking them as per different releases, different milestones and whatnot. And some tools that I’ve seen may or may not have a defect management piece added to them, but I like to see a test management tool as one shop where I can understand the latest and greatest pulse of my testing activities. 

Mike Hrycyk (02:50): 

Excellent. Anything to add or subtract from that, Richard? 

Richard Bird (02:54): 

The only thing I can really think of – because I’m doing that right now – is around reporting and getting the metrics out of the TMT as well. I know that, as I say, on my current project, we’re definitely being pushed to provide good insight to management around where we’re at with test case creation. But yeah, that’s the only additional thing at this point. 

Mike Hrycyk (03:16): 

Excellent. Okay, and we’re going to touch on reporting a little bit later in the conversation, so that’s good. So, as I said, I’ve worked in lots of places. On some projects, everything we did, we did with spreadsheets. I’ve worked at another place that did everything in SharePoint, and there were pluses and lots of minuses with both of those. What are the benefits of using an actual test management tool? Richard, let’s start with you. 

Richard Bird (03:40): 

So again, Snehal sort of mentioned a lot of these points, and I guess we’re going to be rediscussing them as we go through the questions, but really it’s around having a repository that shows us where we’re at today but also provides us scenarios and so on that we can use for future releases, to ensure that we’re getting good coverage around regression and all that sort of thing. Definitely traceability and auditability as well, for people who are in those sorts of regulatory environments where you’ve got to provide all of that insight into what has happened and what is expected to happen. And for teams to be able to see where other groups are at and ensure that they’re getting the coverage and so on that is required. Those are some of those points, I think. 

Mike Hrycyk (04:30): 

Yeah, I agree with that. Snehal, anything to add? 

Snehal Lohar (04:33): 

Yeah, I would say I definitely come from the camp where there was a time when Excel worked perfectly fine for many years, but then, as the teams grew and the scope grew and version control became a really important piece, Excel or SharePoint really weren’t useful anymore – they stopped working for us. So, when you say a real test management tool, of course, the centralization piece comes into it, the traceability piece comes into it. I think we talked about version control as well, and then there’s a whole lot of collaboration. I might have multiple QA/QEs working on different sets of functional test cases, but under one roof. So, there’s a whole lot of collaboration that can also work well in a test management tool. So yeah, I would say that’s the broader perspective on the benefits. And of course, as Richard mentioned, the feedback loop of reporting and dashboards is the extra cream on top. 

Mike Hrycyk (05:38): 

I think that what you guys have said has brought up another sort of factor for me. So, we live in a very Agile project world these days, and we can air quote the heck out of Agile there since it’s all different definitions, but in one way that makes me think, okay, we have lots of different self-governing project teams that are tackling a project, and maybe Excel would work within the auspices of a single team. But a lot of us work in an organization that has multiple Agile teams that are pulling in the same direction with the same sort of target, and if each of them is self-organizing and doing things on their own, suddenly you have disparate ways of organizing things and working together, and people shifting from one project to another. Whereas a test management tool, while hopefully flexible, still puts you on the same playing field, discussing the same sort of information, allowing you to compare what people are doing, and offering just enough rigidity to the process that the results are comparable. Thoughts on that? 

Snehal Lohar (06:37): 

No, true. I mean, we have had this happen back in the days when we were using Excel or even SharePoint. Compared to now, we would always struggle: okay, I’m on release 21 and I need to use only feature X, Y and Z’s test cases, and I knew that only one particular person was probably keeping those test cases up to date. So, the timestamps, the updates and the version control all played a role, and there was a lot of human dependency – I would reach out to a person who was the point of contact to keep the test cases up to date or whatnot. So, that dependency definitely got removed. And of course, we live with a lot of tools in Agile, as you mentioned – there are so many tools for everything – and it could be seen as just another tool for test management, but this is a very critical process, and this is the very critical central piece where we can gather all the sources under one roof. That’s why having a test management tool plays a critical role, and it’s kind of a one-stop shop for me. That’s why I referred to the central nervous system earlier – you go to one system and find out everything: what’s going on, where we are, and who all the people related to it are. 

Mike Hrycyk (08:00): 

Yeah, I mean once you have that test management system, if you go up to a QA’s desktop and they don’t have it running every hour that they’re working, you’re like, there’s something wrong here. 

Snehal Lohar (08:09): 

Yeah. 

Mike Hrycyk (08:11): 

So, here’s the bigger question. Are all test management tools the same? Why would you pick one or the other, Richard? 

Richard Bird (08:21): 

Generally, where I’m at the moment, working with PLATO, we generally have to go with the tool that’s dictated by the client. So, recently, the client didn’t have a test management tool or anything – they were relying on another integrator’s tool to do their work – and then it became a case of supporting the QA lead in making a decision on a tool that would actually benefit them. So, do I have a preference? Not really. I go with what the client has, and if they don’t have anything, then I try to push them in a direction. But generally, the overarching decision will come down to budget or appetite for change. So, I don’t really have a preference as such. 

Mike Hrycyk (09:11): 

Snehal, you’re also from a consulting background here, so similar idea, or do you have a different opinion? 

Snehal Lohar (09:17): 

No, it’s pretty similar to what Richard mentioned. Being in consulting gives us a chance to get the flavour of all the different tools that clients use in their ecosystems. But I’ve definitely seen that some of the tools are more lightweight compared to others. Some are well integrated into the ecosystem, and others are not. So, it also depends on the industry and on what important factors the client has really looked into. Sometimes it’s ease of use, ease of integration, then customizability – sometimes they want to add extra columns, extra reports and extra dashboards. There are licensing costs and whatnot as well. So, it’s very hard to pick one favourite tool that has everything, but there is definitely a good spectrum available in the industry, and depending on the needs, one can pick what works best for them. 

Mike Hrycyk (10:07): 

Do you lean towards favouring one that has a decided path for how things should work – which I think works best when it’s built by and for testing professionals – versus other tools that have added it on as part of what they do? And I know the answer seems really obvious, but I mean, GitHub has bug tracking and reporting functionality, and no matter what you do, you’re jury-rigging your needs into it. How painful does that become? 

Snehal Lohar (10:37): 

I think it’s, again, the maturity of the team. I know I’m going to give an “it depends” answer, but it really does depend on the maturity of the team. How well and how often are you going to touch the other integrated tools during your STLC? So, it depends on what phase you’re in, and whether you’re just starting on the project and have the liberty to pick the right TMT – test management tool – or to pick something that is just provided by GitHub, or provided by ADO if you’re working on Azure. A lot of times that flexibility is not in our control because of licensing costs, but if it is, then there’s always a case one can put forward: hey, these are well integrated with, let’s say, Jira, these are well integrated with all the other systems that we’re using throughout our SDLC, right? So, no one tool is necessarily going to give you all the flavours. I think, again, it totally depends. I’ve had multiple clients who use TestRail left and right, but they don’t use TestRail to the max of its efficiency. Then we have had clients where we’re using ADO for test management, and it’s not the best experience. Again – what’s the longevity of the project? What are the needs? How much can we really spend on it? All those factors come into play. I don’t know if I answered your question. 

Mike Hrycyk (12:06): 

You distracted me with your information enough that I’m not sure anymore. 

Snehal Lohar (12:10): 

<Laughter> Sorry about that. 

Mike Hrycyk (12:12): 

It’s quite all right. So, I know Richard, you’ve had ADO forced on you, and it’s not an originally built-for-testers piece of software. Are you feeling the pain? Is the flexibility of it without targeting testing helping? Is it hindering? 

Richard Bird (12:28): 

To be honest, I’m quite enjoying it. That’s my actual first experience with it. Would I go for ADO over any other ALM? Hard to say. Would I go for it over any non-fully-integrated TMT that then has to rely on a bunch of integrations? I would go for the ALM solution – the ADO piece. And as far as this current client is concerned, we’re actually forcing it from a test perspective, and that’s driving the rest of the development team, including the functional team, et cetera, to actually utilize ADO. So, QA is actually the one driving it into the organization – it didn’t exist before. And it’s been a learning curve, but it’s actually been kind of interesting, because we’re having to bring things in from the integrator, who has been using their ALM tool to store everything to date, and we’ve been dictating how that’s going to work. So yeah, it’s been good. 

Mike Hrycyk (13:28): 

I’m going to pause for a second and do some acronym explaining for folks. So, TMT we’ve covered – it is test management tool. ADO is Azure DevOps; it’s gaining popularity, and most people know it. But I’ve talked to people who think ADO and Azure DevOps are two different things, and I’m like, no, no, that’s the same thing. And then ALM is an application lifecycle management tool. Generally, when we talk about ALM, we want to also include test management, and most of them do, but it also extends past that to include requirements and task management and things like that. Have either of you worked with an open-source tool for test management that is worthwhile on any level? I see Richard shaking his head. So, I’ll start with Snehal. 

Snehal Lohar (14:10): 

Yes, I have used TestLink in a very recent example, and I think open-source tools are great for teams with a limited budget, if you can still do some customization. That being said, it’s an open-source tool, so it may not have the great polish, the great integrations, or the support, for that matter, right? But it’s definitely a trade-off: freedom and flexibility versus convenience and support. So, I’ve seen it work. It could cause some issues sometimes, but they were not big enough to go ahead and replace the system altogether. So yes, there definitely are good open-source tools in the industry. In my recent work, I’ve used TestLink, and that was doable. 

Mike Hrycyk (14:55): 

The couple that I’ve used are Mantis, which started out as a bug tracker but does have test management capabilities. It’s okay – it’s quite like Jira. Then the other one is Bugzilla, which I haven’t used in a really long time, but I just heard someone had been using it recently. It had a lot of capabilities, and I quite enjoyed Bugzilla. 

Snehal Lohar (15:12): 

Bugzilla is pretty old. I mean, I used Bugzilla years ago, and it was great then too, because it was very specifically honed in on bug tracking, test management, and whatnot. So, I’m pretty sure they have probably now polished the product to close to perfection. 

Mike Hrycyk (15:28): 

Yeah, one hopes. Maybe we’ll use it again sometime soon. Alright. Okay, Richard, how can using a TMT drive quality in a project? 

Richard Bird (15:37): 

This goes back to things that we’ve spoken about, but really, we can ensure that we get full test coverage. Again, this depends on whether we’re actually bringing in requirements, because ultimately, we do need to be able to provide that linkage and traceability back to requirements to ensure we’ve got coverage there. We can also figure out if we’ve got any gaps in our test coverage, highlight them, and hopefully identify that early on. We can also show who’s done what, where and when, promoting accountability and transparency across the team. And then, at the end of our test cycle, we’re actually going to be able to make an informed decision around all of those things that are coming through, by using reports and dashboards to ask: have we completed everything we need to do before making a release? 

Mike Hrycyk (16:30): 

Alright. Snehal, anything to add? 

Snehal Lohar (16:32): 

I think there’s only one thing I’ll add – and I think Richard probably already covered this – but there is automation integration as well: whether the test cases were automated or not. So, it gives you the overall automation scope and coverage as well. That’s what I would add; I think he covered almost all the points. 

Mike Hrycyk (16:53): 

And just to add on to that, it’s not just the integration, it’s the integrated reporting – your definition of done, what’s been tested, and what the results are. You can integrate the results for automation and manual testing and get a good picture of everything. Alright, do TMTs make things more efficient? And if so, how? 

Snehal Lohar (17:12): 

I mean, they definitely make it more efficient, with all the points that we’ve been discussing so far, but there have been some projects where, with only a month or two of engagement – and probably just a side integration piece, which may not be part of our big functionality per se – we have gone without using a specific test management system. 

(17:36): 

But given that we’re, again, working on a medium to large project, I definitely would not go without a test management tool, because of the benefits it’s going to give me. One of the big pieces is when I’m talking to stakeholders about my test coverage and whatnot – of course, I need to have the right data, and pulling that data can sometimes be really nerve-wracking. I don’t want to wait until the day before I need to inform my stakeholders of how much was covered and what our test coverage looks like. So, having that information handy from these dashboards and reports is going to be very critical for making any subsequent decisions across the product. So yes, I would always vote for having a test management tool on a project. 

Mike Hrycyk (18:27): 

I think one word we haven’t hit on in our discussion so far is reproducibility. TMTs make the reproducibility of your tests – where you were and what you did – very manageable. I was just thinking that if your timelines are tight and you really focus on exploratory testing, and you’re not going to have a big set of test cases, I think you can still leverage your TMT to track what you did. So, if there is a problem when you get to production, and someone asks, did you even test there? – well, you have some semblance of notes that say, you know, I did, I spent my time there, and what we found then is not the behaviour we’re seeing now. So, something has changed. 

(19:02): 

So, Richard, requirements – you brought it up, so I’m going to throw the question to you first. Is it important, is it beneficial, to have your requirements inside of your TMT/ALM, or is it good enough just to have some sort of linkage between the two? 

Richard Bird (19:18): 

So, I think it’s definitely necessary. Again, just talking about my current project, there’s a challenge around the expected results that we should be getting from those requirements, and being able to see that inside of the TMT is really useful. The fact that we can share access with everyone concerned around what’s in those requirements, that people can feed into them in one place, and that we have one source of truth is critical to me. My concern – and this can happen – is that we have requirements that exist outside of the TMT, and then changes to them are not being communicated to the testers who are basing their test scenarios on those requirements. If you’ve got it in the TMT, they can see straight away – they can be advised straight away – when requirements and so on change. And then there’s the fact that the traceability all the way through, if you’ve got defects and all of that sort of thing, is all connected inside the TMT. 

Mike Hrycyk (20:30): 

I think what you just said is important, but I think the reverse path is also really important. Testers spend a lot of their time getting clarifications on requirements that were unclear. They go back and they talk to the business, they go talk to someone who wasn’t part of the loop, they talk to the developer to find out how did you do this? Why does it work? And if they’re tied together, when the tester writes down what they’ve discovered, then it’s there for everyone. If they’re separated, they don’t do that because it’s just harder or they don’t have access or whatever, then that means the next time someone works on those requirements for the next feature or clarification of the feature, they’re working from a place that’s not as stable and not as understood. Okay. Snehal, over to you. What do you think about requirements in the system, separate systems, etc.? 

Snehal Lohar (21:17): 

I do think that having the requirements in a separate system has worked out well so far in my experience, because there was a time when we wanted to maintain the requirements in the test management system, and it created this whole duplication, where, hey, we were trying to verify against this story, but – this being Agile, right? – the story changed, or something happened, and now my test management tool doesn’t necessarily have those updates. So, it created a bit of chaos. Again, if there is a smoother integration, I can definitely see that being up to date all the time. On the contrary, I have definitely seen the systems being given their separate responsibilities: the test management tool is just doing the test management, and other systems – for example, Jira – are tracking the requirements. 

(22:14): 

So, it becomes very clearly traceable, and there are different types of reports and dashboards that one can derive. But I haven’t seen them combined recently. It’s not that it’s not possible – obviously, as Richard mentioned, in his experience it’s possible – but it again depends on how disciplined we are about making sure that the integration is smooth, and on how much we keep the information updated for a given user story’s testing scope. So, I feel that if we differentiate – keep test management in the test management tool and the requirements in the requirement management system – then as long as the integration is smooth, it will be great. 

Mike Hrycyk (22:58): 

So, one of the benefits that we’ve talked about a little bit so far, about what TMTs bring to the table, is the idea of dashboarding and reporting. So, let’s just talk a little bit about what is valuable in the reporting function and why it’s valuable. And we’ll start with you, Richard. 

Richard Bird (23:16): 

So, dashboarding – again, this is a new piece to me with ADO. I have limited ability to generate cross-query dashboards – that’s purely because of access – but absolutely, dashboards are really helpful to me on a day-to-day basis. I can see quite quickly the sort of coverage and where we’re at with test case generation; right now, that is our current area of focus. I haven’t as yet generated reports out of the tool that we’re using right now, but I’m generating reports myself for senior management, using input from those dashboards. I find them really useful. Yeah. 

Snehal Lohar (23:58): 

I would say that’s one of the most important reasons why I would maintain a test management tool – because I do want the latest pulse. What has been my test coverage? How have my tests been passing or failing? And if we have automation integrated, how much automation has happened? How has the automation been responding? Knowing those kinds of behaviours through different reports is going to help our team make the right decisions: whether to even push for another phase of testing, or maybe, hey, we have too many defects coming in – I know we didn’t touch upon defects yet – but let’s say we have too many tests that have failed, so obviously our release is not ready. And then, what is the priority of those tests? Do we need another phase or not? So, it can definitely drive important decisions, and that’s why those dashboards and reports give you that pulse of where we are and how it’s looking. So yes, they’re definitely very, very important. 

Mike Hrycyk (25:00): 

In your experience with the TMTs you’ve used, are the canned reports – the ones where you just go to the tiny little report builder, you check off five things, and you get a report – good enough, a strong functionality? Or are you mostly having to do some custom queries to feed into the reports? 

Snehal Lohar (25:16): 

I think most of my reports are custom reports, but of course, out of the box, there are obviously pass/fail reports, and then what a test sprint looks like, or what the test scope and coverage look like. Those kinds of reports are always useful, but most of the time I lean on my custom reports, because there are times when some specific function or feature is more important to me and I want to make sure that I focus on that. So, the flexibility of customizing different attributes and pulling a report out of them is something that I lean on more. I’ve found that works great for me. 

Mike Hrycyk (25:53): 

Okay. Richard, from your perspective? 

Richard Bird (25:55): 

To be honest, I rely more on dashboards for that aspect of it. I think reports are more for driving up towards senior management. They provide more of a record of what’s happened over the long term, because you’re generating them, and presumably you’re going to be sending them out at regular intervals to management so that they can get a sense of how things are going over the course of the project. So, I’d use the canned ones to maybe be my starting point for generating what I ultimately want to create, but I would very seldom go with a standard report. It would definitely be more custom to suit the needs. 

Mike Hrycyk (26:37): 

Alright, so now’s where your expertise has to come together into one solid piece of feedback. What are three tips that you take into any new project – things that, if the team isn’t doing them, will help build it up and make it better? We’ll start with you, Richard. 

Richard Bird (26:54): 

Traceability, to me, is one of the biggest things, all the way through from requirements to test cases to defects. It’s one of the biggest selling points of a TMT, in my opinion. As well – and you actually touched on it, it just hadn’t been raised before – there’s the whole reusability of test cases across releases, across deployments, whatever it is, and then future projects as well, as long as they’re well organized. And then, to the last point, reporting, dashboards and providing that insight into what’s happened. That’s what I’d be looking for. Yeah. 

Mike Hrycyk (27:32): 

Great. So, Snehal, now that I gave Richard the chance to take all of the good ones – what are your three tips? 

Snehal Lohar (27:37): 

I think mine overlap with what Richard mentioned. Of course, traceability is the first critical piece. Then there’s execution tracking, which we talked about, and then reporting and alerts. Sometimes I’ve also seen alerts set up if something kicks off – and this goes, again, towards reporting to higher management. So, reporting and alerts, I would say, are the other aspect that I would check for in a new project setup. 

Mike Hrycyk (28:08): 

Something that you gave me that I think is powerful is tied to the reusability. So, if you have multiple project teams working in different areas of a project, having Team One build some test cases that Team Three can leverage for their own smoke test means you don’t have to go back to another team to make sure that stuff is still coherent and good. And TMTs make that work well. If you were doing it in Excel or SharePoint, you’d have to spend a day and a half doing KT (knowledge transfer) just so they could understand what you’ve done. And in the end, you would just do that testing yourself, and that could really impact your own throughput and pull you away from your tasks. So, I think that’s a valuable thing. 

Snehal Lohar (28:44): 

It is. I mean, I think we follow the DRY Principle here as well – Don’t Repeat Yourself – and if you have the tests for that module somewhere and that module is getting integrated, the team is going to look at them. Obviously, you’re not going to end up recreating them; instead, you have them available and have the awareness to go connect with those tests. And if something changes, update everything and make sure that it’s up to date. I think, again, the collaboration piece comes into play here. But yeah, very well raised. 

Mike Hrycyk (29:12): 

I hadn’t heard the DRY Principle before. I like that. Don’t Repeat Yourself. That’s good. 

(29:16): 

Alright, so, we do have a couple of seconds left, and I had skipped a question. So, let’s sort of put these things into two classes. There are the big ALMs/TMTs, like qTest and ADO, and then there’s this other area where you take Jira – which originally was a defect management tool and then became a bit of an ALM, with requirements and task management, but never did do test management on its own – so it has add-ons like Zephyr and TestRail, etc. Is the one doing it all by itself better? Or is the one with add-ons better, because then you have a specialty system that understands testing better? Or is it roughly equivalent, depending on what your needs are? Opinions, Richard? 

Richard Bird (29:55): 

Opinions, yeah. So, I really feel that if a tool has been developed completely as an ALM, it’s going to work a lot better. It’s going to be better integrated with itself. As for adding on various bits and pieces – I get the idea of having specialized components, but at the moment I am enjoying what I’m doing with ADO, right? So, how that is working, in my mind right now, for the client that I’m on, it’s the right solution. Would it be the right solution for every other client? No idea. It would have to be assessed at that point. So, I’m happy with what I’ve got at the minute, but then, I have to be, because it’s not my decision ultimately. 

Mike Hrycyk (30:33): 

Although you did help drive the use of it. Snehal? 

Snehal Lohar (30:37): 

Yeah, I’m also going to say it depends. It’s definitely not a cookie-cutter formula, right? For a lot of systems and a lot of projects, that one system is probably going to work, but then, the minute we have other nuances – other details that are driving your business – it becomes important to have those specific tools integrated into the system. I’ve seen both work fine, depending on the needs of the project and depending on the team’s maturity and collaboration within the team. So, I would say there’s no cookie-cutter formula, but a lot of times, if you’re exploring for the first time, I would say just go with one solution off the shelf and then see if you need anything at all, rather than integrating a bunch of other tools when you don’t know how they’re going to integrate in the future and whatnot. So, I would rather go for one big solution, like an ALM or Jira, and then, as a secondary question, ask whether we really, really need different tools. 

Mike Hrycyk (31:35): 

I like your second example. That was the example I was going to use to defend the add-ons. So, Jira, I think, is a really good example. Jira is one of the most popular tools for everything. And when you go into a company that has Jira embedded really well across their development system, but they don’t have test management, I think the add-on test management tools for Jira – Zephyr and TestRail – work really well, and they integrate really well. I mean, when you look too closely, it’s a little bit kludge-y – a test case is just a bug with a little bit of difference – but it’s done pretty well, and it works. And so, the benefit there, I think, is that you don’t add another big, powerful, separate tool. You add the one that integrates and lets you see across everything. Richard wants to disagree with me. 

Richard Bird (32:16): 

I don’t want to disagree. I just want to say that at a previous place where I worked – not when I was with PLATO – we actually integrated Jira with HP ALM using the APIs, because the developer team actually refused to look at the bugs that we were creating inside of HP ALM. So, we figured out a solution for how to integrate them using the APIs from the various systems, and it worked really well. And given that we were also an ISO-certified shop, we provided full traceability and everything. It all worked great, but there was a significant amount of integration that had to be done, and we were a very mature development organization, and all sorts of other things. So, it can be done. It’s just dependent on the environment you’re in and what you need to make it fit. 

Mike Hrycyk (33:05): 

I understand the sentiment of developers who didn’t want to work in HP ALM when they were already using Jira – one is a much more complicated playground. Alright, well, that’s our time. So, I would like to thank you, Richard and Snehal, for joining me in this great discussion about test management tools. For those people out there trying to make a decision, or trying to understand why you would use one, I think we’ve really brought some information to help them out. 

(33:30): 

If you have anything you’d like to add to our conversation, we’d love to hear your feedback, comments and questions. You can find links to all of our social media and website in the episode description. If anyone out there wants to join in on one of our podcast chats or has a topic they’d like us to talk about, please reach out. And if you are enjoying listening to our technology-focused podcast, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next time. 
Categories: PLATO Panel Talks