The more information we have, the better decisions we can make. That’s as true in testing as it is in life, and on this episode of PQA Panel Talks, we are talking about the way testing teams capture that information: metrics. Join host Mike Hrycyk, along with Sean Wilson of Ubisoft and PQA’s Jonathan Duncan, as they break down everything from KPIs and KRIs to which metrics really matter for the team and which ones your stakeholders will be watching to help make business decisions.

Mike Hrycyk:
Hello, everyone. Welcome to another episode of PQA Panel Talks. I’m your host, Mike Hrycyk, and today we’re going to talk about metrics with our panel of testing experts. You’ve probably heard lots of different conversations about metrics in the last few years, so I wanted to focus this conversation on how we can use metrics to convince management that we’re doing a good job. Just a little shift in focus. We’re still going to talk about metrics, but I don’t want this to be a talk where we list off our 25 favourite metrics and what good values for them are. We’ll still touch on that, but we’re going to shift the focus a little, and hopefully that will be interesting to everyone out there, because I think it’s a little bit different. Of course, QA has a long history with metrics, we’ve had a lot of clients, and I’ve got some firm opinions on metrics and how to use them, so you’ll probably hear some of that as we go. But more importantly, we’re here to hear from our experts. So, without further ado, Mr. Sean Wilson, please introduce yourself.

Sean Wilson:
Hi Mike! So my name’s Sean Wilson, and I’m the worldwide QA/QC Development Director for Ubisoft. My focus is on how to bring technology into the testing and game development process, both to make games more testable and then to do the testing of them, so it’s a super interesting space for me. My background is in mainstream software; that’s where I met Mike years ago, and I’ve been working in that space since late 1998. I really think metrics, and data collection in general, are such an important tool to help us understand where we are and where we’re going.

Mike Hrycyk:
Awesome. Thanks, Sean. And Jon, tell us who you are.

Jonathan Duncan:
Thanks, Mike. So, those who have tuned in to some of our previous episodes are well aware of who I am, but for those coming to us for the first time, I am Jonathan Duncan. I’ve been around the space for about 25 years, first on the dev side and now on the testing side, and with all the things going on at PLATO, we really need the information that comes from metrics to help us understand the status of different projects across the country. So I’m happy to be here, and I look forward to hearing some of Sean’s thoughts on the questions we’ve got for today.

Mike Hrycyk:
So let’s jump right in, but let’s jump right in with a levelling question, as we often do. Metrics, KPIs, KRIs – what are all these things? How are they different? Let’s start with you, Sean, because I know you’ve got a pretty solid definition.

Sean Wilson:
Yeah, I have this part of the conversation quite a lot. I see people who throw something up and say, “Metrics are the solution!” and then don’t really know what they’re talking about. A metric is any standard measurement, right? It’s an agreed-upon way of measuring, and “agreed upon” is the key there. We all have to agree that this piece of data is a way we are going to measure something effectively. What we’re saying in software, then, is that a metric is a data point where everyone who’s using it agrees on what it is and what it tells you.

Mike Hrycyk:
Alright, how’s that different from a KPI?

Sean Wilson:
So a KPI is, by definition, a key performance indicator. It’s a threshold value. When you have a series of data points – metrics – a KPI gives you a specific metric value against which you can flag something. Say we want to have 10,000 listeners in the first two weeks of distribution. Cool, that’s now a threshold. The metric is the number of listeners we have over a period of time. Does that differentiate them well?

Mike Hrycyk:
I think it does. I think the terms have morphed, though; people sometimes use KPI and metric almost interchangeably.

Sean Wilson:
I agree, and I do my best to help differentiate where I can. When I take a look at a collection of data – and I see this a lot in my industry – we get data points on thousands of things. With as much work as we’re doing in automation, and with the tools that help us understand where we are, we see these data points across the board. The KPI, though, is still really important to differentiate, to say: this is the threshold. I know I’m in a good space if this happens, or I know I’m in a bad space if that happens – and that second one would really be closer to a KRI. It’s having that differentiation between random data points and some key, specific place where I want to stop, look, and say: this thing tells me something.
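
To make the threshold idea concrete, here is a minimal sketch in Python using Sean’s own 10,000-listeners example. The dates and counts are invented for illustration: the metric is the series of data points, and the KPI is the agreed threshold we flag against.

```python
from datetime import date

# Hypothetical metric: cumulative listener counts sampled over time.
listener_counts = {
    date(2021, 6, 1): 1200,
    date(2021, 6, 7): 5400,
    date(2021, 6, 14): 11250,
}

# KPI: 10,000 listeners within the first two weeks of distribution.
KPI_THRESHOLD = 10_000
KPI_DEADLINE = date(2021, 6, 14)

def kpi_met(counts: dict, threshold: int, deadline: date) -> bool:
    """Flag whether the metric crossed the agreed threshold in time."""
    on_time = [n for day, n in counts.items() if day <= deadline]
    return max(on_time, default=0) >= threshold

print(kpi_met(listener_counts, KPI_THRESHOLD, KPI_DEADLINE))  # True
```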

Mike Hrycyk:
Okay. We’ll get to KRIs in just a second, but Jon, is there anything to add to, or disagree with, in what Sean has said so far?

Jonathan Duncan:
Oh no, I’m not going to disagree with any of that whatsoever. I do want to highlight the piece that Sean brought up about making sure that they’re standard, and making sure that they’re standard early. That’s really critical. Metrics and historical views don’t provide a whole lot of value if we keep changing them over time because the flavour of the day has changed. I know we’ll talk a bit more about my thoughts on how that should work if you don’t pick the right ones to start, but I really just wanted to highlight that point. I think it’s a great one, Sean.

Sean Wilson:
Yeah, if I can just jump on that. I saw this very recently between two different game projects, and both of them were counting the same thing: they had a metric for defects found by customers in production. The metrics were labelled exactly the same, but they meant different things. One game team was talking about defects of a certain severity that caused a crash or a major problem for customers. The other was talking about any defect found by any customer in production, on the current release or previous releases. So the answers we got back were completely different, even though we were looking at something called the same thing. That specificity you were just talking about, Jon, is really important for us to have when we collect metrics.

Mike Hrycyk:
Well, yeah. I mean, if you were in the wrong context and you were thinking about the ones that crashed, but you had the numbers for any bug, that’d be terrifying.

Sean Wilson:
Yes, yes. That’s exactly what happened.

Mike Hrycyk:
So I’m going to counter-question. Counterpoint, maybe. One way of thinking about it is slightly different: metrics are just a collection of data points that you’re capturing, and the way you’ve described a KPI, it’s a level within those data points that is important. Maybe what I’m going to say isn’t so different from that, but maybe a KPI is taking the set of metrics you’re gathering data for and saying: yes, but the key performance indicators, the key metrics, are these five, and those are the ones we’re going to track really closely. So maybe you’re collecting data for 20, and really you can’t focus on 20 anyway – that’s maybe a different talk, but I tell my people never to tell a client they should be tracking more than five to eight metrics, because more than that is too many and you can’t focus. Maybe the KPI is actually delineating down from everything you’re collecting data for to the ones you can actually track. Does that follow in any way?

Sean Wilson:
I think that’s better, but I don’t think it goes far enough, and I’ll explain why. I think we should collect lots of metrics, even metrics that we won’t use or don’t know how to use today. Collecting those data points is incredibly important because we don’t know what data we’ll find value in tomorrow, and having that data set can be super useful to grow from and extend the things that are important. It sounds like what you did there is say, “Hey, we’re collecting all the data, cool, but we’re only really going to pay attention to a particular set that we really understand, that we all agree means the same thing, and that we know the purpose of.” I think that can be super useful, and I think you’re in the right direction. But for me, the thing that differentiates a KPI from any other type of metric is that it gives us predictability. In my mind, it’s reframing the way we talk about a KPI: instead of just saying it’s a thing or a threshold, think of a KPI as a place where you can say, if this specific thing happens in this specific timeframe, then we’re on track for this. Based on some metric data point that I see, I can predict the future. If I have this many bugs by beta, I’m on track to have a bad release or a great release. It’s that predictability in the number, and the threshold at which I can make that prediction, that for me is the core of a KPI – or, in reverse, a KRI.
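
A minimal sketch of that predictability framing, with hypothetical weekly bug counts: fit a simple trend through the metric’s history, project it to the milestone, and compare the projection against the KPI threshold.

```python
# Project a metric's trend forward and flag whether we are on track.
# All numbers are invented for illustration.

def project_linear(samples: list[tuple[int, float]], target_x: int) -> float:
    """Ordinary least-squares line through (week, value) samples,
    extrapolated to the target week."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    return mean_y + slope * (target_x - mean_x)

open_bugs_by_week = [(1, 40), (2, 55), (3, 48), (4, 62)]  # the metric
MAX_OPEN_AT_BETA = 50                                     # the KPI threshold
BETA_WEEK = 8

predicted = project_linear(open_bugs_by_week, BETA_WEEK)
print(f"predicted open bugs at beta: {predicted:.0f}",
      "-> at risk" if predicted > MAX_OPEN_AT_BETA else "-> on track")
```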

Mike Hrycyk:
Yeah. I think that makes sense to me. Jon, anything to say there?

Jonathan Duncan:
Yeah, just to add on – I really agree with both of you. We can’t buy back time, right? If I don’t collect the data in the moment, I’ll never be able to get it after the fact. But I think some of what you were really getting at, Mike, is that I don’t want to overload somebody with information. So I like to have metrics rolled up to a level where the audience that cares is actually seeing what they care about, and then have all that backup data should they say, “Okay, wait a second, maybe we need to drill into this particular problem, or maybe we need to start reporting on this.” That way I at least have it, and I can maybe see trends over time to figure things out. If I don’t capture it, I can never go back and try to understand what happened, so that I can get better at, as Sean says, predicting what the end state is going to be. Now, all that said, we’re here as testers, and we don’t want our testers spending most of their time collecting data, right? So a lot of it is about finding data that we can easily collect and easily collate to get answers.

Mike Hrycyk:
For sure. I really like the idea of “collect all the data” – do that early and often, so that when you finally figure out what your key metrics are, the history is there. But you have to have balance. You can’t spend all of your time doing that, especially if there are any manual steps involved; that’s just not very successful, I don’t think. Okay, so you said something there, Jon, that I’m going to come back to in a couple of minutes, and that’s the audience that cares about metrics, because I think that comes back to the core goal of what we’re talking about today. But I don’t want to drift too far from defining KRIs before we get there. I’m actually relatively new to using the term KRI, and Sean, you defined it for us a second ago: key risk indicator. Maybe tell us a bit more about what you think one is and how to use it?

Sean Wilson:
So I’m also new to using the term. I’ve always just thought of a key performance indicator as that threshold, either positive or negative, and I think that’s super common. A key risk indicator is different; it’s not quite a key performance indicator in reverse. I don’t know the origin, but my guess is that it harkens back to traditional project management: having a risk matrix and being able to identify the point at which a risk should be triggered. The key risk indicator is then your data point, your metric, and your threshold: as soon as I hit this threshold on this data point, my risk has been triggered, or I can predict that it will be triggered imminently. That’s probably the complexity around it. From a simplicity standpoint, key performance indicators predict good things, and key risk indicators predict potential failures or the problems that could lead to them.
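
One way to picture that risk-matrix reading of a KRI is a register where each risk watches one agreed metric and fires when its trigger point is crossed. This is a sketch only; the risks, metrics, and thresholds are all invented.

```python
from dataclasses import dataclass

@dataclass
class KeyRiskIndicator:
    risk: str         # the risk from the project risk matrix
    metric: str       # the agreed-upon metric being watched
    threshold: float  # the trigger point

    def triggered(self, current_value: float) -> bool:
        return current_value >= self.threshold

kris = [
    KeyRiskIndicator("release slips", "open blocker defects", 5),
    KeyRiskIndicator("quality erodes", "reopened defect rate", 0.15),
]
current = {"open blocker defects": 7, "reopened defect rate": 0.08}

for kri in kris:
    if kri.triggered(current[kri.metric]):
        print(f"RISK TRIGGERED: {kri.risk} ({kri.metric} >= {kri.threshold})")
```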

Mike Hrycyk:
Yeah, I kind of look at it as a subset of your KPIs, if you hold with the definition of a KPI – or the use of a KPI – as a limit or a goal that you have to track against. A KRI is simply one that tracks against a risk: this is a risk that we’re tracking, and here’s a metric that helps us track it. Jon, any thoughts?

Jonathan Duncan:
No, just something on what Sean said about the origins, having done that PM exercise of “hey, here’s all my risks.” I think creating a key risk indicator takes some of the human emotion out of it, right? I may see something and just have this awkward feeling in my belly that, “Oh, this is scary.” Putting numbers around it tries to eliminate some of that, which I think gets rid of some of the noise for steering committees and stakeholders that are looking at things, because they now know – or at least have the perception – that it’s based on something factual as opposed to just how I feel.

Mike Hrycyk:
Well, sure. And that really ties back to the idea that testers should define their entrance and exit criteria before they start the project, because once you’re in the weeds and you’re emotional and you care, it’s harder to step back, take an unemotional view, and make sure you understand. Everything needs to evolve and everything needs new interpretations, but you shouldn’t be doing that at the height of emotion, in the moment; you should be doing it when clear thoughts prevail. Any thoughts, Sean?

Sean Wilson:
Actually, I gotta tell ya, I will take that, Jon, and run with it. The idea that you can use this phrase, key risk indicator, as a way of easing the conversation – not saying, “Hey, in my professional experience and with everything I’ve done, this is looking bad,” but actually putting some numbers on it, calling it a key risk indicator, and having something that we agree on collectively, because it’s a metric and we have to agree on it – that’s a great way to get stakeholder engagement and remove the emotion from it. That emotion runs super high on different teams and different projects. So I think that’s a great way to frame it, and I will completely use it. I like it.

Mike Hrycyk:
Awesome. It’s good to know that our podcast is helping at least one person – even if that person is on it.

Sean Wilson:
It’s all good.

Mike Hrycyk:
Let’s move on a little bit, because metrics are for a whole bunch of people, and there are probably multiple consumer groups for metrics. Often, the conference talks about metrics that I hear focus on how the team knows what’s going on. Today, as I said, we’re going to pull back a little from that, because as cool as that is, I think most teams would never look at metrics if it weren’t for the sponsors or stakeholders who need to know what’s going on with their project. So let’s discuss that dichotomy, that difference, a little: who are metrics for? Let’s start with you this time, Jon.

Jonathan Duncan:
They’re ultimately for everybody, right? Especially in this day and age, everybody wants data to help them understand. I know in the development world, when I was writing code, it wasn’t really a formal thing, but I did pay attention to how many bugs got created in that module I wrote, right? As I started to manage development projects, I cared less about that – I still needed the information about specific bugs in specific areas to see if there was some root cause, but I cared more about it from an end-to-end perspective. I wanted something higher level; I didn’t want to dig through lines upon lines and pages of raw data. So it really is about who the audience is and what it is they care about. What’s my end goal? A PM is guaranteed to care about timeline and budget, so they’re going to want burndown-type views from both a timeline and a budget perspective, whereas the developer or the tester on the project may want to understand more about the actual quality of the code they’re creating or testing. So it really does depend.

Sean Wilson:
Yeah, I completely agree with that. Different people will have different needs in terms of data; even people at the same job level look at data differently, so they have different requirements for how they consume metrics, but we can get to that. An example of where teams are starting to use metrics more often is, to Jon’s point, something like a burndown. Yes, your project manager wants the burndown for the entire project and where it’s going, but your team, if they’re doing any sort of agile methodology – particularly Scrum – is looking at their iteration or sprint burndown. Are we getting close? How do we know we’re on track? How do we know when we’re not? That kind of thing is a metric in and of itself. Then the aggregation of that team-level metric into a more comprehensive view can be looked at by a larger group: the scrum masters, then the scrum of scrum masters, the project managers, or the product owners who want to know when they’re getting stuff done. All of these things aggregate up, but it starts with what the individual person is doing and what the individual teams are seeing. So you can’t divorce the metric the CEO needs to see from the metric the team needs to be engaged in; one is very often built upon the other.
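
As a rough illustration of that aggregation, here is a toy burndown rollup; the team names, point counts, and sprint length are invented. The project-level view is nothing more than the sum of the team-level series, compared against an ideal straight line to zero.

```python
# Remaining story points per day, recorded by each team during the sprint.
team_burndowns = {
    "team_a": [30, 26, 24, 19, 12, 8, 3],
    "team_b": [28, 28, 25, 22, 20, 15, 11],
}

def project_burndown(teams: dict[str, list[int]]) -> list[int]:
    """Aggregate team-level burndowns into one project-level burndown."""
    return [sum(day) for day in zip(*teams.values())]

def on_track(burndown: list[int], sprint_days: int) -> bool:
    """Compare the actual burn against the ideal straight line to zero."""
    ideal_per_day = burndown[0] / sprint_days
    days_elapsed = len(burndown) - 1
    return burndown[-1] <= burndown[0] - ideal_per_day * days_elapsed

project = project_burndown(team_burndowns)
print(project)                # [58, 54, 49, 41, 32, 23, 14]
print(on_track(project, 10))  # True: burning faster than the ideal line
```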

Mike Hrycyk:
So, as a tester, as a doer, the amount of overhead, or even just the amount of discussion you participate in around metrics, can feel annoying. We all agree on this call that they’re important, but what message can we give back to a doer? Not about the day-to-day metrics that are integral to them, but about these other metrics, the ones they don’t care about as much but their CEO or VP or whoever does. Why are those important – and not just a trite “because they say so”? What message do you think we can use there? Let’s start with you, Sean.

Sean Wilson:
So it depends on the level of the doer. I’ve got a couple of different strategies that I’ve found fairly successful. When I’m talking to team leads, people who are managing small groups – so not big managers, but smaller managers – I talk about the value of metrics for resource allocation. If I don’t know that you’re underwater, I can’t help you by giving you more people to do the work. I’ve found that makes those team leads incredibly incentivized to collect that information and to build processes around collecting it freely or cheaply. And that’s something we should probably talk about too: metrics aren’t just about a person manually ticking a box or writing something down; it should also be about building a process that collects them automatically where possible. When I’m talking to individual testers, I very often try to avoid big metrics, because the concept of big metrics – and I’ve seen this with individual developers and testers – makes them feel they’re being monitored, and the more they know about a metric, the more they attempt to game the system. I think that happens by human nature. In my first testing job, my manager evaluated me based on the number of bugs I found. As soon as I found that out, I found a lot of bugs. They weren’t necessarily all the same severity, but I found a lot of bugs, because bug count was the metric she was evaluating me by. So I think there’s a reason to maybe not talk too much with everybody about metrics that are needed way upstream, or at least not make them the thing we highlight.

Mike Hrycyk:
Well, and I think an important piece there is that a key performance indicator should not be about an individual’s performance, because that’s where the gaming starts. It should be exactly that: project performance at some level. You might have to think about testing as its own project, but it should still be at the project level, because these are supposed to be health indicators, not person indicators.

Sean Wilson:
Exactly. And this is where – I mean, we could have a long conversation on whether metrics are used for good or evil. You can use that data incorrectly if you really choose to, and you have to talk to your teams and convince them that when you’re collecting this data, you’re doing it for good purposes. Like I said with the team leads, it’s about: if I don’t know you’re underwater, I can’t help you. I can’t get more resources, and I can’t advocate for them, because I won’t be able to connect “we can’t get all of this work done with the people we have” to “that work is getting late and will lead to us missing this key performance indicator for success.” That’s how I justify more resource allocation.

Mike Hrycyk:
Anything from your side, Jon?

Jonathan Duncan:
No, just that as you two were chatting there, it got me thinking about how we build teams that can reach better velocities. Highlighting my deficiencies in front of an entire team doesn’t help anybody at the end of the day, whereas reflecting overall team goals back to the team helps them truly work together to reach one common goal. Then my individual metrics are there for my manager to say, “Hey Jon, it looks like you’re having problems getting through this. Is there something going on? Is there something I can help with? Do you need more help with data entry on a particular case to get through certain tests?” So individual metrics can be there for individual assistance, but it’s the team metrics and the project metrics that are important for motivating the team to hit the finish line you need to hit.

Sean Wilson:
And if I can jump on that straight away: you have to be able to tell the team that the team metric is the one that’s most important, not your individual metric. I can’t tell you the number of times I’ve worked with scrum teams, developers, and testers where I’ll hear from someone, “I finished all of my work, my story points are across the board – yeah, the sprint failed, but that’s because that guy sitting over there didn’t finish.” When somebody tells me, “Oh, I took on a new task because I had more time, I finished my work early,” but somebody on their team failed, that’s where you can do exactly what you’re saying, Jon: point back to the team as a collective unit, point to that team metric, and say this is the thing that’s most important to focus on. And you’re right, that can be a great way to build that team-ness – what would you call it? A high-performing team is a team that looks after each other.

Jonathan Duncan:
Yeah. In my role over the years, I’ve always had more than eight hours of work a day. There was one moment – it was 5:30 or 6 o’clock at night – when, before one individual on my team left, all she did was walk by my desk and say, “Hey, is there anything you need help with?” That’s really what I like to see in any team. Maybe there wasn’t anything she could help with, but I knew that we were all in it together just by the mere fact that somebody asked, “Can I help you, Jon?” That’s the environment we need to create, not one where it’s, “Oh, Jon’s slow again, and we’re not going to make this sprint because of Jon.” Okay, well, if you know that, then go and see if you can help Jon.

Mike Hrycyk:
I like that you used yourself as the example there. I think one of the precepts of agile – and if you’re not doing this, you’re probably not doing agile successfully – is to give ownership of the success of the team back to the team. When you transition someone into agile, they don’t readily understand right away that this means they might have to start thinking about success in terms of finding bottlenecks and finding things that need to improve. You can tell them that, and that’s great, but that alone doesn’t give them the tools to be successful at it. That’s one of the things that comes with the modern agile framework: a set of tools like team velocity – being able to relate story points to getting things done and put those together. So we’ve drifted a little back towards the individual and team level and how they can succeed, but I still think it’s valuable, in that it’s our job as leaders to help people appreciate that these tools exist. In a lot of cases, maybe we change the names so they don’t notice that team velocity is, hey, a metric – because they’ve decided to vilify the word – but it really is something they need as a team. Okay, I’m going to pull back; I think we’re good on that. One thought I had about how to convince people that metrics at the bigger level are important – and Sean, yours is really good, and maybe comes from a more positive place than mine – is this: it’s not that hard to convince people that the powers that be, the higher managers, or as you call them, Sean, big managers, need to understand the health of a project; without understanding the health of all their projects, they can’t guide their organization towards its goal. Most people can accept that idea. Then you can tie that back to what a metric does, what a dashboard does: it helps that big manager understand that health so they know whether they have to worry or not. Metrics convey confidence. If you can’t provide that confidence, they’re going to dig in and stick their nose in a whole bunch of places to figure it out on their own. And what’s really easy to get hands-on team members to understand is that you don’t want a CEO sticking their nose in your business all the time, because that’s just a pain in the tuchus. So if you can say, “Hey, the little time you spend on these metrics helps them understand that everything’s okay, and they’ll stay in their own lane,” that’s pretty powerful, and it helps people understand: even though I don’t see the tangible value of this number, I see the tangible benefit to myself.

Sean Wilson:
I like it. I think that’s good, and it highlights an honest reality. What winds up happening is that when the big managers, the tops, ask, “Am I going to ship on time?” and nobody can answer the question – or the answer is, “Oh yeah, yeah, of course” – that person is now concerned, and they start looking. The problem with looking for problems is that you can always find them. I’ve never been on a software project that didn’t have some problem that, if somebody looked at it too hard, could derail us entirely. Being able to answer that question with confidence – yes, we are going to ship on time, or no, we’re looking to be late, and this is the information we have – being able to provide that predictability, I think, is critical. And yes, that can be an excellent motivator for individuals to participate in collecting the data and metrics that show it.

Mike Hrycyk:
Awesome. So, moving on a little bit: how do we convey metrics? I mean, I can list dashboards, reports, graphs, podcasts. What’s your preferred method, Jon?

Jonathan Duncan:
I like nice graphical representations of things, as long as they’re being honest, right? You can take any set of numbers, any statistic, any data point, and twist it a little bit. So it comes back to that earlier point: let’s all agree on what the metrics are, and let’s all agree on how they’re going to be represented. Even if I get a lot of information in a report, it’s not hard to decipher if I’ve consistently received the same information and it’s in the same spot as last time. One week I might care about X, and the next week I care about Y, so I need to know where to find X and Y all the time. Sometimes a pie chart makes sense; sometimes it’s a line where I can see the trend. It depends on the metric, but I don’t like Excel sheets just filled with numbers. I do like a graphical representation of what I’m seeing, based on an agreed-upon convention for how it’s displayed.

Mike Hrycyk:
Well, I’m sure there are studies we could refer to indicating that, for a big chunk of data, a visual representation lets more people absorb it faster and better, but I don’t actually know any offhand. So if any of our listeners out there have good data on that, it’d probably be interesting – feed it back to us. That’d be cool. Sean, same question?

Sean Wilson:
Yeah. For me, it depends on the person consuming the data. Different people consume data differently, of course, and different types of data need to be consumed differently. I was talking with some fairly high-ups recently, and for each of the projects they cared about, they just wanted a stoplight: the project name and red, green, or amber. That was it; they were happy, because that was enough for them to start asking questions. I think that’s the point of a visual representation of a metric or KPI: it’s where you have enough information to go ask the right questions, or to know that you don’t need to ask them. When you think about a dashboard – an information radiator, as you’ll often hear it called in agile practices – you’ll see teams that want big screens on the wall with tons of data tracking all the things, so that when something goes wrong, the collective group will see it and fix it. That can be accurate, but different people consume it differently, and making sure that the person who needs the data gets it in a format and through a mechanism that’s useful for them is probably your best chance for success.

Mike Hrycyk:
I’m not personally fond of investing in metrics that feed a single person. I understand sometimes your hands are tied because some CEO says, “I need to see this,” but there’s always a ton of metrics you could look at. I want the metrics we focus on to have layers of data available to different people, usable by different people. In the stoplight example, I think that’s beautiful, because a person at a certain level just needs to know whether it’s green, yellow, or red. But there’s someone in the middle who needs more granular data, and it can be the same metric, just presented a different way. I’m really not a fan of “let’s do all this work to gather this data for Joe, and Joe is the only person who needs it.” That’s not a great use of time, I don’t think.

Sean Wilson:
No, agreed. You actually hit a really good point there. I’m thinking in terms of role: this type of role requires this type of information to make decisions. What does a project manager need to see, versus a scrum master, versus the product owner or the development leader? They’ll each have different needs for data that, at some point, indicates some piece of information to them. And one of the things I mentioned earlier is that the data that gets into that stoplight for the CEO – that red, amber, green – the data that made that prediction possible is an aggregation of everything right down to the bottom layer. It isn’t one data set; it’s an aggregation of data sets that led to different key performance indicators all along the way. The stuff that was interesting to the test project manager with a hundred testers working in an area, to the development project manager, to the product owner, to the marketing person – all of them are contributing data that gets aggregated together to reach that last point. And this is where we hit that odd difference in terminology: is the metric the thing we’re looking at – the actual threshold, the KPI itself – or is it the data that generated it? Because there’s a lot of data behind that red-amber-green for the CEO.

Mike Hrycyk:
Jonathan, anything to add there?

Jonathan Duncan:
No, just that we mentioned the stoplight – that should almost be enough for everyone other than the people on the ground performing the work. But to get to the point where the stoplight is enough, it has to be earned. Historically, I trust the stoplight report because any time I’ve asked a question, Sean’s been able to give me an answer. So when Sean says it’s yellow, I agree with him, and when he says it’s green, I know that I’m good. Some of it comes with trust built up over time, and with the ability to say, “Okay, it’s yellow because of A, B, and C, and I have all that detailed information for you.” You’re trying to build that up so that people can consume the information quickly, take what they need and move along, or go ask for details when something alarming needs their attention.

Mike Hrycyk:
I think, though, a point that comes back to what you just said, Jon, is that you have to think of it as a hierarchy of stoplights. The CEO gets a single stoplight for project X – say, a five-million-dollar project. The PM cares about that; they care that that stoplight is green. But as soon as it’s yellow, they need to look at the stoplights they have: maybe two or three from development, two from test, two or three from the BAs gathering requirements, a couple from UAT or production. So they need their own set of stoplights – and maybe they’re not literally stoplights; maybe that oversimplifies what they’re looking at – but it’s a hierarchy of metrics that feeds that one stoplight. When you get to the test lead, they have their own little set that helps them understand whether to report yellow or green up to the PM, and so on. So yes, there are many different “stoplights” – I’m air-quoting that, but you can’t see it – because realistically, stoplights become much less valuable the further down you go; at some point you just need the data to figure out where to point your troubleshooting arm.

Sean Wilson:
We heard the air quotes in your voice – it was cool. That’s the benefit of good-quality audio. But you’re exactly right, and it’s interesting that you drew the logical conclusion from the CEO’s perspective. Certainly, the higher up you are, each individual stoplight all the way down to the bottom is less important than the aggregation of all of them. But none of it works if you don’t have them. If I’ve got a hundred people in teams of ten – ten teams – I need to know, every two weeks from now until the end of time, is that team on track, are they working well together? That stoplight for that team, for their individual sprint, becomes one of the key parts that gets you to the place where the CEO can see, six months before release, yes, we’re in the right spot, or no, we’re not, because I can look at the last ten sprints these teams have done and I’m getting more ambers than greens. It is that aggregation – but while each individual stoplight matters less to the aggregate, they’re the most important things to actually collect, because they’re the only things that are real.
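
A toy sketch of that rollup, with invented statuses: each level condenses the lights below it into one, using a worst-signal-wins rule plus Sean’s “more ambers than greens” escalation.

```python
def rollup(statuses: list[str]) -> str:
    """Condense a set of lower-level lights into one light."""
    if "red" in statuses:
        return "red"
    if statuses.count("amber") > statuses.count("green"):
        return "red"  # more ambers than greens over recent sprints
    if "amber" in statuses:
        return "amber"
    return "green"

# Per-area lights from the last three sprints (illustrative data).
sprint_lights = {
    "test": ["green", "green", "amber"],
    "dev":  ["amber", "amber", "green"],
    "uat":  ["green", "green", "green"],
}
area_lights = {area: rollup(lights) for area, lights in sprint_lights.items()}
project_light = rollup(list(area_lights.values()))
print(area_lights, "->", project_light)  # dev's ambers pull the project down
```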

Mike Hrycyk:
Yeah. And to give it some perspective: if you’re the CEO who’s going to pull the trigger on a six-million-dollar marketing campaign based on a release date, that health trickles all the way up from bug count. That’s important, right?

Sean Wilson:
Absolutely.

Mike Hrycyk:
Wow, this has really gotten away from us. We’ve had such a good time that we’re already approaching our end, so I’m going to look at the remaining questions here and pick one that brings it all together. We haven’t talked about tooling, and I think we need to, at a fairly high level. Starting with you, Sean: what’s your philosophy on tooling and gathering metrics?

Sean Wilson:
The best metric is the one I don’t have to think about; it just collects itself. For me – and this is probably just my own view of the world – humans will game the system when they know about the data. If you want to make that a non-factor in your gathering and your metrics, you have to have metrics that just collect themselves. It could be things like bug counts, bug count by severity, the number of check-ins per build – whatever it happens to be, these are the more interesting data points you’re collecting along the way. How you make that data useful is by also having an efficient and fluid process for the manual parts, where the data is collected as part of normal day-to-day work. So for me, the best metrics-collection process is the one where I don’t have to do anything intentionally; it just happens as part of my normal job.
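
As a sketch of a metric that collects itself, counts like the ones Sean mentions can be pulled straight from the team’s existing bug tracker instead of being tallied by hand. This assumes a JIRA-style REST search endpoint; the URL, credentials, and JQL are placeholders to adapt to whatever tracker you actually use.

```python
import requests

BASE = "https://tracker.example.com"  # placeholder tracker URL
JQL = "project = GAME AND created >= -7d AND severity = {sev}"  # placeholder query

def bug_count_by_severity(severities: list[str]) -> dict[str, int]:
    """Ask the tracker for counts; nobody on the team ticks a box."""
    counts = {}
    for sev in severities:
        resp = requests.get(
            f"{BASE}/rest/api/2/search",
            params={"jql": JQL.format(sev=sev), "maxResults": 0},
            auth=("user", "api-token"),  # placeholder credentials
            timeout=10,
        )
        resp.raise_for_status()
        counts[sev] = resp.json()["total"]  # JIRA-style search returns a total
    return counts

print(bug_count_by_severity(["Critical", "Major", "Minor"]))
```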

Mike Hrycyk:
You are aware that you’re pretty cynical, right?

Sean Wilson:
Yes, I am, absolutely. Honestly, it truly does come from my very first test job. When I found out that my manager was valuing my work at least partially based on the number of defects I found, I went from finding fewer bugs that were higher severity – and actually far more valuable to the dev team – to finding far more bugs that wound up being lower severity, because that’s how I got my raise. I know not everybody will be as cynical as I am about that, but that was the reality of my very first test job. Of course, development came back to me and said, “Stop doing that – we’ll talk to your manager.” But once that metric had been put in place and I was told I was being valued based on that thing, I targeted that thing. So maybe it is just me – I’m okay if it is – but I do want to make it possible for people to collect the data we’ll use for those big metrics automatically.

Mike Hrycyk:
Well, the one thing that I really like is that with the most common tools we all use, such as JIRA or ALM, you can tell they put some heavy brainpower into metrics collection and data gathering, and then into reporting and dashboards, so that the tester can think about it less, need to know less about it, and just do their job – and then they’re not trying to game anything. Okay, same question for you, Mr. Jon.

Jonathan Duncan:
Yeah. To me, it’s all about using the tools the team is currently using. Same idea, though less about gaming the system – although I do 100% agree that that’s a thing: if people determine that’s how they’re going to get compensated, or how they stay off somebody’s radar, they’ll do it given the opportunity. But my goal is all about letting testers test. I don’t want them filling out spreadsheets, and I don’t want them jumping through extra hoops; I want them to be able to go and do their job, because that’s what they like to do. Using the existing tools they’re already using – maybe with an extra tweak, maybe there’s another field they need to fill out in that tool – but not having them jump all over the place is, I think, the best way to do it, and the easiest way to get them to create metrics for us as a matter of course of doing their jobs.

Mike Hrycyk:
Yeah, and it’s really true. I was just thinking that this may be the Holy Grail of testing – and if anyone out there has a brilliant idea, you can probably make a trillion dollars from it: figure out a really easy way to collect test-coverage data against requirements, tracked automatically in a tool without any manual intervention. You really could, because way too much of test-coverage tracking has to be manual, and it’s therefore less valuable. Sorry, Sean, I cut you off – you were going to say something?

Sean Wilson:
Well, no – we could have a completely different podcast on just that topic of how to assess test coverage. I’ve spent a ton of time doing a whole bunch of work on that recently, and it’s a super interesting area. But back to metrics that we roll out with the team: if we adopt an agile philosophy for our process as well as our development and test cycles, we can use an agile methodology to help generate better data. What I mean is, within a sprint we can say: this is our process right now, but we believe we’ll get value from tracking this piece of information – how can we start tracking it? Then iteratively build on that, run retrospectives on it, figure out what’s working and how to collect the data more easily, and roll it out so that the entire team is engaged in collecting that information. I’ve had great success in helping teams understand the data by making them part of the decision-making process. Again, that’s a separate podcast on how to roll out agile in a whole bunch of places, but the idea is that if the team is engaged, and they start to see the value of the things they’re doing over time, they’re more interested in collecting the data and less interested in gaming it.

Mike Hrycyk:
Sean, I think you told me you were working on an article on metrics – a whole-team approach to metrics is an article that will get a lot of hits.

Sean Wilson:
Everybody has to be engaged in the collection of information, and they have to understand why they’re doing something. I know we’re running low on time, but I can give you a really simple example of this. On one of our teams, one of the ways we calculate some of the core data about coverage is through the game telemetry and what’s happening inside the game. When the instruction was passed down from on high to the developers – make sure the telemetry is included in this way, at this point in your development – it was done inconsistently, because developers looked at it as an extra task, something they would get to eventually. And the testers knew it wasn’t a feature, so they didn’t validate that the telemetry information coming back was useful.

It wasn’t until we took one of the teams aside and explained how that information shows us, “Hey, this feature has made it into the game,” and that by exercising the feature through testing, we can see the information coming back. As soon as we explained that, the test teams were all over validating that the telemetry coming back was accurate, because it showed how they were interacting, and the development team was all over putting it in as part of their initial development process rather than as an add-on at the end. That kind of thing – where you explain, “this is how we’re gathering really critical information” – made the teams engaged and made them want to change the way they work, rather than us just telling them to do a thing. So yeah, that team approach has to be what you do, I think, to make it successful.
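
A toy sketch of that telemetry-driven coverage signal; the feature names and event shapes are invented. Features emit events when exercised, and coverage falls out of which instrumented features ever appear in the session stream.

```python
# Features the developers instrumented with telemetry.
instrumented_features = {"crafting", "fast_travel", "photo_mode", "co_op"}

# Events streamed back from test sessions (normally a telemetry pipeline).
session_events = [
    {"feature": "crafting", "build": "0.9.2"},
    {"feature": "fast_travel", "build": "0.9.2"},
    {"feature": "crafting", "build": "0.9.2"},
]

exercised = {event["feature"] for event in session_events}
coverage = len(exercised & instrumented_features) / len(instrumented_features)

print(f"feature coverage: {coverage:.0%}")                       # 50%
print("never exercised:", sorted(instrumented_features - exercised))
```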

Mike Hrycyk:
Awesome. Well, okay, we are over time. I’d like to thank our audience for sticking with us to the end, and thank you, Sean and Jonathan, so much for being our speakers. When I went into this topic of metrics, I thought, “Oh, it’ll be a little bit interesting, it’ll be okay.” But you know what? I don’t lose track of time in podcasts very often, and when I was poked and told, “Hey, we’re near the end,” I went, “What?” So that’s really cool. For me, this turned out to be a really interesting discussion, and I hope the same is true for our listeners. If you want to continue the conversation, reach out on social – you can find us at PQA Testing on Twitter, LinkedIn, and Facebook, or in the discussions that surround the podcast wherever you pick it up. You can find links to prior podcasts and chats wherever you find your podcasts. If you’re enjoying these, please put up some ratings and share them with your own network, because the more people listening, the better the conversations we can have. I think it’s been great – thank you, everyone, for your participation today, and we’ll see you next time.

Jonathan Duncan is our Head of Partnerships and Alliances at PLATO, based in Fredericton, NB. Jonathan has over two decades of wide-ranging experience across all facets of the software development cycle. He has experience in a variety of industries that stretch from the public sector to start-ups in satellite communications and everything in between. Having worked in organizations from both the development and testing standpoints gives Jonathan the ability to see problems from all angles, allowing complete solutions to be delivered.

Mike Hrycyk has been trapped in the world of quality since he first did user acceptance testing 20+ years ago. He has survived all of the different levels and a wide spectrum of technologies and environments to become the quality dynamo that he is today. Mike believes in creating a culture of quality throughout software production and tries hard to create teams that hold this ideal and advocate it to the rest of their workmates. Mike is currently the VP of Service Delivery, West for PLATO Testing, but has previously worked in social media management, parking, manufacturing, web photo retail, music delivery kiosks and at a railroad. Intermittently, he blogs about quality at http://www.qaisdoes.com.

Twitter: @qaisdoes
LinkedIn: https://www.linkedin.com/in/mikehrycyk/

Sean Wilson started as a manual tester on a financial treasury application sometime in the last millennium. His career took him on a winding journey through automated testing, development, project management, quality team leadership, and agile evangelism before he abandoned mainstream software and went where he could play games and get paid for it. In his current role as the Worldwide QA/QC Development Director for Ubisoft, Sean is focused on evolving the approach to quality assurance through a better application of technology. He is also justifying playing Assassin’s Creed late into the night under the pretense of “looking for excellent automation opportunities.”

LinkedIn: https://www.linkedin.com/in/jseanwilson/