Performance testing isn’t always required, but knowing when it is can make or break your product. In this episode, Mike Hrycyk and guests Ryan Hobbs (BC Pension Corp.) and Praneeth Eadara (PLATO) explore real criteria for deciding whether to invest in performance testing, what’s at risk if you skip it, how to scope a first project, and what managers should consider before committing time and budget. Whether you’re a QA manager, developer, or business stakeholder, this panel will help you understand the value (and limits) of performance testing – and how to plan for it long before peak load becomes a problem! 

Episode Transcript:

Mike Hrycyk (00:01): 

Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today we’re going to talk about performance testing. We’ve talked about performance testing in the past, and those topics were interesting, but today we’re going to focus on the idea that you’re a manager, you’re a person in a company, maybe you’re just a QA, and you’re trying to decide, does my project need performance testing? And so, some performance testers might say always. You always need performance testing, but this isn’t true. You don’t always need to because there’s an expense and there’s a cost to it that should be judged. And so, we’re going to talk today about helping you assess and helping you figure out, and maybe a little bit of how to get started. Ryan, can you introduce yourself? 

Ryan Hobbs (00:38): 

Absolutely. My name is Ryan Hobbs. I’m an Assistant Director of IT Engineering at BC Pension Corporation. I have been working in IT for about 25 years. Many years of that were spent in quality assurance and automation, manual testing, and management leadership roles across medium to large-sized quality assurance teams. 

Mike Hrycyk (00:59): 

And Praneeth, tell us about yourself. 

Praneeth Eadara (01:01): 

Hello, Ryan and Mike. I’m Praneeth, and I’ve been in IT for close to 15 years now, mainly in testing, with a specialized skill in performance testing. I’ve been doing that for over 14 years now, and I’m happy to be at PLATO. 

Mike Hrycyk (01:17): 

Thanks for joining, guys. One of the reasons I thought of Ryan for speaking on this topic is that, in his last iteration as a QA manager, he was the guy who had to assess and decide that he needed performance testing and then called us to do the performance testing for him. So, that is a nice big reason. 

Ryan Hobbs (01:33): 

Still a great decision. 

Mike Hrycyk (01:36): 

Awesome. So, everyone, at least in our listenership, is going to have a general idea of what performance testing is. Maybe we’ll just get started. We’ll tell our audience: What are you trying to prove when you do performance testing? And I’ll let you kick it off, Ryan. 

Ryan Hobbs (01:49): 

For me, what I’m trying to prove with performance testing is that the application under test meets the expectations of the customer. And that’s not just the one person in the boardroom – say we’re generating a website for a customer – not just one person in a boardroom clicking it and that it’s performant, but the expectations of the customer base at large, making sure that it meets and exceeds their expectations for their experience in the application. 

Mike Hrycyk (02:15): 

In my experience, you can never exceed the expectations of the public but understood. Praneeth, do you have anything to add to that? 

Praneeth Eadara (02:22): 

Slight additions to that. Most people think that performance testing is just about checking that an application is fast, or stable, or reliable, but in reality, performance testing is about testing certain conditions, certain expectations that you have discussed with the client, right? And you want to measure against the expected loads that you have defined. 

(02:46):
In some cases, you want to measure future volumes as well. So, you test the system against the expected load. You check if you have any bottlenecks in the system, either through testing methodologies or by asking the business whether they have encountered any bottlenecks. 

(03:04):
You also want to stress test the system under extreme conditions. Let’s say you have an application that is ready for go-live, and you want to measure the performance of the application six months into the future. So, you slightly increase the volume of your performance tests based on those future volumes, and you also want to test by breaking the system, right? You want to know when the system could fail. As part of that, you’re testing the scalability of the system as well. At the same time, you’re checking whether you’re meeting the service level objectives and the key metrics that you should be tracking. So, it’s not just about how fast a system behaves or how reliable it is. It’s testing against various metrics, and testing against expected and unexpected growth in the future. 
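
The service-level check Praneeth describes can be sketched in a few lines. This is an illustrative example only – the timings, the 95th-percentile choice, and the two-second objective are invented, not from any particular engagement:

```python
# Check a set of measured response times (in seconds) against a
# hypothetical service level objective: p95 under 2.0 seconds.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_slo(response_times, p95_objective=2.0):
    """True if the 95th-percentile latency is within the objective."""
    return percentile(response_times, 95) <= p95_objective

# Mostly sub-second responses, with one slow outlier that the
# p95 lands on - so the run fails the objective.
timings = [0.4, 0.6, 0.5, 1.1, 0.9, 0.7, 3.2, 0.8, 0.6, 0.5]
print(meets_slo(timings))
```

The same percentile function works for any of the key metrics mentioned here; the point is that the objective is agreed with the business first, then measured against.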

Mike Hrycyk (04:00): 

I think something that’s overlooked when people think about performance testing is testing the breaking point. You should know when that is, and then you can monitor against it and throw up big red flags when you get close, or mitigate and try to keep it from breaking. But a really important part of what you’re testing at that point is: when it breaks, what happens? How catastrophic is it? That can help you understand – okay, someone can’t log in; that’s bad, but it’s not the end of the world. But if the system gets so choked up that it freezes, locks up, can’t be restarted, and it takes you half a day to get back, that’s a much worse impact that you should be aware of, and performance testing helps you figure that out. 

(04:45):
So, there are a couple of obvious things that came out of that. If you are Amazon and you’re going to have a Black Friday, everyone understands that the system might get overloaded, might get slow. You need to test for that. The other end of that spectrum is basic performance – whether one person can do their job reasonably well – and functional testers test for that. They know that when I press submit, it always takes 38 seconds, and that sucks. Okay, you don’t need to spend on performance testing to know that; maybe spend on performance testing to understand where that’s coming from. So, both ends of the spectrum touch performance. Let’s generalize it a bit more. What types of projects definitely need performance testing, and conversely, what types of projects maybe don’t? And we’ll start with you again, Ryan. 

Ryan Hobbs (05:33): 

Interesting question. The reason it’s interesting is that my default answer would be that every type of project or application could benefit from some type of performance testing or investigation. Whether or not you act on that and spend the money is related to how your customers are going to deal with the potential slowness. An example of that could be website performance. Everybody has an expectation that almost any activity on a website is going to take less than three seconds – or faster now. If you click a button and it takes more than a second to see an action, customers tend to be fickle, and they might leave, or they might complain. So, performance in that aspect is incredibly important. If you’re dealing with software in an environment that is more accepting of a delay – let’s say you’re in a scientific lab, for example, and you’re reviewing the output from a sequencing machine – there is some understanding that that may take time, whether or not that time scales to the complexity of the activity. It’s the perspective of the user at that point. I’ve worked in both these industries; often there won’t be a great deal of performance testing in those lab types of applications, whereas a great deal of performance testing and emphasis is placed on interactions where the impression of time – that desire to have a fast interaction – is quite high on the end user’s wishlist. As far as when to performance test? I would always performance test to some degree. Again, it’s entirely dependent on the scale of the application, the infrastructure that you’re testing it on, and the complexity, as to how deep you go in that performance testing. Yeah, for me, I kind of always want to do some, even if it’s just a thumbs-up, thumbs-down approach. 

Mike Hrycyk (07:14): 

So, I think the one thing you said around your example of sequencing, where you already expect it to take a long time. You know it’s 20 minutes to do the run, etc. The importance of that is that you have a proper baseline measurement with limited factors, so that you can see degradation. Because even though you expect it to take a long time, there’s a cost when it takes longer. If you used to be able to do three an hour and now you can do two an hour, that means your same resources are returning less revenue. And so, it’s really important that you have that so you can spot it and then dig into it when there is regression. 

Ryan Hobbs (07:48): 

Absolutely, and where that becomes important is that we often think of performance testing as something done as part of your development lifecycle or as part of the project that you’re working on. What you pointed out is essentially post-release monitoring and trending of application performance. That’s often customer-facing, or it’s a hosted application – I’m not going to name any particular brands, but there are applications out there that can monitor your application performance going forward. That isn’t strictly what a lot of people think of as performance testing, but it’s ongoing performance monitoring, with thresholds and alerts and everything worked into it, so that you can react to potential changes. One of those could be in the software: you’re building up an excessive amount of data, so now your queries are slower, which you wouldn’t see unless you had that data built up. Another is network degradation over time. Maybe they’ve shifted the data centre you’re in – because it’s a fluid model, you’ve shifted to the eastern side of the country versus the western – and now you’re seeing some slowdowns there. Exactly what you’re saying: the trending of the performance is a very important aspect. 

Mike Hrycyk (08:54): 

They’re always slower in the west. So, Praneeth, same question to you. What types of projects need performance testing, and what types of projects don’t? 

Praneeth Eadara (09:03): 

So, in terms of what types of projects we should test: any project that has very high traffic, like Amazon, Home Depot, or Walmart. We can also consider performance testing for banking applications, school websites, and education platforms. One of the examples I could give: there is a central exam body for immigration that is handled by the British Council – they conduct the IELTS [International English Language Testing System] exam. I think it was way back in 2021, when immigration was at its peak in Canada, and at that time, the government announced some new policies to increase immigration, and the British Council website got a lot of hits. Their website was down for two days. An impact like that leads to loss of business, and obviously, customers or users have to move to another provider. 

(09:57):
So, those projects – definitely yes. I would say migration projects as well. Let’s say your application is migrating from one hosting provider to another; yes, you should be looking into performance testing early in the cycle. Retail platforms – yes, we should be doing performance testing against those. Resource-constrained systems, data processing applications, and any batch jobs that involve heavy lifting of data should also be considered. These are a few of them, but I could name more. 

Mike Hrycyk (10:31): 

So, that’s good. Are there projects where you think No, you probably don’t need performance testing? What type of project would that be? 

Praneeth Eadara (10:37): 

So, there are a few. Not every project has to be performance tested. In some cases, where internal tools are involved with very low usage, I would definitely say no in-depth performance testing, though exploratory testing is still possible in that case. Early prototypes – I do not think it’s a good option to do performance testing on those, because it’s way too early in the cycle; you’re just putting them out for user feedback. We don’t need performance testing for that. Some POCs [Proof of Concept] – we don’t require performance testing for those either. And wherever there is low performance risk, or where the user volume itself is low and there are no scalability concerns, we might not consider performance testing. 

Mike Hrycyk (11:23): 

An important thing to remember coming out of that, for me – and I’ve seen this in the past – is that you might build an application that everyone pretty much agrees doesn’t need performance testing because, again, it’s low user, low data, low flow, et cetera. What that doesn’t mean is that three years from now, someone won’t pivot and say, You know that internal tool? We’re going to release that to the universe. And it wasn’t built for that – maybe it’s not multi-threaded, or it relies on direct access to the database. That’s when you do need to pick up and say, well, maybe we should performance test it, because we didn’t engineer it that way, right? 

(11:56):
So, this is a good segue to the next question, because if you have an internal tool used internally, et cetera, there’s no formal performance testing – I will say that. But people are paying attention to performance; that’s generally your normal testers saying, I’ve got this application, and you know what? It takes 20 seconds when I press that button. Maybe we should do something, right? So, your regular everyday tester still always has their eye on performance, or should. But what do you say to a manager – because we always have to sell; it’s a budget cost, right? You have to pay for performance testing. What do you say to a manager who says, We’ll just have our manual testers, or devs, do the testing with a stopwatch? We’ll start with you this time, Praneeth. 

Praneeth Eadara (12:37): 

So, manual checks, in some cases, to some extent – yes, they’re beneficial. I’ve seen in my experience where we have done it with a stopwatch or just exploratory testing. It is beneficial in some cases, but there are flaws associated with it. We are not testing something that is data-driven, repeatable, or realistic. It’s all about just eyeballing it: seeing how the application looks, how the performance feels. You’re not capturing the full statistics or the reaction times. We are not testing the concurrency of it. It’s not repeatable. We don’t have a deep dive into how the system is behaving through the metrics. We’re only going through the application at the surface level. Yes, in some cases it is possible – I’ve seen people test that way – but performance testing is largely about scoped, data-driven, repeatable tests, and fixing the bottlenecks that we encounter. 
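
The repeatable, concurrent measurement Praneeth contrasts with the stopwatch can be sketched in a few lines of Python. This is a minimal illustration, not a real load tool: the “request” here is a stand-in function that just sleeps, where a real test would make an HTTP call:

```python
# Minimal sketch of a repeatable, concurrent measurement run.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real server call; sleeps 10 ms."""
    time.sleep(0.01)

def timed(op):
    """Run one operation and return its observed latency."""
    start = time.perf_counter()
    op()
    return time.perf_counter() - start

def run_load(op, users=10, iterations=5):
    """Run `op` with `users` concurrent workers, collecting every latency."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(timed, op)
                   for _ in range(users * iterations)]
        return [f.result() for f in futures]

results = run_load(fake_request)
print(len(results), min(results) >= 0.01)
```

Unlike a stopwatch run, the same script produces the same concurrency, the same steps, and a full set of timings every time it’s re-run – which is what makes the results comparable across builds.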

Mike Hrycyk (13:40): 

Alright, Ryan, I see you’ve been taking notes, so I’m expecting brilliance. What do you say to the manager who says, We’ll just use a stopwatch? 

Ryan Hobbs (13:47): 

For me, I’ve used the stopwatch approach in a few select areas. The main one was when we had a group of testers, be it professional manual testers or user testers. The application has made its way through the formal testing process. Now we’ve got user acceptance testing going, and there are reports of slowness, right? Everybody is sitting around a room because we’ve managed to collect five or ten people from the business, and they’re exploring the system. They start talking amongst themselves about how this is slower than they expected. So, all of a sudden, performance becomes a big deal. What I’ve found when that happens is that it helps to hand everyone a stopwatch – and we physically have handed people stopwatches in those instances. What we do is encourage them to write down their exact steps and follow those steps with specific timing. Often, what we find is that their perception of how slow or how fast a system is varies across the various people in the room. So, when you hand them a stopwatch, all of a sudden, there’s clarity. They see, oh, well, I thought it took 10 seconds every time I clicked this button, but it’s actually four, it’s not 10. So okay, we’ll scale it down. And that’s a great way to establish your running performance baseline in those situations. 

(15:02):
What it doesn’t do – and I mentioned this earlier – is change their expectations of the system. It just helps us add bounds to the discussion. We understand performance is a concern. We’ve taken the time, spent an afternoon with stopwatches and a fairly detailed test plan, which is great. We now know where we stand. That is valuable information that we can then pass along to the far more labour-intensive operation of formalized, code-based performance testing, right? That’s a large investment, so we want to make sure we clearly understand where the problems are – if there are problems. It could just be the perspective of the users. Maybe they’re used to a previous system in which one of the steps was faster, whereas the overall process now could be more performant; they just see certain aspects that they’re used to being slower than they were before. 

(15:51):
So, a stopwatch can definitely be beneficial. It’s fast, it’s very inexpensive, and it can help point things in the right direction and highlight issues. Would I base an entire system’s performance testing on a room full of people with stopwatches? Absolutely not. But I feel that it’s an area where you can start the investigation. People understand a stopwatch, whereas they don’t always understand a group of highly skilled automation developers sitting in a room down the hall, cranking out numbers without a tangible sense of what those numbers mean. 

Mike Hrycyk (16:26): 

That was good, but it wasn’t the original intent of my question. So, for those of you in listener land: he just threw his notes up in the air in frustration. My idea was, okay, you have a manager who has said the only performance testing I need is a stopwatch, right? And you’ve done a good job of justifying why we might do stopwatch testing. How do you go to a manager and convince them that more than stopwatches is important? So, this is your sales hat. How do you get the funding you need for proper performance testing? 

Ryan Hobbs (16:54): 

Alright, time for a new set of notes. For myself, what I’ve often found is that people have different interpretations of what performance testing means and its actual scope. So, generally, when I’m approached by someone who says, Oh, we’ve got a stopwatch. We’ll just take some timings; it’ll give us a great understanding of how performant the system is – that’s great, assuming you don’t have a system that scales. Assuming – using your Amazon example – you’re not going to have a Black Friday sale where now you have 7 million people on your website. You cannot simulate that with a stopwatch. There is a certain amount of investment that you have to make in order to prove out the stability and the resiliency of the system that you’re producing and releasing to the wild. Stopwatches just can’t do that. They’ll help. They’ll give you that warm, fuzzy feeling, if you’d like. But in order to truly understand the system at large – how well it scales and how well it deals with all sorts of things, like part of your infrastructure failing: do you maintain performance when one of the machines in a cluster goes down? – that’s a difficult thing to reproduce accurately and consistently with stopwatches. Switching to a code-based, developer-structured model for performance testing really provides you with that lens into how your system’s going to work going forward. 

Mike Hrycyk (18:14): 

I think an important thing that is also missed in the concept – all of that is good, right? That’s important. The other part is performance monitoring. If you’re just doing stopwatch testing and you’re not looking at network performance, server load, wait time from the database, and metrics like that, there’s a lot you may not see. Your stopwatch might say, oh, it came back in four seconds, but it’s not going to know that the server was redlined at 99.8% and that two more users would blow the entire thing up. Your stopwatch doesn’t show that. All you’re seeing is, ah, it seems okay. But if you’re doing performance testing that includes performance monitoring, you’re going to say, you know what? The individual response times for the users were okay, but there are these giant red flags that we have to be concerned about. 
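
Mike’s point – latencies can pass while the server has no headroom left – can be made concrete with a small sketch. The utilization numbers here are simulated for illustration; in a real test they would come from an APM or server-side monitoring, not from the load script itself:

```python
# Pair each response time with a resource reading and judge them
# separately: a request can meet its latency target while the
# machine serving it is effectively redlined.
def assess(samples, latency_slo=4.0, cpu_limit=0.90):
    """samples: list of (latency_seconds, cpu_fraction) pairs."""
    return [
        {
            "latency_ok": latency <= latency_slo,
            "headroom_ok": cpu <= cpu_limit,
        }
        for latency, cpu in samples
    ]

# Every response under 4 seconds, but the box sits at ~99.8% CPU.
samples = [(3.9, 0.998), (3.7, 0.995), (3.8, 0.997)]
flags = assess(samples)
print(all(v["latency_ok"] for v in flags),
      any(v["headroom_ok"] for v in flags))
```

The first value is True (all timings pass) while the second is False (no sample has CPU headroom) – exactly the “giant red flag” a stopwatch alone would miss.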

Praneeth Eadara (19:07): 

It’s all about convincing the manager of what we are trying to achieve, right? In terms of testing with a stopwatch, let me give an example of Tim Hortons stores. Say we want to test the application being used at Tim Hortons. Let’s say in a day, there are a hundred customers going into Tim Hortons. Concurrently, we have 10 people in the store. There are two lines with two to three people in each line, and other people are waiting in line or sitting at the tables. So, we want to test with just 10 users. Yes, it can be done; you can time the system by hand. So, yes, it is possible with a stopwatch. But when your volume increases – if you want to measure against something highly concurrent, with multiple levels of users doing different operations, some people placing orders, some people waiting in line – and you want a good user experience, you need to take a deep dive into performance testing, either through performance testing tools or through additional monitoring tools like APMs or cloud-based infrastructure. So, you need to have that in place when volume increases. 

Mike Hrycyk (20:23): 

One of the things that we’re trying to focus on in this conversation is getting started in performance testing – if you haven’t done it, and you’re a manager, and you need that kind of thing. So, following down that path: what kind of expertise is needed to do performance testing? Do you need to go to an expert consultant with eight years of experience? What is the depth of experience needed? We’ll start with you, Praneeth, who is one of those experts. 

Praneeth Eadara (20:48): 

Not always do we need to go to an expert. We have to ask ourselves and the customers what we want to achieve. In most cases, to get started with a basic project, some level of basic scripting knowledge is required, whether it is Java or JavaScript. Along with that, we may need an understanding of the architecture of the application – whether it is HTTP, an understanding of the browsers, and what APIs are involved. Also, you need some understanding of defining the test scenarios that you are trying to test. Good knowledge of performance metrics, like the error ratio, throughput, and hits-per-second – those performance metrics, we need to have knowledge of. Analysis, to some level. Nowadays, with the introduction of APM [Application Performance Monitoring] tools, a lot of these products will provide you with full-stack monitoring, but internally they rely on the same metrics. 

Mike Hrycyk (21:47): 

Sorry, Praneeth. What’s APM? 

Praneeth Eadara (21:49): 

APM is Application Performance Monitoring. Nowadays, we call it management as well, because it does not just monitor. It does manage your infrastructure resources. 

Mike Hrycyk (22:00): 

Alright, cool. So, Ryan, you’re coming at us from a different perspective. When we started engaging with you, you were learning about performance testing, and you saw what we did, and then you were doing some of it on your own. So, from your perspective now, what kind of expertise is needed? Do you need an expert to get started? Is it something that anyone can bootstrap? 

Ryan Hobbs (22:18): 

Really going to simplify this down. I’m going to say there are two sides to performance testing. One side is a solid understanding of the business: what the expectations are of the workflow, what your patterns are, and general business processes. You have to have a solid understanding of that, and it generally resides within the company itself. The other side you need experience on is the methodology of the performance testing: the tools, the techniques, the monitoring, and what your actual hands-on implementation side is going to be. Without the two together, you’re seldom successful in performance testing. 

(22:54):
So, for me, it depends on the nature and the complexity of the performance testing as to whether or not you want to jump out and hire a consultant or a contracting firm to help you with it. Often, if a company has never done it before, the practices, the methodology, and the overall structure of performance testing aren’t trivial, and I would suggest getting help. Keep in mind that it’s not something you can place solely in the hands of the consultant coming in to help you. They will rely heavily on business experts and SMEs [Subject Matter Experts] to guide that work and ensure they’re doing the right thing. They might be the ones clicking the keys, but they need business acumen to help guide them in which keys to click. 

Mike Hrycyk (23:39): 

I think the poignant example for me here is that you can teach a developer or technical QA how to script a repeatable performance test, and you can have a business person who is worried about Black Friday, right? And those are small pieces of what you need. The example that I go to is soak testing. That’s a type of performance testing you don’t do a lot, but in order to determine that you need soak testing, the business has to provide enough information about how people interface with their application on an ongoing basis to even start that discussion. But the business isn’t going to know what soak testing is, or even that it might be needed, so they’re not going to have that discussion. The scripting technical person who doesn’t have enough conceptual knowledge of the theories of performance testing also isn’t going to know what soak testing is. 

(24:31):
And so, that’s where I see the value in an expert consultant. Not just the complexity of the situations and getting the tools to work, which is another sort of problem, but having those initial discussions with the business to understand: okay, this is peak load; this is how it works. Sometimes we have this kind of problem where we have a peak load that lasts for two days, and we have an application that has these risks – oh, well then we should do soak testing for this. It sometimes comes down to complexity, but I think what you said there, Ryan, is really important: maybe the first time, when you haven’t done it before, bring in an expert, because they’re going to know the nuances and they’re going to start asking those questions. And then they get you set up – that was one of the things that we [PLATO] did. Once you were set up, Ryan, you kind of took over the performance testing, and that was fine; that worked for you. But you had the experts in there to help initially. 

Ryan Hobbs (25:16): 

I guess the one thing I would add to that is that having a consultant come in and explain the nuances of performance testing – the methodology, the practices, the dangers, the pitfalls – can really help you, from a business perspective, sell it to upper management. You may, as the line manager, understand how important performance is, but the managers above you – the VPs, the CEO-level people – might not truly appreciate the complexity and the difficulty of improving and proving out the performance of your system. So, having an expert come in who can explain clearly why you would do this, what the benefits are, and what the dangers are if you don’t, helps even just to get the budget to move forward with the testing. 

Mike Hrycyk (26:05): 

Alright, we’ll start with you, Praneeth. What kind of tools will you need for performance testing? We know one of them is potentially a stopwatch. What else do you need? 

Praneeth Eadara (26:16): 

So, there are multiple tools on the market, by category. The first thing we need is a load generation tool. Historically, when performance testing was in its early stages – I think way back in the 1990s – LoadRunner came into the picture. It provided support for a lot of frameworks and protocols within software products. Aside from that, we have JMeter, which is again a load testing tool, open source. We have Artillery, which is widely used for testing APIs to some extent. So, these are some of the load generation tools. With load generation alone, we cannot completely rely on the test results; we need to monitor the infrastructure. For infrastructure monitoring, we can either go old school, logging into servers with commands like ‘top’ and ‘sar’, or tools like Nmon. Or, if we have invested in one, an APM – an Application Performance Monitoring tool – which would do the full-stack monitoring. 

(27:17):
Some of the tools will provide observability statistics as well. That’s a keyword – recently, a lot of APM tools have been coming out with observability implementations. Basically, what it means is that by looking at the output or screen, you can comment on the current state of the application. It mainly relies on the acronym MELT: metrics, events, logs, and traces. The combination of all of those is the implementation of observability, and a lot of APM tools do provide that. We may also need front-end browser stats, so we may use the dev tools built into Chrome or other browsers. We may also use Lighthouse or WebPageTest, which are again browser-based tools. 
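
The MELT idea can be illustrated with a small sketch that emits all four record kinds in a common envelope a backend could correlate by trace ID. The field names here are purely illustrative, not any particular APM vendor’s schema:

```python
# Emit metrics, events, logs, and traces (MELT) as structured
# records sharing a trace_id, so a backend can correlate them.
import json
import time
import uuid

def melt_record(kind, body, trace_id=None):
    return {
        "kind": kind,                      # metric | event | log | trace
        "timestamp": time.time(),
        "trace_id": trace_id or str(uuid.uuid4()),
        "body": body,
    }

trace = str(uuid.uuid4())
records = [
    melt_record("metric", {"name": "cpu_pct", "value": 87.5}, trace),
    melt_record("event", {"name": "deploy_finished"}, trace),
    melt_record("log", {"level": "warn", "msg": "slow query"}, trace),
    melt_record("trace", {"span": "GET /checkout", "ms": 412}, trace),
]
print(json.dumps([r["kind"] for r in records]))
```

Tying the four signal types to one trace ID is what lets an observability tool answer “what was the system doing when this request was slow?” rather than showing each signal in isolation.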

(28:06):
To keep it simple: for basic performance testing, start with JMeter, LoadRunner, or k6, along with the browser dev tools. For advanced testing, include the APM tools or system monitoring. If you’re scaling up, use cloud-based solutions, either through AWS or Azure. Or some performance tools offer a cloud-based solution where they can scale up the infrastructure resources – BlazeMeter or Flood.io provide that. 

Mike Hrycyk (28:38): 

Ryan, same question? 

Ryan Hobbs (28:39): 

Alright, so I’m probably going to look at this slightly more from the management perspective versus the technical perspective. And by that I mean, am I going to hire, train, or retrain staff, or retain contractors to do the performance testing going forward? And the reason that’s important to me is that it helps me focus on what types of tools may be available for me to use long-term. Am I looking at tools that are more UI record and playback because the staff I’m going to have taking this over, that suits their technical level? So, they’re a record and playback type of employee. Or is it something where I’m going to rely heavily on code-based performance tests, so I need somebody with a slightly more development background? Whether I have that internally, whether I have to hire, or retain a contractor to do that, that factors into my approach on tools I’m going to have available. 

(29:33):
The tools also depend on the type of application under test. Am I testing primarily an API? Am I testing database performance? Am I looking at purely website front-end performance? Or even something as different from those as hardware performance, like a ticketing kiosk or a bank machine? That all plays into technology choices when it comes to selecting a tool for performance testing. So, for me, it’s less about initially going down the path of: I’m looking at APIs, so maybe I’ll do something simple with Postman, or maybe something simple with JMeter, for example, or go more complex with a larger, more commercial solution. Again, it comes down to the longer term, from my management perspective: who do I have to run these tests in the future? Who do I have to run these tests now? And what is the upskilling required to maintain these tests? Because they may be written once, but they’re going to be maintained and upgraded for the life of the tests. 

Mike Hrycyk (30:31): 

Yeah, I mean, you sort of highlighted that complexity is important, including the complexity of how you generate the events. One example that is often a thorn in the side is: do you have to support single sign-on to generate what you need? That requires a much more capable tool.

(30:46):
The one thing that both of you didn’t hit on that I think of is that for certain clients, the ability to generate your load from different geographies was important for certain tests, right? So, if you are an international seller of things, you want to make sure you’re generating your load from different places, because you can see vastly different results just because of something like the Toronto corridor trunk, which changes performance on either side of it. So, that’s important. There are a lot of tools out there, and a few big ones that sort of own the market: on the commercial side, LoadRunner; on the open-source side, JMeter, which is extremely popular and widely used.

(31:22):
So, in the same vein, how do you scope your performance testing? You’re going to run a project. How do you figure out if this is a one-month engagement or a seven-month engagement? I want to reassure most of our listeners out there that the average engagement for performance testing, from the PLATO experience, is about a month. Sometimes longer, rarely much shorter than that. But how do you figure out what that scope is? We’ll start with you this time, Ryan.

Ryan Hobbs (31:49): 

Alright. How to scope it? For me, the normal approach I take when initiating a performance test of a given application: my go-to is to ask the users or the business folks responsible for the application, what are your three to five most commonly used workflows in your system? What are those key areas that would be seen by customers the most, and that would have the most impact if they sped up or slowed down? Once we have a solid understanding of that, the next step is how much the application load changes over time. You’ve mentioned Black Fridays, soak testing, average runs – get an understanding of how variable the conditions are. Is this application being used in an office environment where, for the next 20 years, the total load will probably only vary by 10 users? Or is it something that’s web hosted and you’re expecting it to go viral, if you will – to quadruple in size quarterly for the next two years? That sets my baseline: what are the key areas, and what is your load pattern going forward?

(32:58):
That helps me personally understand, structure, and scope the engagement. Is this going to be something we can pull off with a given technology within a month? Within two months? Knowing the complexity and the variability in the testing does help me narrow it down. I have found over the years that once I’ve set up a plan and dug into it, the investigation phases following the initial performance testing often take as much time as the performance testing itself. So, I always have to factor that in as part of a performance testing package. But yeah, it’s variable. I would agree with your assessment: a month to two months is a reasonable amount of time for me to do performance testing and investigations on a lot of different applications.

Mike Hrycyk (33:45): 

Praneeth, anything to add or disagree with there? 

Praneeth Eadara (33:48): 

No, Ryan mostly covered it, but I do it slightly differently when trying to scope the performance testing. I usually categorize based on risk. We have that column in the performance strategy whenever a client needs it. So, what is the risk of not performance testing a certain part of the feature release, and how does it impact the business? We take up all those requirements and then try to come to an agreement with the customer on what cannot be performance tested or can safely be left out.

(34:21):
Aside from that, what is the usage? We usually have initial contacts with the business stakeholders, and part of that is asking something like: what are the three to five most critical scenarios that you have? In my case, since, as you mentioned, we have one- or two-month engagements on average, that time is usually not sufficient for full-fledged performance testing – it involves a lot of thinking and strategy to build that. So, when we have constraints on time, we pick three to five business-critical, high-risk scenarios or processes. There could be some batch jobs running in the background that are resource-intensive, so it may include those. Beyond the specific scenarios, we might also look at the broader picture: the end-to-end processes and the overall user flow. We should account for some of those as well. And we may also have to involve the business stakeholders at multiple levels to gauge progress: are we doing it right, or should we include such-and-such within the performance testing life cycle as well? It’s all part of the scoping. So, if we have those early in the cycle, it’s always beneficial.

Mike Hrycyk (35:40): 

Yeah, good perspective. I’m going to say to our listeners, just to help them understand: I generally think of a performance engagement as having four phases. The first phase is requirements gathering and building the performance test strategy. That’s where we figure out what your target loads are and what types of testing we need to do. The second phase I call scripting, but it may involve scripting or other things – it’s figuring out how we’re going to generate the load we need to run a performance test. The third phase is running the tests. You run a test, you get some interim reporting, and then you maybe cycle tests again, because they’ll find, oh, we need to tweak this, we need to increase the server size. We do those things, then we retest a couple more times, and that’s the testing phase. The final phase is the formal report. It’s important to most of our clients that they get a formal report, so that if the crap ever hits the fan because of a performance-related incident, those managers can justify it and say, no, this was an unknown; we did our due diligence, and we did the performance testing. In the same way, our company may have to defend itself.

(36:38):
Okay, Praneeth, explain to our listeners what a concurrent user is – that should be pretty fast. But then, how do you help your clients choose their load targets? It’s not as simple as, last September we had 64 people using it, so that’s what we’ll test against, right?

Praneeth Eadara (36:58): 

Concurrent users are simply the number of active users using the application at once – not the total number of users. So, how do we get that? If a client is aware of their business targets, it’s easy for a performance tester to ask the right questions and get that number from the business people, or maybe the developers have some understanding. There’s a different approach if they’re not aware: you may have to look at their competitors – not by talking to them, but by digging into some analytics. Asking the right set of questions is the key here. Simulate the peak scenarios and add some buffer, right? If your concurrent user count is a hundred today, maybe six months down the line you’ll have 125 users, and you want to performance test for that. Start with a baseline test, which is usually 10-20% of your concurrent volume, so that you’re sure your tests will run smoothly. Then move on to a load test, then a stress test at 125% or 150% of the volume. You may want to add endurance testing later on, but usually, as time permits, you’re restricted to the load and stress tests. You want the system working at the expected volume first, and then go on to heavier loads.
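The staged ramp Praneeth describes – baseline at 10-20% of the expected concurrency, load at 100%, stress at 125-150% – can be sketched as a simple calculation. The function name and the choice of 15% as the baseline fraction are illustrative, not a standard:

```python
# Sketch of the staged targets described above: baseline at ~15% of the
# expected concurrent users, load at 100%, stress at 125%-150%.
# Integer ratios are used so the arithmetic stays exact.
import math

def stage_targets(expected_concurrent_users: int) -> dict:
    """Derive per-stage virtual-user counts from the expected peak."""
    return {
        "baseline": max(1, math.ceil(expected_concurrent_users * 15 / 100)),
        "load": expected_concurrent_users,
        "stress_low": math.ceil(expected_concurrent_users * 125 / 100),
        "stress_high": math.ceil(expected_concurrent_users * 150 / 100),
    }

if __name__ == "__main__":
    # With 100 expected users: baseline 15, load 100, stress 125-150.
    print(stage_targets(100))
```

The point of the baseline stage is cheap validation that the scripts and environment work before any real load is applied.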

Mike Hrycyk (38:25): 

Ryan, anything to add on that? How have you picked your targets in the past? 

Ryan Hobbs (38:29): 

For me, it’s all about the data, and I know it was mentioned earlier. If you have access to analytics, analytics are your best source to guide you in the right direction. My last performance testing engagement was heavily focused on the performance of a website. As such, we were able to look at our Google Analytics going back four or five years. We could see the trends, see the growth patterns, understand the peaks and the averages. What I did at that point was look at the trends and select four times the highest peak as my upper value, and then we tested that. Looking across different industries and the trends, four times seemed to give us well over any realistic peak – a spike we would expect – while remaining reasonable. I could easily pick 10 times. The trouble at that point becomes: how do I burst enough load to reach that many concurrent users? How do I justify the costs? Odds are I’m going to have to go out and use some cloud-hosted infrastructure to provide those extra users, because at that point we’re talking hundreds of thousands of users or more. There’s a cost involved in that. So, it’s a balance: I can test a massive amount and see where it fails, or I can be realistic, talk to my stakeholders and say, looking at your data, this looks like a reasonable path forward. I believe four is a good number; double could easily happen, and four gives us that safety margin.
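Ryan's approach boils down to: take the highest peak from historical analytics and multiply by a safety factor (he used 4x). As a sketch – the data and helper name below are made up for illustration:

```python
# Sketch of picking an upper load target from historical analytics:
# highest observed peak times a safety factor (4x in Ryan's example).
def load_target(observed_peaks: list[int], safety_factor: float = 4.0) -> int:
    """Upper test target = highest observed peak times a safety factor."""
    return round(max(observed_peaks) * safety_factor)

if __name__ == "__main__":
    # e.g. peak concurrent users per quarter, pulled from analytics
    peaks = [1200, 950, 1800, 1400]
    print(load_target(peaks))  # 4 x 1800 = 7200
```

The safety factor is the knob that trades realism against cost: a larger factor buys more headroom but, as Ryan notes, may force you onto paid cloud load generation to reach the user count.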

Mike Hrycyk (39:47): 

For perspective, there used to be some tools where you could just pay by the user across a certain amount of time – you’d pay X, test for an hour, and be done, and that could work reasonably well. With some of the bigger tools, you have to look at a subscription model. We recently looked at pricing for 50,000 concurrent users, and you had to buy at least a month’s worth, and that month was $500,000. There is a cost. Pick reasonable targets that make sense.

(40:16):
Our wrap-up question today is, what are the first things, and I’m going to limit you to three each, what are the first things you do when you’re considering performance testing? 

Ryan Hobbs (40:25): 

For me, geez, three things I would think about: scope, meaning what exactly do I want to test? Budget – how much money do I have? And then skills – do I have the skills to do it?

Mike Hrycyk (40:37): 

Okay. Praneeth?

Praneeth Eadara (40:38): 

I’ll skip the budget one because I’m not usually involved in those discussions. But for me, starting early is the key. Performance testing is all about planning and clarity: defining your goals and what you want to achieve, identifying the scenarios, and testing against them. Involving the right people at the right time is the key to good performance testing.

Mike Hrycyk (41:00): 

The one that both of you missed – and maybe it’s just my own perspective, but for me, it’s the number one thing – is how you get the budget: what’s the risk of not doing it? Yeah, there’s monetary risk, and lots of reputation risk, which still comes back to monetary. Why are we doing it? We’re doing it so that we don’t have this impact. What’s that potential impact? Going down for 20 minutes on Black Friday? Guess what, that’s a hundred billion dollars – okay, I made that number up, but it’s a big number.

(41:24):
Alright, thank you, panel. This has been really interesting. As we were going through, there were many offshoot questions that I wanted to ask. I’m like, no, stay on target. You don’t have time for this. There’s a lot here. We’ll have another conversation another time. Thank you to our listeners for tuning in. If you have anything you’d like to add to the conversation, we’d love to hear your feedback, comments and questions. 

(41:43):
You can find us @PLATOTesting on LinkedIn and Facebook, or on our website. You can find links to all of our social media and websites in the episode description. And if anyone out there wants to join in on one of our podcast chats or has a topic they would like us to cover, please reach out and talk to us. If you are enjoying listening to our technology-focused podcast, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next time.