Testing deadlines rarely unfold the way we plan, and in this storytelling-focused episode of PLATO Panel Talks, host Mike Hrycyk brings together a collection of real-world experiences from the PLATO Team that prove just how messy, human, and meaningful project scheduling can be. Testers share moments when a single tough call halted a major release, times when endless manual regressions pushed teams to their limits, and experiences where automation became the unexpected hero that transformed an impossible timeline into an achievable one.

These stories move beyond the mechanics of testing to highlight the emotional and collaborative side of the work: late nights spent in the trenches together, the frustration of tedious tasks that desperately needed automation, the tension of high-stakes outages, and the deep sense of accomplishment that comes from solving problems side by side. Across these experiences, a common thread emerges: no matter how carefully you plan, schedules will shift, deadlines will slip, and surprises will happen, but navigating these challenges together builds stronger, more adaptable teams.

If you’ve lived through your own unforgettable deadline or have a story that shaped your testing career, we’d love to hear from you. Connect with us on PLATO Testing’s social media or on our website!

Can’t use the player?

Listen to this episode on Spotify

Episode Transcript

Mike Hrycyk (00:00):

Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today we’re going to do something a little different. Rather than engaging a panel, we thought we’d talk about the trials and tribulations of project scheduling with a testing focus. I think it’s important for people to realize that project scheduling doesn’t always go smoothly, but even when things go pear-shaped, there’s a light at the end of the tunnel.

(00:21):
As big believers in the idea that shared knowledge is knowledge gained, we think that sharing stories of scheduling hardships will help you realize that you’re not alone, and also that there is much to be gained from scheduling mishaps. To this end, we have gathered stories from a number of testers of their fondest, or more probably, least fond scheduling trials. And to kick us off on our testing deadline stories journey is Sarah Savoy.

Sarah Savoy (00:45):

Hi everyone. First of all, I’m Sarah Savoy. I’m PLATO’s Vice President of Delivery in Gatineau-Ottawa and Sault Ste. Marie. Looking back at my testing career, I do have one deadline story that stands out in particular. It happened years ago. I was working on a large project team as the only third-party tester within the team. We were working on a high-priority project for a client, and there was a lot of pressure to deliver. So, the team was working many evenings and weekends as the deadline approached. I remember it was a Friday, at the end of the day, and I was testing the final remaining defect in the release build. I had the lead developer and the project manager literally looking over my shoulder, and everybody was pretty much holding their breath waiting for that final thumbs up. And I was doing the testing, and I could not give it. I could not give the thumbs up. I found an issue in the build that ended up being a showstopper, and you could almost feel the air go out of the room.

(01:53):
The project manager had to call up the client and tell them that we couldn’t deliver that day. It was really disappointing for the project team because of all the effort that everyone had put in leading up to that moment. But for me, actually, that moment ended up being an important one. I really had to step out of my comfort zone to say, “No, the severity of this issue is too big to let slide.” And that project manager told me afterwards that he really respected that, and respected that I didn’t minimize the impact of the issue. So, that stuck with me. That’s why that story came to mind. It was a tough day, but it really taught me something that I remember to this day: that integrity in testing does mean having the courage to speak up, even when it’s out of your comfort zone.

Mike Hrycyk (02:50):

Thanks for that, Sarah. That’s a really important thing for QAs to take out of that story, and I’m so glad the PM said he respected you for it. The point is that everyone vilifies the tester for finding the problem, as if it’s your fault that someone else made a mistake, whereas we should be the heroes because we caught it before it went public and made the company look really bad.

Sarah Savoy (03:15):

Absolutely.

Mike Hrycyk (03:17):

We have to help ourselves remember: listen. If they think you’re a villain, that means you’re a hero. Great story. Thanks, Sarah.

Sarah Savoy (03:24):

Thank you.

Mike Hrycyk (03:25):

The next story we have is one relating to global logistics from Evert. Go ahead, Evert.

Evert Garcia (03:30):

Hi, so I’m Evert Garcia. I’ve been with PLATO for three years now. In terms of the testing deadlines that you mentioned, the one I always keep in my core memory is really the first project I had when I was beginning in software testing. That was way back at Accenture, in 2006. I was just starting as a QA, and the client was a big multinational shipping and logistics company in the US. They had this code in terms of expectation setting, where we should deliver as fast as they deliver their shipments. So, it was kind of high pressure for us. I always remember the quote, “No pressure, no diamond”, right?

(04:08):
So, the application we were testing was a Microsoft application, and we conducted a monthly regression test on it. And imagine, back then in 2006, there was no automation yet, so everything was manual. Initially, it took us five days to finish, but eventually we were able to finish it in three days every month. It took that long because we needed to test it on different Windows OSes, like Windows 95, 98, 2000, NT, ME, XP Home, and XP Pro, which was the current one back then, along with 15 different languages as well. As far as I remember, there were English US and UK, then French, Polish, Japanese, and Chinese in different dialects, like Cantonese and Mandarin. It was a really tight deadline because we were always there Monday to Sunday, almost overnight, working on it. It’s always in my core memory, and I keep thinking, what if we were doing that today, and what kind of automation would we use? But it’s a happy-ending story: they were very satisfied with what we did, even though all we had was a test lab room with different machines, and different virtual machines as well. So, yeah, a tight deadline that’s really in my core memory every time.

Mike Hrycyk (05:26):

So, people tell stories like this, and they go, “That’s horrible. How did you live through that?” But one of the things I’ve always noticed is that when you get in the trenches with someone and put in a lot of hours, especially those nighttime hours, there’s something special in the memory you come out with at the other end: that you did it together and accomplished something real. And even if it’s silly that you had to do it, you still come out with that accomplishment and that memory.

Evert Garcia (05:49):

True. I agree, and I always share this story. When I moved to IBM, my second company, in the Philippines, I would always share it with the junior testers, the entry-level hires. It really amazed them in terms of the patience that we always had, and that’s really one of the skill sets they need. In testing, we need patience, because of the repetitive activities, and attention to detail, really, even though it takes a while. But testing is a process. You don’t need to speed it up. We already have some automation now, but again, it’s a process. I always say software testing is half art, half science. You always need to analyze everything, but it’s also like an art, where you need to prepare how you’re going to execute it.

Mike Hrycyk (06:33):

Thanks, Evert. That was a great story. For our next story, I turned to one of PLATO’s great Automators, Jordan Coughlan.

Jordan Coughlan (06:39):

Hi, I’m Jordan Coughlan. I’m a tester at PLATO. I’ve been a consultant here for about eight years now. So, there was a project years ago, and it was so long ago now that I don’t really remember most of the specifics, but there was a large number of configurations that needed to be set in a very specific way. I wasn’t initially involved in the project, but it was way behind schedule, and so a bunch of people were asked to come in and help work on it during the weekends. I ended up donating a weekend or two to it. There was a team of maybe 20 people altogether. We were manually comparing an expected value in a set of Excel sheets to what we were seeing in an application, and there were thousands of pages that needed to be checked, and they were all basically the same. It was just truly awful. It felt like there was really no end in sight.

(07:21):
There’s no way we caught everything just because it was so boring, and you can only look at it for so long before you just glaze over. This was the exact sort of thing that should have been automated just because it was so monotonous and important. I’m only a person; I only contributed a couple of days to it. There were people who were doing that all day for about a month, or maybe more. A script would’ve produced a way more trustworthy result. More importantly, it would’ve saved the group of us from losing our sanity for a couple of weeks.

Mike Hrycyk (07:45):

To your knowledge, did it launch okay?

Jordan Coughlan (07:48):

No idea. I think so. I hope so. It was eight years ago now, or something like that.

Mike Hrycyk (07:53):

That’s tough. It’s entirely transactional. You’re not embedded in the project, you don’t know what’s going on, and it’s just, “Here, compare these things.”

Jordan Coughlan (08:02):

And it was all completely new to me. I was out of my element. Really, the value that I provided was very minimal, just because I felt like I was wasting the time of the people who were on that project and were familiar with the application.

Mike Hrycyk (08:12):

Well, I think there’s a lesson in that: leaders, testing leaders, have to make sure that when they’re driving for a deadline and having people do a lot of work, they’re not just ticking boxes, not just wanting to mark it as complete, but are still making sure that they’re focused on getting value out of things.

(08:30):
So, first off, yes, you’re right. They should have made a decision about automation in the first place, because with human eyes looking at line after line comparatively, you’re not going to get the added quality that you want. But at the same time, if it’s not going to provide the value, then it’s just cost, it’s just extra, and it’s not very valuable. But it’s a memory you have, and I’m sure it drove you towards your automation career and made you embrace it a lot more strongly.

Jordan Coughlan (08:54):

Those were really a couple of weeks that stand out.

Mike Hrycyk (08:57):

Thanks, Jordan. Definitely some stuff to think on. It might sound a little hokey to say it out loud, but I really believe it’s the weeks that stand out that help us to find who we are as testers. For our next story, we have the first PLATO employee working in Vancouver – Afshin Shahabi.

Afshin Shahabi (09:12):

Hi, my name is Afshin Shahabi. I’ve been with PLATO for the past 15 years. I’m a senior consultant and have been involved in more than 50 projects with PLATO. Most of my career has been spent on the client side, working with different clients. One client that I would like to describe is a financial institution that I worked with. They had a financial system for customer profiles, and the follow-up with those customer profiles was expected to happen through an application. There are some files that are received from the customer, and they have to upload that information into Azure. The system was live all the time. We were testing the system while it was actually on CI/CD, so it was the pre-production environment, and we were testing on that. And I was assigned a hundred test cases with a three-day execution time. The test cases were already written, and we were supposed to finish them within three days, all one hundred.

(10:16):
The first time we tried, the record I got was six days, and then we tried again. It went down to five days, and then the next time to four days. But we never reached three days, no matter how hard we tried. The system required some files for Azure to ingest, and those files were Excel. Each customer has a line of 55 fields for each test case. There’s a customer, and you need to actually adjust those 55 cells; almost 35 of them will somehow need to be tweaked and then uploaded to Azure. Then you wait 5, 6, 7, or 10 minutes before it actually appears in the application. Most of our time was not spent on testing the system. We were spending it on preparing the data to feed into the system so that we could actually go into the system and pass/fail the situation. So,

(11:08):
I came up with the idea that – actually, they had tried it before, with six or seven testers, including the QA manager, and they said that we couldn’t automate the process of writing the spreadsheet with all the data. And I said, give me a chance, I’ll prove that it might be doable. They gave me a chance, and they provided me some time, and I first developed it with SQL Server scripting. It produced the 50 test cases, like 50 rows, each time based on the day and the time that I wrote in the script. It was producing the latest updated spreadsheet within 30 seconds, and all I had to do was grab it from the SQL Server, modify the name of the file so that the name represented the day and the customer’s information, and then upload it to Azure.

(12:03):
So, the 50 rows were ready to be uploaded within a minute or two, instead of each one taking 10 minutes, as before. Each row used to take 10 minutes for me to adjust, and sometimes Azure was not available and took time to actually ingest them. That was all lost time, and I compensated for the lost time and basically had 50 test cases ready in a matter of two minutes. They were so happy that they said, Okay, can you change it to a Java application? It took me three to four days. I worked on that, and I changed it to a Java application, and the Java application was doing the same thing. So, we now had a system that provides and calculates all that stuff in terms of the data needed. We uploaded the data, so we tested that too. So, I used it, and there was another coworker with me; she used it too, and we were happy. We proved that it could be done; it just needed some time, some thought, and some skills to actually put it all together and provide the data feed for the system for them.

Mike Hrycyk (13:12):

Did you get it down to three days?

Afshin Shahabi (13:14):

We did. Down to two and a half days. So, we tested it a few times, and they were so happy, and we proved that by adjusting some processes in the execution and introducing automation into the execution, we could actually deliver on time, even in less than the predicted time.

Mike Hrycyk (13:36):

Thanks for that story, Afshin. I really think that one of the unsung heroes of automation is creating test data and setting up test scenarios so that you can proceed with your manual testing. The time savings there, as we just heard, can be immense. Automation isn’t just a tool for replacing testing effort; it’s also a great tool for increasing testing efficiency. And now we hand you over to me, where I talk about how even the best-planned schedule in the world will sometimes blow up anyway, and you have to work through it to be successful on the other end.

(14:05):
So, this goes back a number of years. I was a QA Director at the time, working at a retail company that had as a client a very, very large retailer out of the US. What we did was work on photo websites. So, a place where you upload your photos, maybe get prints done of them, maybe get them put on a mug or whatever. And we had been working on a brand-new, re-envisioned version of the photo book product, right? So, you upload a whole bunch of photos and tailor-make your own photo album that you get printed, and then you hand it out to all the relatives or whatever. So, there had been a massive push to get this photo book redesign out. We weren’t pushing for the Christmas rush. We were pushing for a month before, so that people had time to get these ready and made for their Christmas.

(14:49):
That itself is its own story of pushing to get it ready, because in that environment, the testing includes physical representations, right? You have to build the stuff in the software and make sure all of that works, and then you have to ship it off to the fulfillment vendor, who prints it, and then returns it, right? There’s many cycles of that to get it because it’s really fiddly, all those little apertures, and sometimes you want a curved picture, and all this stuff. But that’s not the point of this story.

(15:14):
We had gotten all of that done. We had a product that was ready, and in parallel with this retailer, we’d been negotiating how to release it. So, most of our releases with that retailer were done in the middle of the night. We hadn’t yet figured out daytime releases. I think that client was big enough that we had 20 servers that ran in parallel. So, when you, as a user, logged in, you might end up on any one of those servers. So, in a normal release at 11:00 PM, we would take two of those servers offline, update them, and bring them back online. QA would do a test of those, see if it was okay. If there were problems, we would be fixing that, and that wouldn’t be impacting your average customer.

(15:52):
Unfortunately, with the photo book release, we were going to change a bunch of stuff in the backend, including the database, some pretty important stuff, which meant we needed an outage. So, we went back and forth. We didn’t want to do it all in the middle of the night because that was also hard for people. So, we negotiated with them, and we figured out 7:00 PM on a Friday evening, because that was when the fewest people were building photo books. And so, we negotiated all that. We had everyone lined up, and we didn’t do our releases from the office; we did them from home. It was the only sort of thing we did from home. Everyone was on Messenger chats, and we would have a QA chat, a deployment chat, and everything was good.

(16:29):
And I was the push master. I was the person responsible for making sure everything came together and worked really well, and then, if something went wrong, figuring out the path to get it right. Well, something went wrong. We got started, we had the right people, and we had taken the sites down, and the deployment seemed to go okay. But when it came time to bring the site back up, the entire site did not come back up, and we did not know what was wrong. And so, I had the right people there to start digging into it, and we started, and we still couldn’t figure it out. So, we escalated. We had the entire data team there helping. The CTO was there helping. We had a bunch of senior developers, so I think there were four different chats that I was running, with different people talking about different things and trying different things, and it continued through the night. We started having people sleep in shifts; they would go and sleep for a couple of hours. We had the CEO up, and he was talking to someone at the retailer to make sure they understood and were handling all the yelling, so we didn’t have to. Around 2:00 AM, we got the rest of the site back, which was good, because there were lots of other things you could do, just not photo books. We kept working, again in shifts. I was the only person who didn’t vanish. Once the CTO got there, I think he stayed except for one hour, but I was there until around noon the next day.

(17:46):
So, Saturday at noon, we figured that we kind of knew what it was. We had to do some tests to figure it out. I had a family commitment at 3:00 PM, an hour and a half’s drive away, and we got to the point where, if I didn’t go, I was going to miss the event. We had it fixed to the point that we did a very basic test, and it was going to work. So, I handed it off to one of my seniors to stick it out, and I walked away, but I kept track as I was driving to the event. And it all worked out, it all came back, but that outage time was lost revenue potential in the hundreds of thousands. There was a reason there were 20 servers.

(18:21):
So, yeah, they’re called war stories for a reason, right? Because of the bonding that comes from people who go to war together. And I think there’s value in stories like that. If we had just pushed, and it went a couple of hours, and we’d all come back and everything worked, there’d be a tiny little bit of bonding that comes with that, because you have success. But having that failure that you fought together to defeat and figure out, that really does bring a team together.

(18:44):
This story and its theme of hardship experienced together building a stronger team segues nicely into our next story, where Nicole tells us of her experiences scheduling a team in a pandemic.

Nicole LeBlanc (18:55):

Hi, my name is Nicole LeBlanc. I am a senior manager at PLATO. One of the most memorable stories from my career would be back during the COVID-19 pandemic, when I was working with the provincial government as a QA tester. When COVID hit, I had been working on an application for the department I was assigned to, which happened to be [the Department of] Early Childhood Education. So, that was one of the departments where many things happened very, very quickly. We had daycare subsidies, a bigger need for daycare registrations, and device subsidies that people could apply for because their children were at home trying to learn remotely. We just had a shortage of people in general. A lot of people had gotten sick, so the priorities were shifting and changing, really, by the hour. You did not know from one day to the next what was coming. People were stepping up. It very much became an environment where there was no “that’s not my job” attitude anymore. People were stepping in to do what needed to be done. Many of us were very far outside of our comfort zones, and the work that we were doing really, really was important.

Mike Hrycyk (20:14):

And so, I mean, in a really, really stressful time, being a part of a team making those changes, was that empowering? Did that help make you feel like you were making things better?

Nicole LeBlanc (20:26):

It absolutely did. It had the stress of a go-live with the feel-good adrenaline of a go-live. It’s something I’ve kind of talked about before, but it was for months and months because when one thing was finally resolved, something else was coming out of the woodwork that had to be dealt with. Things were just changing day to day, so every day was different. We didn’t know what was going to happen. It’s incredibly difficult to have an application that’s scalable when you really have no idea what’s coming next.

Mike Hrycyk (21:02):

I think that no one wants to have a pandemic, but the experiences we all had going through it have been life-changing and reaffirming, especially around careers and testing. What we’re doing is important when the things that we’re doing move the dial. I’m not glad we had to go through it, but I’m glad that we did and that we came out of it. So, coming out of that experience, was there anything you learned that makes you a better tester?

Nicole LeBlanc (21:25):

What I learned from that whole experience is that flexibility is everything, and adaptability is everything when you’re on a project. When it comes down to it, when things start to go not the way that you expect, if you’re not able to adapt to what is going on, then things are not going to turn out well.

Mike Hrycyk (21:48):

Absolutely. As a company, we also participated in a couple of pushes to get vaccination tracking software done, and those were big, complex pieces of software that we were starting almost from scratch in most cases, but they had to be out and delivered in weeks, whereas months to years would have been the normal estimate. And we succeeded. We didn’t have to work 24 hours a day, but people worked long hours and worked hard, and those became really important exercises in prioritization, in getting a minimum viable product out that would work, because it was saving lives. It was an interesting time.

Nicole LeBlanc (22:27):

It really was, and we often would make jokes – there was some truth to it – that we found out about requirements from a press conference rather than from a business analyst. It was not uncommon that we would be watching the daily press conference, and then we would find out, okay, this is what we’re doing for the next 12 hours.

Mike Hrycyk (22:53):

That is, without a doubt, the furthest from best practices for requirements gathering that I’ve ever heard, but it’s brilliant.

Nicole LeBlanc (23:00):

I mean, it’s not ideal, and obviously, there were some communication issues, but I guess that’s to be expected in that situation. There were so many different levels of people and different things going on that it was nearly impossible to have everything flow the way it should have.

Mike Hrycyk (23:20):

But in the end, we all pulled through.

Nicole LeBlanc (23:23):

We did.

Mike Hrycyk (23:25):

I would like to thank all of our guests today for sharing their testing scheduling stories. I think a couple of the themes from today are important to reflect on. There’s a quote attributed to Mike Tyson: “Everybody has a plan until they get punched in the mouth.” No schedule is set in stone; they’re dynamic and change as circumstances evolve. Our job is to not panic and just figure out the best path forward. The other important theme is that working through scheduling trials together and having each other’s backs brings us together as a stronger, more bonded, and cohesive team.

(23:54):
If you had anything you’d like to add to our conversation, we’d love to hear your feedback, comments and questions. I would also love to hear your stories if you have a good scheduling story! You can find us at PLATO Testing on LinkedIn, Facebook, or on our website. You can find links to all of our social media and websites in the episode description. If anyone out there wants to join in on one of our podcast chats or has a topic they’d like us to address, please reach out. And if you are enjoying listening to our technology-focused podcast, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next time.
