Are you a software tester looking to level up your skills and protect your organization from cyber threats? In this beginner’s guide to cybersecurity, host Mike Hrycyk sits down with experts David Shipley (CEO, Beauceron Security) and Joel Vautour (CSO, PLATO) to demystify the world of cybersecurity. Learn about the latest threats, discover how to stay ahead of the curve, and understand the critical role of software testers in building resilient systems.
Episode Transcript:
Mike Hrycyk (00:04):
Hello everyone. Welcome to another episode of PLATO Panel Talks! I’m your host, Mike Hrycyk, and today we’re going to talk about cybersecurity with our panel of cyber experts. Cybersecurity has always been closely related to software testing. Every tester I’ve ever trained, you teach them certain things and certain capabilities around it, but cyber is also a very deep well of knowledge. So there are connections, and I think it’s always good for testers to understand better what cyber is. It’s also a vastly evolving field, right? You can learn something new about cybersecurity every day, and it’ll be stuff that wasn’t there the day before. So, keeping up on it is also important and good for testers. I think this is going to be a really interesting topic. So David, why don’t you introduce yourself?
David Shipley (00:51):
Sure. My name is David Shipley. I’m the CEO and Co-founder of Beauceron Security. The work that we do is help motivate people to make good choices about technology so that they can reduce their cyber risk and thrive in a digital world. We do that through a software-as-a-service cybersecurity motivation and awareness platform that gamifies the entire experience and gives people a Fitbit-like or Apple Watch-like experience, but applied to cybersecurity: helping people know where they are, how they can improve, and recognizing when they learn from mistakes.
Mike Hrycyk (01:30):
Awesome. Alright, Joel, tell us who you are.
Joel Vautour (01:33):
Yeah, so Joel Vautour, I’m Chief Security Officer here at PLATO, which is, of course, a software testing company in the technology services space. I have 30 years of experience in the IT industry. I’ve held senior management roles, from running a security operations centre to director of security, and worked at IBM for quite a period of time in the security space. So, I have been quite focused on security in my career.
Mike Hrycyk (01:59):
Great. In the interest of full disclosure, we have used Beauceron for our internal security training, not for software testing, but so that we’re all aware of security, and I learned a lot about phishing through the Beauceron app. It’s been really enlightening and useful, and the gamification makes it a bit more fun.
Alright, let’s jump into the questions. To give our listeners out there in listener land a bit of insight, we do have a prep conversation before we record these episodes, and it helps me define the questions that are going to be asked. David mentioned something in that discussion that I wanted to bring up here. Cyber gets talked about a lot; the word cyber is everywhere. But David gave what cybersecurity means some context that I hadn’t really seen before. So, I’m going to come back to you, David, to get us started by talking about what cyber means in the context of our discussion.
David Shipley (02:50):
And I think this is really, really important because it gives us hints as to why this has not been the success story that we would all want it to be. You look at the headlines today, things are worse than they’ve ever been, so why aren’t we making progress? A lot of people hear cyber and think technology, but the word cyber actually has three critical components: people, technology and control. Now, how do we know that? Well, Norbert Wiener, an MIT mathematician and philosopher, coined the term cyber when he created the field of cybernetics. But Wiener borrowed it from a Greek word, kubernetes, which means the helmsman or the steersman on a ship. And Wiener chose this Greek root word as the basis of his new science deliberately, because it illustrated the three elements he wanted to study.
So, if you think about this ancient Greek ship in your mind’s eye, at the back of the ship you’ve got the human being. That’s a critical element. Then you’ve got the oar, the rudder, the ship’s steering wheel; that’s technology. And then you have the third element, control. Now, we believe that it is a positive story for humanity when people are in control of technology. That’s the story of progress, of innovation. But the story of technology in control of humans, whether it’s the Terminators and Skynet of science fiction or the cold, hard, tragic reality of Tesla Autopilot errors killing people on the highway or the Boeing MAX 8 disasters, fails, and it fails for very predictable reasons. So, if we want to change the story, we’ve got to focus on people, and that requires a lot of work.
Mike Hrycyk (04:35):
It really does. But this is one of the reasons I love that we’re having this conversation today: testers, and people outside of software testing don’t always get this, spend a lot of their time putting themselves in the shoes of end-users and thinking like end-users so they can figure out where the bugs are going to come from. And so, as soon as you make the statements you’ve just made, I’m like, wait, that really should be resonating with the people who are listening to this podcast. Joel, anything to add around what cyber means for you?
Joel Vautour (05:04):
Well, from a people approach, I think David said it well. From a testing approach on my side, when I think of cyber, I think of vulnerabilities, of potential risks. From our testing point of view, we are trying to be not just functional testers but people who make sure something is resilient against any risk or potential threat. So, I’ll add that element to it; that’s what I feel cyber adds to the more technical side of things.
David Shipley (05:37):
And just to connect that vulnerability point: software does not create itself. Software is written by humans, and it reflects our incredible creativity but also our very human flaws. When we’re tired, we make mistakes. When we’re hungry, we make worse mistakes. When we’re hangry, hungry and angry, we make even more mistakes. FYI, don’t email when you’re hangry; your body is setting you up for failure. So by its nature, our technology is always a reflection of us as humans. And I love the point that was raised earlier that the goal of a tester is to put themselves in the seat of a human. That’s really important. What criminal hackers do is put themselves in the seat of the developer and think, okay, this is what the developer intended, but what are all the ways I could make this do what I want and work around that intention? And they think with creativity around that.
Mike Hrycyk (06:32):
I think that’s a great segue into our next question, because testers also do that. We also put our minds into the head of the developer, because a lot of our figuring things out goes like this: there’s a requirement, and the tester interprets the requirement the way the end-user would have wanted it, but then they also interpret it the way a developer might have read it. Then they figure out the gap, and they help the developer understand what the end goal actually was rather than what was written. It’s not the exact same thing, but there are parallels. Alright, so continuing on with the idea of basics: what is the role of software testing in cybersecurity? We’ll start with you this time, Joel.
Joel Vautour (07:14):
Well, I think software testing obviously plays a critical role in cybersecurity. As I was saying earlier, it’s about identifying vulnerabilities before an attacker does. Testers need to ensure the software is not just functionally working with fewer bugs; we need to be on the lookout for potential cyber threats. So, testers are responsible for how an application behaves under different circumstances, maybe even simulating potential attacks, just to ensure that security standards are met and that both the company and the users themselves are protected.
David Shipley (07:48):
Yeah, I’ll add that the advantage of a good software testing and validation process, and we’ve got a small but mighty QA team inside of Beauceron, for example, is that humans will make mistakes even with the best intentions. We are going to have flaws. It’s about finding, fixing and learning from those flaws, and particularly finding out, hey, you didn’t do really good input sanitization here; okay, here’s how you can learn from this and not make that same mistake again. Building that collaboratively, collectively. Another example: have you ever tried to edit your own writing? If you have, what you’ll find is that most people are terrible at it, particularly if you’re in a rush, because oftentimes your brain will insert words into your paragraphs that you didn’t actually write. And so you’re like, oh wait, I’m missing a word here. Or you’ll read it five times and not see the typo, because your brain is like, no, no, I’m good, I’m not going to waste the energy reading every single letter, we can move on. Good software QA folks are going to find those built-in brain biases and flaws in thinking and help people learn from them.
Mike Hrycyk (08:59):
So, I think maybe we need to make a distinction in this conversation too, and that comes into the next question. On one hand, there are security testers, and the security tester is a deep specialization; it includes ethical hacking, penetration testing, and so on. Then there’s what I’m going to call the average tester, who doesn’t have that depth of skill, but there is still a role for them in ensuring cybersecurity. So, let’s talk about that a little bit. Most of our listening audience is on that side, right? They’re on the more average tester side. So David, what do you think an average tester’s responsibilities are around ensuring cybersecurity?
David Shipley (09:40):
I think when testing things it’s always important to understand the context it’s actually going to be used in. So, if you’re testing something that’s going to be capturing highly sensitive personal information or financial information, that’s going to be really important to giving people the information they need to make decisions, maybe about their health or other things. A) Making sure that the system works. The context here is, if I’m providing health information for a reason, I need to make sure that information is right. So working is important. That it works clearly, that people put the right information in there and only the right information can go in there, and that the system does with that information what they expect it to do, and does it in a safe way. I think those are still in the realm of the software functioning as designed, and ideally making sure that security by design is within that scope.
Now, the distinction on the penetration tester side is: okay, it functioned within scope, but can I break out of the scope? Can I do things that are out of context and not in the best interest of the individual who gave the information or the organization? And I think it’s a whole continuum. So, can we reinforce really good security by design, OWASP Top 10 principles, in the first-pass QA? If we do that really well, and we notice, hey, this has got bad input sanitization, deal with it right then and there. Don’t give them that easy win, and I can hear the pen testers screaming at me right now, don’t give them the cheap wins. There’s nothing they love more than finding a whole bunch of easily scannable vulnerabilities, filling the report, filing it, and moving on to the next thing. Make them work for it. Make them really work for it. And you know what? When you’re doing that, you’re making the criminals really work for it too.
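To make that concrete, here is a minimal sketch of what reinforcing those principles in first-pass QA can look like: a test that throws a few classic hostile inputs at a form field and asserts the application rejects or neutralizes them. The endpoint, field name, and expected status codes are hypothetical, so adjust them for the app under test.

```python
# A sketch of a first-pass QA check for input handling. Everything named here
# (URL, field, status codes) is illustrative, not from the episode.
# Requires the third-party "requests" library.
import requests

SUSPICIOUS_INPUTS = [
    "<script>alert(1)</script>",   # reflected XSS probe
    "' OR '1'='1",                 # classic SQL injection probe
    "../../etc/passwd",            # path traversal probe
    "A" * 10_000,                  # oversized input
]

def test_display_name_rejects_suspicious_input():
    for payload in SUSPICIOUS_INPUTS:
        resp = requests.post(
            "https://staging.example.com/api/profile",  # hypothetical endpoint
            json={"display_name": payload},
            timeout=10,
        )
        # Expect a validation error, and never the raw payload echoed back.
        assert resp.status_code in (400, 422), payload
        assert payload not in resp.text, "input reflected unsanitized"
```

Nothing here requires pen-testing skill; it is ordinary functional-test tooling pointed at the cheap wins David describes.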
Mike Hrycyk (11:34):
Well, and they want the easy win. So if you make it too hard to get into your place, they’ll go after someone else who isn’t making them spend the time, right?
David Shipley (11:41):
Well, and just to add onto that, the more an attacker has to work at a system, the higher the probability that a detection is going to happen. So imagine, in your mind’s eye, that a hack is happening below some theoretical cyber radar level. They’re under the radar, but as they keep going at it and going at it, the chances that they pop above that radar and get seen increase. So, when you increase the difficulty level, you’re frustrating the hell out of them, yay, but you’re also greatly aiding your detection systems. Like, hey, wait a second, why would this user be pounding at this page 500 times in 15 minutes? Whoa, whoa, whoa. Make ’em work.
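At its simplest, that radar is just counting. Here is a minimal sketch of a sliding-window counter that flags a user hammering a page; the threshold, window size, and names are illustrative rather than taken from any real detection product.

```python
# Sketch of the "pop above the radar" idea: per-user request counting over a
# sliding window. Thresholds and names are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 15 * 60   # 15-minute sliding window
THRESHOLD = 500            # a request rate that should never look like normal use

hits = defaultdict(deque)  # user_id -> timestamps of recent requests

def record_request(user_id, now=None):
    """Record one request; return True if this user just crossed the radar."""
    now = time.time() if now is None else now
    window = hits[user_id]
    window.append(now)
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > THRESHOLD
```

Every extra hurdle you add in QA forces an attacker to generate more of exactly the traffic a check like this notices.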
Mike Hrycyk (12:21):
It brings to mind the analogy of escaping a bear. You don’t have to be able to run faster than the bear; you just have to run faster than the other people you’re with. Joel, anything to add on the average tester?
Joel Vautour (12:36):
Yeah, the average tester certainly doesn’t need to be a pen tester. They’re not cybersecurity experts, but still, every test they run is an opportunity to uncover a security issue, as far as I’m concerned: identifying an odd behaviour, a gap in access or control, or in how the data is being handled. An average tester, or I would say even a junior tester, is still on the front lines of looking at that application through the lens of a user. Not everybody can look at the application from the hacker’s perspective, but still, every test they do contributes to the overall security posture. So, it’s important that they’re not just finding functional bugs but also trying to uncover security issues.
David Shipley (13:16):
And here’s the thing, does the system that you’re testing do exactly what it’s supposed to do and only what it’s supposed to do? And in that line of inquiry, when you discover it did something it’s not supposed to do, whether perceived beneficially or not, that’s a problem, right? Because that introduces uncertainty into a system.
Mike Hrycyk (13:41):
We use the word risk a lot in software testing. And so, you’re inserting risk. Undocumented features equal risk.
David Shipley (13:48):
I mean, a simple test? If I were an average tester and had that skillset, and I say average tester like it doesn’t require a fine-tuned sense of curiosity and skill; I think it does. I think we undervalue testing at our folly. And I would say this: if I were so inclined, I would do Johnny DROP Tables at every opportunity for input I got, right? If I could blow your SQL database up, I did my job, right? And it’d just be funny. I’m just saying I would have a list of things I would do consistently, for my own amusement, just to see what I could do. And that, to me, is pretty valuable.
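For anyone who hasn’t met the reference, David is riffing on the xkcd “Little Bobby Tables” strip. Here is a minimal, self-contained sketch of why that payload is such a cheap win against naively concatenated SQL; the schema and input are illustrative, not from any real application.

```python
# Demonstration of the injection David describes, against an in-memory DB.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES ('Alice')")

# The classic xkcd payload, supplied as "user input".
user_input = "Robert'); DROP TABLE students;--"

# Vulnerable pattern: user input pasted straight into the SQL text.
# executescript() happily runs the smuggled second statement.
conn.executescript(f"INSERT INTO students VALUES ('{user_input}')")

try:
    conn.execute("SELECT * FROM students")
except sqlite3.OperationalError as err:
    print(err)  # "no such table: students" -> the tester just blew up the DB
```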
Mike Hrycyk (14:22):
Well, and so, I think you’ve hit the nail on the head for my next question, which is a little off script. My basic standard of security awareness for testers has always been: be aware of what the OWASP Top 10 is, and be aware of things you can do to help ensure that the OWASP Top 10 is understood and respected by the developer. But maybe a good thing to do in this conversation right now is to tell our people what the OWASP Top 10 is, just in case they don’t know. So, pop quiz. You don’t have to give me each of the 10, but what is the concept of the OWASP Top 10?
Joel Vautour (14:55):
Great. So obviously it says top 10, but there’s definitely a lot more to testing a software application. The OWASP Top 10 is basically a list of the most critical security risks to a web application. It includes things like injection attacks, broken authentication, and sensitive data being exposed in some way. It’s not phishing by any means; it’s the exploitation of vulnerabilities in the application itself. There are weak points in the code that could be manipulated, and we want to make sure those threats are mitigated in some way. So the OWASP Top 10 provides a guideline, a checklist you might say, to go through, covering what David called the easy ones earlier. The ones that should always be covered in your application and shut down; like a locked door, it should not be able to be opened.
David Shipley (15:51):
When I think of the OWASP Top 10, these are your basic first aid. What I mean by that is these are the 10 things most likely to get you killed, to get your application owned, if you can’t stop the bleeding. So if you do these well, that’s great. Does that mean it’s the entire scope of medicine? No; it is not surgery. It’s the basic lifesaving stuff. Or take it another way: fire prevention does not make you a fire marshal, right? Having smoke alarms, having fire extinguishers, practicing exit drills, these will actually save your life in the event of a bad thing. Does that mean I am now a qualified firefighter or a provincial fire marshal? No. These are the floor, not the ceiling. It’s not like, hey, I did the OWASP Top 10, the application’s secure! You, my friend, have built a reasonably reliable application. Good job. Now there’s more work we need to do to go even further. But if you’ve got these top 10, you’ve covered the top 10 easy ways in.
I got started in this field because the University of New Brunswick got popped. It got popped by a hacktivist group called Team Digi7al, and the root of that vulnerability was a custom application that had been developed for the news site; the way it was written and the way it queried the SQL database allowed for SQL injection. This is what we somewhat stereotypically call a script-kiddie-level exploit, and it’s in the OWASP Top 10. But this burned us even more badly than it needed to because of the database architecture, the database provisioning and a whole bunch of other things. That’s what I mean: even setting aside the SQL injection that let the attacker succeed, we still had deeper underlying security issues, and those require additional expertise to identify; ideally architects so you don’t do that thing in the first place, and people who can identify it and fix it if you did. Hopefully that adds a little bit of colour to that story.
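The standard fix for that class of flaw is the parameterized query, shown here in another minimal, illustrative sketch (not the actual application code from the story). The driver treats the bound value purely as data, so no smuggled statement can execute.

```python
# Safe counterpart to the vulnerable sketch above. Schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (title TEXT)")

user_input = "Robert'); DROP TABLE news;--"

# Parameterized query: a placeholder plus a bound value, no string pasting.
conn.execute("INSERT INTO news (title) VALUES (?)", (user_input,))

print(conn.execute("SELECT title FROM news").fetchone())
# ("Robert'); DROP TABLE news;--",) -> stored as a harmless string
```

And per David’s point about provisioning, defense in depth matters: an application database account that lacks DROP rights limits the blast radius even when an injection does slip through.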
Mike Hrycyk (18:04):
Another point to be made for people is that it’s great to learn what the OWASP Top 10 is this year, but it might not be the same next year, or the year after, as the world evolves and security evolves. So, don’t think of it as static knowledge. Don’t lose the knowledge you gained when you learned it, but pay attention, because it’s going to change. Since I first learned it and did training on it, I think at least four of those items are no longer in the top 10. It’s not that they’re no longer security issues; they’re just not the 10 biggest risks.
David Shipley (18:32):
Absolutely. And by the way, Generative AI – the last list update that I just quickly saw was 2021. Oh, some things have happened since 2021 that are intimately affecting the massive scalability of – what do the Russians call it? Unauthorized penetration tests.
Mike Hrycyk (18:53):
Alright, let’s change our angle for a moment. A lot of what we’ve talked about, a lot of our listeners will have some familiarity with, but let’s talk about phishing a little bit, because caring about phishing isn’t what we normally consider a tester’s duty. So, let’s start with: what is phishing? We’ll start with you, David.
David Shipley (19:12):
Well, you would think this would be a simple answer, but the industry actually does not have a good common definition. What do I mean by that? I can talk to some of our customers in government who’d say phishing is an email targeting a user with a malicious link or attachment, trying to get them to infect their machine, and that’s solely what they consider phishing. My view is very, very broad. Phishing is the expert use of emotional manipulation, that’s key, to get something from someone that is not in their interest, that is going to actively cause them harm. That broader definition acknowledges that phishing fundamentally works by preying on our emotions, and that it doesn’t just come in email. It can now come in text messages, in voice calls; I refuse to recite all the variations of phishing that certain industry folks have forced upon us, because it’s still all phishing at the end of the day if it’s emotional manipulation to get me to do something that causes me harm. Which means a lot of security professionals call things spam that I consider phish. And it’s really, really important to understand: if phish is the expert use of manipulation to get me to do something that causes me harm, what about Amazon deals or the latest Temu email targeting my wife? Now, I would argue that it causes me financial harm when Temu or Shein hits up my household, but the reality is that it is a legitimate commercial email selling legitimate products that we may or may not need; it does not actively cause me harm. That’s spam. So it’s really important to have that distinction: phish is about causing harm. And by the way, a lot of folks say, well, if phishing doesn’t infect the endpoint, there was no harm. The 2.9 billion dollars, I’ll say that again, the 2.9 billion dollars in wire transfers from global businesses into the hands of criminals, empowered by that thing we call business email compromise, aka phishing, is three times the losses from ransomware. So, I would say it’s important that we focus on harm.
Mike Hrycyk (21:21):
Yeah, absolutely. Joel, anything to add to that?
Joel Vautour (21:25):
David’s quite thorough on that. I think phishing, and I won’t say just to me, is definitely exploiting vulnerabilities in human behaviour; that’s how I see it. Whether you’re looking at an application, or the design of an application for that matter, or getting an email, whatever it might be, someone is taking advantage of your emotions, as David said, or of how you might react to that application or ad. And it’s quite serious today. I know I’ve heard it already in other podcasts: we used to be able to detect phishing quite easily in the past. With the advance of AI today, it’s hard to really detect them now. We don’t see the spelling mistakes, we don’t see the badly written email, whatever it might be that would let us say, oh, that’s someone who just doesn’t know proper English, and cast it away. Today, it’s just much more advanced, much more manipulative, taking advantage of how people react to whatever it might be.
David Shipley (22:28):
Absolutely. And a couple of different things. Some of the ways we trained people in the past about what phishing is are now coming back to haunt us. Beauceron has been doing a lot of work on what we call cyberpsychology: applying an understanding of psychology, neuroscience and biology, how the brain actually works, to the field of cybersecurity. One of the interesting things from psychology is a cognitive bias called the framing effect: how things are positioned to us shapes how we interpret that information. In the case of phishing and cybersecurity, if we think those are a technical thing, that cybersecurity is all about technology, and that having things like antivirus, email filters and firewalls completely protects us; well, our research has shown, and by the way, this is over millions of phish and hundreds of thousands of people we surveyed about how they feel about cybersecurity and how they perceive these tools to work, that if you think those tools are a hundred percent effective, you are 50% more likely to click on a phishing email.
And also, what I was getting to earlier: there’s another cognitive bias called the anchoring effect. The first time we hear something, it becomes the anchor in our mind. So, when we taught people that phishing contains typos, that became the anchor: a phish has typos. Now it’s very rare for phishes to have typos, and so a lot of people are more susceptible to phishing than they ever were before, because we trained them to look for the old way of doing it. And –
Mike Hrycyk (24:01):
So it’s your fault?
David Shipley (24:02):
Well, we have some things to own as an industry, absolutely. Which is why the work we’re doing on the psychological level, around mindfulness and emotional intelligence, is so important. We did some product development based on research in the United States and built material about emotional intelligence and a mindful approach to phishing. People who take this course are 50% less likely to click on a phish again, which is pretty amazing. So, it’s learning to listen to your gut: if you’re freaked out, stop, pause, and breathe. What we have learned about the brain is that we think we have one brain, and for those listening, we actually have two. We have the neocortex, that fascinating supercomputer, the Mr. Spock that makes rational decisions and all that fun stuff. And then we have the limbic system, the old brain, which in my Star Trek analogy is the James T. Kirk. It’s all emotion, all action and reaction. It’s where we get our drives as humans. When I phish your employees to teach them, I am trying to trigger that old, prehistoric, evolutionary brain, and it can seize control, which is why falling victim to phishing has nothing to do with intelligence. If there’s one thing I can leave your listeners with today, it’s that it is not about how smart your employees are; it’s about how effective that lure is at landing a shot in the limbic system and getting them to react fast. The work we’re doing inside of Beauceron is motivating people to care more about this topic, phishing is a problem, security matters, and also teaching them how to take back control of their own mind. That’s the important thing.
By the way, I’ll leave you with one other stat; we’ve been doing a lot of research lately. People who do not believe that they are a target, that the information in their custody is a target, are 30% more likely to click on phishes. Why does that apply to software testers? If you think that your software testing company is not part of an attack chain on somebody else, let me just get rid of that optimism bias you have baked into your brain, that a bad thing’s not going to happen to me, it’s going to happen to somebody else. You are absolutely a target, and you are important in protecting your firm and, even more importantly, your customers.
Mike Hrycyk (26:19):
Yeah, because you are helping create a system. You don’t have to see the results of an attacker’s work in the next 20 minutes; they can be putting a backdoor into the software. And so, that sort of segues into this question. Everything we’ve just talked about is about risk: how risky you are, and how someone communicating with you might get let in because of that. That relates to an individual doing their job, but it doesn’t necessarily relate to the role of software testers. We’re teaching people the concepts of phishing, and we’re giving them tools to be less at risk of an attack, but a lot of that comes through some method of social communication, whether it’s email or text or whatever. So when we’re doing our software testing and we’re testing an application or a website, etc., do the concepts I’m learning in my anti-phishing training apply to a software tester’s day-to-day tasks?
Joel Vautour (27:17):
Oh, well, of course. You have to think of the behaviour. As we talked about earlier, the application is designed for a certain interaction with the user. If phishing techniques target your behaviour, how you’re going to use that application, then it comes down to making sure the application is coded in a way that can’t be manipulated. So, understanding what phishing is about, you want to make sure, from a GUI perspective, a usability perspective, that the application itself is not easily faked or spoofed; that it’s secure, that while you’re doing that testing the product is sound, and that you’re mitigating those threats in the right way. So yeah, it certainly teaches testers what to look out for.
David Shipley (28:11):
I think understanding how attackers look at targeting your clients and their end users is always going to have benefits. The first benefit for the employee is knowing you are a vital part of the technology supply chain; because the work you do is valuable, it’s a target. And then, if you can take what you’ve learned and incorporate it into your role, you are adding tremendous value to your organization and to your customer. So to answer your question, Mike, I think it’s absolutely vital. It gives a window into an important part of the criminal process. And I think it would be great if everyone saw that as part of their role. If people do not see security as an important part of their role, I can provably show you they are more dangerous to themselves and their organization than I think is an acceptable threshold.
Mike Hrycyk (29:15):
And I think there are two big points there that are important. One, there are a lot of IT professionals who think, hey, I’m really smart, I’m not at risk. Put a giant X across that; you’re actually more at risk. And the other thing being highlighted for me here is that when you learn the concepts and the common ways attackers come in through phishing, then when you’re testing an application you can look at the way we present data and the way we inform people of things, because that might turn around and become an attack.
One of the things I learned maybe a month, month and a half ago is that you have to be careful about the amount of information you put into an out-of-office message, because that message is going to go to everyone. Which means someone who is probing can just send an email, get that bounce-back, take the information that’s in that OOF, that out-of-office, and then send an email to another person on your team. Now they can act like they know things, but they don’t; they just know what you put into your out-of-office. But they can make it sound like they’re actually connected in some way, and that can be their way in. So, having a more complex understanding of how information can be used will help you raise a flag.
David Shipley (30:24):
Yeah, I’ll give you an example. This happened when I was a CISO [Chief Information Security Officer] for a university. An executive was on vacation; it was known through social media and other things. The accounting department got an email: an urgent PO needs to be filled, need to transfer these funds, and, as you know, I’m on vacation, which was true. That little bit of truth, combined with the fact that this was something people would’ve expected from that individual, swung the gate open and made the attackers $50,000. So that matters. And you might think, okay, I’ve got an internal and an external out-of-office. The external one should be extremely limited: “I am absent,” maybe, if you even allow an external one. Question: do you really need to do that? And then on the other side, your internal one; just because it’s internal doesn’t mean it doesn’t end up in the hands of an attacker who is already inside your organization and trying to figure out how to play it. Again, I’m thinking about a financially motivated attack in this example, because having people inside your infrastructure generally ends poorly one way or the other. Out-of-offices are a great example on that side.
Joel Vautour (31:22):
Yeah, when I think of attackers, they often exploit the fact that users are predictable. If you have a sense that a person does a particular job role, it’s very predictable how they might respond to things. Getting back to the tester, I think the same applies to a product. You need to think about how an attacker could, with malicious intent, manipulate this feature, process or page so that a user would predictably respond and give up information.
Mike Hrycyk (31:51):
Great. We are practically out of time, so I’m going to ask one more question. I heard a term the other day, and I think I heard it from you, David: the OWASP Top 10 for humans. Do you want to tell us what that is?
David Shipley (32:02):
So, this is a concept I’ve been working on that actually leans towards the psychology side of things, and I’m going to give you the big four. Very quickly: one, human brains are wired for an optimism bias, so understand that bad things can happen to you. It’s not about how smart you are. I believe that I could fall victim to a phish, because I know that the right phish at the right time, in the wrong state of mind, etc., could get me. Staying humble is a huge win on that side. So, always watch for the optimism bias. Two, I mentioned the anchoring effect earlier. Just because you learned something in the past doesn’t always mean that thing is still true. So, whether it’s learning the next iteration of the OWASP Top 10, think about that. Three, the framing effect. How you’ve been taught about something, how it’s been communicated to you, often shapes how you emotionally react to it. Think about the way information is being communicated to you, and make sure you’re thinking about how it’s being framed and what people are trying to get you to do with it. That’s kind of my human version of the injection attack: make sure you sanitize those inputs. What are you trying to get me to do here? And the fourth one I’m going to call the beginner’s bubble; it’s part of the Dunning-Kruger effect. People with no or low levels of competence in a particular topic tend to vastly overestimate their skills. Stay humble and stay hungry; the optimism bias also plays out in the beginner’s bubble.
And for you IT experts out there, you might be thinking, I’ve been doing this for X long. Do you know the most dangerous time for pilots? The killing zone is between 600 and 800 hours of flying time. That’s when they get complacent. That’s when they get a little slack. And by the way, it doesn’t just apply to pilots; it applies to surgeons. Fun fact: surgeons are at their most dangerous between 15 and 20 surgeries. So, just remember that. Avoid complacency. Don’t think it’s not going to happen to you. Think critically about the way information is being presented to you, and don’t be content just because you learned something back in the day; like spelling mistakes in phishes, it may no longer be the signal it used to be. Those are my OWASP Top 10 for humans that I’d love people to lean into.
Mike Hrycyk (34:07):
I’m going to do something different. Joel, I’m going to give you a different capping question. How can testers help overall security awareness on their teams, with their families, in their work?
Joel Vautour (34:16):
That’s a great question. Spreading security awareness by sharing their knowledge is definitely one way, whether with the team or with their families. What they might learn about phishing, or from having tested an app for a banking system, is obviously key to pass on to the people they know. Within the team, or with the companies we’re working with: ensuring secure coding practices, highlighting potential threats in their testing results, or simply discussing it with the team. Making a more security-conscious culture would be key. And educating their families and communities in the safe practices they’ve learned should become second nature: hey, I learned something new today, pass it on in conversation, through social, whatever it may be. I think testers could be a big part of bringing that security-first mindset to everybody.
Mike Hrycyk (35:05):
Question for you guys. I’ve pretty much given up telling family members on Facebook to stop answering quizzes that ask for their birth dates. Have you guys given up, or do you still say, every single time, don’t answer those questions?
David Shipley (35:19):
I still have hope. I think we have to give people better tools outside of the workplace to maintain motivation for good cyber hygiene. It’s something my company is exploring, actively taking some of the psychological work and the behavioural principles we have into this space, because people do these things because they believe it benefits them. There’s a motivational aspect to it, and you need an active counter to why they shouldn’t do it, an intervention. Ideally, I would love a browser plugin that I could go around installing for my family: listen, if you want tech support from me, here’s why you’re going to have to install this. And that plugin would literally intervene every time there’s a birthday field and say, who are you giving your birthday to? That’s how I want to solve that problem. So if there is anybody out there who has built that magic plugin for Firefox or Safari, or God forbid Edge, I’m kidding, I’m beating on Microsoft Edge, or worst of all Chrome, looking at you, memory hog; you know that Futurama gif of Fry holding his cash up? I will pay you money.
Mike Hrycyk (36:28):
I think you’ve just demonstrated for us the transition from good to evil, where you write a malicious app that uses phishing to get control out there and installs your app on everyone’s machine.
David Shipley (36:40):
Oh, trust me, poisoned plugins have been well thought of before now. So yes, be very, very careful about what goes into your browser. I would love to claim criminal originality on that, but no, I’m trying to do it for good.
Mike Hrycyk (36:54):
So thank you to our panel, Joel and David, for joining us for a really great discussion about cybersecurity and the role of testers within it. It’s a very deep and very broad topic, and we’ve only covered a tiny wedge of it. In fact, David, you gave me a great topic for us to think about next: software is created by humans, and I’m like, yeah, that was true last year; now let’s talk about how we do security for software created by AI. We’re not allowed to dig into that now, we’re out of time, but I think that’s a good future topic. If you, the listeners, have anything you’d like to add to our conversation, we’d love to hear your feedback, comments or questions. You can find us at @PLATOTesting on LinkedIn and Facebook or on our website. You can find links to all of our social media and our website in the episode description. Try not to use that against us in a phishing attack.
If anyone out there wants to join in on one of our podcast chats or has a topic they’d like us to address, please just reach out. And if you are enjoying our conversations about everything software testing, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next time.