Artificial Intelligence is everywhere — but how do organizations move beyond the hype and actually create value? This month, Mike Hrycyk is joined by Emma Heo (Lead for Public Service AI and Data at Accenture Canada) and Emily Cowperthwaite (COO at PLATO) for a practical, candid conversation about what AI really means for businesses today.
Drawing on real-world experience at both a global enterprise and a growing technology services company, Emma and Emily share practical insight on how AI is being used today to improve productivity, streamline processes, and support better decision-making. They also discuss, candidly, how organizations can start small, build AI literacy, and experiment responsibly without losing sight of governance, trust, and business value, and why AI success isn’t measured by the number of models deployed.
Can’t use the player?
Listen to this episode on Spotify (opens in new tab)
Episode Transcript:
Mike Hrycyk (00:00):
Hello everyone. Welcome to another episode of PLATO Panel Talks. I’m your host, Mike Hrycyk, and today we’re going to talk about AI. AI is, of course, the topic that everyone wants to hear about, and I know we generally focus our topics somewhere around testing, but today we’re going to broaden it a little bit. We thought our listenership would want to hear about the ways that AI is coming into companies, whether those include testing or not. With that in mind, I’ve brought a couple of experts to join our conversation, and let me introduce them. Emma, please tell us about yourself.
Emma Heo (00:28):
Good morning, everybody. I am Emma Heo, and I lead our Public Service AI and Data Team at Accenture Canada. I’ve been working in the field of AI and data for all of my career, and it’s what I’m very passionate about, but I’m also the first one to tell everybody: let’s not start the conversation with AI. AI should be the last thing we talk about. So, we’ll chat about what that all means today, but I’m really, really happy to be here.
Mike Hrycyk (00:50):
Awesome. Although today we are going to start the conversation with AI. <Laughter>
Emma Heo (00:54):
We will do a little bit of that. Yes.
Mike Hrycyk (00:57):
Emily, please tell us about yourself.
Emily Cowperthwaite (00:59):
I’m Emily Cowperthwaite. I’m the Chief Operating Officer here at PLATO, and I’m responsible for our Delivery, HR, and Training functions. I’m also responsible for developing our AI strategy for the organization. As an engineer by training, I’ve always been interested in tools and technology and how they apply to business problems. So, I don’t have the same deep technical expertise in AI as Emma. I haven’t spent my whole career in it, but as I’ve seen it emerging in the technology landscape, it’s something that I’ve been learning more about, trying to figure out how we apply it to our business problems here at PLATO and how we can bring it to our customers as a service partner as well.
Mike Hrycyk (01:37):
Great. So, often we like to start our podcast by defining some of the terms that we’re working with. And AI is new enough, and there’s so much social media and press out there about it, that a lot of it has become indistinct. We really just talk about tool names, but I’ve noticed that those tool names have become genericized. You might not use ChatGPT for a lot of the things that you might want to do with AI, and yet everyone just starts talking about it, kind of like the way we say Kleenex. So, let’s start with helping our listeners level set. What is gen AI versus agentic AI versus predictive AI? Is there another AI I should have talked about? We’ll kick it off with you, Emma.
Emma Heo (02:14):
Awesome. So, I’m really happy that we’re starting with this question. And I usually explain this by starting with what problem you’re trying to solve for, not the actual labels or algorithms. So, really broadly, I’m going to start with predictive AI. I think you’ve roughly caught all of the different types, but predictive AI, I would say, is the most mature and widely used today. Most of our traditional kinds of AI or machine learning algorithms fall under this category. Predictive AI looks at historical data to learn patterns and predict what’s likely to happen next. Examples of predictive AI at play are demand forecasting for client demand, risk scoring for banks, and fraud detection. So, this type of AI is already powering a lot of our systems today, believe it or not.
(02:59):
Then next we have generative AI. And Mike, you’ve mentioned ChatGPT. So, generative AI, or gen AI in the short form, is all about creating new content, whether it be text, images, code (which is kind of text, but not really), summaries, sound, or music. So, gen AI is all about creating new content. And it’s really great for knowledge work or creative work, which is interesting, because we thought creative work was going to be disrupted last by AI. So, this is what changed the whole dynamic. But anyway, where you have to draft, summarize, or interact with data, gen AI is your friend. And advancement in gen AI is what’s really made AI very popular in the last couple of years, because when OpenAI released ChatGPT in November 2022, millions of users adopted it within days, right? So, it got mass adoption really quick, and it felt magical, because I didn’t have to be a data scientist to use it. So, that’s what gen AI is all about.
(03:54):
Agentic AI is now the newest sort of thing in the AI field. It’s simply AI that makes decisions and takes actions. So, if you think of AI as the brain, agentic AI is AI with arms and legs. An agent that uses AI as its decision-making brain can plan steps, call different technology tools, and execute tasks with some level of autonomy, and that’s key. So, agentic AI is all about AI taking an action. And it’s very powerful, because we’re now talking about what AI can do for us that drives our productivity and outcomes, rather than, “Oh, AI is so cool. What can I use it for?” So, that was a long-winded answer to let’s set the definitions straight. But in summary: predictive AI predicts, gen AI creates, agentic AI acts.
Mike Hrycyk (04:45):
That was nicely thorough. I feel educated now. So, I’m not going to put you on the spot and say, make that bigger and better, Emily. But you’ve been working for the last eight or nine months on learning and figuring out how this is going to be incorporated into our company. So, in your explorations, have you focused on one of those, or where are you seeing the biggest opportunity for our company?
Emily Cowperthwaite (05:05):
Yeah. So, like Emma said, since OpenAI released ChatGPT, it’s become very popular. And that’s the one that everyone’s really jumping on and what has started the large conversations around AI and using it in the workforce. So, that is using large language models. And that’s where I’ve focused most of my effort on applications across the organization. Like Emma said, it’s really, really powerful with words and creating content, and in analyzing data. And that’s what I’ve spent the most time on so far. We haven’t done a lot with agentic AI at PLATO yet, but I think that’s where a lot of the opportunity lies. I love the description of AI as the brain, and then agentic is adding the arms and legs. So, it’s really being able to take that and then use it to do something for you and take action. Really well said.
Mike Hrycyk (05:56):
Yeah. I think agentic AI is the buzzword now. Every podcast I see, every topic, every article is talking about agentic. And I think that the reason for that, the big reason for that, is people are having trouble seeing a return on their investment, or seeing the roadmap to return on their investment. Predictive, it’s obvious that it’s there, but it’s not as fast, and it’s not as easy. And with gen AI, people are worried that, “Is this going to take over my job? Can it just do the stuff that I’m doing?” But agentic is the one where, if you bring an agent on and you let it do things and make some decisions, it can start saving you time, and it can start saving money. And so, people are really seeing that. I don’t know if there was a question there, but any comments around that, Emma?
Emma Heo (06:36):
Yeah. The answer to the question, is gen AI going to take over my job? Absolutely not. And I’ll just leave it at that.
Emily Cowperthwaite (06:44):
Yeah. I think one other point as well, around predictive AI: it didn’t feel to users like as much of a leap. That came out more subtly. It was more analogous to data analysis we had seen in the past. But when we saw generative AI coming out, and we saw that we could ask ChatGPT to create a song and it came up with something completely new and novel, that is where I think there was more of a mind shift of what is the possibility with AI and where can we go?
Mike Hrycyk (07:11):
Yeah. And for example, at what point does this podcast stop because AI faces are doing all of the talking?
Emma Heo (07:17):
Scary thoughts, Mike.
Mike Hrycyk (07:19):
Scary thoughts. Yeah. Let’s hear a bit about each other. So, Emma, you said your whole career. So, tell us, how did you get started in AI?
Emma Heo (07:26):
So, I’m going to go all the way <laughter> I’m going to go back to university days. I’m going to date myself. So, I studied math and statistics at U[niversity] of Waterloo. And in my first job out of school, I was an investment analyst modelling traditional expected returns and risk scenarios for different portfolios of assets. Back then, we didn’t call it AI, so I’m really dating myself. AI existed, but not in the hype cycle it’s in today. We called it what it was: statistical modelling and simulations, which I loved doing. And I thought to myself, I love working with numbers. I love working with data. I love doing stuff with data, with models, to then do more things, but doing that in a business setting and solving real-life problems. So, I joined Accenture to do just that. So, in terms of how I got into AI: I had foundational knowledge about AI from my education and my first job.
(08:20):
And then fast forward, I lead our Public Service AI business at Accenture Canada. And it’s great talking to our clients not just about AI and what it can do, but also about, okay, how do we actually do AI right and responsibly, and then drive trust and adoption with our people? So, it’s been amazing just talking to our clients about all of that and helping them advance their AI maturity.
Mike Hrycyk (08:46):
Okay. So, we’re going to come to you in a second, Emily, but I’m going to bridge this into the second part of this question with you, Emma. So, great. I mean, that sounds really interesting, and it’s a great step out of school into that. And you have two perspectives here. How do you take an organization that knows nothing about AI and start it down that path? And so, you have the perspective of how you are doing it at Accenture. You now have to teach all of these consultants how to talk about it with their clients. But also, how are you using it inside of Accenture? And then the same aspect is, you’ve got a brand new customer, and they just know the buzzword, and how do you get started? And I think I’ve just outlined a 77-minute question. Try and encapsulate it.
Emma Heo (09:25):
So, two questions. How do organizations get started if they haven’t done anything with AI or data? I would say start small, because the AI world is daunting. If you think about all of the tools out there, all of the possible use cases, all of the players, it’s just evolving into this massive ecosystem, this massive field. But you have to start small, and then you have to start intentionally. It’s not about doing AI, right? It’s about solving a problem that you have with AI. AI is a means to solving a problem and generating value and outcomes, not the end goal. So, you have to start really small. You have to pick a real and meaningful problem even before you think about what you can do with AI in your organization. Once you’ve ideated on the problem that you want to solve for, you need to make sure you have the data basics.
(10:15):
Garbage in, garbage out, right? You have to make sure that you have the right data to power the AI systems so that it’s helping you make the right decisions and get more productive. Yeah, that’s what I’ll say about how companies get started in AI if they’re just fresh getting started. Start small, be intentional, and think about a problem you want to solve. Don’t just think about AI.
(10:32):
Now, the second question is around how Accenture is operating in this world where AI is everywhere. So, Accenture, just for those folks who don’t know, we are a massive company with global presence, and I’m not just marketing, it’s real. We have 700,000 people around the globe. So we are a massive company. And what we’ve been doing over the last several years, I want to say, because AI has been around quite a bit, is we have evolved our organization to be AI-first. So, when we go and talk to our clients about any topics, AI is embedded. When we go for a solution for a project or for an opportunity, AI is first.
(11:16):
So, that means we are thinking about how to bring a team of, let’s say, I don’t know, business analysts together to solve this business strategy problem and work with AI to do that. So, we’re very much AI-first in everything we do, from solutioning, project delivery, internal administrative tasks, all that kind of good stuff. So, we’ve very much evolved to be like, what can AI do with me in my day-to-day?
Mike Hrycyk (11:42):
Okay. So, now I will give you a chance to tell us, Emily, but I will keep the two questions together. So, how did you get started? And as our COO, it’s hard to divorce that question from how do you get an organization started? So, you can answer both things from your perspective.
Emily Cowperthwaite (11:57):
So, not so different background from Emma in a way, in that I did a master’s degree in modelling and simulations. So, some similarities there. And in that case, I was looking at using modelling software to solve industrial problems and looking at manufacturing operations. I kind of turned and pivoted from there to something a little bit less technical. So, I was working in a true kind of engineering role, and I moved into the role at PLATO as COO, and I was much more business problem-focused and less technical in my day-to-day. But as the question came up for PLATO, okay, AI is becoming more and more important. We need to get a strategy. It led me back into a bit more of that technical exploration side of things, and how can we learn this tool, get people up to speed, and how do we leverage it to solve a business problem?
(12:42):
And that’s what we were talking about when we asked, okay, what’s the topic around AI that we want to discuss, and who should we bring into the conversation? We talked about how you actually get value out of AI. We don’t want to get into the nitty-gritty of how you create an LLM or what the underlying principles are. It’s how you use it as a tool, because ultimately, getting value out of it for the business is where all organizations want to head, and it’s what we’re looking at here at PLATO.
(13:06):
So, Accenture, as you said, is a massive organization, and PLATO is a relatively small organization. So, we are more in the getting-started phase, starting from the ground up, bringing in expertise across the organization and looking at the problems first. Where are the places where we could use some efficiency, where we could insert the tooling? But really making sure that the problem definition is well set before we start trying to use AI. And we’re also just trying to encourage experimentation throughout the organization, sharing of learnings, and using feedback loops to expand.
(13:44):
And we have some cases where we’ve tried something with AI, and we’ve learned that AI was not adding any value. It was just slowing us down, or it was just creating these circles of challenges for us. So, really just starting with what problems can we solve, where can we apply it, and starting to try to use it and learn as we go, where we’re getting the value and where it doesn’t make sense, or we need to learn or insert other tools or skills. Or as Emma said, improve our data inputs because the availability of good data is absolutely important for the business, too.
Mike Hrycyk (14:16):
So, one of the things that you had said, Emma, was about this big picture of you can’t look at AI as if it’s going to come in and solve all the problems. And that’s something that I’ve seen: companies are looking at this the same way they’ve looked at other, more mature technologies. And they’re like, “I’m going to do this now.” And they’re looking for a fully baked solution that will come in, be the framework that they layer across their entire company, and it’ll just work. And the challenge with that, from my perspective, is that it’s just too new, right? You can’t do that. It’s not fully baked. It doesn’t know how everything’s going to go, and that’s not the way that you can get started. And so, starting small is how I advocate for using it.
(14:49):
So, keeping that in mind, what are some of the types of problems that can be solved by AI in business today? And because you’ve been really hands-on so far in this, Emily, what are you seeing? What are some of the successes that you see, or some of the places that the biggest value can come from?
Emily Cowperthwaite (15:05):
So, for us, we’ve seen a lot of personal productivity gains, I would say. We’re in that stage of experimentation and people figuring it out. So, people are using it for the basic cases that you see: help draft an email, help with your communication, get some feedback on your writing. And we’ve expanded that to some basic business processes. So, for example, we have built a tool that takes an external resume and puts it into a PLATO standard format resume that we’re then going to present to a client like Accenture. With that, we’re also generating an analysis of the resume and a review guide. So, it’s not agentic; the AI isn’t doing all of the work for us or making any decisions, but it’s putting the resume into a standard format. And then it’s giving us a guide that says: how does this candidate fit the requirements for the job? What are any risks? Because we know that AI can hallucinate. So, what are the assumptions that might have been made, or risks that it flags, in doing this transformation into our standard format? And here are some areas where the candidate might be a little bit light on skills. And if you go talk to the candidate, you might find they actually have some more skills that could fill out this resume and make them a better match for the role we’re trying to submit them for. So, with some processes like that, we’ve also been able to integrate AI to solve the problem of trying to make the best presentation possible.
Mike Hrycyk (16:29):
So that’s great. You’ve segued nicely into my second half. Is solve the right word to be using? Does AI solve problems?
Emma Heo (16:37):
So, I think so. It’s debatable, and there are going to be lots of healthy debates on what that means. And it’s all semantics, Mike, but AI can solve problems; it just can’t do it alone, is what I would say. So maybe we should have said, “Can AI help solve problems?” Yes, because it can’t replace human judgment as of today. We can’t automate our way out of humanity. So, yes, it can solve problems, but it absolutely needs human assistance, augmentation, and decision-making authority.
Mike Hrycyk (17:09):
So, in your experience, what are some of the best problems to solve? What are some problems that you can think about examples of places people can start?
Emma Heo (17:19):
I love how Emily brought in real-life examples of how you’re using gen AI to power your core business processes, right, Emily? Given PLATO’s business, that’s how you’re getting value out of it. So, that was bang on, Mike. So, I’ll maybe skip gen AI and talk about predictive AI. As I said upfront, predictive AI has been out there forever. So, banks, insurance companies, and the big companies that we know have always been using AI in some format: forecasting demand, like customer demand in the case of retail and product companies, or workload demand for managing workload and capacity better. So, forecasting with predictive AI has been in use for quite a bit. Then in a manufacturing context, and Emily, this is your background again, there’s predictive maintenance: using data from all sorts of sources, including IoT devices, to help predict when equipment may fail, and so to plan in advance for repairs and maintenance activities.
(18:19):
So, those are some of the things that have worked well for quite a bit of time. That’d be predictive AI. On the agentic AI side of things, Emily, you mentioned you aren’t doing that yet, and you’re not alone. It’s new. A lot of organizations are trying to figure it out. They’re starting with a proof of concept or a pilot with agentic AI, but we are seeing a few of our mature clients actually automating end-to-end processes with agentic AI, of course with human oversight, and getting value out of it from a productivity perspective. So, it’s on process-heavy business problems that agentic AI very much shines. And the key thing to highlight there is looking at processes end-to-end, rather than, “Here’s a thing that we do that we want to automate, and we want to throw a tool at automating this one component.” That approach doesn’t work well and doesn’t get a lot of adoption. It’s when our clients have looked at the end-to-end process, using agentic AI to reinvent or transform the holistic business process, that they’ve started seeing lots of value come out.
Mike Hrycyk (19:23):
Interesting. That’s good to hear. One of the things that I’ve been seeing in the discussions that I’ve been having with lots of different companies is that the place a lot of organizations are starting is with an agentic approach to onboarding, whether it’s onboarding to a project, onboarding to the organization, setting up an account, or setting up a series of accounts. There are big decision trees that people go through to make sure that all of that happens properly, and it’s not necessary to have a human do that. So, a lot of people are starting there. And I’m very much a fan of that idea, because this is how companies have brought automation on in the last 10 years, and it’s what we’ve always suggested: find something small that works, use it to demonstrate value, use it to give people ideas so that they can explore what’s the next big thing. Because again, there’s no roadmap, there’s no book, there are no best practices yet that work across all organizations.
Emily Cowperthwaite (20:14):
You said to generate ideas. And I think another important part, when we’re talking about AI, is to build trust. There’s a lot of mistrust around what AI is, what it can do, what it can’t do, and the idea of hallucinations. So, I think part of starting small is also proving that it can be used responsibly and reliably, and that we can trust AI systems. I see a lot of instances where it’s cultural, where people say, “Oh, well, this resume was generated with AI.” And I’m at the point where I’m thinking, “I want candidates that are using AI to generate their resumes.” It’s a tool at their disposal, and if it makes a better resume, then we should be using it. But there are different attitudes towards that. And it’s interesting to see that evolution. I think it’s going to take time to build trust in using AI, and to accept that it’s appropriate to use it as a tool.
Emma Heo (21:06):
I love that, Emily. It should almost be an expectation that AI is out there, so folks are using it. And if they’re not using it, why aren’t they? Which brings up an interesting question that we’re answering for our clients all the time: if folks are using AI to get more productive, what can they use that additional capacity for? So, in your case, Emily, if candidates are leveraging AI to create their resumes, are they spending the additional capacity tailoring the resume to the job description? It just brings a lot more value to the work that needs to get done.
Mike Hrycyk (21:37):
So, Emily, you are going to receive the podcast badge for great segues. One of the things that I’ve been seeing is that there are two big perspectives in these discussions. One of them is that the best way for a company to get started in this stuff is to set free the people who want to use it, who come up with ideas, get started, and show value. And that’s where I’ve seen or heard of the most success. But the flip side is, I’m in some groups where we’re talking to CIOs, and they’re having discussions. And the biggest thing that comes up, the biggest fear around all that, is safety and governance and walls. How do you make all that work? We’re not going to dive too deep into that, because lots of people are talking about it. But Emma, do you have a perspective on where that balance comes from?
Emma Heo (22:21):
I do. So, I think with AI, especially with gen AI, which is, as I said, accessible, everybody can go to ChatGPT and use it right now. We have to think about governance differently from our traditional ways of governance, where you’re going to set up a governance body, and that body is going to control everything, because guess what? People already have access to these tools, and they want to use these tools. So, we have to look at governance as an enablement function rather than a policing function. So, that’s where we have to sort of evolve the narrative around governance of AI. Emily is absolutely right. Trust needs to be there. So, responsible AI, ethical AI, ethics in data, all need to go into the governance framework. But at its core, the governance function needs to enable responsibility rather than control and police.
Emily Cowperthwaite (23:08):
Yeah, I completely agree. We know that people are using the tools. They’re available. They’re out there. So, we need to make it possible for people to use the tools in a responsible and safe way.
Mike Hrycyk (23:17):
Yeah. This is something I’ve seen you doing, Emily: finding ways to empower our leads in using it. Maybe the next step for our organization is talking to everyone and figuring out a way to enable them to use it. And that’s a path forward. Alright. So, this segues nicely into – maybe I can have the badge too – who are the right people, and what are the right skills, to be doing AI exploration and, ultimately, implementation? Who should we be encouraging? And let’s start with you, Emma.
Emma Heo (23:44):
So, this is an interesting question. I challenge everybody who’s thinking about this for their own organizations, teams, what have you, not to jump straight to who do we need or what skills do we need. So, don’t jump to that question, but take a step back, because there’s a lot of money and investment already going into AI development in the industry. So, do I really need a large language model developer in my company to build the next best large language model? And is that going to add value to the problem I’m trying to solve? So, whenever you’re contemplating this question of how do I build a team to do this, you need to take a step back and adopt the Build, Buy, and Borrow mindset. Do I build it? And you have to be strategic about it. So, if the answer is, “No, I probably don’t want to build the LLM and have the best LLM developer on my team,” then the next question is, who are the partners who can provide those capabilities to you, so you can go take them, work with them, and solve the problems?
(24:38):
So, not answering the question directly, Mike, but I think for those contemplating this question, you should have a hard look at where do you want to play? What is your core business? Where do you have your core strengths? And so, as a result, what do you have to buy and borrow?
(24:51):
And! And if you have decided that, yes, I want to build the best team out there, then on skillsets: there’s going to be the upskilling journey for your existing staff, so that they can be AI-ready, and there’s going to be hiring AI talent to fill the gaps that you have. But as you think about the AI skillsets and AI capacity that you want to build, it’s not just engineers and technical skillsets, right? It’s also people who know what problems are solvable by AI, period, and people who know how to use AI in their day-to-day to solve problems. So, it’s a marriage of business, functional, and technical skillsets that you want to equip your organization with as it relates to AI, and not just AI engineers.
Mike Hrycyk (25:33):
Great. Emily, do you have anything to add to that?
Emily Cowperthwaite (25:35):
No, I think Emma’s pretty spot on with that. And we have talked about this as an organization; we’re on our journey. How do we get started? Who do we need? What does the talent look like, and who should lead this? And it was a conscious choice for me to lead our AI strategy, as someone who is involved in integral parts of running the business, because at the end of the day, we want to use AI as a tool to augment the business and solve business problems. So, as Emma said, you need to make sure you have that business requirement mindset, and then you can bring in the technical skills as well to supplement and actually build whatever you’re doing. But the problem comes first: the mindset of what you’re trying to do and what you’re trying to accomplish.
Mike Hrycyk (26:18):
Perfect. Somewhere in there, Emma, you managed to answer one of my next questions, which was, is AI an IT function? And the answer is not only. I mean, there’s going to be a part of IT because AI lives in IT, but if you’re going to solve business problems, you have to have knowledge of the business problems to solve.
Emma Heo (26:35):
Yes. Do I get the badge too? And also, Mike, you are hired.
Mike Hrycyk (26:41):
Emily may have something to say about that, but okay.
(26:45):
So, we’ve talked about this just a touch. We know how hard it is to find an AI expert these days. There aren’t that many out there. It’s hard for people to get started. When you find one, you don’t know if they really know what they say they know. A lot of the people who say they know things don’t yet, and that’s fair, because it’s so new. And when you do find someone, they’re really, really expensive, because they know what they’re worth. So it seems that you have to build your own experts internally. And how do you build them? Because, Emily, I know you’re well down this path, let’s start with you.
Emily Cowperthwaite (27:16):
Yeah. As you said, I think we could go look in the market for AI experts, but it’s not necessarily deep expertise in AI that we’re looking for. It’s building those skills and being able to apply them to problems. So, what we’ve been doing is giving people access to tools, giving them problems, getting them started on their own personal productivity, exploring how we can use AI in testing, sharing that, demoing that, and getting feedback across the organization. And what we’re seeing is really interesting. The people that are emerging as leaders in that AI thought space aren’t necessarily the people that have been emerging leaders or technical experts in the past. They wouldn’t have been our go-to for, “I need some help with this development or this internal coding project.”
(28:06):
We have a big team of people with computer science backgrounds, but some of the people we’re seeing become AI experts are ones who are really interested, don’t have super technical backgrounds, and are just going and learning. A lot of younger people as well: we just brought in two new co-op students, and they have a ton of experience using AI in their schooling and in their own projects. So, it’s really about enabling those skills to be built at all different levels. And for PLATO, where we are very interested in bringing people into the technology workforce, I think that’s showing us an opportunity for new jobs and new skill development that’s really exciting for us.
Emma Heo (28:44):
I think that’s how we would advise any client to approach this problem. So, as I said, as you think about the AI capacity and skillsets you want to build within your organization: number one, think strategically about what areas you want to play in that add to your core business, and think strategically about partners and the capability you can borrow or buy. The rest is an upskilling journey for existing staff, right? Having them become more conversant in AI, and maybe build a bit more technical knowledge about AI and what it does. But the focus should really be on literacy. AI isn’t something a single function owns; AI needs to be everywhere for productivity and outcomes. So, focus on literacy, and that means not everybody in your organization needs to be a data scientist. It needs to be, to Emily’s point: What is AI? What isn’t it? What are its limitations? How can I use it, complement it, and work with it? From a literacy perspective, what should folks start to think about?
Mike Hrycyk (29:48):
Yeah. To some people, AI is just scary because it’s new tech, and it doesn’t need to be, right? Everyone, including grandparents, is using AI. Now they’re using it to generate weird pictures of themselves with unicorns. But they’re still using it. That’s just fascinating to me.
(30:03):
Okay, getting close to the end here. Everyone’s on a path. We have a lot of listeners who are leaders and managers, and they live and die by metrics and knowing that they’re spending their effort and their money in the right places. How do you judge if your AI journey is on the right path? How do you gauge if you’re doing enough, or if you’re doing the right things? And how are you defining success? And Emma, we’ll start with you.
Emma Heo (30:25):
So, success should never be about how many times you’re hitting AI or how many AI models you have deployed. It needs to be all about outcomes: better decisions, better productivity, better services, better bottom line, better top line. So, as organizations start to think about their AI journey, the starting point is: what does good look like, what outcomes are we trying to drive, how do we measure those outcomes, and what are our KPIs [Key Performance Indicators]? There’s nothing different about AI, honestly, Mike. It’s like any other technology or any other new thing that comes to market: What am I going to use this for? What outcome am I trying to drive? What are the KPIs? How am I going to measure value and outcomes? And then stick to it. It should never be about the number of data scientists on my team, the number of models deployed, or any of those usage metrics.
Mike Hrycyk (31:16):
Well, I also think this technology is so nascent that it’s not about, “I’m a billion-dollar company – am I saving a hundred million dollars?” It’s not that big yet. It’s: am I doing things with AI, and are those things producing some success? And then, how many hotspots do I have? If I’m a company the size of Accenture, do I have four stories in my organization where AI is working, or do I have 112 stories? Because you’re building these little buds of AI that are going to grow and come together in the end. It’s a hard way to think of it, but I still think it’s a useful way.
Emma Heo (31:51):
For sure. Return on investment and reaping value from AI isn’t a next-day thing. It’s not: I rolled out a tool, and tomorrow we’re going to save 10%, it boosted our productivity by 10%. It doesn’t work like that. It’s a journey. Mike, to your point, you’re bang on: it needs to involve people adopting AI, trusting AI, and then working with it to really, truly drive value. And let’s talk about it: AI is not cheap. It’s not a free thing to run. So, we talk about productivity uplift, but that needs to be weighed against the cost of running an AI system, which is not trivial today.
Mike Hrycyk (32:22):
Emily, what’s your perspective on success?
Emily Cowperthwaite (32:25):
Yeah. So, I like what Emma said. It’s not different from other technology: you set out what success looks like, you figure out what metrics you’re going to use to measure it, and you deploy. One thing that is interesting in the AI space is that it’s moving so quickly – there’s this push to make sure you’re on the train that’s moving forward. So, there is some element of just starting. Even if it’s at the personal level – getting people to use it and building basic literacy that maybe isn’t tied to a business outcome – there’s still a need to move and to promote its use. But when it comes to solving business problems, it’s no different: you look at your KPIs to measure success.
Mike Hrycyk (33:08):
Once again, you’re winning the segue badge, Emily. So, last question, the wrap-up question. Someone asked me the other day to recommend a book they could read to get up to speed, and I almost laughed. If there are books out there, they’re not useful yet, because there’s just so much that’s unknown. But I’m going to put it to you two: is there a book? And if your answer is no, what should people be doing to start their journey and their literacy? What resources are out there that are reliable, other than the intro at the start of this podcast, where Emma defined the terms for us – that was awesome. We’ll start with you, Emily. What resources have you found awesome?
Emily Cowperthwaite (33:45):
So, I don’t think there’s a book. I don’t think a book is necessarily the best way to learn the technical details or keep up with the trends in AI. There are – I’m not going to name one – some books out there about the philosophy of AI and the mindset around AI, which isn’t changing quite as quickly, and those are worth it if you’re interested in that side of things. But if you want to keep up with what’s happening in this space, blogs and articles, I find, are better. It’s hard to sort through how many different ones there are, but there are lots of resources out there.
(34:18):
And the other thing I keep encouraging my team to do is to use AI as an AI coach. You can create a prompt that has the AI teach you how to use AI. I have a prompt that I’ve given to some people in the organization that turns AI into their own personal prompt coach to help them learn AI. And you can use that for other topics as well: it will progressively gauge your knowledge, teach you some things, and then quiz you on them. So, it’s kind of interesting – maybe a little meta – that you can use AI to learn about AI.
Mike Hrycyk (34:52):
That’s awesome. Emma, same question.
Emma Heo (34:54):
So, I wholeheartedly agree with Emily. Everything Emily said goes for me, too. Just a little side story: yesterday, I was chatting with a new joiner at Accenture, and she asked, “Emma, how can I use AI in my day-to-day?” And I said, “Why don’t you ask AI that question?” Right? Tell it about your role, what you do, and what you’re passionate about, and then ask it. So, Emily, absolutely, we’re on the same page.
(35:17):
I will just add one thing about what people can do to learn more about AI. Yes to all of the resources Emily mentioned, and I’m a big fan of learning by doing. There are lots of resources out there – free credits for standing up a notebook, for building AI models, and free datasets you can ingest to train your own models. So, try to find resources you can use to actually do the work, because I think you’ll get so much value out of it.
Mike Hrycyk (35:43):
From my own perspective: a year and a half ago, maybe, I took most of a certification from Google – there was a technical hiccup, and I never got back to finish it. And I’m not saying to rely on certifications yet, but rely on them to help build the literacy we’re talking about, so you understand the differences and where you’re at. That ended up being a pretty good resource for me, and I’m sure now, a year and a half later, the content is different. Emily, we’ve been working on an internal literacy course. Can you tell us a bit about that?
Emily Cowperthwaite (36:12):
Yeah. So, in December, we ran the first iteration of our Intro to AI course. We had 10 participants, it ran over four afternoons, and it was instructor-led – just introducing people to the basics of how to use AI.
(36:27):
First of all, we started where we started this podcast: with what AI is and what the basics are. Then we focused on basic prompting and how people can start playing with AI and using it in their everyday lives. And the feedback was really interesting, ranging from people who thought, “Oh, I already know this. I didn’t learn too much, but thanks for including me,” to people who had not really interacted with AI at all yet and learned a lot. So, people’s experience with AI so far has been broad, but we’re getting started on introducing it to everyone.
Mike Hrycyk (36:57):
Well, and even for the people who think they know what they’re talking about and so don’t gain that much, what it does is standardize your terminology and your outlook, so you can have internal conversations that skip some steps because you’re on the same page. I see you nodding, Emma. Did you want to add anything?
Emma Heo (37:15):
It helps set the foundation for AI literacy. I’m very, very happy. I wish I could have been there for the training, Emily. Next time.
Emily Cowperthwaite (37:24):
Careful – next time, yeah, we’ll ask you to come in. We’d love to have you.
Mike Hrycyk (37:30):
So, I went to a testing conference this fall, and we talked about AI a ton. The thing I learned there that I’ve been telling everyone, that I hadn’t known before, is about prompt generation – and being prompt engineers is where we’re all going to have to be in a few years, the same way we learned how to Google things properly before Google was even a verb. As you build your prompt, you tell the AI what its perspective is, and then you ask it not to hallucinate. You literally say, “Don’t hallucinate,” and it’s remarkably effective. It would be horrific if I said to my people, “I’m going to ask you a question. Now, don’t be wrong, and give me the answer” – but it works with AI. That’s just fascinating to me. Emma, go ahead.
Emma Heo (38:08):
Mike, can I add one more tip? Don’t say “please.” AI answers better when you’re authoritative, so you don’t have to be so nice about the question you’re asking. Just say, “Tell me this” – and no “please.”
Mike Hrycyk (38:20):
That’s interesting, because just two weeks ago I read an article about how being polite and respectful helped with AI. So, again, there are multiple perspectives.
(38:30):
Alright. I would like to thank our panel, Emma and Emily, for joining us for a really great discussion about AI. I think there are a lot of tips and insights in here that will help our listeners start their own journey. My biggest piece of feedback is: don’t be afraid. AI can help you. It’s your friend. If you have anything you’d like to add to our conversation, we’d love to hear your feedback, comments, and questions. You can find us at @PLATOTesting on LinkedIn, Facebook, or on our website. You can find links to all of our social media and our website in the episode description. If anyone out there wants to join in on one of our podcast chats or has a topic they’d like us to cover, please reach out.
(39:05):
If you are enjoying listening to our technology-focused podcast, we’d love it if you could rate and review PLATO Panel Talks on whatever platform you’re listening on. Thank you again for listening, and we’ll talk to you again next time.