Voted Number One PR Podcast in Goodpods
June 9, 2024

Embracing AI as a Stakeholder in Public Relations

Here’s a different perspective. What if we didn’t regard AI as a technology? What if we thought of it as a stakeholder?

AI has the power to influence and even revolutionize industries, from sustainable development to climate intervention. At the same time, we know the risks involved: navigating ethical complexities and misinformation.

Maybe we need to expand our perspective beyond this tool we’re learning to use, and accept it as a complex voice we need to influence for the greater good.

Guest: Rupert Younger, Founder and Director of the Oxford University Centre for Corporate Reputation
X | LinkedIn | Oxford University Centre for Corporate Reputation | Enacting Purpose Initiative | Activist Manifesto   

Publications
Rupert’s original article in Oxford Answers
The Reputation Game: The Art of Changing How People See You

Rate this podcast with just one click

Leave us a voice message we can share on the podcast  https://www.speakpipe.com/StoriesandStrategies

Stories and Strategies Website

Do you want to start a podcast? Book a meeting with Doug Downs to talk about it.

Apply to be a guest on the podcast

Connect with us

LinkedIn | X | Instagram | YouTube | Facebook | Threads

Request a transcript of this episode

Support the Show.

Transcript

Ruth DeWitt Bukater (00:14):

And why do you have two steering wheels?

Captain EJ Smith (00:16):

We really only use this near shore.

Jack Phillips (00:18):

Excuse me, sir. Another ice warning. This one's from the Noordam.

Captain EJ Smith (00:22):

Thank you, Sparks. Oh, not to worry. Quite normal for this time of year. In fact, we're speeding up. I've just ordered the last boilers lit.

Doug Downs (00:39):

Altogether, Titanic's radio operators received seven messages from other ships warning of drifting ice. Seven. But Captain E.J. Smith famously ignored those warnings, received by new wireless technology. Now, some would say he didn't understand the new technology, and maybe that's partly true. Wireless was like the AI of its time. But really, Captain Smith failed to recognize the wireless as an informing stakeholder in his decision-making process. The wireless messages were seen as just pieces of information, not critical inputs from vital participants. Just as wireless technology on the Titanic was more than just a tool, generative AI today is more than just a new technology. It has the ability to create information and shape opinions, making it a significant influence on various aspects of our lives. We need to think about AI as a stakeholder, integrating its outputs and insights into our decision-making processes. Today on Stories and Strategies: we're all in a deep ocean of information now, and ignoring the whispers of the future could lead us straight into unseen danger.

Freddy Fleet (02:03):

Is there anyone there? Yes. Iceberg, right ahead! Thank you. Right ahead. Hard a-starboard!

Doug Downs (02:26):

My name is Doug Downs. Music off the top: Hymn to the Sea by James Horner, of course, from the movie Titanic back in the nineties. A couple of reviews to share, both from good friends of mine, actually. TylerYYC left a review on Apple, five stars, and Tyler writes: "A rock solid podcast with awesome guests and an engaging host." It's almost like I bought him a beer or something, that he would say something like that. Tyler's a good friend of mine here in Canada. And I bumped into a longtime friend of mine in line at the bank last week. Mary, our kids used to play hockey together, and I had no idea that Mary has been listening to the podcast for the last three, three and a half years. Regular listener, almost every episode. She was quoting things from different episodes, what guests had said. Mary, thank you so much for the kind words; it really means a lot that you've checked out the podcast. My guest this week is Rupert Younger, joining today from Oxford, England. Hello, Rupert.

Rupert Younger (03:20):

Hello, Doug.

Doug Downs (03:20):

How are things in Oxford? Are you in that rainy season, or is it beautiful, glorious sunshine these days?

Rupert Younger (03:27):

Well, it's Oxford, so the usual reflective pace of things, in May, slightly increases as exam fever takes over, as party season starts to loom, as the prospect of the end of term becomes closer, and as people start to misbehave. So yeah, the pace of life is hotting up a bit here in Oxford.

Doug Downs (03:53):

Beautiful. Rupert, you're the founder and director of the Oxford University Centre for Corporate Reputation and chair of the Enacting Purpose Initiative, which is a multi-institution partnership between the University of Oxford, the University of California, Berkeley, Federated Hermes, and the British Academy. You're also the academic director for Oxford's Corporate Affairs Academy. You've published two books: The Reputation Game, and there's a link to that one in the show notes; it's an international bestseller, now in seven languages, co-authored with David Waller. And The Activist Manifesto, co-authored with Frank Partnoy; that one is currently in two languages. So you're everywhere. You're everywhere. So Rupert, what do you mean when you say generative AI should be regarded as a stakeholder and not just a tool?

Rupert Younger (04:49):

So Doug, thank you for having me on, number one, and also for picking up this provocation that I wrote, which was really just to throw out the concept rather than a definitive set of ideas. But the concept is very simple, which is that generative AI, and by generative AI of course we mean AI that is not just looking at repetition and patterns but is actually taking that and then creating stuff. When AI creates stuff, then it becomes, to my mind, something similar for organizations to a journalist, or a politician, or anyone who has an opinion. And in that respect, why not treat it as a stakeholder? The other provocation, or at least the reason why I put this piece out, was that people are feeling a lack of agency over AI. AI has become so ubiquitous, so prevalent everywhere in our lives, and I think most human beings are feeling somewhat powerless in the face of this massive technology, which now has, or at least threatens to have, control or influence over all sorts of parts of our lives. So this idea of generative AI as a stakeholder emanates from those two pillars.

Doug Downs (06:09):

When I think of stakeholders, I think of people I should be listening to in my project work. Are you suggesting we need to listen more to AI, or treat it as a tool that has human influences, and humans who can be influenced to use the tool for the greater good?

Rupert Younger (06:30):

Yeah, really. So I study organizations and leadership at Oxford, so I've come at this through an organizational lens. And when you think of organizations and how they treat stakeholders, which by the way includes shareholders as one very important stakeholder, they do indeed, as you say; the very good companies spend a lot of time listening, decoding what stakeholder priorities are, what they want. But they also have a series of strategies in place to shape opinion, to make sure that those people commenting on or interacting with the organization actually understand the organization well enough, maybe support its strategies, and understand how and when and why to engage with it. So it's a dyadic process. Yes, they're listening, but they're also engaging.

Doug Downs (07:23):

Okay, so it's influence. How do we do that? I've read your paper, and there's a link to that in the show notes too, and you outline three areas for stakeholder engagement: algorithm developers; users, which is you and me; and data sets. And as we record, ChatGPT went down yesterday for hours and hours and hours. Drove me nuts, put my entire process hours behind. So, algorithm developers, users, and data sets: how should organizations prioritize those areas, and what practical steps can they take to effectively engage with each of them?

Rupert Younger (08:03):

So again, I put this as a provocation, so it's not comprehensive, this idea. But it starts off with this premise that generative AI is a stakeholder, and if you have an important stakeholder, you need to find ways to influence how that stakeholder sees or comments on you. With that starting point, in the paper I put three ideas forward. One is to work with the developers: the people who write the code, who start off and set the algorithms off on the journeys that they take. It would of course make sense for organizations to start with them and say, look, what type of prompts, what type of goals have you set in these algorithms, so that we can at least see where they're headed and try to shape what the developers do at this starting point.

(08:56):

So that's the first obvious step; when you talk about priorities, that, I think, is step one. Step two is then to think about the data sets, because every algorithm trains itself on a data set. And half of the problems with algorithms that you find is that they're trained on data sets that are one-dimensional, maybe wrong, or have biases, implicit or otherwise, in them. And so AI is a reflection of, draws off, and is influenced by these different data sets. So perhaps working with data sets and limiting your AI to certain data sets is the second stage. And by the way, that already happens. The major consulting firms, the banks, the lawyers: what they do is train the algorithms that they have on their own proprietary information, so that the algorithms actually produce relevant insights for them and their clients. And the third area then is to engage with the users. Every organization, every partner, if you like, to an organization will also be reliant on and engaging with algorithms for all sorts of things. So working with the users of algorithms, whether they be organizations, leaders, you and I, anyone, also seems like a sensible way to start. All those three things together, I think, give you a very important sense of agency over this very powerful technology.

Doug Downs (10:23):

And it does give us something strategic and tactical that we can actually focus on; it becomes less elusive as an idea. In your paper, you also draw parallels between the moral agency of corporations and generative AI. Can you give me some examples where AI has demonstrated characteristics similar to those of moral agents, influencing organizational behavior and decision-making?

Rupert Younger (10:49):

Yeah, so this rests on, by the way, a very valid complaint against my argument, which is that you can't hold AI to account as a stakeholder because it's not a moral agent. It can't make independent decisions; it's not sentient. I've put a counterargument to this, which is that organizations are routinely held to account by us for their actions, and that we also can engage with them. Organizations elicit very strong emotions: anger, disgust, love. There are lots of different reasons why we can construct the organization as having a moral frame and treat it as a moral agent. In terms of examples, I think probably the one that is most compelling is AI when it's used in our cars. AI is used in, I think, 165 different ways in our cars. AI gathers information on our weight, on our bearing, how we sit.

(11:59):

It gathers information on what we're listening to on the radio. It decodes and passes back information to the manufacturers on the way we drive, how much we brake, and it sends that to insurance companies. So there's an enormous amount of decoded information which is being worked on through AI. Now, all of those are subject to extremely strong moral questions. Should they be gathering our weight? Should they be gathering what time we wake up in the morning? Should they be gathering…? These are lots of "should" questions; not "could", but "should" questions. The one particular thing which rests, I think, on a moral question might be if you have an accident. If you have an accident in a car, and it happens to have happened because another car has come around and the lights have blinded you, the actual decision to dip the lights or not is done by AI.

(12:57):

AI now dips or doesn't dip your headlights; it spots another car, or what it thinks is another car, and decides to dip the lights or not. If AI gets that wrong, how do you hold AI accountable for the accident that you've had? These are very, very big questions. Let's move it into a different frame. If I'm a diabetic, and I have a piece of technology which injects the insulin that I need at different times based on the diagnosis of my blood sugar levels, there's a piece of AI that does that. If that AI decides that I need more or less than I actually need, that's extremely dangerous for me. So who holds that morally accountable? Is it the technology? Is it the technology maker? Is it the doctor who inserts it? Or should you treat that particular AI as a stakeholder and therefore hold it morally accountable?

Doug Downs (13:49):

One of the things I liked about this paper is I get a great sense of optimism. I certainly enjoy reading and listening to, I think his name is Noah Yuval, the Israeli commentator. It might be Yuval Noah, I apologize. Very doom and gloom about generative AI, basically saying it's going to lead to the end of humanity as we know it. That's a bit scary. Your paper gives me more of a sense of optimism, in that we can influence, if not take control of, some of the outcomes here. And one example is that AI can lead to the greater good, for example by contributing to the Sustainable Development Goals. You write about that in your paper. Expand on that for me.

Rupert Younger (14:34):

Yeah, I mean, AI has the power to do so much good. It can really transform the way that we handle huge data sets, so that we're able to decode very complex information at speed and at pace. So, two examples of why I'm very optimistic about AI, with a caveat that I'll finish off with. One is drug delivery and drug development, the R&D market for new drugs. Currently, when new drugs are developed, they have to be tried out on patients, and that takes time, ethics approvals, et cetera. Digital twins through AI could rapidly scale out the drug-testing environment, to the point where drugs could be brought to market very safely but very quickly, because they're working through digital twins and looking at the impact on individuals at scale and at speed.

(15:40):

And then if you take a look at the Sustainable Development Goals in agriculture, for example, there's a wonderful company called Deep Planet, which works at the very micro level, analyzing the data from soil samples all over the world. It can spot when there's a deficit of water, when nutrients are not getting into individual plants, much, much quicker than any farmer can, at scale, across thousands of acres of fields. And so it can start to give you insight into how to intervene quickly, to make sure that agriculture performs and maximizes its potential. So lots of things can happen when AI is deployed across the Sustainable Development Goals. The FT wrote a very, very interesting piece the other day, drawing on research in Nature which identified that 134 of the SDG targets could be helped by AI. So, huge potential for good. Now the caveat, and I'm going to channel Yuval here: without asking the question of what AI should be able to do, as opposed to what AI can do, we risk AI for bad, not AI for good. So it's an urgent call from my side for people to focus much more keenly, right now, on the ethics of AI use, AI deployment, and, as my paper talks about, AI governance.

Doug Downs (17:17):

Okay, who are you calling on? I know it's an idea paper, but who should hear this and take it as a call to action? Governments, large organizations, multinational corporations? Who takes the lead on this kind of work?

Rupert Younger (17:35):

Well, I tend to start from a belief in free markets as opposed to government control. But the efforts to get companies to deal with these questions early and to invest a significant amount of time in developing responsible AI don't seem to have worked very well. The power of this makes it very hard for companies not to try and deploy at scale the dramatic intervention power that can come with AI. So I don't think that leaving it to the market is working, which means there is a role for government. So I guess my primary call is that there needs to be a much, much closer and tighter set of chains put onto the developers of this incredibly powerful set of technologies.

Doug Downs (18:29):

If it's going to be government, then it needs to be us that pushes the government. Governments only do what we, the masses, force them to do.

Rupert Younger (18:37):

Indeed, indeed. At least that's certainly the case in democracies.

Doug Downs (18:42):

Absolutely. Okay, a lot to think about here. So let me put you right on the spot, because I can imagine anyone listening to this, their head is swirling. In one sentence, how would you summarize the need for organizations today, or governments, to embrace AI as a stakeholder?

Rupert Younger (19:00):

Because AI has become part journalist, part PR, part search engine, and part scriptwriter. It's a content creator. And like any content creator, it needs to be carefully managed.

Doug Downs (19:21):

Well said. I really appreciate your time today, Rupert. Thank you.

Rupert Younger (19:25):

Thank you very much, Doug.

Doug Downs (19:28):

If you'd like to send a message to my guest, Rupert Younger, we've got contact information in the show notes. Stories and Strategies is a co-production of JGR Communications and Stories and Strategies Podcasts. We use no AI in the making of this episode, just a footnote. If you liked the episode, please leave a rating and possibly a review. Those mean the world to us, and we read the reviews on future episodes. Lastly, do us a favor: forward this episode to one friend. Thanks for listening.