AI is quickly playing a serious role in public relations and marketing. Rules and regulations are inevitable... but is it too soon? Do we even understand AI yet?
What are the rules?
In an era where artificial intelligence is revolutionizing communications and marketing, this episode dives into the critical conversation about the impact of AI, the void of formal regulations, and the industry's quest for ethical self-regulation.
It may well be up to us as a profession to craft our own ethical guidelines and envision what effective self-regulation could look like to safeguard the future of communications.
In this episode, a glimpse into what those regulations might look like. And should we even have rules for something we don’t really understand yet?
Listen For
5:29 The Revolutionary Impact of AI and the Need for Self-Regulation
6:47 The Broader Influence of AI and the Challenge of Regulation
9:53 The Premature Nature of AI Regulation and the Importance of Education
14:07 Self-Regulation, Ethical Guidelines, and the Potential Role of Gen Z
Guests:
Professor Christian Stiegler
LinkedIn | Guiding Light | Website
Prof. Dr. Christian Stiegler is the Director of Guiding Light - an international organisation for ethics and sustainability in technologies. As an award-winning researcher and internationally renowned expert on emerging technologies, he writes and speaks extensively on subjects such as XR, AI, technology ethics, the metaverse and emerging technologies.
Manuel Hüttl, CEO Milk & Honey PR
LinkedIn | Facebook | Instagram | Website
Manuel Hüttl is Partner and CEO of Milk & Honey PR, a global Public Relations agency dedicated to shaping brand reputation. He is an industry veteran with over twenty years of experience, and opened Milk & Honey’s first continental Europe office in 2022. Manuel Hüttl holds various board positions and leads the AI Steering Group at Milk & Honey PR.
Download the AI Ethical Playbook by Milk & Honey
Rate this podcast with just one click
Leave us a voice message we can share on the podcast https://www.speakpipe.com/StoriesandStrategies
Stories and Strategies Website
Do you want to podcast? Book a meeting with Doug Downs to talk about it.
Apply to be a guest on the podcast
Connect with us
LinkedIn | X | Instagram | YouTube | Facebook | Threads
Request a transcript of this episode
Doug Downs (00:05):
One of the most compelling stories from the Industrial Revolution revolves around the Luddites. They were a group of English textile workers and weavers in the early 19th century who became famous for their resistance to industrialization. The term Luddite originates from the myth of Ned Ludd, a young apprentice who was rumored to have destroyed two stocking frames in a fit of rage in 1779. The problem with that story is it looks like Ned never existed. Someone just made up the story, but it worked. And by the early 19th century, it had inspired the actual Luddite movement, a group of textile workers who saw the new industrial machinery as a direct threat to their jobs and way of life. These machines, which could be operated by unskilled labor, were making traditional skills obsolete and driving down wages due to their efficiency and lower operational costs. The Luddites would break into factories at night, destroying textile machinery as a form of protest.
(01:12):
The movement was not against technology per se, but was a reaction to the unregulated exploitation of labor and the lack of protection for workers during a time of significant economic transformation. The British government responded to the Luddite uprising with harsh measures. In 1812, the Frame Breaking Act was passed, making machine breaking a capital offense. The government deployed the army to areas with high Luddite activity, leading to several violent confrontations. By 1813, dozens of Luddites had been executed or transported to Australia, and the movement was effectively suppressed. The Luddites have since become a symbol of resistance to technological change, but their story is more nuanced. It's really about the complexities of navigating economic and social transformations brought about by technological advancements. It underscores the importance of considering the human impact of progress and the need for systems that protect workers from the negative consequences of rapid technological change. In today's world, there's a growing debate on how to manage the swift development and deployment of AI technologies, with discussions on ethical AI use, privacy, surveillance, and the need for regulatory frameworks to protect societal interests. Today on Stories and Strategies, the timeless quest for ethical guidelines in the face of technological advancements.
(02:56):
My name is Doug Downs. My guests this week are Professor Christian Stiegler and Manuel Hüttl. Manuel, did I say that right? Was I close, at least? Hüttl?
Manuel Hüttl (03:05):
Yeah. That was a really great attempt, I have to say.
Doug Downs (03:08):
Okay, Christian, you are joining today from Vienna, Austria. How are things where you're at?
Christian Stiegler (03:14):
It's a beautiful sunny day, Doug.
Doug Downs (03:16):
Yeah. And you're up in the teens now Celsius with your temperature, right? March settles in with spring and starts to become nicer.
Christian Stiegler (03:24):
No more heating. That's terrifying.
Doug Downs (03:26):
Oh, awesome. Manuel, you're joining today from Munich in Germany. How are things where you are?
Manuel Hüttl (03:31):
That's right. If I look out of the window, the weather is not as nice as it is in Vienna. It's a little bit rainy, and I really miss snow because the winter so far has not been that good.
Doug Downs (03:46):
Gotcha. Gotcha. Christian, you are an award-winning researcher and internationally known expert on emerging technologies. You are the director of Guiding Light, an organization for ethics and sustainability in technologies. In short form, your goal is to create an international platform that supports the benefits and improvements of technologies like AI, extended reality, robotics, blockchain, smart and green technologies. Manuel, you're a partner and the CEO of Milk & Honey's first continental Europe office in Munich. You've held leadership positions in several international agencies. You're also a senior vice president with the CMO Council, which is a global peer group of 16,000 marketers in Europe, the Middle East and Africa, or EMEA as the acronym goes. Your book, which is written in German, but I'll translate the title into English, A Good Reputation Is a Measure of Success, is a pretty standard reference these days for reputation management. So gentlemen, in November 2022, our world changed when OpenAI made ChatGPT open and accessible. There were lots of people who did see this coming. I'm not one of them. And there was already lots of discussion about ethical challenges, but suddenly that month we were in that world and it was very real. So let's start at the highest level. What are some of the ways that you see AI, particularly generative AI, which is the new form that's shaken us, impacting the communications industry?
Manuel Hüttl (05:29):
Well, I mean, definitely it is. As you said, it feels like it arrived on our desks, and it feels like being in this industry with a revolution happening, and for many, many different reasons. First and foremost, AI is creating a lot of support and efficiencies in terms of cost, time and resources. But overall, no one really knows where all this AI development is going to go, which is, by the way, a very curious situation. And I like this situation, because there is movement going on in our industry, but I think a lot of professionals and agencies needed to redefine themselves to a certain extent. And overall, my perspective is that AI should always complement, not replace, any PR professional. But the services that we provide to clients will change a little bit, and we will have good discussions over the next hour on these things.
Christian Stiegler (06:47):
Yeah, I mean, I can definitely underline what Manuel just said, and what you said before, Doug. I mean, you could have foreseen it. Some experts anticipated something like that, because OpenAI is nothing more than what Google was a couple of years ago. It made something that was already there popular to a mass audience. So basically everybody can use it now. It's nothing specific anymore. And that leaves a lot of questions open, right? We will talk about the ethics of it, but we will also talk about the purposes of it. There is not one industry that doesn't talk about AI at the moment, even if it's not really used a lot in that industry so far. And that creates a lot of discussion points, as Manuel said.
Doug Downs (07:29):
So if it is this step change, Manuel, what you were leading to there is that the possibilities perhaps aren't endless, but they are unknown. How do we put rules, how do we put limits, on something like that when we don't know all the possibilities? That, to me, is the ethical dilemma. And at the same time, I agree there certainly must be rules. I just don't know where to put them right now.
Manuel Hüttl (07:58):
And I would say it's probably a never-ending conversation, exactly what you were saying, Doug. If we look at a different discipline, data protection and privacy, we have lots of different regulations, but the courts have a tough time enforcing them. And that's just because when we invented the internet, when we invented this beast, when we invented social media, it was meant to be a borderless communication tool with all its potential. But then we have different cultures nation by nation, we have different regulations, we have different laws, and we are still struggling to find the ultima ratio. And on AI, that will never ever happen either. It's interesting that we have this podcast today, because five days ago the EU ratified the AI Act, and that's a first attempt to come up with an official regulation on AI. But then it always makes me smile, because the nations have now committed to that regulation, but it will be out there in 2026 at the earliest. And look at what has already happened since generative AI arrived in November 2022. I think the problem with regulations is that they are already old when they are launched.
Doug Downs (09:40):
So then, let me back up one second, Christian, should there be rules and regulations in the first place? Should we have any, at least at this point in time?
Christian Stiegler (09:53):
I would argue it's almost a little bit too early. I mean, as Manuel rightly said, with all the downsides of the European AI Act, it comes very late. It takes two years till all the stages are implemented, and it has at its core something that is typical for how humankind works when it comes to technology, which is actually fear. So everybody kind of knows a little bit what AI is. A lot comes from narratives, from movies. It doesn't matter if you've seen a Kubrick movie or something more recent; everybody out there has a little bit of a notion of what AI could be. But that is the problem. These are all kind of dystopian narratives that basically talk about the end of the world as we know it. And at the same time, it mixes a lot of things that could have something to do with AI, and very often do, but not necessarily always.
(10:51):
So you have machine learning, you have robotics, you have the big umbrella term algorithms, and that all gets mixed up in this bag called AI. And now you have this huge kind of union-wide protection in the European Union. But by the way, it's not the only one. The US does regulation more state-wise; it's much, much more fragmented. You have China, a huge player in the industry, which of course is very, very state regulated, so a very different approach compared to the European Union. So there are regulations, there are rules, and there will be penalties. The things that we talk about today, generative AI, come under the more general AI usage, so with very minimal risk. Those companies that use generative AI will probably only have transparency obligations. And we've seen, and Manuel has talked about data protection, we've seen what happened with Facebook, now called Meta. We've seen what happened with Google, all those huge players that actually would have a certain kind of transparency obligation based on data protection in the European Union. In the end, there is not much enforcement that you can do.
Doug Downs (12:08):
And this is really interesting, because more and more these algorithms lead to a personalized form of communication, where an email could say, hey, Manuel, we know you're in the market for green checked shirts and framed glasses, and we've got a special on. And you know what? That's exactly what we want as consumers. It may freak me out that my phone is listening to me and that later today I'm going to get an ad for green checked shirts across my social media. It'll freak me out later today, but I secretly want that. Is that a good thing? And how does AI contribute to that?
Manuel Hüttl (12:54):
Well, I think we cannot stop technology. We created that beast and now we have to live with it. But what you mentioned points to a fundamental difference between how you guys in the US, for instance, treat data protection and privacy versus the European population. Because in Europe, your data belongs to you, and someone can only use it if you personally give consent. And this is what the GDPR is all about. So that means that the crawling machines need to do a different job in Europe than they do in the US. And see, this is what I meant before by saying that when we invented the internet, it was not meant to deal with the different legal situations in each country. And then we have Europe, and then we have different countries in Europe, because the regulations may be different in France, in Germany, in Switzerland, in Austria. And they are.
(14:07):
So I personally would say that having general rules in place, per what Christian was saying, is needed, but a little bit more on the meta level. And on the micro level, I think one thing will replace the usage of regulations. Because, as I pointed out, regulations become old once they're launched. They're not made to cope with the speed of today's business, and I'm part of that business. I need to say, speed is the new currency, and no one can deny that. So I think one approach is self-regulation: it's in your hands as a company to show responsibility, to show authenticity, how you as a company are dealing with this kind of challenge, and you give yourself a self-regulation. To me, this is a way more flexible way to work. And there are of course already some ethical guidelines out there for our business, for instance the principles from the PRCA, there are guidelines from the PR Council, there are local guidelines. But there are no regulations; they're only guidelines. So you do not get sued if you do not comply with them.
Doug Downs (15:53):
Right. And I wish I shared your optimism on self-regulation amongst big companies, because on this side of the pond, I don't know if that's going to work. Does it create an unlevel playing field, Christian, when you have this kind of situation with different jurisdictions, where Austria has its set of regulations, Germany has its set, the UK has its set, and so on? It feels like there are different rules, but the playing field itself is quite global.
Christian Stiegler (16:26):
Absolutely. And the big players, they will have an advantage. They already do have an advantage over the smaller ones, and those big players are very, very strongly US focused. So to bring that context back in when we talk about rules and regulations, as Manuel rightly pointed out, it's almost like you have a speed limit and nobody knows how to drive a car. These kinds of rules on how to use AI come very early, and most people don't even know how to use it yet on this global mass level. So first of all, people would need to learn that, and then in the next step you can make rules where it doesn't work out. And obviously, again, it comes from this point of fear of what could be misused. Well, I also don't believe much in that, as Manuel knows, simply because we've seen in the past that ethical concerns in particular are something that sometimes get left off a little bit until something happens. The damage that social media in particular has done to younger generations, to the way that we perceive our bodies, the way we perceive ourselves.
(17:48):
This is something you see only 15, 20, 30 years later. This is not something immediate. It's not like posting a wrong picture and seeing the result right away; it's something that happens later on. And these kinds of ethical dilemmas you have already. Google was a very prominent example a couple of weeks ago. They obviously try to promote diversity in everything they do, and they try to show that the data their generative AI is working with is very diverse. But then you look up something like, well, show me pictures of soldiers from the Second World War, and you get African-American soldiers in Nazi uniforms. That obviously is not correct, right? So this is a huge problem, and there's no discourse about it. It practically gets left off.
Doug Downs (18:36):
Well, to that idea of self-regulation, Manuel, is it possible that Gen Z might be the generation that could lead this? A generation that grew up with the internet, with social media; it's part of the fabric of who they are. It could well be that we, the slightly older generation, don't know how to self-regulate when it comes to AI. But perhaps the younger generation might have a better sense of it.
Manuel Hüttl (19:05):
But I think we have a responsibility, with all our knowledge, with all our know-how, with all our transparency about where technology can go wrong, in terms of educating, also in terms of creating media competence for our younger generation. And I think it's truly key to make them aware of the boundaries, the barriers, the problems, the challenges that new technology can bring along. That, to me, is one of our biggest tasks, and this is a responsibility that we all have toward our younger generation. And I mean, look at the elections. Look at the elections in Russia, look at the elections you guys potentially have later in the year, and look at fake news and all of these challenges and dangers to our modern information society. And I think this relates not only to media competence, but also to democracy competence.
(20:19):
Because media has become such a commodity these days, and the usage of this technology is a commodity. And now we have this automated technology out there. And by the way, if we talk about marking sources in an AI text and all of this, we all may wonder, if we meet a year from now, where we'll be with things and how automated these things are. And I think as human beings we definitely need to fight hard and battle for the things a machine can never ever be held accountable for: to provide context, to be disruptive, to question things and to provide perspective. This is all human power. This is not being done by a machine, ever. So I think these are the most important things that we also need to promote through our self-regulation.
Doug Downs (21:30):
You know what it feels like, and you talked about it, Christian, the sense of danger that will drive regulations. It does feel, though, like we're waiting for a Titanic moment before we introduce regulations and rules. Does it feel the same to you?
Christian Stiegler (21:48):
I think the problem we have with this is you have to see the bigger picture of it. You have to start in educational facilities, in schools. You have to teach young people: what can technology provide for your lives? What do you have to do? I mean, no one can live without social media these days. It's almost, as Manuel said, a commodity, but also a necessity for a lot of things. If you want to apply for a job and you're not visible on Google, because obviously HR will Google you after a while, it's almost like you don't exist. So it's a necessity, that's right. But at the same time, you're creating some sort of a digital mirror of yourself, which is not you, but just this kind of ideal self that you show. So there are a lot of traps in there. You need someone like Taylor Swift in our society.
(22:37):
Remember when the US came around to the whole idea of deepfakes through AI? Because there were a lot of deepfake porn images of Taylor Swift generated through generative AI. This can happen to anyone, but you need someone on that level to have a problem before regulators, et cetera, and society talk about it. But as you mentioned with the Titanic moment, we shouldn't wait for this, right? Because we should know: obviously these tools generate images, so obviously they also generate inappropriate images. And I'm not even talking about things like Sora, a text-to-video generator from OpenAI, where you can easily make videos just by typing in some text prompts.
Doug Downs (23:26):
This is part of the reason I love podcasts: it's humans having a conversation. The irony being that gen AI...
Christian Stiegler (23:33):
Well, you think that you...
Doug Downs (23:34):
Can now generate our voices perfectly, with our tone. And the three of us were human today. No AI was sacrificed in the creation of this podcast. Gentlemen, thank you so much, danke schön, for your time today. I appreciate it.
Manuel Hüttl (23:53):
Thank you very much for having us be part of this very important topic.
Christian Stiegler (23:58):
Thank you, Doug. It was a wonderful time.
Doug Downs (24:02):
If you'd like to send a message to my guests, Professor Christian Stiegler and Manuel Hüttl, we've got some of their contact information in the show notes. Stories and Strategies is a co-production of JGR Communications and Stories and Strategies Podcasts. I want to take a second of your time to encourage you to leave a rating for this podcast in Apple or Spotify, if you're listening on one of those apps. Ironically, there is no AI benefit: ratings don't feed the algorithms, but they do appeal to the human eye, and only the human eye, and they encourage more people to listen. And lastly, as always, do us a favour: forward this episode to one friend. Thanks for listening.