AI and Democracy – BBC Radio Ulster Talkback Show

  • 01/10/2025
  • BBC Radio Ulster

Andrew appeared live on the BBC Radio Ulster Talkback programme hosted by William Crawley alongside Professor Gina Neff to discuss how AI will shape the future of politics and democratic governance.

Andrew shared critical insights on how artificial intelligence is reshaping our democratic landscape. Key takeaways:

The AI Challenge:

  • Technology moves at unprecedented speed and scale
  • Misinformation and deepfakes pose significant democratic risks
  • Citizen curiosity and understanding are our best defense

Critical Observations:

  • AI development is concentrated among a few powerful corporations
  • Deepfake technologies are becoming alarmingly sophisticated
  • Authoritarian regimes can leverage AI to reinforce control

Practical Recommendations:

  • Stay Digitally Curious
  • Develop critical thinking skills
  • Use verification methods (like family passwords)
  • Understand AI’s capabilities and limitations

The Bottom Line: We’re not passive recipients of technology. By being informed and proactive, we can shape AI’s role in our democratic future.

You can read a full transcript below.

We are continuing now with our series on AI and how it may shape the future of government. You’ll remember two weeks ago we started with speech writing and how political messages could be shaped with the help of AI tools. That one was prompted by a controversy here in Northern Ireland about whether a Stormont Minister’s team had actually used AI tools to write a speech he gave in the Assembly. Then last week, we looked at how AI is already being used by political parties around the world in election campaigning. Today we are turning to a much larger question, a much bigger question, a more fundamental question, and it’s whether AI is likely to affect democracy itself. Democracy is something we often take for granted, but it is under threat in many parts of the world. Sometimes that’s called democratic backsliding. Sometimes the threats go much further and deeper than that, with the use of AI tools by political parties and sometimes foreign countries to manipulate voters to deliver the results that they want. So this isn’t just the future we’re talking about today. This is already happening. Let’s get into that now. We’ll take your calls of course as well. And the question I’m putting is a very blunt one, but it’s this: is AI a friend to democracy or a potential enemy? What do you think of that? 03030 8055 55

Gina Neff is with us, Professor of Responsible AI at Queen Mary University of London. And Andrew Grill is the author of Digitally Curious: Your Guide to Navigating the Future of AI and All Things Tech. Welcome to both of you. And I’m assuming, Gina, like with all things technological, there are plus sides and negative sides. There are benefits and there are threats. You know, that’s how it always works out. But when it comes to the question of democracy, are you optimistic for the future of an AI democracy, or a bit more pessimistic?

I’m pessimistic. We have a lot of work to do to get to an optimistic future if we want that. These technologies represent really powerful forces of centralisation and control, making it hard for people to have a voice in decisions about them. They’re fuelling ideologies of unchecked economic growth, so they’re saying we should go for growth in a particular kind of way, rather than other paths. The technologies are prioritising efficiency over accountability, and that’s one vision of how people might want their futures to be. But we’ve got a lot of work to do to get those other visions on the plate. And then finally, what AI is promising is absolute control coupled with unaccountable power, and that’s where I think we’ve got a lot of work to do to strengthen democratic tendencies if we want to have a positive future.

Some scary futures there. Andrew, where are you on that spectrum of optimism to pessimism?

Well, I’m probably a little bit more optimistic, but I’m also a realist. And I think the challenge is that unless you’re curious as a citizen, unless you understand what the technology can and can’t do, then that’s where it’s going to ride roughshod over us all. We talked before about the issue of writing speeches with AI and those sorts of things. I think there are lots and lots of people who are getting positive benefits out of AI, but the jury is still out, I think, in terms of whether it’s a force for good. Part of it is that not everyone knows where it’s going and how to use it properly.

So we can’t really talk about the world these days at all, can we, without talking about social media, and so much of politics is driven by media and social media. Gina, we know what deepfakes are. We know what misinformation has looked like. We know that there are bots out there being run by companies for political parties. We know that there are governments with bots out there trying to divide us, to polarise us, to spread misinformation. What’s the state of play in all of that right now?

I think that’s what everyone’s really worried about, that we got something wrong in the social media age. We didn’t get on the front foot to come up with a really good plan for the kind of social media that we want. And therefore everyone’s got a little bit of FOMO about missing out on the regulation that will help us get AI right. You know, 2024 was called the year of democratic elections, and there were 50 countries that held competitive national elections. An independent scientific group found 215 incidents where Gen AI played a role in misinformation. Fully four fifths of the countries that had elections in 2024 had incidents, and the vast majority of those incidents involved Gen AI content being created.

Yeah, exactly, deepfake audio messages or images. Almost half of those incidents had no known source, so that information could have been coming from a country or from a political party, people didn’t know, and almost two thirds of those incidents, in 80% of countries, had a negative or harmful impact on the election. So we’ve got bad AI slop coming into our elections.

But aren’t all countries nowadays doing it? It’s not just there’s some bad actors and the rest are good people with a rather, you know, sincere approach to democracy. Don’t all countries get up to this kind of dirty trick stuff?

Well, listen, we released a report that called on political parties to do the right thing, and political parties have a lot… yeah, exactly. Well, exactly. But, you know, we have laws in the UK that say who can do political advertising, and Gen AI is pushing at the boundaries of that, making it possible for foreign actors and others to reach audiences that would previously have been much harder to reach.

Interesting text here from someone. Andrew, just respond to this, because I can understand why the person is texting this, but they say: look, what’s the big deal here? You know your own mind and you mark your ballot paper. It’s as simple as that. In the world of AI, is it as simple as that?

Well, I don’t think it is. With all the misinformation that’s happening, and the fact that you mentioned social media, you now have the ability to broadcast from your own bedroom, and if you’re broadcasting the wrong information and it looks legitimate… and we’ve had instances even in the last week of world leaders reposting things that were done by AI and thinking maybe it was their own content. I think that’s where it comes down to, are you making…

It’s Donald Trump, actually, isn’t it?

I think you’re right there. Donald Trump posted a video. I’m not sure if he thought it was his own content or thought it was funny, but he posted a video featuring himself being introduced by his daughter-in-law. So people will be impacted by that. It’s much easier now. What AI does really well is things at speed and scale, and now that you’ve got companies like Google and others that can do video that is essentially broadcast quality, even I am questioning what I’m seeing. Is it real or not?

Yeah, and that’s an issue for everyone. I’m just assuming it’s not real now, as an operating hypothesis, just utter scepticism, until I’ve found evidence that it is.

But that’s about being curious. How do you then sense-check it? How are you, as an average citizen, able to go and check whether that’s real or not? That’s an extra load on the average citizen. It absolutely is, it’s energy.

And most people, Gina, don’t have that energy. They just take what they get.

Well, absolutely, and we have this challenge, right? Because the point of a lot of misinformation is not to change voters’ minds, as the listener who texted in said; it’s literally to disenfranchise us and cause us to say, this is too complicated, or I can’t understand it, or I’m just going to stay at home. And that’s where we have to kind of change the goalposts, strengthening our ability to have those important democratic conversations about what we want as a society. We have to strengthen that rather than simply say, well, you know, we’ll just sit this one out. We have to double down on that.

And we’ve seen it in real elections. We’ve seen it recently with Moldova. We’ve seen it with Estonia. We’ve seen it in the United States. We’ve seen it with Brexit, as reported at the time as well. There are foreign actors who want to divide us, polarise us, confuse us, make us stay at home, make us vote in a certain way. They’re out there. So we might say to that listener, you may think you’re voting with your own mind, but how did you get your mind on this issue? How was that decision you took shaped? Was it simply you? Are you an island in the world? Or are you part of the digital ocean, like the rest of us, even if you don’t realise you’re part of that ocean? Andrew, more about deepfakes, please, because we’re going to see a lot more of this in the future. How dangerous is it for democracy?

I think it’s now even more dangerous, because in even the last 12 months the deepfakes have become so good. Voice has been there for a while, but now, with video, you can basically deepfake anything. It is a real worry because, as I said, it’s this extra load, and we will probably see other things happening with deepfakes. We’re going to see politicians, we’re going to see people of influence. The challenge is, you may even see your own family members being faked, asking you to vote a certain way, and you really believe that it’s them.

And then if you catch a politician in a classic old-fashioned journalistic sting being corrupt, it’s now an available option for the politician to say, that’s just a deepfake, you didn’t catch me, it’s just AI, and we lose that accountability as well. Wayne, some more on this point. Hi Wayne.

Good afternoon, William. I have a positive note. The way AI works, in my opinion, is that it helps a lot of people with dyslexia, who maybe have learning difficulties. They can’t write anything, but they can say it, and the machine types it, fills in the blanks, puts in the question marks, does all of that. And then it gives them the confidence to fill in things, to answer questions, to get into important debates, because not everybody can write or spell or put the right punctuation in things. But it’s great in that respect. It gives them encouragement to go on, go forward, because it gives them that extra bit of help.

To me, it’s a good upside. It’s a great upside. Yeah, for people who struggle it’s a very important thing. You know what I mean? Wayne, respect. Very good. Thank you very much. And obviously people with visual impairment as well. AI is doing amazing things to equalise our world in all kinds of ways. Gina, we can’t be just completely negative about it, right?

Absolutely. And I think that’s the hope. You know, where I do have optimism is if we get involved in the kinds of questions about what kind of society we want in the future, what kinds of guardrails we need to have in place, what kinds of ways we need to make sure that these technologies are fair and they’re working for us. That’s where I think we can get really great outcomes. Our team just did a report about how AI technologies can help people who use British Sign Language, and the community is really clear that their voices need to be heard in the procurement and the choices of using these tools and technologies. They shouldn’t simply be written over with AI. So I think the more we can get people involved and invested in building that optimistic future, the better off we will all be.

Andrew, we need to talk about corporate power, don’t we? There’s an awful lot of money, trillions, in the world of AI development these days, and there are a lot of corporate bigwigs, billionaires and, in the future, trillionaires, who will have an awful lot of power in shaping our world.

Yeah, it’s been concentrated in these few firms we’ve seen, and the sort of centre of gravity is in the US. So Oracle announced they’re going to be paid $400 billion or something, and OpenAI said they’re spending $300 billion on servers. This is a lot of money concentrated in one place. I was asked by someone the other day at a talk I was giving, why are they spending so much money? And I think we have never seen a technology like this. I’ve been involved in this game for 30 years. I’m an engineer, I’m a technologist. I just haven’t seen a technology like this go ahead in leaps and bounds. So I think a lot of people are making some big bets, but unfortunately, to play in AI you have to have deep pockets and deep money, and there aren’t many of those people around. So that’s where the challenge will be, concentrating that power with only a few companies.

It’s always been a thing, you know, billionaires who own newspaper groups can influence power, can influence politics, but we’re into a four-dimensional version of that with AI. A question here coming through for both of you, because it’s an everyday experience for people wondering these days, if they’re on social media, is this a bot I’m interacting with, or is this a real person? Gina, what do you do to try to answer that question?

That question? I had the experience this summer of interacting completely with a bot, with an AI agent on WhatsApp in another country, in another language, where I asked in English for an appointment. I was directed, got the appointment in 30 minutes, I was told, here’s your address, here’s your appointment. These kinds of AI tools are rolling out on platforms like WhatsApp in other countries. That’s useful. Yeah, it’s incredibly useful, right? So I think when it’s transparent and I know that I’m interacting with a bot, that can be really good.
I hate seeing it when it comes to my social media feed, and it looks like something that’s been written by a friend of mine, but it has actually been a bot that is impersonating them.

And you’re not going to be able to tell the difference, are you?

No. I think that, you know, already we’re starting to see it in some of these deepfakes. I get asked a lot by fact checkers, you know, how we know some picture or some video is a deepfake, and they’re getting very good. And so I think we have a lot to do to make sure that we’re holding on to our grip on reality.

Yeah, and Grok got in trouble the other day too, Andrew. Well, the other month, when it turned a bit Nazi, and it turned out that the algorithm was at fault. So algorithms are behind all of this action, and the algorithm can show a moral and a political bias in what it offers people.

Absolutely, and who is responsible for that source of truth? Just going back to what Gina said and your point about how can we tell whether this is AI or not, or whether it’s a bot, what’s becoming more and more important with these deepfakes: if someone in your family calls you, or you think it’s them, and they’re asking you for money, what I suggest to my clients is have a family password. Have a phrase that only you and your dearest know. So if someone rings up, hey Dad, I’ve lost my phone, I need you to transfer 100 pounds, you ask: what’s the family password? They’ll hang up. We actually need to use old-school technology to beat the cyber criminals who are leveraging AI.

And would you think that’ll do it, or will the AI find a way around that too?

Well, you may need to change your family password once it’s been used, but that seems to be a way to start pushing back against these AI robots.
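The family password Andrew describes is essentially a shared secret checked out of band. As a rough, hypothetical sketch of the idea (not something discussed on air), a Python version might store only a salted hash of the phrase and compare a caller’s answer against it:

```python
import hashlib
import hmac
import os

def make_record(passphrase: str) -> tuple[bytes, bytes]:
    # Store a random salt plus a slow, salted hash -- never the phrase itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(claimed: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the hash for the caller's claimed phrase and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", claimed.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Hypothetical usage: agree the phrase once in person, then challenge any unexpected request.
salt, digest = make_record("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("a guessed phrase", salt, digest))               # False
```

The point is the same one made on air: the secret is agreed in person and never appears in any channel an AI impersonator can see or clone.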

I know some people are very keen, Gina, on the possibility of online voting in the future rather than old-fashioned bits of paper, or pulling a lever in some states in America, or whatever. But does the generative AI revolution that we’re going through make that future more or less likely? Are politicians going to be more open or less open to the idea of using tech for voting?

Well, listen, voting in democracies is about having a set of rules that are clear to everybody. And when those rules are clear and people play by them, then we feel like the elections are free and fair. We don’t have those rules in place for what Gen AI means and how to use it. So I think the challenge will be a little less about electronic voting versus ballot papers and so on, and more about how we make sure that we’ve got rules that everybody feels serve the fairness of the competition.

I saw one study just the other day, Andrew, suggesting that 70% of the world’s population currently live in an authoritarian country or regime of some kind, which is a scary thought. Is AI going to challenge that or make it worse?

I think if you’re in control of the AI and the regulations of that country, you have even more power. And so that’s why, coming back to being curious as a citizen, you need to understand what it can and can’t do. But if you’re in a country that’s restricted, that’s going to be very, very challenging. And some of the power we’re seeing, there’s talk about what we call sovereign AI, so a country has control of its own AI capability, which means that it conforms to that country’s rules and those sorts of things, which means what comes out of it, and could be used in policy and decision making, could be amplifying the country’s view.

Gina, I don’t know if you guys intended this, but you’ve made me more nervous about democracy and AI at the end of this conversation, I think.

My hope is that we can inspire people to get involved.

Yeah. Thank you both. Get involved. Really appreciate it. Gina Neff, who is Professor of Responsible AI at Queen Mary University of London, and Andrew Grill, the author of Digitally Curious.

Andrew Grill Global AI Keynote Speaker, Leading Futurist, Bestselling Author, Brand Ambassador
Andrew Grill is a Global AI Keynote Speaker, Bestselling Author, Top 10 Futurist, and Former IBM Managing Partner with over 30 years’ experience helping organisations navigate the future of technology. He holds both a Master of Engineering and an MBA, combining technical expertise with business strategy.
  • Time: 12:40 - 13:00 (Europe/London)
