BBC News on the issues with AI deepfakes

  • 29 January 2024
  • BBC News

Andrew appeared live on the BBC News Channel on Monday, 29 January 2024, to speak with Ben Thompson about the issues with AI deepfakes. The segment followed the controversy surrounding deepfaked images of Taylor Swift circulated on X (formerly Twitter) in January 2024, which led the platform to temporarily block searches for “Taylor Swift”.

Below are Andrew’s talking points for the segment:

The Taylor Swift AI fakes controversy has once again raised the issue of deepfakes – a technique that has been around for a while. With new generative AI tools now available to almost anyone, however, we can expect to see much more of this.

Established platforms already have robust controls in place, so it is unlikely that we will see deepfakes like the Taylor Swift images coming from well-known services; open-source generative AI tools, however, are already capable of producing them.

This makes the problem much harder to regulate, so the calls for regulation from the US Congress go only part of the way towards addressing the issue.

My concern is that with 65 countries and around 40% of the world’s population going to the polls this year, we should expect to see more deepfakes, and misinformation will become a real issue.

A few weeks ago, someone created a voice clone of President Joe Biden and used it to robocall voters – most likely with a tool available online for as little as £4 a month.

In mid-2023, Google’s top result for Dutch painter “Johannes Vermeer” was an AI-generated version of “Girl With a Pearl Earring” – so even Google was fooled into thinking this content was real.

Just as news organisations such as the BBC highlight misinformation and banks warn of financial scams, raising awareness among the general population will be critical, in tandem with technical innovation and regulation.

As an Actionable Futurist, I’m actively trying out these technologies to see what they are capable of. I’ve already developed a perfect clone of my voice, and I’m about to create a near-perfect video clone.

I’ll use these positively and disclose that AI has generated the content, but I fear not everyone will do the same.

So what can be done?

How will consumers know if they are seeing fake or manipulated content? Cryptographic watermarks can be embedded in AI-generated content – and reputable platforms should insist they are added.

Search engines and social platforms should then use these watermarks to label content as AI-generated – or as likely to have been generated by AI – allowing end users to make their own decisions about the validity of the source.
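As a rough illustration of how watermark-based labelling could work – a minimal sketch, not any platform’s actual implementation – the generator attaches a cryptographic tag to its output, and the platform verifies that tag before applying an “AI-generated” label. Industry efforts such as the C2PA content-credentials standard work along broadly similar lines, embedding signed provenance metadata in the media itself. The `sign_content` and `is_ai_generated` helpers and the shared key below are hypothetical:

```python
import hmac
import hashlib

# Hypothetical shared secret held by the AI tool vendor and the platform.
# Real schemes (e.g. C2PA content credentials) use public-key signatures
# and embed the mark in the media itself; this sketch only shows the idea.
SECRET_KEY = b"demo-key-not-for-production"


def sign_content(content: bytes) -> str:
    """Produce a provenance tag the generator attaches to its AI output."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def is_ai_generated(content: bytes, tag: str) -> bool:
    """Platform-side check: does the attached tag match this content?"""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    image_bytes = b"...raw bytes of a generated image..."
    tag = sign_content(image_bytes)           # added at generation time
    print(is_ai_generated(image_bytes, tag))  # True  -> label "AI-generated"
    print(is_ai_generated(b"edited", tag))    # False -> tag doesn't match
```

Note that a tag like this only proves content came from a cooperating tool; content from open-source models that skip the watermarking step would carry no tag at all, which is why labelling can only ever be part of the answer.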

Back in 2020, Channel 4’s alternative Christmas message featured a convincing deepfake of the Queen, created to raise awareness of the technology among a broader audience – watch below.