With the world’s first AI Safety Summit, hosted by the UK Government at Bletchley Park, now concluded, it is timely to reflect on what was discussed. Last month I was asked to speak at the EOS at Federated Hermes client day about the extreme harm risks AI might pose. In the summary video below, I look at what might constitute extreme harm.
I was pleased to read that the summit’s communiqué addressed these issues. Quoting from the statement:
“the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed. We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.”
While there was much posturing between the US and UK over “bragging rights” for AI supremacy, the potential safety risks are now out in the open and the public is more aware of them. This means both regulators and developers will need to take greater care in assessing and building these systems.
The role of an Actionable Futurist could not be more relevant in late 2023. Traditional futurists paint a picture of what the world might look like in 20, 30 or 50 years, and forward-thinking leaders such as Elon Musk predict that AI will “end jobs”. The clients I speak with, however, want to plan for 2024 or close the quarter. They won’t be working in their current role in 10 years, let alone 50, so they need to know what to do next, not what might happen in the distant future.
What I’ve found over the last 12 months is that I’m winning new engagements when clients want to understand the impacts of AI, because I provide near-term, highly actionable advice on what they need to do next.