I joined Kait Borsay on Times Radio to discuss OpenAI’s new “Industrial Policy for the Intelligence Age” paper. Unsurprisingly, the headlines have locked onto one line: Sam Altman’s call for a 32‑hour, four‑day week as an “efficiency dividend” from AI.
On air, I argued that focusing on that soundbite masks three much deeper issues.
1. A four‑day week is politics, not a product feature
OpenAI can’t hand us a shorter week.
Yes, AI can make some tasks faster. But as Kait highlighted, Harvard Business Review and ActiveTrack data show a familiar pattern: AI tools often intensify work rather than reduce it – people are simply given more to do: more emails, more messages, higher targets.
We’ve been here before.
Mobile phones were meant to let us “work from anywhere”. Many of us just ended up working everywhere.
If we ever do get a four‑day week, it will be because governments, unions and employers hard‑wire it into law and contracts – not because a lab in San Francisco suggested it in a policy paper.
2. You can’t fix broken work by pouring AI on top
I said to Kait that I’ve spent the last three years talking to senior leaders about AI, and almost all of them are hampered by legacy processes.
Right now, in most organisations, AI is being layered on top of broken workflows – which is why we see efficiency gains turning into workload intensification, not time back.
To make AI genuinely work for people, we need a complete re‑imagination of how work gets done, task by task, not just “add ChatGPT and stir”.
3. The policy blueprint is thin on skin in the game
On paper, I actually like many of OpenAI’s ideas: robot/“AI” taxes, modernising the tax base, public wealth funds, portable benefits.
But as I said on Times Radio, if this 13‑page blueprint had come from an independent policy think tank, I wouldn’t have blinked.
The fact that it comes from one of the major AI players matters. They warn against AI power being concentrated “in the hands of a few” while being exactly one of those few.
They suggest governments shift tax burdens, rebuild safety nets and create new institutions – but there’s very little about OpenAI itself putting serious money on the table or accepting hard constraints on its own behaviour.
Beyond a line about up to $100k in grants or some API credits for good ideas, there’s no sense of “we will pay this share of the bill” or “we will accept these limits even if it slows us down”.
That’s why I told Kait this feels more like a clever marketing and positioning exercise than a genuine policy doctrine.
On the show, I made the point that AI now demands better humans.
If AI surfaces more information, faster, our job is not to switch our brains off. It’s to:
- Ask where the answer came from
- Challenge the “source of truth”
- Apply critical thinking before we act
Being Digitally Curious means not just asking “what can AI do?”, but “what is it built on, who does it serve, and what happens if it’s wrong?”