AI Job Loss Fears with Darryl Morris on Times Radio

  • 14/02/2026
  • Times Radio

Global AI Expert Andrew Grill joined Darryl Morris on Times Radio for an extended interview to explore a “doomsday” AI scenario: what if Mustafa Suleyman is right and most white-collar tasks are automated within 12–18 months?

Rather than arguing about the exact timeline, we asked: what would that world actually look like?

A few key ideas we discussed:

🍰 Jobs won’t vanish overnight – they’ll be sliced.
AI will first take the repetitive, low‑value tasks in roles like law, accounting, customer service and software development. The real disruption comes as enough slices are removed that roles must be fundamentally redesigned.

🏃‍♂️ Technology moves faster than organisations.
Even if AI can technically do the work, large organisations still have 20 years of processes, compliance and culture to unbundle. That’s the real brake on change.

❓Who owns the upside?
If companies cut large chunks of white‑collar work, we may see higher productivity and profits on paper—but also pressure on the middle class. That’s where ideas like universal basic income resurface, along with profound questions about how we tax, support and retrain people.

🅰🅸 AI as augmented intelligence, not artificial intelligence.
I encourage people to think of AI as “augmented intelligence”—humans plus AI. The goal should be to focus on what you love and automate the rest, freeing humans for judgment, creativity, care and empathy—things AI still cannot genuinely replicate.

❓🧠 Digital curiosity is now a career skill.
The biggest divide I see inside organisations is not between “technical” and “non-technical” people, but between the curious and the complacent. Those who actively experiment with AI, learn what it can and can’t do, and redesign their workflows are the ones future‑proofing their careers.

Andrew’s core belief:

We are not 18 months away from total upheaval—but the change is coming, and faster than many are comfortable with. That gives us a vital window to get curious, experiment, and deliberately reimagine how work gets done.

🔍 If you haven’t yet, start small: take one boring task this week and ask, “How could AI help me automate this?” Then build from there.

Interview Transcript

Darryl Morris 0:00
Lots of chat this week about AI, and that classic conversation about it replacing jobs – AI tools becoming more sophisticated, the prospect getting more and more serious. I think a lot of the conversations we have about AI are sort of moving on by the week. One week we can say, oh no, it can't do this task very well; the next week it can do it better than us. That's how quickly things are developing. This week, the head of Microsoft AI told the Financial Times podcast just how drastically it could change things, and just how quickly.

Mustafa Suleyman 0:29
I think that we're going to have human-level performance on most, if not all, professional tasks. So white-collar work where you're sitting down at a computer – either being, you know, a lawyer or an accountant or a project manager or a marketing person – most of those tasks will be fully automated by an AI within the next 12 to 18 months, and we can see this in software engineering. Many software engineers report that they are now using AI-assisted coding for the vast majority of their code production, which means that their role has shifted now to this meta-function of debugging, scrutinising, of doing the strategic stuff like architecting, of, you know, et cetera, et cetera, putting things into production. So it's a quite different relationship to the technology, and that's happened in the last six months.

Darryl Morris 1:17
That is Mustafa Suleyman. Okay, so let's park the whole debate about whether or not that's going to happen. Let's just take this at face value and assume that it does happen, and that it happens quite rapidly. What does that world look like if that prediction comes true? What does it mean for jobs, for the economy? What does it mean for politics? What does it mean for us as people – our friendships, our relationships, our lives? Let's role-play that out for a bit. We did that with Andrew Grill, who is an AI expert and the bestselling author of a book called Digitally Curious.

Andrew Grill 1:47
What will happen is jobs will actually not disappear immediately – they'll get sliced, so the parts that you can automate will be sliced away. And I think if we believe this doomsday scenario, it'll happen quicker than we thought. There was a paper by Matt Schumer, who likened it to what happened with Covid: we didn't believe what was happening in February 2020, and by the end of March 2020 we were locked down. So what you're describing is a Covid-like scenario, where it just happens so quickly it catches us unaware.

Darryl Morris 2:13
And so you do think that that is a possibility, actually?

Andrew Grill 2:17
Well, you said not to debate it. Yeah –

Darryl Morris 2:21
very quickly. God, yeah, god,

Andrew Grill 2:23
Okay, so if it is possible – and I don't think it is – I think what gets in the way is this. The Anthropic boss and Matt Schumer and others come at it from a very technical point of view: they're talking about coding going away very quickly, which is true from what I'm seeing now. I'm in the trenches every day. I go into large organisations around the world, and I'm called in not because they get AI, but because they're not sure what to do. So I'm seeing it through a very different lens. What I am seeing, though, is that people are saying, we don't understand what it can and can't do. And by the way, we've got 20 years of processes that we have to unbundle. So even if the most extreme scenario happened – the Covid example, everything happening in 18 months – how do you then change all the processes? The way we get work done also has to change just as quickly, because you have humans involved. It's not about going home with a laptop. It's about completely overhauling how we do customer care, how we do customer outreach, how we do our expense claims. There is so much to unbundle, I don't think anyone could do it in that time.

Darryl Morris 3:21
Okay, that's interesting. So let's park that. Let's assume you're wrong, then, for a moment –

Andrew Grill 3:27
have to be proven wrong sometimes.

Darryl Morris 3:28
Yeah, of course. Let's play with the possibility that you are wrong – or at least, maybe you're right on the speed of it, but this is where we end up eventually, right? Let's think about what that means in terms of businesses. What does it mean for companies as we know them today?

Andrew Grill 3:48
Well, it means that jobs look very, very different, because what has been automated – what we used to have to sit down and do by hand – is taken away from us. So we're then sitting there going, well, where do we fit in, and where do we use our judgement, and those sorts of things. That's a scary scenario, because we're not used to that; we're used to doing these things. I think what probably will happen is that the speed of adoption will actually speed up because of AI. Now, I said before that the barriers to entry are processes. What about if AI suddenly gets it right and is able to miraculously fix all those process issues and allow things to roll out very quickly? I think what will catch people unaware is understanding how to use the tools. I was one of the first in my friends group to have a mobile phone, back in 1994. Everyone said, why have you got this piece of plastic? And slowly, slowly, people got a mobile phone; they saw it was useful. I think we're going to have to assume, with this scenario, that everyone gets up to speed with AI really quickly. And again, from being in the trenches, I just don't see people having that aha moment where they go, oh my goodness, I didn't realise it could do that. The capabilities are growing so fast that even I can't catch up – every week there's a new feature or function. And I think I'm a smart person. I can't keep up.

Darryl Morris 5:00
Yeah, which also bleeds into the assumptions that you make, right? Because you think, oh well, I've tried to do that task with AI, I've tried to use it for this purpose, and it wasn't very good, or it got it wrong – whereas actually that's changing really rapidly too. Let's imagine, then, the sort of mass job losses that the boss of Anthropic is describing: fewer people needed in the process, many, many people laid off, businesses able to shed people and save costs. How could that filter through to the wider economy?

Andrew Grill 5:33
Well, if lots of companies slice out the bottom of white-collar jobs at once, then through fewer salaries and less spending you get a brutal squeeze on the middle class, even as profits and productivity look great on paper. The big question is, who owns the upside from that slicing? Are we curious enough, collectively, to design policies where we actually do something for those displaced workers? Because if we have all this idle time, what are we going to do with it if the robots are doing it all for us?

Darryl Morris 5:57
Okay, so we've got idle time, but we've also presumably not got cash.

Andrew Grill 6:04
Well, people have also said, is it time for a universal basic income? Because if the robots are doing everything, we don't have to do everything. You know, Elon Musk has said that in the future salaries won't exist, because jobs won't exist and people won't need them. I think that humans are crazily drawn to work of some sort; they like being around each other. To use the Covid example again: I went to an event last night, and I really enjoyed being in a room with inquiring minds debating topics – ironically, about AI. In lockdown you couldn't do that. I think we would lose that, and we would miss that, and we would redesign the way work gets done to make sure there is agency for humans interacting with other humans – and maybe it looks very different. But let's go back a bit and assume that your hypothesis is right. A lot of the tasks in jobs today are pretty boring. Think of the parts of your job that you don't enjoy doing but have to do. In a world where that gets removed, I'd actually be quite happy not to be doing a drudgery task. And if you trained to be a lawyer and you spend your life going through NDAs, you'd probably think, when am I going to start to use what I've been trained for? So maybe some of the low-level tasks should go away; they should be candidates for process re-engineering, and we reimagine how work gets done.

Darryl Morris 7:25
But some of those tasks, though, are presumably the backbone of what people do, right? And in that piece that went sort of semi-viral this week, the author quoted the managing partner of a legal firm who was basically allowing AI to do the job of some of the junior associates – and it was doing it better than some of the junior associates. It was much more able to construct arguments, look through contracts and documents, and find flaws and risks and all that sort of stuff. And that is the backbone of what some people do, so surely we're going to lose some people down the cracks of that in our imagined future.

Andrew Grill 8:03
Well, what AI does really well is things at speed and scale. So the lawyer example in this article – which is actually a really good example – is basically saying, you know, I don't have time to read through every judgement, every case that's ever happened in the firm or with my clients, whereas AI can do this very quickly. And I think it actually points to the fact that this lawyer is being very smart. This lawyer is getting ahead of the curve: they're basically understanding what it can do, today and tomorrow, to protect their job. What we're going to see a lot of is what I call information asymmetry, where people like you and I who take the time to become really, really curious and understand the technology are far more prepared if this doomsday scenario happens, because they are at the forefront. The number of people I speak to who don't know some of the very simple things that AI can do – let me give you an example. I spoke to an organisation in the north of England last year, probably the most cynical CEO I've ever met. Before I went into the company, I said, can you tell me a bit more about your company? Oh yeah, we've actually just done a SWOT analysis – we've reviewed the whole company, all 17 departments, and we've done a spreadsheet. I said, can I have that spreadsheet? So when I went to present, I'd actually put the spreadsheet into AI, and I'd come up with something they could do with AI for all 17 departments. And I asked the room, who was responsible for analysing this SWOT analysis? Chris put his hand up. How long did it take you to do that? Ten days. I said, Chris, I've got some bad news for you: while I was ironing my shirt to come here this morning, I did it in two minutes. The room then went, I didn't know we could do that. I now know that that firm, after I went in and explained these very simple things you can do with AI, have now gone, why don't we do this? Why don't we run all the HR feedback survey forms through it?

So it sometimes takes someone like me to give you a little push to accelerate adoption and understanding, and then this information asymmetry becomes less, because you know what it can do – just as that lawyer does.

Darryl Morris 9:52
And presumably that's going to be okay for the people who are digitally literate, who can adapt, but there are going to be vast swathes of people – whole swathes of jobs – for whom that's really difficult.

Andrew Grill 10:05
Well, I disagree, because

Darryl Morris 10:06
Frankly – or just, plainly – companies need less staff, and therefore there are going to be fewer people employed. Is that not a basic equation here?

Andrew Grill 10:14
Well, it shows you that some of the roles that are there, maybe we never needed – because if we could automate them... Someone who has to basically photocopy expense claims probably says, I really don't like this part of my job; I didn't sign up to photocopy expense claims. They probably wish that wasn't part of the role. If you could digitise and automate that, they could probably be doing something they enjoy far more. So I think it's actually going to show up where we have inefficiencies in the way jobs get done today.

Darryl Morris 10:43
Okay, so is that is our imagined future? You’re saying that we are reimagining what work is. Is there going to be work for everybody?

Andrew Grill 10:54
That's a two-part question. I think we are absolutely having to reimagine work and reimagine the way work gets done. AI, people know, means artificial intelligence. After today, I want you to think of it as meaning augmented intelligence: humans plus AI. I gave this quote on stage a couple of weeks ago to a group of hairdressers who were concerned about AI: focus on what you love and automate the rest. So think about the parts of your job you love, where you can really apply your skill and learning – and if you automate the rest, that will actually allow a reimagination of how the whole company performs. You may be more profitable.

Darryl Morris 11:29
Okay, let's return, then, to the much more doomsday scenario that we're role-playing here, right? Let's imagine that some of the more extreme ends of the predictions do come true: that we do shed jobs, that companies are able to employ fewer people, and that the very nature of the workforce – perhaps the rhythms of our lives – changes fundamentally. You mentioned something about universal basic income earlier. Do you think a government intervention like that is going to be necessary to plug the gap for some people?

Andrew Grill 12:02
I've been thinking about this, and I want to step back for a bit. The way governments operate is they run on a political system. Someone's in for three, four, five years, and towards the end of the term they try to make sure they get re-elected. So there's not a lot of forward planning into the future, because of the way jobs get done and governments run. Imagine if you applied AI to the notion of government – like I said before, speed and scale. You fed into an AI every decision that's ever been made about tax, about jobs, about unemployment, and you asked the mega question: what's best for this country? And you took out the notion of politics and terms of government and things like that. Imagine if AI, looking at everything it knew, could come back and say, you know what, this is the best way to tax people, or to provide credit – and different countries do it differently. If we could train AI on that problem, we might have a very different view of how we govern and how we actually raise money to pay for government services.

Darryl Morris 13:00
Do you think it would suggest a universal basic income in a scenario where it's considering a world in which AI tools have laid enough people off – made work less secure, less necessary for some people? Would it suggest a UBI?

Andrew Grill 13:19
It probably would have it as one of the options, but it would probably also say, here are the challenges, based on what it's looked at before. It could run models and scenarios to see how many people would want to claim it and what it would cost the government. But I think at the moment no one could actually model that, because it's an impossible question to answer.

Darryl Morris 13:37
Okay, what about us as people, then? We've considered what might impact and change the nature of our work – and work is a really central part of our lives, isn't it? What sort of impact would the mass-upheaval scenario have on us and our sense of purpose and identity as people?

Andrew Grill 13:55
Well, I think what we've found over the years is that work is very much where many of us find purpose, community and self-worth. We go to work to meet and contribute, as a community and as people. And I think taking away the visible parts of a job and leaving humans to supervise machines could feel deeply demeaning – I don't think I'd like to do that. I'm keen to understand, though, how we deliberately protect and grow the parts of work that are incredibly human. There are things that I believe AI will never replace: proper judgement, care, creativity and empathy. Every AI expert I've spoken to says it will never really feel empathy. You can fake it – I can say, Darryl, I love you, and you might believe me, especially on Valentine's Day, but I'm not being really honest, and I'm not really showing empathy. So if there are parts of human nature that can never be replicated by a machine, then that is the role for humans. And maybe AI will demand better humans, who make better judgements and use our empathy, creativity and care in a better way.

Darryl Morris 14:53
There's also a risk, though, isn't there, that it does what rapid technological change has already done – basically since the birth of the internet, really – which is to destabilise us a bit? Technology has influenced our communities, the way they look, the way they feel, the rhythms of our lives, and ultimately our politics. We've talked a lot in our politics in the last decade about control, about taking back control, about people feeling unsteadied and unstable. And surely that's going to happen in our doomsday scenario, Andrew – that happens at scale. Yeah.

Andrew Grill 15:29
And also, in the doomsday scenario, this has happened because three or four companies have the control. We talked before about Anthropic, and there's OpenAI and Google and Amazon and all of those. At the moment they seem to control what's going on with AI, and if AI is going to be – in your scenario – catastrophically bad for humanity, do we need to look at those companies to see if they're making the right decisions for the global economy?

Darryl Morris 15:56
Okay – how much of what you've just said do you believe?

Andrew Grill 16:02
It depends – we'd have to go back and replay the tape to see which parts. I certainly don't believe we're 18 months away from the doomsday scenario.

Darryl Morris 16:10
So the speed – the speed of the change and the upheaval, and the world looking and feeling a very different place, the purpose of humans being very different. You think that all of those things are possible, but not necessarily at the speed that we're talking about?

Andrew Grill 16:25
And because the speed won't be as rapid, humans will have more time to look at it. I wrote this book called Digitally Curious, and I think what's required in everyone is curiosity – and especially when it comes to AI, digital curiosity: understanding what it means. I've mentioned a couple of times going into organisations where they say, we don't know what it can do. So if you believe me that it's not going to be 18 months – it might be three years before the real impacts are felt – then we have a three-year head start. We know that if we'd known then what we know now about Covid, we would have looked at things very, very differently, because we now know that we are in command and control of our own destiny. Yes, it will happen at speed, but I think we've got some time. If you're in a regulated industry and you have to sign things off and get a legal judgement, those things won't go away. The laws are still in place, and what the technology is going to run up against is that massive, almost exponential growth coming up against logic and reason and legality. So it's happening quickly. What I want your listeners to think about, though, is that they have time. They need to be digitally curious. They need to be more and more engaged in what the technology can and can't do, and that will put them in the driver's seat to understand how their career and their company might change.

Darryl Morris 17:39
Really interesting. That is Andrew Grill, who's an AI expert and the bestselling author of Digitally Curious.

 

Andrew Grill Global AI Keynote Speaker, Leading Futurist, International Bestselling Author, Brand Ambassador
Andrew Grill is a Global AI Keynote Speaker, International Bestselling Author, Top 10 Futurist, and Former IBM Managing Partner with over 30 years’ experience helping organisations navigate the future of technology. He holds both a Master of Engineering and an MBA, combining technical expertise with business strategy.