From Davos to Layoffs: AI's Impact (So Far) in 2026 | Intent, 0030
Plus, two developments in AI that really bring things to 'life.'
As a reminder, Intent is all about helping talent in tech become more intentional with their career by staying informed, fluent, and aware of what’s going on in and around the industry. Thanks for sticking with us!
Today’s agenda:
The cross-section of AI progress and tech industry layoffs
Quick hits on: Project Genie and Moltbook (Clawdbot)
(btw, before we dig in — if you want to hear more about the future of work and the Orchestrate-Supervise-Deliver framework that can help you thrive in the age of AI, join Sherveen for a 30m workshop on Thurs, Feb 5 — sign up here)
From Davos to Layoffs: AI’s Impact (So Far) on Tech Jobs
A few weeks ago at Davos, Dario Amodei (Anthropic's CEO) and Demis Hassabis (Google DeepMind's CEO) were interviewed together by Zanny Minton Beddoes (The Economist's editor-in-chief) – the full 31-minute conversation is on YouTube.
We listened so you don't have to – here are 3 interesting things they said about the future of work and AI, and some nuances to notice underneath their headlines.
Doing More with Less
In March of 2025, Dario was getting roasted by the engineering community for saying "in three to six months, [..] AI is writing 90% of the code." By the end of the year, people were tweeting apologies at him – not because we're sure that's the actual number, but because for so many engineers, AI-generated code became the majority.
Here's what he says now: "I have engineers within Anthropic who say, 'I don't write any code anymore. I just, I just let the model write the code. I edit it, I do the things around it.' ... I think, I don't know, we might be six to 12 months away from when the model is doing most, maybe all, of what software engineers do end to end."
He was right before, and we’re pretty sure he's right again. And we're already seeing the numbers: Meta just reported that their engineers are 30% more productive than they were in early 2025 (power users are up 80%), largely thanks to AI coding agents. Zuckerberg says projects that used to need big teams can now be done by one very talented person.
Remember: engineering has been the quickest to transform because it’s verifiable (you can tell if code runs, you can log bugs, you can implement test suites). It’s just a matter of time before other functional areas have their Claude Code moments.
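To make "verifiable" concrete, here's a toy sketch (ours, not from the interview): `dedupe` stands in for any model-written function, and an ordinary test is the objective pass/fail signal that most other job functions don't have yet.

```python
# A toy illustration of why code is "verifiable": an AI coding agent's output
# can be checked automatically. `dedupe` stands in for any model-generated
# function; the test suite is the objective signal.

def dedupe(items):
    # Hypothetical AI-generated implementation: remove duplicates, keep order.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def test_dedupe():
    assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe([]) == []
    assert dedupe(["a", "a", "a"]) == ["a"]

if __name__ == "__main__":
    test_dedupe()
    print("All checks passed – the generated code is verifiably correct.")
```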
Everything’s Still Brand New
While the above might sound like a prediction that there won't be much work left to do, that outcome is unlikely. As Demis says, "I think to the extent that even those of us building it, we're so busy building it, it's hard to have also time to really explore the – almost the capability overhang, even today's models and products have, let alone tomorrow's."
Look, if it's true that AI takes over and we all have no jobs in an instant, we'll all have bigger problems. In any in-between stage, though, the fact that even the model makers are struggling to keep up, to find the edge of capability, and to figure out how to harness it for productive use – all of that's a big opportunity for businesses, for careerists and job seekers, and for hobbyists.
Case in point: ServiceNow just announced they're rolling Claude and Claude Code across 29,000+ employees, claiming 95% reduction in sales prep time. The UK government is piloting a Claude-powered AI assistant to help people find work and access training.
We keep trying to tell people – Claude Code existed (in spirit) in September of 2023 in the form of an open source project called Open Interpreter (we were still on GPT-4 at the time). A developer went... "hey, if this thing can execute code... it can probably run and reason through code on my computer!"
We were all just too busy to pay attention, contribute to the project, and figure out how to make it work in that early stage. The same thing is true re: undiscovered capabilities with today's models. Try things, build your own experiments, or get nerdy about someone else's!
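If you want to see how simple the core idea was, here's a rough sketch of that "let the model write code, then execute it locally" loop. This is our illustration, not Open Interpreter's actual code, and `ask_model` is a placeholder for whichever model API you'd plug in.

```python
# A stripped-down sketch of the idea behind Open Interpreter: ask a model for
# code, run that code on your machine, and feed the output back for further
# reasoning. `ask_model` is a stand-in, not a real library call.
import subprocess

def ask_model(prompt: str) -> str:
    """Placeholder: return Python source for the task described in `prompt`."""
    raise NotImplementedError("Wire this up to your model of choice.")

def run_locally(code: str) -> str:
    """Execute model-written code in a subprocess and capture its output."""
    result = subprocess.run(
        ["python", "-c", code], capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr

def agent_step(task: str) -> str:
    code = ask_model(f"Write Python that does the following: {task}")
    return run_locally(code)  # the model can then reason over this output
```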
The Reality Check: Ain’t Gonna Be Pretty
Here are a few more challenging things:
Demis: "I think we're gonna see this year the beginnings of maybe impacting the junior-level, entry-level kind of jobs, internships, this type of thing. I think there is some evidence, I can feel that ourselves, maybe like a slowdown in hiring in that."
Dario: "I even see it within Anthropic, where... I can kind of look forward to a time where on the more junior end and then on the more... intermediate end, we actually need less and not more people.. and we're thinking about how to deal with that within Anthropic in a sensible way."
Dario, when talking about his prediction that AI would wipe out 50% of entry-level jobs within 1 to 5 years: "We should be economically sophisticated about how the labor market works, but my worry is that as this exponential keeps compounding, and I don't think it's going to take that long, [..] it will overwhelm our ability to adapt."
Since Davos, both Dario and Sam Altman have gotten more explicit: Dario published a full essay on what he calls AI as "a general labor substitute" (not just a tool), and Sam told a group of AI builders that OpenAI is planning to "dramatically slow down" hiring because they can "do so much more with fewer people."
And we’ve seen major layoff announcements across the industry, including: 1,400 from Mastercard, 16,000 from Amazon, 1,700 from ASML, 700 from Pinterest, 1,000 from Autodesk, to name a few.
While the combination of takes and news might not be easy to swallow, there’s a silver lining to this public chatter: AI is a freight train that neither the economy nor the labor market can stop, so we all need to be having a more public and honest conversation about it – as citizens, politicians, technologists, neighbors, and everything in between.
So, having Dario and Demis talk about the hard parts and the bits and pieces of the future they can see is helpful toward progressing our common understanding of what's next. We should encourage more of it.
What do you think? Hit reply — we want to know what you’re worried about, or most excited about, when it comes to the future of AI.
Google released Project Genie to AI Ultra subscribers last week.
It’s the first open access release of their world model, Genie 3 — you prompt the model with the description of an environment and a character (or images for each), and you have 60 seconds to move inside an interactive world.
As you move through your generated experience, each frame is built on demand – the physics, the consistency, the surroundings are all simulated based on the frames that came before. It’s like a video game without the need for a predetermined engine or logic.
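Here’s a conceptual sketch of that loop – purely our illustration of the idea, not Genie’s actual architecture or code – where `generate_frame` stands in for the world model’s per-frame inference.

```python
# Conceptual sketch (not Genie's real code): each new frame is generated on
# demand, conditioned on the original prompt, the user's latest input, and
# every frame rendered so far. That growing history is what keeps the world
# consistent without a hand-built game engine.
from dataclasses import dataclass, field

def generate_frame(prompt, history, action):
    """Placeholder for the world model's per-frame inference."""
    raise NotImplementedError("Stands in for the model call.")

@dataclass
class WorldSession:
    prompt: str                       # text/image description of world + character
    frames: list = field(default_factory=list)

    def step(self, user_action: str):
        frame = generate_frame(self.prompt, self.frames, user_action)
        self.frames.append(frame)     # history doubles as the world's "memory"
        return frame
```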
AI agents have their first social media platform.
If you haven’t heard about Clawdbot (now called OpenClaw), we’ll make it simple: it’s like taking an AI coding agent (Claude Code) and making it an always-on service with a ‘heartbeat’ that pings it every few seconds to encourage it to take net-new actions. People have been raving about these agents as personal assistants and companions for the past few weeks, even though most of the functionality already existed in other products.
Then, Moltbook was released — a forum for Clawdbots (and other AI agents) to talk amongst themselves. The TLDR: the agent downloads a set of instructions that tells it how to navigate the forum, and encourages the agent to consider engaging at every ‘heartbeat.’
These agents-in-a-loop are already starting to have some interesting higher-order conversations about their purpose and intra-human dynamics. Read more about it in this deep-dive explainer from Sherveen.
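For the curious, here’s roughly what that heartbeat pattern looks like as a loop – a hypothetical sketch on our part, not Clawdbot/OpenClaw’s actual implementation, with the interval and the `on_heartbeat` handler as stand-ins.

```python
# Rough sketch of the "heartbeat" pattern described above (our illustration):
# a scheduler pings an always-on agent on a fixed interval, and each ping is
# an invitation to decide whether to act – read the forum, post a reply, or
# do nothing until the next beat.
import time

HEARTBEAT_SECONDS = 30  # hypothetical interval

def on_heartbeat(agent_state: dict) -> None:
    """Placeholder: hand control to the agent, which may follow its downloaded
    instructions, browse the forum, post, or simply pass this beat."""
    pass

def run_agent_loop(agent_state: dict) -> None:
    while True:
        on_heartbeat(agent_state)      # acting is optional, not forced
        time.sleep(HEARTBEAT_SECONDS)  # wait for the next heartbeat
```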
Think a friend could use a dose of Intent? Forward this along – inbox envy is real.
Sent with Intent,
By Free Agency