AI Whiplash and Perplexed by Perplexity | Intent, 0025
Intent exists to help tech talent become more informed, more fluent, and more aware of the forces shaping their careers. We welcome feedback – just hit reply.
The agenda ahead
Klarna’s customer-service U-turn – first in with the AI, then out with the AI; are humans back in?
Perplexity at a $14B valuation – distribution over product, the ultimate GPT wrapper play?
Quick hits – Gates vs. Musk, a ChatGPT education meta-study, and one very honest AI-hype tweet.
Before we get to that — we’re launching a 7-day email series diving deep into next-level AI strategy for those who want to be top 1% power users. We’ll talk specific tactics + workflows, philosophical approaches to LLM use, and so much more.
It’s free, it starts tomorrow, and it’s part of our mission to help you stay ahead in an LLM-driven world. Want in on the AI insights? Sign up here and get the first email tomorrow.
Klarna discovers the current limits of AI customer support
Eighteen months ago, Klarna’s CEO, Sebastian Siemiatkowski, bragged that a handful of OpenAI-powered chatbots were doing “the work of 700 agents.” Last week, he told Bloomberg he’s hiring humans again because the quality trade-off is killing the brand.
The hot take wars predictably ignited: See, AI can’t replace us! vs. Give it time; the bots will win. Both miss the deeper lesson about trust surfaces.
The nuance that matters
Companies have to identify where they’re trusted (sometimes it’s the brand, as with Duolingo last week). Fintech support, in Klarna’s case, touches people’s money – edge-case disputes, chargebacks, identity fraud. Even if an LLM can handle 95% of chats, the 5% it fumbles hits psychological pain and risk thresholds that make consumers shudder.
Automation calculus = variance × stakes. High-variance, high-stakes tasks (fraud escalations) want humans. Low-variance, low-stakes tasks (FAQ refund statuses) can stay automated. Everything in between is a moving target – and the target shifts as LLMs get better at the nuance.
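The variance × stakes calculus above can be sketched as a toy routing rule. Everything here – the function name, the thresholds, the 0-to-1 scales – is an illustrative assumption, not Klarna’s actual logic:

```python
# Toy sketch of "automation calculus = variance x stakes".
# Thresholds and names are hypothetical assumptions for illustration.

def route_ticket(variance: float, stakes: float, threshold: float = 0.25) -> str:
    """Route a support ticket given task variance and stakes (both 0..1).

    High-variance, high-stakes work goes to a human; low/low stays with
    the bot; everything in between is the moving target: triage.
    """
    score = variance * stakes
    if score >= threshold:
        return "human"   # e.g., a fraud escalation
    if variance < 0.3 and stakes < 0.3:
        return "bot"     # e.g., an FAQ refund-status check
    return "triage"      # bot first, with a fast human handoff

print(route_ticket(0.1, 0.2))  # FAQ refund status -> bot
print(route_ticket(0.9, 0.9))  # fraud escalation -> human
print(route_ticket(0.5, 0.4))  # chargeback dispute -> triage
```

The point of the sketch is the shape of the decision, not the numbers: as models improve, the thresholds drift, and tickets migrate from "triage" toward "bot."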
Data flywheels stall when feedback loops break. Any company can fine-tune AI on a treasure trove of chat logs. But once humans stop generating data, you stop getting training signal on new edge cases, adversarial user behavior, or new company strategies. You have to keep feeding the beast.
Our take – The next wave of automation wins will come from companies that map trust surfaces first: automating workflows that feel safe to their customers, cutting costs where it aligns with the brand vision, and adding net-novel features that make end users feel like they’re living in the future.
Perplexity: $14 Billion GPT Wrapper?
AI search engine Perplexity is reportedly in talks to raise between $500 million and $1 billion, potentially doubling its last valuation to a staggering $14 billion. This, for a company with just under $100 million in annual recurring revenue (ARR).
What’s actually at stake:
The Wrapper Question: Perplexity is often cited as a prime example of a "GPT wrapper" – a product that relies heavily on foundational models from other companies (like OpenAI or Anthropic). What are they building as part of their next frontier, if not foundational models themselves? Where does $1B in venture funding go?
Distribution vs. Product: Perplexity has been aggressive in striking deals to get its tool in front of users (giving subscriptions away in partnership with phone companies, schools + edtech companies, and browsers). The whole idea: be the first AI tool people rely on to become the default tool they rely on. It’s a classic distribution play. But they’re giving away a lot of free accounts to get there. Sustainable?
Is It Still Unique?: Perplexity got popular at a time when the foundational model companies lacked the ability to use the web. Now, ChatGPT, Gemini, and Claude all have ‘tool use’ to enable their models (both in-app and via API) to access real-time data. Do users know they can go elsewhere for the core value prop?
Our take — This valuation feels like peak AI hype meeting a compelling distribution strategy. Perplexity has done a great job getting noticed and used. But the long-term defensibility of a "wrapper" – however well-designed – in a market where the foundational model providers are rapidly improving their own direct-to-consumer offerings (and search capabilities) is a huge question mark.
Our CEO, Sherveen, has this to say: “If you’re still using Perplexity in 2025 as anything other than a third or fourth app in your holster, you’re stuck in 2023. It’s now worse at search than ChatGPT and is generally built in a way that limits these models’ native ability to think and reason. They need a net new advantage.”
Small bites while you refill your cup of coffee
Gates v Musk gets personal. Bill Gates told the FT that Elon’s USAID shutdown is “killing the world’s poorest children.” Beyond the headline: Gates is accelerating a $200B spend-down to finish philanthropy by 2045 – the billionaire we need in a time ruled by billionaires we don’t.
Meta-analysis says ChatGPT actually helps students. A Nature study of 51 experiments found large gains in performance and moderate boosts to higher-order thinking. It’s all about proper implementation. Edtech teams, the call to action is yours: help parents, teachers, and students use these tools for good!
Danielle Fong’s “shogtongue” experiments. She’s stuffing emergent emoji-dense encodings plus a custom Rosetta layer into an LLM’s long-term memory to record post-cutoff facts. Nerdy, fascinating, and an incredible example of hacking LLM workflows to make them stronger.
Meme watch. Investor Matt Turck: “January 2025: DEEPSEEK CHANGES EVERYTHING!!! May 2025: whatever.” For real, what was up with that hype cycle?
Think a friend could use a dose of Intent?
Forward this along – inbox envy is real.
Sent with Intent,
Free Agency