Alt realities hit home: Pixar gets meta, and NPCs get really smart | Intent, 0008
A motley crew of giants leads AR innovation, and AI brings side characters to life
On the agenda
Intent is all about helping talent in tech become more intentional with their careers by becoming more informed, more fluent, and more aware of the goings-on within tech and adjacent industries.
On today’s Intent:
Pixar leads the way with new consortium for 3D asset development
Large language models begin to influence game-making in a big way
Pixar, Apple, Nvidia, Adobe, and Autodesk are teaming up — why!?
Midjourney: robot arms building a 3d graphic asset --ar 4:1 --v 5.1
Apple, Adobe, Pixar, Nvidia, and Autodesk are teaming up to promote open standards for highly interoperable 3D tools and data formats. In other words: 3D is hard and expensive, and everyone wants it to be easier and cheaper.
The companies announced the Alliance for OpenUSD, a non-profit organization aimed at creating standards for “developers and content creators to describe, compose, and simulate large-scale 3D projects.”
The problem:
CGI pipelines for films and games have to build, store, and move around a lot of 3D data, and that data is itself very large. Many tools are used to build the assets in question (modeling, shading, animation, lighting, FX, rendering), and each tool exports "scenes" with its own encoding and metadata.
Because of this, those assets can typically only be modified by the tool(s) used to create them. And when they are transferable, the transfer is "destructive": once you edit an asset in the second tool, the first tool can no longer read it.
Their proposed solution:
Pixar created OpenUSD, a high-performance 3D scene description technology that offers robust interoperability across tools, data, and workflows.
All of these assets, models, animations, and "scenes" become interchangeable: build an asset in one OpenUSD-compatible tool, say Autodesk's Maya, and then use and edit it in another, like Adobe's Substance 3D.
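To make that concrete, here's a minimal sketch using Pixar's open-source USD Python bindings (published on PyPI as usd-core); the file names and prim paths are our own illustrative choices. It authors a simple scene as human-readable .usda, then makes a non-destructive edit "from a second tool" by layering an override on top, leaving the original file untouched:

```python
# Minimal OpenUSD sketch using Pixar's Python bindings (pip install usd-core).
# File names and prim paths here are illustrative.
from pxr import Usd, UsdGeom

# "Tool A": author a scene as human-readable .usda text.
stage = Usd.Stage.CreateNew("hello_world.usda")
UsdGeom.Xform.Define(stage, "/Hello")                  # a transform prim
sphere = UsdGeom.Sphere.Define(stage, "/Hello/World")  # a sphere beneath it
sphere.GetRadiusAttr().Set(2.0)
stage.GetRootLayer().Save()

# "Tool B": edit the same asset non-destructively by sublayering it under a
# new layer. The override lives in override.usda; hello_world.usda is never
# rewritten, so Tool A can still read its original file.
override = Usd.Stage.CreateNew("override.usda")
override.GetRootLayer().subLayerPaths.append("hello_world.usda")
UsdGeom.Sphere(override.GetPrimAtPath("/Hello/World")).GetRadiusAttr().Set(4.0)
override.GetRootLayer().Save()
```

That layering model is the point: Maya, Substance 3D, and a renderer can all contribute opinions about the same scene without clobbering each other's work.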
They all benefit:
Pixar gets everyone else to use USD, the Universal Scene Description framework it built to more easily work with its own assets and movie development, and which it open-sourced in 2016 as a first attempt to get movie-makers onto a more interoperable standard.
Customer-facing platforms like Apple (with Vision Pro) want it to be easier to develop applications for complex devices. An expensive headset is just a paperweight if it can’t play some fascinating new games.
And CGI tool-makers like Adobe and Autodesk never want their tools falling out of favor over compatibility issues, whether within their own ecosystems or next to complementary pieces of software.
And then there are companies like Nvidia, which would prefer more standardized applications and tools so it can tune its graphics cards to run them as efficiently as possible.
The takeaway: The cost of movie-making and game development keeps going up as hardware advances and graphics grow more realistic. Open standards, so long as they stay standard for indie makers and new tool-makers too, could make it easier for everyone to focus on what really matters: the content itself.
Last note: it’ll be interesting to see how this all intersects with the announcement earlier this year that Samsung, Google, and Qualcomm are teaming up on a new mixed-reality platform. These three major players are notably missing from the OpenUSD partner list.
Think a friend could use a dose of Intent?
Forward this along to your least-informed friend in tech. We love the attention. ;)
Quick Hits
Anthropic launches improved version of its entry-level LLM – TechCrunch
Verizon is shutting down video conferencing app Bluejeans that it bought for $400M – Neowin
Meet Meoweler, the travel site made entirely with AI tools Midjourney, GPT-4, and Svelte – Meoweler
Apple, Samsung, and Intel to invest in Arm IPO, and emerge with some control – The Register
WeWork in “substantial doubt” about the future, so where will startups go now? – Bloomberg
AI researchers claim 93% accuracy in detecting keystrokes over Zoom audio – Ars Technica
The biggest advancement in AI is sitting right under our noses
Midjourney: a video game scene where the player is talking to an npc and the npc is plugged into an ai chatbot --ar 4:1 --v 5.1
It’s very rare that we witness true breakthrough moments in an industry, but one might be happening in the gaming space as we speak: AI NPCs (non-player characters) powered by LLMs.
For a lot of video games, immersion has been a major selling point: character customization, realistic environments, life-like physics. But even with the best voice acting and the strongest writing, dialogue options and cutscenes haven’t given players a real sense of control or freedom, constrained as they are by scripts baked into the story.
Until now, that is. Companies like Inworld AI, fresh off a $500M valuation, are changing the landscape with NPCs powered by large language models: natural language processing, real-time LLM responses, custom game-world knowledge, and integrations with services like ElevenLabs to create custom and cloned voices.
Right now, Inworld’s conversational AI product is what has put the company in the spotlight. But the roadmap for its Character Brain looks downright game-changing, promising that NPCs will be able to (a toy sketch of the basic loop follows this list):
Have full personalities and develop relationships. Developers can leverage 30+ ML models to “mimic the full range of human connection,” and can supply rich details about a character’s backstory that the character adds context to and runs with.
Learn and adapt through memory. Too often, character dialogue goes rote, repeating the same phrases once you hit the end of a dialogue tree. With Inworld, characters have human-like flash and long-term memory retrieval that lets players truly feel a changing world.
Autonomously carry out actions through their own motivations. With the “Goals and Actions” feature, NPCs can respond to player inputs and in-game events dynamically, based on their personality, rather than every action needing to be hardcoded into the game.
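Here’s that toy sketch, a deliberately naive take on an LLM-driven NPC loop. To be clear, this is not Inworld’s actual API: it’s just a persona prompt plus a flat memory list wired to an OpenAI-compatible chat endpoint, and the model name and NPC details are placeholders.

```python
# Hypothetical LLM-backed NPC loop; NOT Inworld's API. Assumes the openai
# Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Mara, a blacksmith NPC in a fantasy village. "
    "You are gruff but fair, and you remember past conversations."
)

memory: list[dict] = []  # naive long-term memory: prior exchanges, oldest first

def npc_reply(player_line: str) -> str:
    # Assemble persona + remembered exchanges + the new player input.
    messages = [{"role": "system", "content": PERSONA}]
    messages += memory[-10:]  # retrieve only recent memories to fit in context
    messages.append({"role": "user", "content": player_line})

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=messages,
    )
    reply = response.choices[0].message.content

    # Store the exchange so the NPC "learns and adapts through memory."
    memory.append({"role": "user", "content": player_line})
    memory.append({"role": "assistant", "content": reply})
    return reply

print(npc_reply("Have you seen the stranger who came through last night?"))
```

A real Character Brain-style system would swap the flat list for smarter retrieval and layer goals and actions on top, but the core idea (persona + memory + live LLM responses) is the same.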
Combine these Character Brain features with the Contextual Mesh layer, which keeps NPCs informed about the game world around them, and the path forward for immersive entertainment looks bright. As this space grows, keep an eye on Inworld, along with others like Promethean AI, SideFX (with Houdini), and Latitude.io, all looking to leverage AI in unique ways across the game development process.
To get a taste: watch a gaming streamer interact with AI NPCs powered by Inworld competitor Replica within an Unreal Engine 5 demo environment.
Job searching? We’re built to help
Check out CareerMakers by Free Agency, the job search prep program made for tech people, by tech people.
We’ve helped candidates negotiate over $300M in compensation since 2019 — supercharge your job search and get started now.
Thanks for reading, let us know what you think! & if you aren't subscribed, what're you doing!? Click here (& don’t forget to share)!
Sent with Intent,
Free Agency