Fighting for eyes and ears: the next gen of sports and sounds | Intent, 0005
The prolific potential of high-tech everyday entertainment, from AI music to World Cup wonders.
Intent
Intent is all about helping talent in tech become more intentional with their careers by becoming more informed, more fluent, and more aware of the goings-on within tech and adjacent industries. We welcome your feedback!
What to expect:
1. The tech on display at the Women’s World Cup this summer
2. The future (and present) of AI-generated music
///
The Women’s World Cup is a playground for sports tech
Midjourney: scifi female futuristic soccer player playing with a soccer ball on a soccer field --ar 4:1 --v 5.1
The 2023 Women’s World Cup is well underway in Australia and New Zealand. While the play itself has been incredible, we want to talk about something else happening on the field: the impressive tech on display. We’ll also give a peek into some early-stage startups that are shifting paradigms across other sports.
The modern beautiful game
Soccer has frequently been in the world spotlight for new tech (including basic tech malfunctions that have caused entire matches to be suspended). And if you haven’t seen it yet, watch this incredibly viral CGI ad for the French national team.
But you might not know that these days, soccer is far more reliant on tech than ever to make refereeing decisions, track player data, and feed coaching staffs with valuable analytics.
Let’s break it down:
Automated offside tracking tech: Maybe you’re like Ted Lasso and don’t understand what it means for a player to be offside. That’s fine, but here’s what you will understand – to track player positions, each stadium roof is mounted with 12 tracking cameras, each sending data (50 times per second) on ball location plus 29 different data points on each player. With 22 players on the field and 12 cameras, that’s roughly 375,000 data points per second (see the back-of-envelope sketch after this list) just to make sure players aren’t on the other side of some imaginary line.
Connected ball technology: Every World Cup has an iconic ball design (some more popular than others), but this year’s is truly groundbreaking. The OCEAUNZ ball has a suspension system at its literal core, hosting a stabilized 500Hz inertial measurement unit (IMU) motion sensor in the name of more accurate refereeing decisions and advanced statistics (like ball speed, average location, and goal decisions). And to avoid any USB port that would affect the ball’s flight, it recharges through induction.
Goal line technology: While this isn’t as new as the previous two on our list, it’s still worth noting here. How it works: a fully automated system called GoalControl-4D uses 14 high-speed cameras and sensors around the stadium and within the goal to track the ball’s position and confirm it’s fully over the goal line. That information is sent to the referee’s smartwatch, and the referee relays the decision to the teams. It’s really efficient – similar tech is used across other sports like tennis. (A minimal sketch of the “fully over the line” check follows this list.)
Player-worn GPS vests: These days, players wear sports bra-like vests under their jerseys during practice and in games, each with an AirPods-case-sized GPS tracker between the shoulder blades. This allows coaches to track average player location, distance covered, average speed, and more so that they can make tactical adjustments, determine training techniques, and predict fatigue among players. Here’s a video explaining the tech in practice.
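For the curious, that offside data-rate claim holds up as a back-of-envelope calculation. Here’s a quick sketch in Python using the figures from the bullet above – how FIFA actually aggregates the camera streams is our assumption:

```python
# Back-of-envelope for semi-automated offside tracking data volume.
# Camera count, sampling rate, and per-player points come from the bullet above;
# treating every camera as reporting every point independently is an assumption.
CAMERAS = 12            # tracking cameras mounted under the stadium roof
SAMPLES_PER_SEC = 50    # each camera reports 50 times per second
PLAYERS = 22            # players on the field
POINTS_PER_PLAYER = 29  # tracked data points per player
BALL_POINTS = 1         # plus the ball's location

per_snapshot = PLAYERS * POINTS_PER_PLAYER + BALL_POINTS  # 639 points per frame
per_camera = per_snapshot * SAMPLES_PER_SEC               # 31,950 points/sec
total = per_camera * CAMERAS

print(f"{total:,} data points per second")  # 383,400 – same ballpark as ~375K
```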
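And since “fully over the line” trips up plenty of casual fans: in soccer, the whole ball has to cross the whole line. Here’s a minimal sketch of that decision, assuming a simple one-dimensional coordinate frame – the function name and values are illustrative, not GoalControl’s actual system:

```python
# Minimal goal-line decision: the WHOLE ball must cross the WHOLE line.
# Coordinate frame, names, and values are illustrative assumptions.
BALL_RADIUS_M = 0.11  # a regulation ball is roughly 22 cm across

def is_goal(ball_center_x: float, line_far_edge_x: float) -> bool:
    """True only if the ball's trailing edge is past the far edge of the line."""
    return ball_center_x - BALL_RADIUS_M > line_far_edge_x

# The famous "millimeter goals": a couple of millimeters decide it.
print(is_goal(ball_center_x=0.112, line_far_edge_x=0.0))  # True (in by ~2 mm)
print(is_goal(ball_center_x=0.110, line_far_edge_x=0.0))  # False (dead level)
```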
Who’s making waves in sports tech?
It’s not just soccer where breakthroughs are happening — here are startups to watch who are building products across analytics, training, AR/VR, and connected tech:
Indian startup Stupa Analytics (pre-seed) is the biggest name in table tennis analytics — 25% of 2020 table tennis Olympians used their platform to track shot placements and utilize predictive analytics to prepare for opponents.
Belgium-based Runeasi (~$620k raised) uses in-sole sensors to analyze athletes’ running gait and measure body impact to improve training regimens.
Qintar ($3M raised) is aiming to elevate the live game fan experience by providing spectators with AR data visualization and statistics. For example, golf tournament attendees can use the app to track ball flight, speed, and distance directly from their phones.
Austrian startup VR Motion Learning (pre-seed) is revolutionizing tennis training by moving it to a digital court – a hyper-realistic VR training ground that also tracks players’ biomechanical data to model optimal movement patterns.
StatusPro ($5.2M raised) offers the ultimate XR (extended reality) football training platform, using AR headsets to simulate opponents and teammates in an otherwise-empty practice environment.
Sizzle ($5M raised) started in the eSports space, but has expanded off-screen, leveraging AI to automatically create instant personalized highlight reels.
CUE Audio ($50K raised) is reimagining the connected stadium experience, using ultrasonic waves to connect to 100K+ audience smartphones without the need for WiFi, Bluetooth, or cell service.
English sports analytics startup Sportlight ($5.1M raised) uses hyper-accurate LiDAR and AI tech to provide coaches with insights into load management, performance monitoring, and tactical analysis.
Next time you kick your feet up and watch the big game, keep your eye on what tech is being leveraged to make the magic happen.
///
The future of artificially generated, genuinely enjoyed music
Midjourney: a futuristic female dj playing in a futuristic city at night --ar 4:1 --v 5.1
In 2015, a few short months before the rollout of Apple Music, Spotify launched what is now synonymous with the platform – its first iteration of algorithmically curated playlists, giving users an on-demand, endless list of songs based on their demographics and listening history. Since then, Spotify has continued to infuse ML-based tech into the app, most recently releasing an AI DJ that talks you through its selections.
The industry won’t tell you its secrets, but it’s pretty obvious that pop music has a formula. As early as 2012, major labels started to figure out how to use AI to find correlations behind compositions, what makes us listen, and what sells.
Some formulas are simple – take the Golden Ratio: if you build up to a point about two-thirds of the way through a song, then make a big change – e.g. bring in bass, change the key, or crescendo – it creates an even more emotional, addicting experience (think: Billie Eilish’s “Happier Than Ever” and Olivia Rodrigo’s “drivers license”). Then there’s the four chords – over the last 70 years, the majority of the most popular music has been built on four simple chords, the I–V–vi–IV progression (played in the key of C, that’s the chords C, G, Am, and F – think the opening of “Don’t Stop Believin’”).
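To see the four-chord formula spelled out, here’s a tiny sketch in plain Python (no audio libraries, and it spells everything with sharps for simplicity) that builds the I–V–vi–IV triads for any major key:

```python
# Spell the I-V-vi-IV progression in any major key (sharps only, for brevity).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def triad(key: str, degree: int) -> list[str]:
    """Build the triad on a 1-indexed scale degree by stacking scale thirds."""
    root = NOTES.index(key)
    scale = [NOTES[(root + step) % 12] for step in MAJOR_SCALE_STEPS]
    return [scale[(degree - 1 + i) % 7] for i in (0, 2, 4)]

for degree, numeral in [(1, "I"), (5, "V"), (6, "vi"), (4, "IV")]:
    print(numeral, triad("C", degree))
# I ['C', 'E', 'G'], V ['G', 'B', 'D'], vi ['A', 'C', 'E'], IV ['F', 'A', 'C']
```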
Now that you’ve got the basics of music theory, it’s not hard to see how easily computers can use decades of historical music data to create music themselves. Fast forward to the past few months, and we haven’t been able to escape the chatter about AI-composed music – but this time it’s about how generative AI, voice cloning, and melody synthesis come together to autonomously create a holistic music experience, with hardly any human intervention at all.
What have we seen of this so far?
In May, a completely fake (but very convincing) Drake and The Weeknd song called “Heart On My Sleeve” somehow made it onto Spotify. It was promptly taken down, but it mesmerized fans and freaked out labels.
The most popular use for this tech, by far, has been to create ridiculous covers. There’s Johnny Cash singing “Barbie Girl” and Frank Sinatra singing “Get Low.” There are countless models trained on iconic voices that everyday users can mess with – voicify.ai, kits.ai (from startup Arpeggi Labs), and covers.ai (from startup mayk) being a few.
There was a huge controversy a while back over “fake” artists making lofi jazz – a report suggested that Spotify games its own royalty system by creating and promoting in-house, or “fake,” artists.
Earlier this year, David Guetta played a track (for fun) with an AI-generated intro from Eminem.
There are legitimate fears here about ownership, art, creativity, copyright, and monetization, but the answers aren’t so easy.
In the US today, a voice isn’t considered intellectual property, so it can’t exactly be copyrighted. For labels, that means if they can’t own the rights to a voice (like they could an artist’s masters), they can’t control all iterations of its use.
Then, artists argue, they can’t protect their own assets. Shouldn’t they own their voice? Will labels figure this out and get rid of artists altogether? Will communities begin to replace artists with artificial versions that they get to direct?
The legal battle is incredibly complicated, so we’re not even going to get into it here.
Back in 2016, Free Agency’s own Paige Connelly was obsessed with how tech was “killing” pop music – and how she didn’t think that was a bad thing. The proliferation of fringe genres into the popular zeitgeist had started to create something hardly distinguishable from what we called “indie” or “electronic” or “R&B.” Now the same conversation is happening, but it’s no longer about smudging our genres – it’s about smudging humans altogether.
From a culture-meets-tech perspective, the conversation is getting interesting:
We mentioned David Guetta before, who’s always been a pioneer in music production (from “I Gotta Feeling” to “Titanium”). In an interview with Rolling Stone, Guetta got into the nitty-gritty of what AI could mean for the industry and for himself – and he’s not concerned about it:
“It wouldn’t be any different than when I made a record that was game-changing, and a million different producers copied me. I’ve copied artists before,” he said. “This is already the way it is now, maybe people don’t realize. Sometimes I create, sometimes I follow. All I’m saying is we need to be a little more humble and admit that AI is going to do what we do already…but what it’ll never have is taste.”
Grimes, the avant-garde, historically pro-tech artist (&, relatedly, Elon Musk’s former partner), decided to “open source” her voice. That means anyone can use her voice to train AI models and make songs with it – and she’s asking for 50% of the royalties.
And this could be a musical renaissance – some of the most creative, thoughtful popular projects of the last decade came out of the “death” of genre. Think Frank Ocean’s Blonde, The Weeknd’s Beauty Behind The Madness, or Lorde’s Melodrama. There will likely be an awkward phase, though – if history is any teacher, the last genre-blending era also gave us PBR&B, a term coined for the cheap, hipster-chasing wave of R&B-and-EDM-inflected pop.
There are other major innovations at the artist level – Neural DSP uses neural nets to create plugins that are near-perfect digital versions of guitar amps. RoEx’s Mix Check Studio lets amateur musicians use an AI tool to understand what’s wrong with a mix and exactly how to fix it. And Audialab released Emergent Drums, an AI-powered drum machine plugin that can generate an unlimited number of unique drum samples using ML.
Humans are so incredibly creative, and over time, we’ll find a way to work with AI to make something extraordinary, new, profound, and most importantly, tasteful.
///
Thanks for reading, let us know what you think! & if you aren't subscribed, what're you doing!? Click here!