
How Spotify Uses Design To Make Personalization Features Delightful

Every day, teams across Spotify leverage AI and machine learning to apply our personalization capabilities at scale, powering the features, playlists, and experiences Spotify users have come to know and love. And when you spend your days working with emerging technologies, it’s easy to get transfixed by complicated new advancements and opportunities. So how do our forward-thinking teams ensure they can tackle this technical work while also prioritizing the experience of our users?

That’s a question constantly on the mind of Emily Galloway, Spotify’s Head of Product Design for Personalization. Her team’s role is to design content experiences that connect listeners and creators. This requires understanding our machine learning capabilities as they relate to personalization, so the team can leverage them in ways that are engaging, simple, and fun for our users.

“Design is often associated with how something looks. Yet when designing for content experiences, we have to consider both the pixels and decibels. It’s more about how it works and how it makes you feel,” Emily explains to For the Record. “It’s about being thoughtful and intentional—in a human way—about how we create our product. I am a design thinker and a human-centric thinker at my core. People come to Spotify to be entertained, relaxed, pumped up, and informed. They come for the content. And my team is really there to think about that user desire for personalized content. What are we recommending, when, and why?”

The Personalization Design team helps create core surfaces like Home and Search, along with much-loved features like Discover Weekly, Blend, and DJ. So to better understand just how to think about the design behind each of these, we asked Emily a few questions of our own.

How does design thinking work to help us keep our listeners in mind?

When you work for a company, you know too much about how things work, which means you are not the end user. Design helps us solve problems by thinking within the user’s mindset. It’s our job to be empathetic to our users. We have to put ourselves in their shoes and think about how they experience something in their everyday life. A big thing to keep in mind is that when using Spotify, phones are often in pockets and people look at the screen in quick, split-second moments.

Without design, the question often becomes, “How do we do something technically?” For those of us working at Spotify, we understand how or why we’re programming something technically in a certain way, but users don’t understand that—nor should they have to. What they need is to experience the product positively, to get something out of it. We’re accountable for creating user value. We really are there to keep the human, the end user, at the forefront. 

Without this thinking, our products would be overcomplicated. Things would be confusing and hard to use, from a functionality perspective. Good design is about simplicity and should largely remain invisible. 

But design is also additive: It adds delight. That’s what I love about projects like DJ or Jam that are actually creating connection and meaning. Design is not afraid to talk about the emotional side—how things make you feel. 

How does design relate to personalization?

Personalization is at the heart of what we do, and design plays an important role in personalization.  

Historically, Spotify’s personalization efforts happened across playlists and surfaces like Home and Search. But over time we utilized new technologies to drive more opportunities for personalization. It started with a Hack Week project that became Discover Weekly, our first successful algorithmically driven playlist. That was followed by Blend, which was designed for a more social listening experience, and more recently by DJ, our new experience that harnesses the power of AI and editorial expertise to help tell artists’ stories and better contextualize their songs. DJ utilizes an AI voice that makes personalization possible like never before—and it’s a whole new way for our listeners to experience Spotify’s personalization.

When designing personalized experiences like these, we must think “content first,” knowing people come to Spotify for the content. Design ultimately makes it feel simple and human and creates experiences that users love. If recommendations are a math problem, then resonance is a design problem.

But we also have to have what I like to call “tech empathy”—empathy for the technology itself. My team, which is a mix of product designers and content designers, has to understand how the technology works to design our recommendations for the programming. Personalization designers need to understand the ways in which we’re working with complex technology like machine learning, generative AI, and algorithms. Our designers need to consider what signals we’re getting that will allow our recommendations to get better in real time and over time. And when a recommendation is wrong, or a user just wants a different mood, we need to design mechanisms for feedback and control. That really came into play when we developed our AI DJ.

Tell us the story of the inception of DJ.

We’re always trying to create more meaningful connections between listeners and creators in new and engaging ways. And we use technology to deliver this value. DJ is the perfect example of how we’re driving deeper, more meaningful connections through technology.

Prior to generative AI, a “trusted friend DJ” would have required thousands of writers, voice actors, and producers—something that wasn’t technically, logistically, or financially possible. Now, new technologies have unlocked quality at scale. Xavier “X” Jernigan’s voice and personality deliver on our mission of creating more meaningful connections to hundreds of millions of people. Generative AI made the once impossible feel magical.

To bring DJ to life, we answered some core experiential questions, knowing we were taking listeners on a journey with both familiar and unfamiliar music. We asked questions such as: What does it mean to give context to listening? How do we visualize AI in a human way? You can see this in how the DJ introduces itself in a playful way—owning that it’s an AI that doesn’t set timers or turn on lights.

We also put a lot of thought into how we designed the character, since it is more than a voice. 

Ultimately, we really wanted to lean into making it feel more like a trusted music guide, as well as having an approachable personality. So much of our brand is human playfulness, so we made a major decision to acquire Sonantic and create a more realistic, friendly voice. And that led to Xavier training the model to be our first voice. His background and expertise made him the perfect choice.

With new technologies like generative AI, what are some of the new ways you’re thinking about your team and their work?

I’m challenging our team to think differently about the intersection of design and generative AI. We keep coming back to the conclusion that we don’t need to design that differently because our first principles still stand true. For example, we are still taking a content-first approach and we continue to strive for clarity and trust. We’ve realized that tech advancements are accelerating faster than ever, which makes design’s role more important than ever. 

Because there’s so much more complexity out there with generative AI, it means the human needs must be kept in mind even more. At the end of the day, if our users aren’t interested in a product or they don’t want to use it, what did we create it for? 

Emerging technology inspires you to think differently and to look from different angles. The world is trying to figure this out together, and at Spotify we’re not using technology to use technology. We’re using technology to deliver joy and value and meet our goals of driving discovery and connections in the process.

Spotify’s AI DJ Brings a Personalized Listening Experience to Fans in the UK and Ireland

In February we unveiled DJ, a personalized AI guide that understands you and your music taste so well that it does the choosing for you. Now we’re excited to start rolling out DJ in beta to Premium users across the U.K. and Ireland. 

At its core, DJ is all about connection and discovery. And thanks to DJ’s powerful combination of Spotify’s personalization technology, generative AI through the use of OpenAI technology in the hands of our music experts, and a dynamic AI voice, listening has never felt so personal. 

When we were deciding where to offer DJ next, the U.K. and Ireland just made sense. We have a team of local music experts on the ground in the region, and it’s where some of DJ’s fundamental technology has been developed.

We also know there’s demand: While we’ve seen fans across the globe asking for DJ, it was most commonly requested by users on social media in the U.K. and Ireland.* But don’t just take it from us . . .


When users in the U.K. and Ireland tune in they will be greeted by a stunningly realistic AI voice, modeled after Spotify’s own Head of Cultural Partnerships, Xavier “X” Jernigan. Plus, they’ll be served songs and context geared towards them. For example, users who tune in right around launch may hear about how Arlo Parks is releasing her newest album, My Soft Machine, at the end of May alongside her collab, “Phoenix,” with friend and longtime role model Phoebe Bridgers. And when it comes to an engaging listening experience, these moments of relevant context are winning DJ users over.

We’ve found that when DJ listeners hear commentary alongside personal music recommendations, they’re more willing to try something new (or listen to a song they may have otherwise skipped). On days when users tune in, fans spend 25% of their listening time with DJ—and they keep coming back. More than half of first-time listeners come back to listen to DJ the very next day.** 

And DJ has especially resonated with Gen Z and Millennials, who make up 87% of DJ users.***

But this is just the beginning. DJ is still in beta, and we’ll continue to iterate and innovate to evolve the experience over time.

Ready to give DJ a try? Just head to your Music Feed on Home in mobile.

*Results based on tweets between February 22, 2023 – May 11, 2023 from users with a publicly identifiable location.
**Results are based on eligible DJ users (Premium users in the U.S. and Canada on mobile) and collected from February 22, 2023 to March 1, 2023.
***Results are based on eligible DJ users (Premium users in the U.S. and Canada on mobile) and collected from April 28, 2023 to May 4, 2023.

Responsibly Balancing What Goes Into Your Personalized Recommendations

Every month, tens of billions of discoveries happen on Spotify. Personalized recommendations play an important role in our ability to match listeners around the world with the right content, tracks, artists, or creators at the right moment. Behind the scenes, we combine human editorial expertise with a multitude of signals and systems with the aim of providing every listener with a unique and safe experience. 

At Spotify we focus on delivering recommendations that are relevant, encourage diversity in listening, and provide the opportunities for artist and creator discovery. We spoke with Henriette Cramer, Director of Algorithmic Impact, and Amar Ashar, Head of Algorithmic Policy—both members of the Trust & Safety team—for a deeper look at algorithmic impact and safety. 

Why focus on algorithmic impact? 

Henriette: Algorithmically programmed experiences like Discover Weekly, Release Radar, and Made for You Mixes, or even Search, provide opportunities for artists and podcast creators to grow their fan bases. But while machine learning and algorithms enable these really important opportunities, we know we have a responsibility to mitigate unintended harms, ensure we represent a very wide range of global creators on our platform, and understand our impact.

Understanding our algorithmic impact requires extensive internal and external collaboration, and we approach this space through three channels: research, product engagement, and collaboration with external partners. It’s an ever-evolving field, and we’re proactively working with Spotify teams and external stakeholders to continuously improve our approach as we continue to learn.

What makes Spotify unique, from an algorithmic perspective? 

Amar: People often talk about the “Spotify Algorithm,” but that’s an oversimplification. In fact, Spotify’s personalization is a combination of a variety of algorithms, along with editorial and data curation teams, all contributing to a unique experience for each listener.

Spotify editors play a crucial role within this space by using their expert judgment to curate playlists and help artists find new fans. They also work with algorithms to create highly situational and personalized experiences. We call this “algotorial”—bringing both the editorial and algorithmic worlds together. This collaboration is critical to the Spotify experience. Think of it this way: Algorithms don’t go out to concerts, people do, which is why human expertise is an essential ingredient in our recommendations. 

We just released a new AI DJ that delivers a curated lineup of music alongside commentary around the tracks and artists. How are teams at Spotify working together to make sure the safety of recommendations is prioritized?

Henriette: In general, ensuring we approach Spotify recommendations responsibly requires close coordination between lots of teams across product, policy, legal, and research. We work with each of them to provide guidance that’s reflective of our algorithmic equity and safety goals, and we use various tools, such as algorithmic assessments, that help us identify and solve problems before they happen. 

Spotify’s DJ takes a unique approach by combining Spotify’s personalization technology, generative AI in the hands of music editors, and voice technology. The expertise of our editors is something that’s really important to our philosophy. As we launch new features, we aim for appropriate safety measures and processes to be in place. The product has been tested in a closed environment for a while, and now that we have launched this product as a beta, we’ll continue to study and improve the experience. 

How does your team work with external partners to improve Spotify’s personalized experience?

Amar: Engaging with research communities outside of Spotify is imperative to our work. That’s why we also continue to share our findings with the wider community, collaborate across sectors, and ensure, as an industry, that we keep learning and evolving existing practices.

We also work closely with external partners through Spotify’s Safety Advisory Council, which includes an interdisciplinary group of experts who advise us on safety topics and bring expertise on recommendations, responsibility, and safety from a global perspective.

What’s your go-to playlist?  

Amar: Discover Weekly, not only because it’s consistently a great playlist that has introduced me to new artists and genres, but also because I’ve been lucky enough to have worked with the team that’s built this flagship product.   

Henriette: So many! I love editorial playlists like Techno Bunker, Queens of the Blues, or New Orleans Brass to really get into a genre. Since I worked on voice projects in the past, it’s been really nice to play with the new DJ beta and see editorial, tech, and design work shine together as we continue to study how we can use new techniques responsibly.

Spotify Debuts a New AI DJ, Right in Your Pocket

Meet your AI DJ on Spotify

Personalization is at the heart of what we do at Spotify—just think of fan-favorite playlists like Discover Weekly, or our annual Wrapped campaign. The beauty of these experiences is our ability to deliver the right piece of music for that exact moment in time, and maybe even connect you with your next favorite artist in the process. We’re building on that innovation by harnessing the power of AI in an entirely new way. And today, we’re excited to share that we’re taking our personalization to a whole new level with DJ.

Ready for a brand-new way to listen on Spotify and connect even more deeply with the artists you love? The DJ is a personalized AI guide that knows you and your music taste so well that it can choose what to play for you. This feature, first rolling out in beta, will deliver a curated lineup of music alongside commentary around the tracks and artists we think you’ll like in a stunningly realistic voice. 

It will sort through the latest music and look back at some of your old favorites—maybe even resurfacing that song you haven’t listened to for years. It will then review what you might enjoy and deliver a stream of songs picked just for you. And what’s more, it constantly refreshes the lineup based on your feedback. 

If you’re not feeling the vibe, just tap the DJ button and it will switch it up. The more you listen and tell the DJ what you like (and don’t like!), the better its recommendations get. Think of it as the very best of Spotify’s personalization—but as an AI DJ in your pocket.

How our AI DJ works

To create the DJ we reimagined the way users listen on Spotify. The DJ knows you and your music taste so well that it will scan the latest releases we know you’ll like, or take you back to that nostalgic playlist you had on repeat last year. Never before has listening felt so completely personal to each and every user, thanks to the powerful combination of:

Spotify’s personalization technology, which gives you a lineup of music recommendations based on what we know you like. 

Generative AI through the use of OpenAI technology. We put this in the hands of our music editors to provide you with insightful facts about the music, artists, or genres you’re listening to. The expertise of our editors is something that’s really important to our philosophy at Spotify. 

We have experts in genres who know music and culture inside and out. And no one knows the music scene better than they do. With this generative AI tooling, our editors are able to scale their innate knowledge in ways never before possible.

A dynamic AI voice platform from our Sonantic acquisition that brings to life stunningly realistic voices from text.

To create the voice model for the DJ, we partnered with our own Head of Cultural Partnerships, Xavier “X” Jernigan. Previously, X served as one of the hosts on Spotify’s first (and personalized) morning show, The Get Up. His personality and voice resonated with our listeners and resulted in a loyal following for the podcast. His voice is the first model for the DJ, and we’ll continue to iterate and innovate, as we do with all our products. 

Where to find the DJ

Ready to have the DJ soundtrack your day? It’s rolling out in English starting today for Spotify Premium users in the U.S. and Canada. 

  1. Head to your Music Feed on Home in the Spotify mobile app on your iOS or Android device.
  2. Tap Play on the DJ card.
  3. Let Spotify do the rest! The DJ will serve a lineup of music alongside short commentary on the songs and artists, picked just for you. 
  4. Not feeling the vibe? Just hit the DJ button at the bottom right of the screen to be taken to a different genre, artist, or mood.

At Spotify we’re uniquely positioned to transform audio. We’re always looking for innovative new ways to improve our users’ listening experiences to meet their needs—so stay tuned for more.

*Update May 16, 2023: DJ is now rolling out in the U.K. and Ireland.

*Update August 8, 2023: DJ is now rolling out in 46 more markets around the world.

Rachel Bittner on Basic Pitch: An Open Source Tool for Musicians


Music creation has never been as accessible as it is now. Gone are the days of classical composers, sheet music, and prohibitively expensive studio time when only trained, bankrolled musicians had the opportunity to transcribe notes onto a page. As technology has changed, so too has the art of music creation—and today it is easier than ever for experts and novices alike to compose, produce, and distribute music. 

Now, musicians use a computer-based digital standard called MIDI (pronounced “MID-ee”). MIDI acts like sheet music for computers, describing which notes are played and when—in a format that’s easy to edit. But creating music from scratch, even using MIDI, can still be very tedious. If you play piano and have a MIDI keyboard, you can create MIDI by playing. But if you don’t, you must create it manually: note by note, click by click. 
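To make the “sheet music for computers” idea concrete, here is a purely illustrative sketch of what a MIDI-style note event carries: which note, how hard it was played, when it starts, and how long it lasts. (The `NoteEvent` class below is a hypothetical stand-in for illustration, not the actual MIDI file format or any particular library’s API.)

```python
from dataclasses import dataclass

# Note names for one octave; MIDI numbers repeat this cycle (middle C = 60).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

@dataclass
class NoteEvent:
    """One note in a MIDI-like 'sheet music for computers' representation."""
    pitch: int        # MIDI note number: 0-127, middle C = 60, A440 = 69
    velocity: int     # how hard the note is struck: 0-127
    start: float      # onset time in seconds
    duration: float   # how long the note sounds, in seconds

    def name(self) -> str:
        """Human-readable pitch name, e.g. 60 -> 'C4'."""
        octave = self.pitch // 12 - 1
        return f"{NOTE_NAMES[self.pitch % 12]}{octave}"

# A two-note melody: middle C, then the E above it.
melody = [
    NoteEvent(pitch=60, velocity=90, start=0.0, duration=0.5),
    NoteEvent(pitch=64, velocity=80, start=0.5, duration=0.5),
]
print([n.name() for n in melody])  # ['C4', 'E4']
```

Because every note is just an editable record like this, changing a wrong note or shifting a melody is a data edit rather than a new recording—which is exactly what makes MIDI easy to work with.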

To help solve this problem, Spotify’s machine learning experts trained a neural network to predict MIDI note events when given audio input. The network is packaged in a tool called Basic Pitch, which we just released as an open source project.

“Basic Pitch makes it easier for musicians to create MIDI from acoustic instruments—for example, by singing their ideas,” says Rachel Bittner, a research manager at Spotify who is focused on applied machine learning on audio. “It can also give musicians a quick ‘starting point’ transcription instead of having to write down everything manually, saving them time and resources. Basically, it allows musicians to compose on the instrument they want to compose on. They can jam on their ukulele, record it on their phone, then use Basic Pitch to turn that recording into MIDI. So we’ve made MIDI, this standard that’s been around for decades, more accessible to more creators. We hope this saves them time and effort while also allowing them to be more expressive and spontaneous.”
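One piece of this pipeline is entirely standard: converting a detected pitch (a frequency in hertz) into a MIDI note number. MIDI note 69 is A4 at 440 Hz, and each semitone multiplies the frequency by 2^(1/12). The helper below sketches that standard conversion; it is not Spotify’s model code, which does the much harder job of detecting the pitches in the audio in the first place.

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Map a frequency in Hz to the nearest MIDI note number.

    MIDI note 69 = A4 = 440 Hz; each semitone is a factor of 2**(1/12).
    """
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_freq(note: int) -> float:
    """Inverse mapping: MIDI note number back to its frequency in Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A singer humming middle C (~261.63 Hz) maps to MIDI note 60.
print(freq_to_midi(261.63))     # 60
print(round(midi_to_freq(69)))  # 440
```

Real singing and playing drift continuously around these ideal frequencies, which is why detecting clean note events (and pitch bends) from audio is a genuine machine learning problem rather than a formula.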

For the Record asked Rachel to tell us more about the thinking and development that go into Basic Pitch and other machine learning efforts, and how the team decided to open up the tool for anyone to access and to innovate on.

Help us understand the basics. How are machine learning models being applied to audio?


On the audio ML (machine learning) teams at Spotify, we build neural networks—like the ones that are used to recognize images or understand language—but ours are designed specifically for audio. Similar to how you ask your voice assistant to identify the words you’re saying and also make sense of the meaning behind those words, we’re using neural networks to understand and process audio in music and podcasts. This work combines our ML research and practices with domain knowledge about audio—understanding the fundamentals of how music works, like pitch, tone, tempo, the frequencies of different instruments, and more.

What are some examples of machine learning projects you’re working on that align with our mission to give “a million creators the opportunity to live off their art”?

Spotify enables creators to reach listeners and listeners to discover new creators. A lot of our work helps with this in indirect ways—for example, identifying tracks that might go well together on a playlist because they share similar sonic qualities like instrumentation or recording style. Maybe one track is already a listener’s favorite and the other one is something new they might like.

We also build tools that help creative artists actually create. Some of our tech is in Soundtrap, Spotify’s digital audio workstation (DAW), which is used to produce music and podcasts. It’s like having a complete studio online. And then there’s Basic Pitch, which is a stand-alone tool for converting audio into MIDI that we just released as an open source project. We open sourced Basic Pitch and built an online demo, so anyone can use it to translate the musical notes in a recording (including voice, guitar, or piano) into MIDI.

Unlike similar ML models, Basic Pitch is not only versatile and accurate at doing this, but it’s also fast and computationally lightweight. So the musician doesn’t have to sit around forever waiting for their recording to process. And on the technological and environmental side, it uses way less energy—we’re talking orders of magnitude less—compared to other ML models. We named the project Basic Pitch because it can also detect pitch bends in the notes, which is a particularly tricky problem for this kind of model. But also because the model itself is so lightweight and fast.

What else makes Basic Pitch a unique machine learning project for Spotify?

I mentioned before how computationally lightweight it is—that’s a good thing. In my opinion, the ML industry tends to overlook the environmental and energy impact of its models. Usually with ML models like this—whether it’s for processing images, audio, or text—you throw as much processing power as you can at the problem as the default method for reaching some level of accuracy. But from the beginning, we had a different approach in mind: We wanted to see if we could build a model that was both accurate and efficient, and if you have that mindset from the start, it changes the technical decisions you make in how you build the model. Not only is our model as accurate as (or even more accurate than) similar models, but since it’s lightweight, it’s also faster, which is better for the user, too.

What’s the benefit of open sourcing this tool?

It gives more people access to it since anyone with a web browser can use the online demo. Plus, we believe the external contributions from the open source community help it evolve as software to create a better, more useful product for everyone. For example, while we believe Basic Pitch solves an important problem, the quality of the MIDI that our system (and others’) produces is still far from human-level accuracy. By making it available to creators and developers, we can use our individual knowledge and experience with the product to continue to improve that quality. 

What’s next for Basic Pitch in this area?

There’s so much potential for what we can do with this technology in the future. For example, Basic Pitch could eventually be integrated into a real-time system, allowing a live performance to be automatically accompanied by other MIDI instruments that “react” to what the performer is doing.

Additionally, we shared an early version of Basic Pitch with Bad Snacks, an artist-producer who has a YouTube channel where she shares production tips with other musicians. She’s been playing around with Basic Pitch, and we’ve already made improvements to it based on her feedback, fixing how the online demo handles MIDI tempo, and other things to make it work better for a musician’s workflow. We partnered with her to use Basic Pitch to create an original composition, which she released as a single on Spotify. She even posted a behind-the-scenes video on her channel showing how she used Basic Pitch to create the track. The violin solo section is particularly cool.

But it’s not just artists and creators that we’re excited about. We’re equally looking forward to seeing what everyone in the open-source developer community has been doing with it. We expect to discover many areas for improvement, along with new possibilities for how it could be used. We’re proud of the research that went into Basic Pitch and we’re happy to show it off. We’ll be even happier if musicians start using it as part of their creative workflows. Share your compositions with us!

Create a cool track using Basic Pitch? Share it on Twitter with the hashtag #basicpitch and tag the team @SpotifyEng.

6 Questions (and Answers) with Tony Jebara, VP of Machine Learning

Tony Jebara, Spotify’s new Vice President of Machine Learning, says he started studying the algorithm-based technology when he was in college, “before it was cool.” Now, machine learning is not only undeniably cool but also incredibly practical: it enables fan-favorite playlists like Discover Weekly and more recent creations like On Repeat and Repeat Rewind.

Tony and his team of engineers and research scientists, therefore, have a twofold mission: to analyze data on what users search and stream, and to use those learnings to run experiments that turn into some of your favorite personalized playlists and personalized homepages.

We recently sat down with Tony, and he explained why, after four years as the director of machine learning at Netflix, he was intrigued by Spotify, where machine learning is central to our company strategy.

As a guitarist and songwriter, it was a perfect—dare we say algorithmic—fit.

First, what is machine learning? How do you use it at Spotify?

Machine learning finds patterns in data in a statistically reliable way so that we are confident they were not flukes. Then, it studies that data to determine what actions to take for each context in order to maximize reward. We’re not just trying to find patterns in the data, but cause and effect relationships too.

At Spotify, machine learning helps us match millions of users to the content (e.g., tracks, podcasts) most relevant to them at an unparalleled speed. We’re aiming to facilitate the user journey and make it enjoyable so that it doesn’t involve as much hunting around on our app. It’s a way for us to say, “You’re going to love these things, let me put them at the top of your page for you,” and also accelerate that process based on what people with similar interests have discovered.
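Tony’s framing—choosing an action for each context to maximize reward—is essentially what the ML literature calls a contextual bandit. Here is a toy epsilon-greedy sketch under made-up data (the contexts, actions, and reward probabilities are invented for illustration; this is not Spotify’s recommender system):

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Toy contextual bandit: for each context (e.g., time of day), learn which
    action (e.g., which playlist to surface) earns the most reward (a stream)."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = defaultdict(float)  # running mean reward per (context, action)
        self.count = defaultdict(int)

    def choose(self, context):
        # Explore a random action with probability epsilon; otherwise exploit
        # the action with the best reward estimate so far.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[(context, a)])

    def update(self, context, action, reward):
        # Incremental mean of observed rewards for this context/action pair.
        key = (context, action)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

random.seed(0)
bandit = EpsilonGreedyBandit(actions=["chill", "workout"])
for _ in range(500):
    action = bandit.choose("morning")
    # Made-up environment: "chill" gets streamed 80% of the time in the
    # morning, "workout" only 20% of the time.
    p = 0.8 if action == "chill" else 0.2
    bandit.update("morning", action, 1.0 if random.random() < p else 0.0)

best = max(bandit.actions, key=lambda a: bandit.value[("morning", a)])
print(best)  # the bandit converges on "chill" for morning contexts
```

The epsilon term is the “hunting around” Tony mentions: the system keeps occasionally trying other options so it can notice when tastes shift, rather than only repeating what worked yesterday.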

You came from Netflix, which is a really interesting player in the machine learning space. How does your work today leverage past experience?

There are lots of similarities. Both services have to algorithmically match users to the right content, and both have to decide how to invest in content and creators. But one key difference is that the Spotify catalog is huge—there are over 50 million songs and hundreds of thousands of podcasts. The Netflix catalog, by contrast, contains only thousands of movies and TV shows. So machine learning and algorithms play a much more crucial role at Spotify.

What makes Spotify’s application of machine learning unique or special?

If you think about what Spotify does, we deliver really, truly personalized experiences on a global level and in localized markets. Creating one personalized playlist for one user in one market can be challenging, but it’s doable with a human curator. We take cultural aspects into consideration, because in culture it’s about more than drawing a straight line from the past into the future. Cultural shifts are sometimes erratic or anything but linear. That’s why we increasingly invest in systems that combine human experts and algorithms. While humans are good at articulating the “new, interesting, and unexpected twist,” algorithms are better at scaling that curation to a personal experience for millions of people.

If you have a catalog of millions of songs and a global market of, you know, 200 million plus, you need to be able to scale your efforts thoughtfully. Machine learning allows and enables us to do that at the speed and quality consistency that Spotify is known for.

Our algorithms allow us to scale out very personalized, hand-selected experiences that feel like they were made just for each listener. The goal is to deliver an amazing listening experience.

What will machine learning mean for our creators—artists and podcasters—on platform?

With machine learning, we can expand our audience analytics capabilities in a way that helps creators get new fans. It’s no longer just about knowing if your song has been downloaded or streamed 8 million times; it’s about creating a connection between artists, creators, and their fans. With machine learning, we can actually start to inform them about what types of people are consuming their work, at what time, and what it gets consumed with. You know—like pairing wine with food. What songs does this podcast pair well with? Things like that help unlock creative potential because people can understand their audiences better. Then they get valuable feedback, something that so many creators crave.

Machine learning is a fast-moving field, to say the least. What do you think the future of machine learning will look like? Let’s say three years from now?  

Over the next three years, machine learning will become more causal and long-term. Right now, machine learning mostly uncovers superficial input-output relationships. For example, given what you played today, here’s what you’ll play tomorrow. This leads to short-term engagement but might not yield long-term satisfaction. My hope is that three years from now, machine learning becomes less myopic. It should figure out the best sequence of actions to lead you on a journey where you discover new great audio content, become more engaged, and stay satisfied as a listener for years to come.

Now before we go, are there any podcasts you’re especially into right now? When do you listen to them?

I’m kind of nerdy, so I like Stuff You Should Know. It’s not a story, it’s just interesting things around nutrition or technology or political facts. I like to listen to it and learn about some random new things popping up.

I usually listen to podcasts while I’m lying in bed, when I don’t want to hold my screen or have the blue light keep me up but still want to learn something new.