How Spotify Approaches Safety With Sarah Hoyle, Global Head of Trust and Safety

Spotify’s mission is to unlock the potential of human creativity by giving a million artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it. In support of that mission, our global teams work around the clock to ensure that the experience is safe and enjoyable for creators, listeners, and advertisers.

While there is some user-generated content on Spotify, the vast majority of listening time is spent on licensed content. Regardless of who created the content, our top priority is to allow our community to connect directly with the music, podcasts, and audiobooks they love. When we think about safety in this context, it can be helpful to picture Spotify as a performance venue.

Like a performance venue, Spotify hosts different types of shows across a variety of genres. Not every show is suitable for all audiences or in line with everyone’s unique tastes. Just as people select which shows they want to see, Spotify gives users ways to seek out and curate content that fits their preferences. For example, users can skip music tagged by creators or rights holders as “explicit” by using our content toggle. Mobile users can block artists or songs they wish to hide, exclude playlists from their taste profiles, or use the “not interested” button to better control their experience.

While Spotify strongly supports creative expression and listener choice, this does not mean that anything goes. In the same way that a venue has rules to ensure that shows run smoothly and safely, Spotify has Platform Rules that define acceptable content and behavior on our platform. Bad behavior at a concert can lead to consequences like backstage access being revoked or, in egregious situations, someone being kicked out of the venue. Breaking Spotify’s rules can likewise result in content removal, reduced distribution, and/or demonetization. We also remove content that violates the law and/or our Terms of Service, and creators or rights holders may choose to remove content themselves.

The measures we continue to take around responsible content recommendation and search also play a key role in creating a safe and enjoyable experience. For example, product and engineering teams across the company work with Trust & Safety to conduct impact assessments that allow us to evaluate and better mitigate potential algorithmic harms and inequities. We have also been introducing search warnings and in-app messaging for users searching for content related to suicide, self-harm, and disordered eating, which link to Spotify’s Mental Health Resources page. This work is being done in partnership with experts like Ditch the Label and the Jed Foundation, with the goal of connecting potentially at-risk users with trusted help resources.

Keeping our platform safe is a challenging job, and as the landscape evolves, we’re committed to evolving along with it. Safety is a company-wide responsibility, and our efforts involve ongoing coordination among engineers, product managers, data scientists, researchers, lawyers, and social impact experts, as well as the policy and enforcement experts in Trust & Safety. Many of the people on these teams have built long careers in online safety, as well as in fields like human rights, social work, academia, health care, and consulting. We have also established an internal Safety Leadership group that regularly brings together executives from different departments to help ensure awareness of safety needs and monitor progress on our efforts.

To complement our in-house expertise, we also seek counsel and feedback from third-party experts around the world, including our Safety Advisory Council, to ensure we’re considering multiple points of view when shaping our safety approach. In 2022, we acquired the start-up Kinzen for its local and linguistic expertise. Now known as the Content Safety Analysis team within Spotify, it brings a nuanced understanding of the global safety landscape and works proactively to deliver a safe and enjoyable experience across our content offerings.

Click here to learn more about Spotify’s approach to safety.

Spotify Continues to Ramp Up Platform Safety Efforts with Acquisition of Kinzen

Today, Spotify is excited to share that we have acquired Kinzen, a Dublin-based global leader in protecting online communities from harmful content. Kinzen’s advanced technology and deep expertise will help us more effectively deliver a safe, enjoyable experience on our platform around the world.

Spotify’s partnership with Kinzen, which began in 2020, has been critical to enhancing our approach to platform safety. The company’s unique technology is particularly suited to podcasting and other audio formats, making its value to Spotify clear and unmatched. The technology the Kinzen team brings to Spotify combines machine learning and human expertise, backed by analysis from leading local academics and journalists, to analyze potentially harmful content and hate speech across multiple languages and countries.

“We’ve long had an impactful and collaborative partnership with Kinzen and its exceptional team. Now, working together as one, we’ll be able to even further improve our ability to detect and address harmful content, and importantly, in a way that better considers local context,” said Dustee Jenkins, Spotify’s Global Head of Public Affairs. “This investment expands Spotify’s approach to platform safety, and underscores how seriously we take our commitment to creating a safe and enjoyable experience for creators and users.”

Given the complexity of analyzing audio content in hundreds of languages and dialects, and the challenges in effectively evaluating the nuance and intent of that content, the acquisition of Kinzen will help Spotify better understand the abuse landscape and identify emerging threats on the platform.

“The combination of tools and expert insights is Kinzen’s unique strength that we see as essential to identifying emerging abuse trends in markets and moderating potentially dangerous content at scale,” said Sarah Hoyle, Spotify’s Head of Trust and Safety. “This expansion of our team, combined with the launch of our Safety Advisory Council, demonstrates the proactive approach we’re taking in this important space.”

Introducing the Spotify Safety Advisory Council

Over the past several months, Spotify has taken steps to be more transparent about our safety efforts. In January, we published our Platform Rules and took measures to ensure that creators on our platform view and adhere to them. These were first steps forward, and today we unveil another: our newly formed Spotify Safety Advisory Council, the first safety-focused council of its type at any major audio company.

The founding members of Spotify’s Safety Advisory Council are individuals and organizations around the world with deep expertise in areas that are key to navigating the online safety space. At a high level, the council’s mission is to help Spotify evolve its policies and products in a safe way while making sure we respect creator expression. Our council members will advise our teams in key areas like policy and safety-feature development as well as guide our approach to equity, impact, and academic research. Council members will not make enforcement decisions about specific content or creators. However, their feedback will inform how we shape our high-level policies and the internal processes our teams follow to ensure that policies are applied consistently and at scale around the world.

While Spotify has been seeking feedback from many of these founding members for years, we’re excited to further expand and be more transparent about our safety partnerships. As our product continues to grow and evolve, council membership will grow and evolve along with it. In the months ahead, we will work closely with founding members to expand the council, with the goal of broadening regional and linguistic representation and adding more experts in the equity and impact space.

The founding members and partner organizations include the following:

Dangerous Speech Project, represented by Professor Susan Benesch and Tonei Glavinic

Center for Democracy & Technology, represented by Emma Llansó

Professor Danielle Citron

Dr. Mary Anne Franks

Alex Holmes

Institute for Strategic Dialogue (ISD), represented by Henry Tuck and Milo Comerford

Dr. Jonas Kaiser

Kinzen, represented by founders Mark Little and Áine Kerr

Dr. Ronaldo Lemos

Dr. Christer Mattsson

Dr. Tanu Mitra

Desmond Upton Patton, PhD, MSW

Megan Phelps-Roper 

USC Annenberg Inclusion Initiative, represented by Dr. Katherine Pieper and Dr. Stacy L. Smith

 

You can read the full bios for the organizations and individuals here.