
Partnering To Combat the Spread of Violent Content Online

At Spotify, our Platform Rules strictly prohibit content that promotes terrorism or violent extremism. We take action on content that violates our Platform Rules, and we work closely with third-party extremism experts to ensure we are making the most informed decisions throughout these processes, taking local, regional, and cultural context into account.

Today, to advance these efforts, we are announcing the addition of Moonshot to our Spotify Safety Advisory Council (SSAC). Moonshot’s work on understanding the motivations for online hate and extremism—and how to reduce the threat of violence both online and offline—has been critical in building safeguards on our platform. 

Growing our Spotify Safety Advisory Council

At Spotify, we design products and develop policies with safety in mind, and our longstanding partnership with Moonshot has provided critical insights to get ahead of emerging online trends.

“Since 2021, we have partnered with Spotify to produce innovative product solutions and meaningful action in response to the threat of online hate and extremism,” says Clark Hogan-Taylor, Director of Tech Partnerships at Moonshot. “From protections against racial violence to fighting organized crime, our team has worked tirelessly to ensure Spotify is a safe platform for all. We are proud to be joining the Safety Advisory Council to further contribute to this important mission.”

Moonshot joins other members of the SSAC, such as the Institute for Strategic Dialogue (ISD), to help Spotify better understand the global landscape of hate and extremism as harmful trends spread across borders and platforms.

These partnerships and ongoing engagement with our SSAC reflect our commitment to constantly evolve our approach as we combine the expertise of our internal safety teams, the scale of our technology, and the insights of our trusted external partners.

Violent extremist content is prohibited on Spotify

Spotify addresses potential violent extremist content through multiple policies, which include, but are not limited to:

  • Our hate policies, which prohibit content that explicitly incites violence or hatred toward people based on protected characteristics, including race, sex, ethnicity, or sexual orientation; and
  • Our dangerous content policies, which clearly outline that content promoting or supporting terrorism or violent extremism is strictly not allowed on the Spotify platform.

We identify potentially violative content by using proactive monitoring methods, leveraging human expertise, and reviewing and responding to user reports. When it comes to enforcement, we may take various actions, including removing content or the creator’s account, reducing distribution, and/or demonetizing content.

This space is nuanced, complex, and always evolving, and we are committed to continually improving our processes. You can read more about our approach to violent extremism and our safety work here.

Partnering With Industry To Create a Safer Online Environment for Young People

By Marcelina Slota – Head of Platform Integrity, Spotify

Updated September 13, 2024

To continue our work providing positive experiences for young people, we’re proud to announce that PROJECT ROCKIT is joining our Spotify Safety Advisory Council (SSAC). Through their global advocacy, PROJECT ROCKIT empowers young people to combat bullying at school, online, and beyond, and their expertise will help inform how Spotify identifies and manages bullying and harassment however it may arise on our platform. Our work with PROJECT ROCKIT will build upon our long-standing partnerships with experts worldwide who advise us on how to continue building Spotify with safety by design at every step.

“It’s a privilege for PROJECT ROCKIT to join Spotify’s Safety Advisory Council,” says Lucy Thomas, Cofounder and CEO of PROJECT ROCKIT. “For years, we’ve worked tirelessly to create safer spaces for young people, and this collaboration represents an incredible opportunity to extend that impact globally. By bringing youth-driven perspectives to Spotify, we can help shape a platform that champions both safety and authentic expression. Together, we’re paving the way for a digital world where young people are empowered, respected, and safe.” 

Original Post

Safety is a top priority for Spotify, which is why we want to make it easier for young people and parents to understand and navigate the digital world. Today, we’re announcing that we have a new Parental Guide to help do just that, and that Spotify has joined the Tech Coalition to share best practices with industry on upholding youth safety.

The Tech Coalition is a trusted organization that unites the global tech industry to foster a safer online environment for young people by preventing and combating online child sexual exploitation and abuse. This partnership expands our network of trusted third-party experts who help advise our teams on how we launch policies and products with safety by design. Some of the key stakeholders who’ve been helping guide our work in this space include Thorn, the WeProtect Global Alliance, and Safety Advisory Council member Alex Holmes (Deputy CEO of the nonprofit The Diana Award and founder of Anti-Bullying Ambassadors).

In addition to partnering with experts across the globe, Spotify works to craft an experience that is both safe and enjoyable for young people in a number of ways, including by:

  • Establishing a zero-tolerance policy against content that exploits children and Platform Rules that ban illegal and/or abusive behaviors that could harm children;
  • Leveraging machine learning signals and establishing user reporting mechanisms to detect potential policy and/or legal violations;
  • Staffing teams around the clock to review and promptly remove potentially violating or explicit content; and
  • Connecting potentially vulnerable users to mental health resources when they search for content related to suicide, self-harm, and disordered eating.

We have also launched our Parental Guide to help explain how parents can curate the experience that’s right for their families. Our work in this space is ongoing, and we’ll continue updating this resource (and others) to reflect the best and most up-to-date information.

How Spotify Is Protecting Election Integrity in 2024

Dustee Jenkins, Chief Public Affairs Officer, Spotify

With billions of people from over 50 countries heading to the polls to cast their vote, 2024 is shaping up to be the largest election year in history. Safeguarding our platform during critically important global events is a top priority for our teams, and we’ve spent years developing and refining our approach. Knowing that election safety is top of mind for many of our creators and users, we wanted to provide further insights into our approach at Spotify.

Our Approach 

To help creators understand what they can and cannot do on Spotify, we have Platform Rules that apply to everyone, partnerships with trusted experts who help us identify evolving forms of online abuse, and product interventions to elevate timely and trusted resources and help restrict the distribution of potentially harmful content. Our Platform Rules clearly state that content that attempts to manipulate or interfere with election-related processes is prohibited. When we identify violative content, we take the appropriate action.

When it comes to elections, risks often vary from country to country, and the types of abuse that manifest can be hyper-localized and nuanced. Rather than apply a one-size-fits-all approach to every market, our teams conduct individual risk assessments that consider a variety of indicators, including Spotify’s availability and specific product offerings in a country, historical precedents for online and offline harm during voting periods, and emerging geopolitical factors that may increase on-platform risks.

We monitor these factors on an ongoing basis and use our learnings to inform policy and enforcement guidelines, customize in-product interventions, and determine where we may benefit from additional resourcing and/or third-party inputs. Ultimately, our primary focus is to reduce risk so that our listeners, creators, and advertisers can enjoy our products.

Expert Partnerships

To help us understand the nuance of potential harms around the world, Spotify acquired Kinzen in 2022. This has allowed us to conduct ongoing research in multiple languages and in critical areas like misinformation and hate speech. This research is supported by Spotlight, a pioneering tool designed to quickly identify potential risks within long-form audio content like podcasts.

Additionally, we partner closely with third-party experts to ensure we represent multiple viewpoints when discussing sensitive policy areas. These include our global Spotify Safety Advisory Council and the Institute for Strategic Dialogue.

In-Product Resources

During key elections, we encourage nonpartisan community engagement by connecting listeners with reliable, local information. These campaigns have driven millions of visits to civic engagement resources, helping users check their voter status, register to vote, or learn more about their local elections, whatever their political affiliation. 

We also leverage a combination of algorithmic and human curation to identify content that violates our guidelines, and we may update our recommendations to curb the spread of potentially manipulated or dangerous information.

Political Ads

Spotify accepts political advertisements across music, the Spotify Audience Network, and Originals & Licensed podcasts in certain markets.

Political ads may be placed in the Spotify Audience Network and Spotify’s free, ad-supported service inventory. An account must be eligible for political ads, and the account holder must complete an advertiser identity verification process to proceed. Political ads are unavailable for purchase via our self-serve tool, Spotify Ad Studio. 

Additionally, we require that political advertisements clearly disclose the use of any synthetic or manipulated media, including media created or edited with the use of artificial intelligence tools that depict real or realistic-looking people or events. This disclosure must be included in the advertisement and must be clear and conspicuous.

To read more about political ads in the markets where they are offered, and to learn about how to report an ad you believe violates our policies, please review Spotify’s political advertising editorial policies.

How Spotify Approaches Safety With Sarah Hoyle, Global Head of Trust and Safety

Spotify’s mission is to unlock the potential of human creativity by giving a million artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it. In support of that endeavor, our global teams work around the clock to ensure that the experience along the way is safe and enjoyable for creators, listeners, and advertisers. 

While there is some user-generated content on Spotify, the vast majority of listening time is spent on licensed content. Regardless of who created the content, our top priority is to allow our community to connect directly with the music, podcasts, and audiobooks they love. When we think about the safety aspect of this, it can be helpful to do so in the context of seeing a show at a performance venue.

Like a performance venue, Spotify hosts different types of shows across a variety of genres. Not every show may be suitable for all audiences or in line with everyone’s unique tastes. Just like people select which shows they want to see, Spotify provides opportunities for users to seek out and curate content that they like and that is appropriate for their preferences. For example, users can skip music tagged by creators or rights holders as “explicit” by using our content toggle. Mobile users can block artists or songs they wish to hide, exclude playlists from their taste profiles, or use the “not interested” button to better control their experiences.

While Spotify strongly supports enabling creative expression and listener choice, this does not mean that anything goes. In the same way that a venue has rules to ensure that shows run smoothly and are safe, Spotify has Platform Rules to guide what’s acceptable content and behavior on our platform. Bad behavior at a concert can lead to things like backstage access being revoked or, in egregious situations, someone being kicked out of the venue. Breaking Spotify’s rules can have consequences like removal, reduced distribution, and/or demonetization. We will also remove content that violates the law and/or our Terms of Service. Creators or rights holders may also choose to remove content themselves.

Measures we continue to take around responsible content recommendations and search also play key roles in creating a safe and enjoyable experience. For example, product and engineering teams across the company work with Trust & Safety to conduct impact assessments that allow us to evaluate and better mitigate potential algorithmic harms and inequities. We’ve also been introducing search warnings and in-app messaging, which link to Spotify’s Mental Health Resources page, for users searching for content related to suicide, self-harm, and disordered eating. This work is being done in partnership with experts like Ditch the Label and the Jed Foundation with the goal of connecting potentially at-risk users with trusted help resources.

Keeping our platform safe is a challenging job and, as the landscape evolves, we’re committed to evolving along with it. Safety is a company-wide responsibility and our efforts involve ongoing coordination between engineers, product managers, data scientists, researchers, lawyers, and social impact experts, as well as the policy and enforcement experts in Trust & Safety. Many of the folks on these teams have long careers in online safety, as well as in fields like human rights, social work, academia, health care, and consulting. We have also established an internal Safety Leadership group that regularly brings executives from different departments together to help ensure awareness of safety needs and monitor progress on our efforts. 

To complement our in-house expertise, we also seek counsel and feedback from third-party experts around the world, including our Safety Advisory Council, to ensure we’re considering multiple points of view when shaping our safety approach. In 2022, we invested in the local and linguistic expertise of start-up Kinzen, now known as the Content Safety Analysis team within Spotify, which has a nuanced understanding of the global safety landscape and works proactively to deliver a safe and enjoyable experience across our content offerings. 

Click here to learn more about Spotify’s approach to safety.

Introducing the Spotify Safety Advisory Council


Over the past several months, Spotify has taken steps to be more transparent about our safety efforts. In January, we published our Platform Rules and took measures to ensure that creators on our platform view and adhere to them. These were first steps forward, and today we unveil another: our newly formed Spotify Safety Advisory Council—the first safety-focused council of its type at any major audio company.

The founding members of Spotify’s Safety Advisory Council are individuals and organizations around the world with deep expertise in areas that are key to navigating the online safety space. At a high level, the council’s mission is to help Spotify evolve its policies and products in a safe way while making sure we respect creator expression. Our council members will advise our teams in key areas like policy and safety-feature development as well as guide our approach to equity, impact, and academic research. Council members will not make enforcement decisions about specific content or creators. However, their feedback will inform how we shape our high-level policies and the internal processes our teams follow to ensure that policies are applied consistently and at scale around the world.

While Spotify has been seeking feedback from many of these founding members for years, we’re excited to further expand and be more transparent about our safety partnerships. As our product continues to grow and evolve, council membership will grow and evolve along with it. In the months ahead, we will work closely with founding members to expand the council, with the goal of broadening regional and linguistic representation and adding experts in the equity and impact space.

The founding members and partner organizations include the following:

Dangerous Speech Project, represented by Professor Susan Benesch and Tonei Glavinic

Center for Democracy and Technology, represented by Emma Llansó

Professor Danielle Citron

Dr. Mary Anne Franks

Alex Holmes

Institute for Strategic Dialogue (ISD), represented by Henry Tuck and Milo Comerford

Dr. Jonas Kaiser

Kinzen, represented by Founders Mark Little and Áine Kerr

Dr. Ronaldo Lemos

Dr. Christer Mattsson

Dr. Tanu Mitra

Desmond Upton Patton, PhD, MSW

Megan Phelps-Roper 

USC Annenberg Inclusion Initiative, represented by Dr. Katherine Pieper and Dr. Stacy L. Smith


You can read the full bios for the organizations and individuals here.