Spotify is finally showing some semblance of interest in crafting a more coherent content moderation policy.
This week, the company announced plans to create a Safety Advisory Council aimed at helping Spotify “evolve its policies” around key areas like online safety, policy, and equity. The council will have its work cut out for it. Spotify has spent most of 2022 embroiled in a scandal over allegations that some of its most popular content makers, like comedian Joe Rogan, have spread harmful misinformation.
The council’s members will consist of individuals and organizations with “deep expertise” in safety and other related areas. So far, the council’s founding members include Emma Llansó of the Center for Democracy and Technology and Tonei Glavinic from the Dangerous Speech Project among 12 others. Spotify says it wants these advisors to help shape Spotify’s future policies while still “making sure we respect creator expression.”
“While Spotify has been seeking feedback from many of these founding members for years, we’re excited to further expand and be more transparent about our safety partnerships,” Spotify said. “As our product continues to grow and evolve, council membership will grow and evolve along with it.”
With the announcement, Spotify joins the likes of Meta and other tech giants who have moved to implement boards, councils, and other amalgams of highly paid people ostensibly aimed at addressing content policy disagreements. At first glance, Spotify’s council sounds similar to Meta’s Oversight Board, first announced back in 2018; in fact, it’s significantly weaker. While the Oversight Board can rule on content and account takedowns (say, like that of the former president of the United States), Spotify’s version is merely advisory in nature. The council members could, for example, hypothetically advise Spotify to take some action on Joe Rogan’s controversial show, but Spotify maintains the ability to say, “fuck off.”
Spotify did not immediately respond to Gizmodo’s request for comment.
While the Joe Rogan Experience is far from the only content on Spotify to test the company’s moderation policies, its scale and public impact were emblematic of a larger trend. As a quick reminder, Rogan and other podcasters put Spotify in the hot seat earlier this year by exposing alleged inconsistencies in its content moderation policies around covid-19 misinformation, hate speech, and other controversial content. In January, in the middle of the covid-19 omicron wave, a coalition of hundreds of doctors and public health experts called on the company to ban Rogan for spreading “false and societally harmful assertions” related to the pandemic. Not long after that, Spotify sided with Rogan over musician Neil Young after Young said he refused to have his music placed alongside misinformation.
Following massive backlash from advocacy groups and some workers within Spotify, the company removed 70 episodes of The Joe Rogan Experience recorded between 2009 and 2018. Though the exact reasons Spotify removed all those episodes remain unclear, Rogan released an Instagram post days later apologizing for his repeated use of the N-word on his show. Spotify’s CEO Daniel Ek managed to muddle the company’s response in a subsequent memo to employees, in which he condemned Rogan’s use of racial slurs but went on to say that he did not believe “silencing Joe is the answer.” Doing so, he warned, could create a “slippery slope.”
Still, despite all that controversy, Spotify’s policies (or lack thereof) around content moderation don’t seem to have turned off its users. In the first quarter, the company increased its paid subscriber count from 80 million to 182 million despite willingly backing out of its Russian market, which accounted for 1.5 million subscribers. On the ad-supported side, Spotify grew from 16 million monthly active users to 18 million and increased its revenue by 24% year-over-year.