Spotify Technology SA, the audio streaming service, announced on Wednesday that it had acquired Kinzen, a company that has helped it detect objectionable material on its platform.
The acquisition is part of Spotify's effort to address harmful content on its platform after "The Joe Rogan Experience" drew criticism earlier this year, when the podcaster was accused of spreading misinformation about COVID-19.
The Dublin-based company has worked with Spotify since 2020, initially focusing on the integrity of election-related content around the world. Kinzen's mandate has since broadened to include addressing hate speech, disinformation, and misinformation.
“We’ve long had an impactful and collaborative partnership with Kinzen and its exceptional team. Now, working together as one, we’ll be able to even further improve our ability to detect and address harmful content, and importantly, in a way that better considers local context,” said Dustee Jenkins, Spotify’s Global Head of Public Affairs. “This investment expands Spotify’s approach to platform safety, and underscores how seriously we take our commitment to creating a safe and enjoyable experience for creators and users.”
Given the complexity of analyzing audio content in hundreds of languages and dialects, and the challenges in effectively evaluating the nuance and intent of that content, the acquisition of Kinzen will help Spotify better understand the abuse landscape and identify emerging threats on the platform.
“The combination of tools and expert insights is Kinzen’s unique strength that we see as essential to identifying emerging abuse trends in markets and moderating potentially dangerous content at scale,” said Sarah Hoyle, Spotify’s Head of Trust and Safety. “This expansion of our team, combined with the launch of our Safety Advisory Council, demonstrates the proactive approach we’re taking in this important space.”
Deal terms were not disclosed.
Earlier this year, Spotify said it would be more transparent in how it determines what is acceptable and unacceptable content. It published its platform rules for the first time in January. In June, it formed a Safety Advisory Council to provide input on harmful content.