Thanks to the endlessly depressing extent to which covid has kept everybody trapped inside, Discord is more relevant than ever. But as the company revealed in its latest transparency report, that has led to new challenges—and to improved efforts to confront older problems it probably should have tackled sooner.
Discord, which is reportedly in talks with Microsoft to sell for around 1.3 Bethesdas, released the transparency report today. Amid standard operational insights about Discord’s second half of 2020, a few details stood out. For one, the overall number of user reports increased pretty steadily across 2020—from 26,886 in January to 65,103 in December—with the number initially jumping up in March. This makes sense; people were trapped in their homes, and Discord was growing rapidly as a result. Spam resulted in the most account deletions (over 3 million), with exploitative content including nonconsensual pornography coming in a distant second (129,403), and harassment in third (33,615).
Discord also pointed out that of reports made, it most frequently took action against issues involving child harm material, cybercrime, doxxing, exploitative content, and extremist or violent content. “This may be partly explained by the team’s prioritization of issues in 2020 that were most likely to cause damage in the real world,” the company said in the transparency report.
Indeed, according to the report, Discord removed over 1,500 servers for violent extremism in the second half of 2020, which it said was “nearly a 93% increase from the first half of the year.” It cited groups like the Boogaloo Boys and QAnon as examples.
“This increase can be attributed to the expansion of our anti-extremism efforts as well as growing trends in the online extremism space,” the company wrote. “One of the online trends observed in this period was the growth of QAnon. We adjusted our efforts to address the movement—ultimately removing 334 QAnon-related servers.”
Cybercrime server deletions similarly shot up over the course of 2020, increasing by 140% from the first half of the year. In total, Discord removed almost 6,000 servers for cybercrime in the second half of 2020, which it said followed a significant increase in reports. “More cybercrime spaces than ever were flagged to Trust & Safety, and more were ultimately removed from our site,” Discord wrote.
Discord also emphasized its focus on methods that allow it to “proactively detect and remove the highest-harm groups from our platform,” pointing to its efforts against extremism as an example, but also noting where it made a mistake.
“We were disappointed to realize that in this period one of our tools for proactively detecting [sexualized content related to minors] servers contained an error,” Discord wrote. “There were fewer overall flags to our team as a result. That error has since been resolved—and we’ve resumed removing servers the tool surfaces.”
The other issue here is that Discord made a concerted effort to remove QAnon content around the same time other platforms did—after the lion’s share of the damage had already been done. While removal may have been proactive according to Discord’s internal definition, platforms were slow to even behave reactively when it came to QAnon as a whole—which led to real and lasting damage in the United States and across the world. Back in 2017, Discord also functioned as a major staging ground for the Unite the Right rally in Charlottesville, Virginia, which ultimately led to violence and three deaths. The platform has tried to clean up its act in the years since, but it has continued to play host to an abundance of abuse and alt-right activity.
Some transparency is much better than none, but it remains worth noting that tech companies’ transparency reports often provide little insight into how decisions get made and into the larger priorities of the platforms that essentially govern our online lives. Earlier this year, for example, Discord banned r/WallStreetBets’ server at the height of the GameStop stonksapalooza. Onlookers suspected foul play—outside interference of some sort. Speaking to Kotaku, however, two sources made it clear that labyrinthine internal moderation policies ultimately caused Discord to make that decision. Bad timing and substandard transparency before and after took care of the rest.
This is just a minor example of how this dynamic can play out. There are many more. Platforms can say they’re being transparent, but ultimately they’re just giving people a bunch of barely contextualized numbers. It’s hard to say what real transparency looks like in the age of all-encompassing tech platforms, but it’s not this.