Twitter has taken the lead in content moderation. What will happen next?

In 2015, a man in a skull mask posted a video laying out his plans to kill Brianna Wu. The skull video was just one of many disturbing posts targeting Wu and other women online as part of the harassment campaign dubbed Gamergate.

This was in the early days of content moderation, in a social media ecosystem with far fewer rules, but Twitter removed the video quickly, before it went viral, Wu said. Although Gamergate illustrated how inept social media platforms could be at protecting their users, Wu said Twitter's immediate action was an early example of the company's relative willingness to respond to criticism and tackle abuse.

While social media platforms have struggled to respond to disinformation, hate speech, election meddling, and incitement to violence, Twitter has taken a more nuanced and thorough approach over the years, developing, revisiting, and expanding a broad framework of policies.

Twitter, for example, has led efforts to create safety policies and to enforce them against high-profile violators. It permanently suspended right-wing provocateur Milo Yiannopoulos in July 2016 and conspiracy theorist Alex Jones in September 2018; Facebook did not ban Yiannopoulos and Jones until May 2019.

In the summer of 2020, Twitter placed a warning label on a tweet in which then-President Trump threatened a violent crackdown on protests in Minneapolis, saying it violated the company's rules against "glorification of violence," and around the same time flagged two of his election-related tweets for fact-checking. The move set Twitter apart from Facebook, whose chief executive, Mark Zuckerberg, indicated he was not inclined to take similar action, and it set the stage for a host of social networks that later suspended Trump from their platforms days before his term expired.

This year, Facebook suspended the account of Rep. Marjorie Taylor Greene (Ga.) for 24 hours, one day after Twitter permanently banned Greene for repeatedly spreading misinformation about COVID-19.

But Elon Musk’s successful bid to buy Twitter could change the company’s trajectory. Musk, who has described himself as a free-speech absolutist, has made it clear he envisions a less restrictive platform, writing in a series of tweets Tuesday that he supports moderation only when it is required by law.

“I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people,” Musk wrote on Twitter.

“Twitter has historically served as one of the most advanced social media platforms, always testing new ideas and concepts,” said Jennifer Edwards, executive director of the Texas Social Media Research Institute at Tarleton State University.

Musk’s purchase of the company, and the laissez-faire moderation ethic he espouses, could push other social media platforms to step back and soften their own moderation standards, she said.

The dynamic of Twitter leading the pack could also reverse under Musk, with the new owner taking cues from his more established counterpart at Facebook. For example, Musk has said he wants to start “authenticating all real humans” on Twitter, a move that, while vague, could bring his platform closer to Zuckerberg’s, where users are expected to post under the name “they go by in everyday life.”

Musk’s hints that he might follow suit have already drawn criticism.

“Any free speech advocate (as Musk seems to think of himself) who wants to require users to provide ID to access the platform likely fails to recognize the critical importance of pseudonyms and anonymity,” executives at the digital rights nonprofit Electronic Frontier Foundation wrote in response to news of Musk’s purchase. Their statement noted that Facebook’s real-name policy has been used to push vulnerable communities, such as transgender people, drag queens and sex workers, off the platform.

Many users have good reason not to trust social networks with identifying information and refuse to provide it, said Sophie Zhang, a former Facebook data scientist. In South Korea, real-name authentication databases have been repeatedly hacked, she said, because they are a treasure trove of personal information.

Free-speech absolutism “is a good idea” in principle, Zhang said, but the vast majority of content moderation does not involve contentious political discussion, as Musk suggests, so those values don’t necessarily work in practice.

Zhang said it’s too early to tell how Musk’s influence will affect content moderation on the platform. But its challenges may force him and other free-speech absolutists to grapple with why speech can’t simply flow freely if the platform is also to avoid becoming a swamp of crypto spam, pornography and fake advertising.

“The real question for me is how does Elon make these decisions when he is really in a position of responsibility,” she said.

Christopher Bail, a Duke University professor who directs the Polarization Lab and studies political extremism using social media data, said some of Musk’s proposals rest on a faulty premise. Musk insists that conservative voices are being suppressed, and while high-profile cases like the suspension of Trump’s account are often cited as evidence of anti-conservative bias, research shows the platform actually tends to amplify conservative views, Bail said.

Musk has said accounts should almost never be banned, but he has also vowed to crack down on spammers, which would presumably mean identifying them by the content of their posts and banning their accounts.

“I think where the rubber hits the road, it will be harder than he realizes to do what he wants,” Bail said.

Researchers and activists fear that Musk’s focus on unrestricted speech will undermine the trust and safety tools Twitter’s teams have built over the years. By offering enforcement options beyond deactivating accounts and deleting posts, experts say, Twitter has been able to apply its rules with more nuance. The company has also shown greater transparency than its peers, maintaining open channels of communication with researchers and publicly releasing datasets on spam and misinformation for analysis by academics and others.

Twitter maintains an archive of posts it has removed from the platform, allowing researchers to study the reach and impact of viral disinformation. Its Birdwatch initiative seeks to build a crowdsourced approach to identifying misinformation.

Wu said that at the height of the harassment she faced during Gamergate, Twitter’s then-vice president of trust and safety reached out to listen to her concerns and offer support. In the years that followed, Twitter set itself apart by making a real effort to engage with critics like her.

“They’ve done more than Facebook, more than Reddit, more than Google,” said Wu, who says she has informally advised the company’s trust and safety team, without pay, for about five years. “Twitter has never received the credit it deserves for being aggressive in the fight against harassment.”