How protesters in Russia and Ukraine protect their online security

On Thursday evening, human rights activist Marina Litinovich posted a video on her Facebook account in which she called on her compatriots to protest against Russia’s invasion of its western neighbor.

“I know that right now many of you feel despair, helplessness, shame for Vladimir Putin’s attack on the friendly people of Ukraine,” she said. “But I urge you not to despair.”

A few hours later, Litinovich was in custody, facing a fine for “attempting to organize an unsanctioned rally.”

As Russia cracks down on anti-war protests, those who express dissent on the ground and online face heightened danger.

Hundreds of protesters have been detained in Moscow and St. Petersburg. Human rights activists warn that authors of critical social media posts in the region will face a new wave of reprisals, including detentions and other legal consequences.

Some social media users have improvised ways to communicate in an attempt to avoid censorship or arrest. In one case, an Instagram user posted an image with no obvious meaning to outsiders: rows of emoji of a man walking, a sketch of a woman’s head, and the number seven, indicating the time and place of a protest.

Meanwhile, social networks have taken steps to eliminate threats to their users in these regions.

In response to news of the conflict escalating on Wednesday night, Meta, the parent company of Facebook, created a “Special Operations Center” to monitor and quickly respond to the military conflict, and launched a tool in Ukraine that allows people to lock their profile with a single click. The tool adds an extra layer of privacy by preventing users who are not their friends from viewing their posts or downloading and sharing their profile photo, according to Nathaniel Gleicher, head of Facebook’s security policy, who described the company’s response to the crisis in a series of posts on Twitter.

Facebook previously launched a similar one-click tool in Afghanistan in August, in response to feedback from activists and journalists. The tool has also previously been deployed in Ethiopia, Bangladesh and Myanmar, according to the company.

Twitter published a security guide, warning that when using its platform “in conflict zones or other high-risk areas, it is important to know how to control your account and digital information.” The company advised setting up two-factor authentication (which protects an account even if its password is compromised), turning off location information in tweets, adjusting privacy settings so tweets are visible only to followers, or deactivating your account if that seems like the safest option.

Sophie Zhang, a former data scientist at Facebook, said that while a quick and easy tool to lock accounts was useful, earlier and more drastic measures from social media companies could have slowed Putin’s progress toward regional dominance. The lack of an aggressive response to the earlier “terrible repressions” in Belarus, including the use of people’s Facebook activity to make arrests, reflects a broader problem with how social media companies deal with human rights issues, she said.

Zhang has criticized Facebook’s response to global political conflicts in the past. In a lengthy memo published by BuzzFeed in 2020, she described how the company failed to counter or stop disinformation campaigns by politicians in many countries who abused the platform to influence elections and gain power.

Twitter spokeswoman Kathy Rosborough said in an email that, in line with the company’s response to other global events, its security and integrity teams are monitoring potential risks, including identifying and stopping attempts to spread false and misleading information, and seeking to increase “the speed and scale” of applying its policies.

“Twitter’s top priority is keeping people safe, and we’ve been working to improve the security of our service for a long time,” Rosborough said.

Facebook is actively removing content that violates its policies and is partnering with third-party fact-checking services in the region to debunk false claims, spokeswoman Dani Lever said in an emailed statement.

“When they rate something as false, we move that content lower in the feed so that fewer people see it,” Lever said. “We are also giving people more information to decide what to read, trust and share by adding warning labels to content judged to be false and applying labels to state-controlled media publishers.”

On Friday, the Russian government said it would partially restrict access to Facebook in response to the company’s handling of some pro-Kremlin media accounts, several news outlets reported. Nick Clegg, Meta’s president of global affairs, said in a statement that the move came after “Russian authorities ordered us to stop independent fact-checking and labeling” of content posted on Facebook by four media outlets, and the company refused.

While Twitter and Facebook officials said the companies were paying close attention to emerging disinformation threats, their response was not without errors.

Twitter mistakenly suspended the accounts of independent reporters and researchers who were posting information about the actions of Russian troops near the border with Ukraine.

Rosborough said in an email that while the company has been monitoring “emerging narratives” that violate the platform’s rules on manipulated media, “in this case, we mistakenly took enforcement action against a number of accounts. We are rapidly reviewing these actions and have proactively restored access to a number of affected accounts.”

Some of the affected users accused the Russian state of coordinating a bot campaign to mass-report their Twitter accounts, triggering the enforcement actions, but Rosborough said those claims were inaccurate.

While social media companies are releasing tools to improve the safety of their users in conflict zones, those same companies have succumbed to pressure from Russia over the past year by taking down posts in support of political opponents of the current regime.

Meta, which owns Instagram and WhatsApp as well as Facebook, acknowledged in its latest transparency report that it sometimes removes content in response to requests from Russian authorities, taking down about 1,800 pieces of content “for allegedly violating local law” on Facebook or Instagram in the first half of 2021. Of the content removed, 871 items were “related to extremism,” according to the report. Meta did not immediately respond to emailed questions about the removals.

A BBC report in December found that last year Russia’s media regulator, Roskomnadzor, filed more than 60 lawsuits against Google, Facebook, Instagram and Twitter, targeting hundreds of posts. Most of the legal proceedings were aimed at calls to participate in demonstrations in support of the imprisoned anti-Putin political leader Alexei Navalny. According to the BBC, Meta faces heavy fines under tougher penalties Russia introduced last year for failing to take down content it deems illegal.