Human Rights in the Metaverse – POLITICS

With the help of Sam Sutton

Meta has had its fair share of human rights issues over the company's history, from its role in the Rohingya massacre to the Cambridge Analytica scandal.

So it’s only natural that the human rights community is skeptical of its promise to revolutionize how we use the internet itself through the three-dimensional overlay on the world that the metaverse promises. Whether the company can prove the naysayers wrong may depend on what compromises it and its fellow virtual world builders are willing to make.

At the moment, they are somewhat predictably holding their cards close to their chest. Last night, Meta’s director of human rights Miranda Sissons shed some light on the topic during a panel discussion with the ominous title “Human Rights and the Metaverse: Are We Behind the Curve Already?”

Sissons spoke first about the potential of AR/VR technologies to improve quality of life in the real world, thanks to their use in areas such as automotive safety and medical diagnostics. But that’s not quite the “metaverse.” And when it comes to the rules for the new virtual spaces Meta is building, well… those are still up for grabs.

“Many significant risks are related to our behavior as humans,” Sissons said. “And many of these behaviors can be mitigated or prevented with guardrails, design standards and principles, and design priorities.”

But what are those principles? The human rights community offers plenty of formal tools for assessing any given technology’s impact and preventing harms like those in the Rohingya and Cambridge Analytica cases, and Sissons argued that companies should follow the human rights principles put forward by groups like the United Nations and the World Economic Forum.

The Electronic Frontier Foundation’s Katitza Rodriguez, who urged companies to set strict rules on what data their devices can collect and store, including potential “emotion detection,” also joined yesterday’s session. She said Sissons’ vision may require Meta to make some uncomfortable compromises.

“You have to train and educate engineers, marketing teams, etc., on the importance of human rights and the implications of their product on society,” Rodriguez said. “It’s difficult, but important… How do you mitigate human rights risks? Avoid including facial recognition in the product. It’s a difficult choice.”

And there is no shortage of examples of what happens when those choices aren’t made early in a technology’s life.

“What we’ve learned from other immersive worlds like video games is that the norms that are set early on really define the culture of the space,” Chloe Pointon, the panel’s moderator and an advisor at the consulting firm Article One, told me afterward.

Daniel Leufer, a senior policy analyst at Access Now, argued passionately on the panel against the frequent refrain that regulators can’t keep up with the development of new technologies, saying that “often very simple things, like data protection, transparency, access to information, do so much work.”

Brussels, where Leufer is based, has clearly grasped the notion, thanks to a raft of regulations on data privacy and artificial intelligence in recent years. As vague as Meta’s promises are at the moment, there are signs that regulators in the U.S. may be catching up, as this week’s surprise bipartisan privacy bill starts to clarify who has the power to set and enforce privacy laws.

Today, the National Endowment for Democracy issued a report on the global fight over AI surveillance, which examines “both the implications of new technologies for democracy and the vectors for civil society involvement in their development, deployment and operation.”

The authors highlight the threat AI surveillance poses to civil liberties and privacy worldwide, warning that it could give autocratic regimes an advantage in suppressing social groups and freedom of expression.

The report suggests several potential remedies, including establishing more specific rules and regulations for the development and use of AI (as Europe has done) and creating a global oversight body.

The report pays particular attention to China, whose formidable high-tech surveillance state is already a model for repressive regimes around the world.

“Beijing is rapidly moving towards writing rules for AI systems,” the authors note. “These efforts will give Beijing significant leverage when it comes to setting global rules for AI surveillance technology, which in turn could reduce the role of human rights norms within this framework.”

First in Digital Future Daily: The lawyer who defended Obamacare before the U.S. Supreme Court takes on bitcoin.

Grayscale Investments has turned to former Solicitor General Don Verrilli for help in an impending legal battle with the Securities and Exchange Commission over its plans to convert its $20 billion bitcoin trust into an investment fund that would trade on the NYSE.

The SEC has rejected similar filings for a bitcoin-based exchange-traded fund, including a high-profile effort led by former Trump adviser Anthony Scaramucci, on the grounds that they are too risky for retail investors.

Grayscale is hoping to tip the scales by bringing in Verrilli, who was the Obama administration’s lead attorney on landmark health care and same-sex marriage cases, to hone its position with the market regulator and to press its argument in court if it comes to that.

The firm has been laying the groundwork for a lawsuit for months, arguing that rejecting its application would be unfair given the SEC’s approval of ETFs linked to bitcoin futures contracts, a more indirect financial instrument regulated by the Commodity Futures Trading Commission. It has also tried to build a grassroots movement around its effort with an aggressive public relations and advocacy campaign that filled Washington’s Union Station with advertisements and flooded the SEC with letters of support.

“Hopefully we won’t have to go to court, because we’re doing everything we can to convince the commission that approval is the right answer,” Verrilli, a partner at Munger, Tolles & Olson, said in an interview. He added that the SEC will face “great difficulty distinguishing a futures ETF from a spot [market] ETF.” – Sam Sutton

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Konstantin Kakaes ([email protected]); and Heidi Vogt ([email protected]).

If you’ve had this newsletter forwarded to you, you can sign up here. And read our mission statement here.