LONDON  —  The “metaverse” is not here yet, and when it arrives it will not be a single domain controlled by any one company. Facebook wanted to create that impression when it changed its name to Meta, but its rebranding coincided with major investments by Microsoft and Roblox. All are angling to shape how virtual reality and digital identities will be used to organise more of our daily lives, from work and health care to shopping, gaming and other forms of entertainment.

The metaverse is not a new concept. The term was coined by sci-fi novelist Neal Stephenson in his 1992 book Snow Crash, which depicts a hyper-capitalist dystopia in which humanity has collectively opted into life in virtual environments. So far, the experience has been no less dystopian here in the real world. Most experiments with immersive digital environments have been marred immediately by bullying, harassment, digital sexual assault, and all the other abuses that we have come to associate with platforms that “move fast and break things”.

None of this should come as a surprise. The ethics of new technologies have always lagged behind the innovations themselves. That is why independent parties should provide governance models sooner rather than later, before self-interested corporations do it with their own profit margins in mind.

The evolution of ethics in artificial intelligence is instructive here. Following a major breakthrough in AI image recognition in 2012, corporate and government interest in the field exploded, attracting important contributions from ethicists and activists who published (and republished) research into the dangers of training AIs on biased data sets. A new vocabulary emerged for incorporating the values we want to uphold into the design of new AI applications.

Owing to this work, we now know that AI is effectively “automating inequality”, as Virginia Eubanks of the University at Albany, SUNY, puts it, as well as perpetuating racial biases in law enforcement. To call attention to this problem, computer scientist Joy Buolamwini of the MIT Media Lab launched the Algorithmic Justice League in 2016.

This first-wave response aimed a public spotlight at the ethical issues associated with AI. But it was soon eclipsed by a renewed push within the industry for self-regulation. AI developers introduced technical toolkits for conducting internal and third-party evaluations, hoping that this would alleviate public fears. It didn’t, because most firms pursuing AI development have business models that are in open conflict with the ethical standards that the public wants them to uphold.

