Facebook believes that to operate in augmented and virtual reality, artificial intelligence will need to develop an “egocentric perspective.”

To that end, the company on Thursday announced Ego4D, a data set of 2,792 hours of first-person video, and a set of benchmark tests for neural nets, designed to encourage the development of AI that is savvier about what it’s like to move through virtual worlds from a first-person perspective. 

The project is a collaboration between Facebook Reality Labs and scholars from 13 universities and research labs. The details are laid out in a paper lead-authored by Facebook’s Kristen Grauman, “Ego4D: Around the World in 2.8K Hours of Egocentric Video.”

Grauman is a scientist with the company’s Facebook AI Research unit. Her work as a professor at UT Austin has focused on computer vision and machine learning. 

The idea is that the data set will spur researchers to develop neural nets that excel at tasks performed from a first-person perspective — in the same way that big data sets such as ImageNet propelled AI programs that work from a “spectator” perspective.

According to Facebook, the point of egocentric perception is to fix the problems neural networks have with basic tasks, such as image recognition, when an image’s point of view shifts from third-person to first-person. 
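To make that viewpoint gap concrete, here is a minimal sketch — not from the article or the Ego4D paper — that runs a standard ImageNet-pretrained classifier from torchvision on a single first-person video frame; the file name is a hypothetical placeholder. Models trained on third-person “spectator” photos often misclassify such frames, which is the gap egocentric benchmarks are meant to expose.

```python
# Minimal sketch: evaluating an ImageNet-pretrained classifier on an
# egocentric (first-person) frame. The frame path is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing pipeline.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A classifier trained entirely on third-person "spectator" photos.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

# "egocentric_frame.jpg" stands in for a frame from first-person video.
image = Image.open("egocentric_frame.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.topk(1)
# Confidence and accuracy typically degrade on first-person views --
# the motivation for egocentric data sets and benchmarks like Ego4D.
print(f"class index {top_class.item()}, confidence {top_prob.item():.2f}")
```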

Source: https://www.zdnet.com/article/facebook-here-comes-the-ai-of-the-metaverse/
