Niantic has announced that it’s building a new “Large Geospatial Model” (LGM) that combines millions of scans taken from the smartphones of Pokémon Go players and users of other Niantic products. This AI model could allow computers and robots to understand and interact with the world in new ways, the company said in a blog post. The LGM’s “spatial intelligence” is built on the neural networks developed as part of Niantic’s Visual Positioning System.
The blog post explains: “Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse,” adding, “This data is unique because it is taken from a pedestrian perspective and includes places inaccessible to cars.” Niantic Chief Scientist Victor Prisacariu was more explicit, saying, “Using the data our users upload when playing games like Ingress and Pokémon Go, we built high-fidelity 3D maps of the world, which include both 3D geometry (or the shape of things) and semantic understanding (what stuff in the map is, such as the ground, sky, trees, etc).” As 404 Media points out, nobody who downloaded Pokémon Go in 2016 could have predicted their data would “one day fuel this type of AI product.”
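Niantic has not published the internals of VPS, but the capability the blog post describes, using a single phone image plus a 3D map to recover the camera’s position and orientation, rests on a classic computer-vision problem called perspective-n-point (PnP). The sketch below illustrates the idea with a minimal numpy implementation of the Direct Linear Transform on noiseless, synthetic data; all function names and values here are hypothetical, not Niantic’s code.

```python
import numpy as np

# Illustrative sketch only: given a 3D map and one photo, recover the
# camera's rotation R and translation t from 3D-to-2D point
# correspondences via the Direct Linear Transform (DLT).

def project(K, R, t, pts3d):
    """Project 3D world points to pixels with intrinsics K, pose (R, t)."""
    cam = R @ pts3d.T + t[:, None]       # world frame -> camera frame
    uv = (K @ cam).T
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

def pose_from_correspondences(K, pts3d, pts2d):
    """Estimate (R, t) from >= 6 noiseless 3D-2D correspondences."""
    # Normalize pixels: x = K^-1 [u, v, 1]^T removes the intrinsics.
    ones = np.ones((len(pts2d), 1))
    norm = (np.linalg.inv(K) @ np.hstack([pts2d, ones]).T).T
    A = []
    for (X, Y, Z), (x, y, _) in zip(pts3d, norm):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    # The null space of A holds the 3x4 pose matrix [R | t] up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # Fix scale and sign so the rotation part has determinant +1.
    P = P / np.cbrt(np.linalg.det(P[:, :3]))
    # Snap the rotation part to the nearest orthonormal matrix.
    U, _, Vt2 = np.linalg.svd(P[:, :3])
    return U @ Vt2, P[:, 3]

# Synthetic check: invent a map, a camera pose, and one "photo".
rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
a = 0.1                                  # small yaw angle, radians
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(a), 0.0, np.cos(a)]])
t_true = np.array([0.2, -0.1, 0.3])
pts3d = np.column_stack([rng.uniform(-1, 1, 8),
                         rng.uniform(-1, 1, 8),
                         rng.uniform(4, 6, 8)])   # points in front of camera
pts2d = project(K, R_true, t_true, pts3d)
R_est, t_est = pose_from_correspondences(K, pts3d, pts2d)
```

A production localizer would handle noise and outliers (e.g., RANSAC plus nonlinear refinement), and per Niantic’s post, VPS layers learned neural components on top of this kind of geometric core.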