Just like the physical world, the metaverse needs to be populated with everyday objects — albeit in digital form.
NVIDIA researchers have developed a tool that helps create items such as animals, avatars, buildings, furniture, and vehicles.
Trained only on 2D images, NVIDIA GET3D can generate 3D shapes with high-fidelity textures and complex geometric details. These 3D objects are created in the same formats used by popular graphics software, so users can immediately import them into 3D renderers and game engines for further editing.
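To illustrate why a standard mesh format matters for this kind of interchange, here is a minimal sketch that writes a triangle mesh to Wavefront OBJ, a plain-text format most renderers and game engines can import. The geometry below is placeholder data for illustration, not GET3D output.

```python
# Minimal sketch: export a triangle mesh as a Wavefront OBJ file,
# a plain-text format widely supported by 3D tools.
# The vertices/faces here are placeholder data, not GET3D output.

def write_obj(path, vertices, faces):
    """Write a mesh to a Wavefront OBJ file.

    vertices: list of (x, y, z) tuples
    faces: list of 0-based vertex-index triples
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            # OBJ face indices are 1-based
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# A single triangle as placeholder geometry
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
write_obj("triangle.obj", vertices, faces)
```

A file written this way can be dragged straight into tools such as Blender or a game-engine asset browser, which is the workflow the researchers describe.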
The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, designed for industries including gaming, robotics, architecture, and social media.
GET3D is a boon for early adopters who have already bought land in various metaverses, or for those thinking of venturing into the space.
In Singapore, the metaverse is gaining interest from sectors such as education, finance and even non-governmental organisations.
“GET3D brings us a step closer to democratising AI-powered 3D content creation. Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects,” said Sanja Fidler, Vice President of AI research at NVIDIA.
GET3D can produce some 20 shapes a second when running inference on a single NVIDIA GPU. It works like a generative adversarial network for 2D images, but generates 3D objects instead. The larger and more diverse the training dataset it has learned from, the more varied and detailed the output.
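In GAN terms, generation is a single forward pass: sample a random latent code and map it through a trained generator, which is why inference is fast enough to yield many shapes per second. The toy sketch below mimics that pattern with a tiny NumPy "generator" whose random fixed weights stand in for trained ones; GET3D's actual generator is a far larger network that produces full textured meshes.

```python
import numpy as np

# Toy illustration of GAN-style inference: a frozen "generator"
# maps random latent codes to 3D vertex positions in one forward pass.
# The weights are random stand-ins for a trained network, purely for
# illustration of the sampling pattern.

rng = np.random.default_rng(0)
LATENT_DIM, NUM_VERTS = 16, 32

# Frozen "trained" weights: latent code -> (NUM_VERTS * 3) coordinates
W = rng.standard_normal((LATENT_DIM, NUM_VERTS * 3)) * 0.1

def generate_shape(rng):
    z = rng.standard_normal(LATENT_DIM)   # sample a latent code
    verts = np.tanh(z @ W)                # one forward pass
    return verts.reshape(NUM_VERTS, 3)    # vertex positions

# Each call samples a new latent code, so each shape differs;
# inference cost is just this forward pass.
shapes = [generate_shape(rng) for _ in range(5)]
```

Because every generated shape comes from a fresh latent sample, variety in the output reflects the diversity the generator absorbed from its training data — the point made above about larger, more varied datasets.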
A future version of GET3D could use camera pose estimation techniques to allow developers to train the model on real-world data instead of synthetic datasets. It could also be improved to support universal generation that lets developers train GET3D on all kinds of 3D shapes at once, rather than needing to train it on one object category at a time.