Nvidia’s NeRF neural network creates 3D models from multiple photos
Nvidia has introduced a new version of the Neural Radiance Field (NeRF) artificial intelligence tool. Optimized for the latest graphics cards with Tensor Cores, it can build a 3D scene from two-dimensional photos in a matter of milliseconds, a task that took the previous version of NeRF several hours.
The principle behind NeRF has not changed: the network is quickly trained on several photos of an object taken from different angles, then combines that data and interpolates it to fill in the gaps. The output is a highly realistic three-dimensional model that can be used in various scenes and compositions. The new version adds multi-resolution hash grid encoding to optimize performance on graphics chips.
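To make the idea of multi-resolution hash grid encoding concrete, here is a minimal NumPy sketch of the general technique: a 3D point is looked up in several hashed feature grids of increasing resolution, interpolated at each level, and the concatenated features are what a small neural network would then consume. The level counts, table sizes, primes and feature widths below are illustrative placeholders, not Nvidia's actual implementation, which runs as hand-tuned CUDA code.

```python
# Illustrative sketch of multi-resolution hash grid encoding (assumed constants).
import numpy as np

NUM_LEVELS = 8          # coarse-to-fine resolution levels
TABLE_SIZE = 2 ** 14    # hash table entries per level (placeholder size)
FEATURES = 2            # feature channels stored per table entry
BASE_RES, MAX_RES = 16, 512

# Learned feature tables, one per level (randomly initialized here).
rng = np.random.default_rng(0)
tables = rng.normal(scale=1e-4, size=(NUM_LEVELS, TABLE_SIZE, FEATURES))

# Grid resolutions spaced geometrically between BASE_RES and MAX_RES.
growth = np.exp((np.log(MAX_RES) - np.log(BASE_RES)) / (NUM_LEVELS - 1))
resolutions = np.floor(BASE_RES * growth ** np.arange(NUM_LEVELS)).astype(int)

PRIMES = (1, 2654435761, 805459861)  # large primes for the spatial hash

def hash_corner(ijk):
    """Hash integer grid coordinates into a table index."""
    h = 0
    for c, p in zip(ijk, PRIMES):
        h ^= int(c) * p
    return h % TABLE_SIZE

def encode(x):
    """Encode a 3D point in [0,1]^3 as concatenated per-level features."""
    features = []
    for level, res in enumerate(resolutions):
        pos = x * res
        base = np.floor(pos).astype(int)
        frac = pos - base
        acc = np.zeros(FEATURES)
        # Trilinear interpolation over the 8 surrounding grid corners.
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            weight = np.prod(np.where(offset, frac, 1.0 - frac))
            acc += weight * tables[level, hash_corner(base + offset)]
        features.append(acc)
    # In a real pipeline this vector feeds a tiny MLP predicting density and color.
    return np.concatenate(features)

print(encode(np.array([0.3, 0.7, 0.5])).shape)  # (NUM_LEVELS * FEATURES,)
```

Because most of the scene representation lives in these lookup tables rather than in a large network, the remaining neural layers can stay small, which is what allows training and rendering to run so quickly on GPUs.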
According to Nvidia, speed is the key quality of the new neural network, because the company intends to apply it to autonomous driving systems: an autopilot can gauge the distance to an obstacle more easily if it perceives it in three dimensions, and robots can navigate the real world more reliably. Beyond that, NeRF is likely to find applications in entertainment, education and architecture.