GauGAN 2.0 neural network from Nvidia learned to draw pictures by verbal description
Nvidia first showed GauGAN, an artificial-intelligence system for creating photorealistic images, back in 2019. But only recently did the neural network become a full-fledged tool available to the general public, in the form of the Canvas application. Now there is GauGAN 2.0, which can also interpret verbal descriptions of what you want to draw.
GauGAN's key feature is that it not only recognizes the essence of a user's request but also attends to the details the user wants reflected. Ideally, it can change the shape, size, and texture of any object in the drawing at will, based on text and graphic instructions, while still preserving the overall harmony and integrity of the canvas, which ends up looking like a photograph or a skillful painting.
For the AI to understand human requests this subtly, the generative adversarial model was trained on 10 million example landscapes. As a result, it understands the difference between “a mud-dusted boulder on the shore” and “rolling rocks in the surf” and can draw both in the same frame. Even more interesting, the changes are displayed in real time as the query is typed. Go to the neural network's site and feel like a creator!
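To give a rough idea of what "generative adversarial" means here, the following is a minimal, hypothetical sketch of the adversarial objective such models optimize (this is illustrative only, not Nvidia's actual GauGAN code): a discriminator D is rewarded for telling real images from generated ones, while the generator G is rewarded for fooling D.

```python
import math

# Hypothetical sketch of the standard GAN losses (illustrative, not GauGAN's code).
# d_real = D's probability that a real image is real (ideally near 1).
# d_fake = D's probability that a generated image is real (ideally near 0 for D,
#          near 1 from the generator's point of view).

def discriminator_loss(d_real: float, d_fake: float) -> float:
    # The discriminator wants d_real -> 1 and d_fake -> 0.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake: float) -> float:
    # The generator wants the discriminator to score its output as real.
    return -math.log(d_fake)

# A confident discriminator facing a weak generator:
# its own loss is small, while the generator's loss is large.
print(round(discriminator_loss(0.99, 0.01), 3))  # small: D is winning
print(round(generator_loss(0.01), 3))            # large: G must improve
```

Training alternates between the two: each side's improvement pushes the other to improve, which is how the model learns to produce images realistic enough to pass for photographs.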