Most game developers strive for an incredible amount of detail in order to make their games look like the real world. In a recent study, a team of researchers from Intel Labs describes a method for improving the realism of synthetic images, using scenes from GTA V to get their point across.
The project, titled "Enhancing Photorealism Enhancement," explains how a machine learning model makes computer-generated images more realistic by examining intermediate layers of the game's rendering and comparing them to real urban street scenes.
To close the "appearance gap" between artificial and real images, the researchers enhance images the game has already rendered. They extract intermediate rendering buffers, called G-buffers, produced by the game engine; these buffers carry information about the geometry, the distance to the camera, and the materials and lighting in the scene.
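To make the idea concrete, here is a minimal sketch of what conditioning an enhancement network on G-buffers could look like. The buffer names, shapes, and random placeholder data are all illustrative assumptions, not the paper's actual pipeline; the point is only that per-pixel auxiliary channels get stacked with the rendered frame before being fed to a network.

```python
import numpy as np

# Tiny frame for illustration; a real frame would be e.g. 1920x1080.
H, W = 4, 4

# Hypothetical G-buffers: per-pixel auxiliary images the renderer
# produces alongside the final frame (names/shapes are assumptions).
g_buffers = {
    "albedo":     np.random.rand(H, W, 3),  # base material color
    "normals":    np.random.rand(H, W, 3),  # surface orientation
    "depth":      np.random.rand(H, W, 1),  # distance to the camera
    "glossiness": np.random.rand(H, W, 1),  # material shininess
}

# The frame as the game actually displays it.
rendered = np.random.rand(H, W, 3)

# An enhancement network would consume the rendered frame plus all
# G-buffer channels stacked into one per-pixel conditioning tensor.
conditioning = np.concatenate([rendered] + list(g_buffers.values()), axis=-1)
print(conditioning.shape)  # (4, 4, 11): 3 RGB + 8 G-buffer channels
```

Because the network sees the scene's geometry and materials directly rather than guessing them from pixels, it can make edits (sharper reflections, corrected road textures) that stay consistent with the underlying scene.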
Their method draws on the Cityscapes dataset – a large, diverse set of stereo video sequences recorded on the streets of 50 different cities, with scenes much like those you can find on Google Maps Street View. The modifications aim to stay true to the original images.
Their work is explained in more detail in a video released by Intel. The researchers don't just show still images comparing their approach to other enhancement techniques; they also show a bit of gameplay to help viewers better visualize the results.
The model adds realistic details, greening the parched grass and hills of GTA's fictional California while giving them more volume. It also adds reflections to building windows, increases the glossiness of car roofs, adjusts the color grading, removes distant haze, and rebuilds road surfaces.
Put together, these changes make some video sequences from the game look more like an interactive movie. Of course, the result doesn't replicate the real world 1:1, but using GTA V it gets remarkably close.