A new beast in Unreal Engine
You have probably noticed how uncanny people in video games can be. No matter how much technology you add to the mix, they always look artificial, rigid and somewhat robotic. It seems almost impossible, even for big studios, to capture all the nuance of a face, a gesture or a folding garment.

At AXYZ Design we used video game technology for a different purpose: adding humans to architectural visualisation images and videos. We had always been aware of the gap in image fidelity between our static and our animated characters. Capturing humans in 3D, using a technique called photogrammetry, was the core of our business, but it could only capture static poses, which was very limiting for our customers.
But technology kept advancing, and some studios started to capture sequences of poses in 3D that could, in theory, be assembled into a video. Using these repositories was a huge challenge, however: each captured sequence consisted of hundreds of heavy files, and any interesting clip meant gigabytes of data. How could this possibly perform in real time?
By leveraging I-frames, P-frames and B-frames, some laborious interpolation techniques for anti-aliasing, mesh compression such as Draco, and GPU-based video decoding such as NVDEC, we built a full real-time decoder.
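To make the I/P/B idea concrete, here is a minimal sketch of how a mesh sequence can be decoded from keyframes and delta frames, with B-frame-style interpolation between decoded frames. The structures and function names are hypothetical illustrations of the concept only; the actual decoder also involves Draco mesh compression and NVDEC texture decoding, which are not shown here.

```cpp
#include <cstdint>
#include <vector>

// Illustrative frame types, borrowing video-codec terminology:
// an I-frame stores a full vertex set, a P-frame stores per-vertex
// deltas against the previously decoded frame.
struct Vec3 { float x, y, z; };

enum class FrameType : uint8_t { I, P };

struct MeshFrame {
    FrameType type;
    std::vector<Vec3> data;   // full positions (I) or deltas (P)
};

// Decode frame `index` by seeking back to the nearest preceding I-frame
// and accumulating P-frame deltas up to the requested frame.
std::vector<Vec3> DecodeFrame(const std::vector<MeshFrame>& seq, size_t index)
{
    size_t start = index;
    while (start > 0 && seq[start].type != FrameType::I)
        --start;

    std::vector<Vec3> verts = seq[start].data;            // I-frame: full copy
    for (size_t f = start + 1; f <= index; ++f)           // apply deltas in order
        for (size_t v = 0; v < verts.size(); ++v) {
            verts[v].x += seq[f].data[v].x;
            verts[v].y += seq[f].data[v].y;
            verts[v].z += seq[f].data[v].z;
        }
    return verts;
}

// B-frame-style playback: blend two decoded frames so the mesh can be
// sampled at arbitrary playback times, not just at stored frames.
std::vector<Vec3> InterpolateFrames(const std::vector<Vec3>& a,
                                    const std::vector<Vec3>& b, float t)
{
    std::vector<Vec3> out(a.size());
    for (size_t v = 0; v < a.size(); ++v) {
        out[v].x = a[v].x + (b[v].x - a[v].x) * t;
        out[v].y = a[v].y + (b[v].y - a[v].y) * t;
        out[v].z = a[v].z + (b[v].z - a[v].z) * t;
    }
    return out;
}
```

The win is the same as in video compression: only a few frames carry full geometry, the rest carry small deltas, so far less data has to be streamed and decompressed per frame.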
I worked on both the decoding SDK mentioned above and its integration into Unreal. On the Unreal side I had to create render thread proxies for our meshes, which would lock and update geometry buffers such as vertices, normals and tangents, as well as texture data shared on the GPU. This allowed several 4D characters to be displayed in real time with high image fidelity and none of the usual uncanny feeling.
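The sketch below shows the general shape of such an update, assuming the UE4-era RHI API (RHILockVertexBuffer / RHIUnlockVertexBuffer; newer engine versions lock buffers through the RHI command list instead). The class and struct names are hypothetical simplifications, not our actual proxies: a real proxy derives from FPrimitiveSceneProxy and also refreshes normal/tangent and UV buffers.

```cpp
#include "RenderingThread.h"
#include "RHI.h"

// Hypothetical per-frame payload produced by the 4D decoder.
struct FDecodedMeshFrame
{
    TArray<FVector> Positions;   // vertex positions for this frame
};

// Simplified stand-in for the scene proxy that owns the RHI buffers.
class FMeshSequenceSceneProxy /* : public FPrimitiveSceneProxy in a real proxy */
{
public:
    FVertexBufferRHIRef PositionBufferRHI;

    // Render thread: lock the position buffer, copy the decoded vertices
    // in, and unlock so the GPU reads the updated geometry next frame.
    void UpdatePositions_RenderThread(const FDecodedMeshFrame& Frame)
    {
        const uint32 Size = Frame.Positions.Num() * sizeof(FVector);
        void* Dest = RHILockVertexBuffer(PositionBufferRHI, 0, Size, RLM_WriteOnly);
        FMemory::Memcpy(Dest, Frame.Positions.GetData(), Size);
        RHIUnlockVertexBuffer(PositionBufferRHI);
        // Normal/tangent and UV buffers would be refreshed the same way;
        // texture data lives in resources already resident on the GPU.
    }
};

// Game thread: hand the decoded frame to the render thread. The proxy is
// only touched inside the enqueued lambda, which runs on the render thread.
void PushFrameToProxy(FMeshSequenceSceneProxy* Proxy, FDecodedMeshFrame Frame)
{
    ENQUEUE_RENDER_COMMAND(UpdateMeshSequenceFrame)(
        [Proxy, Frame = MoveTemp(Frame)](FRHICommandListImmediate& RHICmdList)
        {
            Proxy->UpdatePositions_RenderThread(Frame);
        });
}
```

Keeping the buffer updates on the render thread is what makes several 4D characters feasible at once: the game thread only enqueues lightweight commands, while the heavy memory traffic happens where the RHI expects it.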
Ultimately, this effort contributed to the successful acquisition of our sponsor company, AXYZ Design, by Chaos Group in 2023.