How Epic Games is helping content creators to visualise the future
In an exclusive extract from Epic Games' Virtual Production Field Guide, the team at Nviz share how Unreal Engine is helping to drive innovation across previs, techvis, stuntvis, postvis, simulcam, graphics, and final imagery
By Jenny Priestley
Published: April 6, 2021
Epic Games has released the second volume of its Virtual Production Field Guide, a free in-depth resource for creators at any stage of the virtual production process in film and television.
The publication offers a deep dive into workflow evolutions including remote multi-user collaboration, new features released in Unreal Engine, and interviews with industry leaders about their hands-on experiences with virtual production.
Included in the interviews is a Q&A with London-based visualisation studio Nviz, whose credits include Devs, Avenue 5, The Witcher, Solo, Life, Hugo, and The Dark Knight. In this exclusive extract, CEO Kris Wright and chief technology officer Hugh Macdonald share how Unreal Engine is helping to drive innovation across previs, techvis, stuntvis, postvis, simulcam, graphics, and final imagery.
How long have you been incorporating Unreal Engine into your work?
Hugh Macdonald: Unreal Engine has been in our previs pipeline for about three years and in the simulcam pipeline for the last year. Prior to that, we were using Ncam's own renderer, which uses MotionBuilder. We did a job last year that was heavily prevised in Unreal and we did simulcam on set with it, including facial motion capture of CG characters. We did postvis in Composure in Unreal Engine as well.
What led to your adopting Unreal Engine for virtual production?
HM: It was a combination of having previs assets already existing in Unreal Engine plus the visual fidelity, the control that you get, and the need to do facial motion capture. We felt Unreal was the way to go for this project, because of Epic's film experience coming from people like Kim Libreri [CTO at Epic Games]. Kim gets the visual effects world. Epic is a company that understands the different sectors and different challenges. Having a CTO who really knows our world inside out is really encouraging for what it could mean for the engine in the future.
What are you typically visualizing with simulcam?
HM: Eighty per cent of the time, it's environments. It might be creatures or vehicles in the background, but most of the time it's environments. The main benefit there is to give the camera operator and director the ability to get an understanding of what the shot's going to be like.
So you might be framing a character in the foreground and there's going to be a big CG building behind them. Without simulcam, you might cut off the top of the building or have to shrink the building in that one shot just to make it look right. The key is the ability for everyone on set to have an understanding of exactly what the shot will look like later, over and above the previs and the storyboards.
What kind of data do you need to make simulcam work?
HM: We record all the camera data and tracking data, which includes the camera's location within the world. So in post you know where the camera was, and how the animation that was being played back at the time syncs up with the original footage. We also generate visual effects dailies as slap comps of what was filmed combined with the CG that was rendered.
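As a rough sketch of the kind of per-frame record this implies, a pipeline might log timecode, camera transform, lens metadata, and the playback animation frame together so everything can be re-synced in post. The field names and CSV layout below are illustrative assumptions, not Nviz's actual format.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class SimulcamFrame:
    """One frame of on-set metadata, keyed by timecode so plates,
    camera track, and played-back animation can be re-synced in post."""
    timecode: str           # e.g. "14:22:31:07"
    cam_x: float            # camera translation in the virtual world (cm)
    cam_y: float
    cam_z: float
    cam_pan: float          # camera rotation (degrees)
    cam_tilt: float
    cam_roll: float
    focal_length_mm: float  # lens metadata for matching the CG render
    focus_distance_m: float
    anim_frame: int         # which frame of the playback animation was shown

def write_take(path: str, frames: list[SimulcamFrame]) -> None:
    """Dump a whole take to CSV (assumes at least one frame was recorded)."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(frames[0]).keys()))
        writer.writeheader()
        for frame in frames:
            writer.writerow(asdict(frame))
```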
How do you track the set itself?
HM: It doesn't require any LiDAR or pre-scanning. We use Ncam, which builds a point cloud on the fly. Then you can adjust the position of the real world within the virtual world to make sure that everything lines up with the CG. It's a slightly more grungy point cloud than you might get from LiDAR, but it's good enough to get the line-up.
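One way to picture that line-up step: the tracked camera pose arrives in the point cloud's own coordinate frame, and a single adjustable offset transform carries it into the virtual set. The minimal sketch below assumes 4x4 matrices and an operator-set offset; it does not reflect Ncam's actual API.

```python
import numpy as np

def make_offset(tx: float, ty: float, tz: float, yaw_deg: float) -> np.ndarray:
    """Build the adjustable 'real world inside the virtual world' transform.
    A translation plus a yaw is often enough to slide the set into place."""
    yaw = np.radians(yaw_deg)
    offset = np.eye(4)
    offset[:3, :3] = [[np.cos(yaw), 0.0, np.sin(yaw)],
                      [0.0,         1.0, 0.0],
                      [-np.sin(yaw), 0.0, np.cos(yaw)]]
    offset[:3, 3] = [tx, ty, tz]
    return offset

def camera_in_virtual_world(tracked_pose: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Re-express the tracked camera pose in the virtual set's frame
    so the CG environment renders from the right viewpoint."""
    return offset @ tracked_pose

# Example: nudge the stage 2 m along X and rotate it 15 degrees
# until the CG building lines up with the physical set piece.
offset = make_offset(200.0, 0.0, 0.0, 15.0)
```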
On a recent project, we had a single compositor who would get the CG backgrounds from the on-set team and the plates from editorial and slap them together in Nuke. Generally, he was staying on top of all the selects that editorial wanted every day, to give a slightly better quality picture.
Kris Wright: And because it's quite a low-cost postvis, it was invaluable for editorial. Some of that postvis stayed in the cut almost until the final turnover. So it was a way to downstream that data and workflow to help the editorial process; it wasn't just about on-set visualization.
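A slap comp of the kind Macdonald describes is essentially the rendered CG merged with the plate with no fine-tuning. Scripted in Nuke's Python API it might look roughly like this; the paths, frame range, and node choices are placeholders, and the CG render is assumed to carry its own alpha (an environment background would instead need a rough key on the plate).

```python
import nuke  # Nuke's built-in Python module; run inside a Nuke session

# Plate from editorial and the matching CG render from the on-set team.
plate = nuke.nodes.Read(file="/shots/sc042/plate/sc042_plate.####.exr")
cg = nuke.nodes.Read(file="/shots/sc042/cg/sc042_cg.####.exr")

# "Slap comp": CG over the plate with no grading or fine-tuning,
# just enough to give editorial a readable picture.
merge = nuke.nodes.Merge2(operation="over", inputs=[plate, cg])

# Render the quick postvis out for dailies.
write = nuke.nodes.Write(file="/shots/sc042/postvis/sc042_postvis.####.exr")
write.setInput(0, merge)
nuke.execute(write, 1001, 1100)
```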
How does switching lenses affect simulcam?
HM: With Ncam, the lens calibration process will get all the distortion and field of view over the full zoom and focus range including any breathing. If you switch between lenses, it will pick up whatever settings it needs to for that lens. If a new lens is brought in by the production, then we need about twenty minutes to calibrate it.
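Conceptually, that calibration yields a lookup over the lens's zoom and focus range that the renderer samples every frame. The toy bilinear interpolation below illustrates the idea for field of view only; real calibration data is far denser and also covers full distortion maps. The sample values are invented for illustration.

```python
import numpy as np

# Toy calibration table: field of view (degrees) sampled over a zoom lens's
# zoom and focus ring positions, both normalised to 0..1.
zoom_samples = np.array([0.0, 0.5, 1.0])
focus_samples = np.array([0.0, 0.5, 1.0])
fov_table = np.array([          # rows: zoom, cols: focus
    [62.0, 61.4, 60.9],         # wide end, breathing slightly as focus racks
    [38.0, 37.5, 37.1],
    [17.0, 16.7, 16.5],         # long end
])

def fov_for(zoom: float, focus: float) -> float:
    """Bilinear interpolation of field of view for the current zoom/focus
    encoder positions, so the CG camera breathes with the real lens."""
    zi = int(np.clip(np.searchsorted(zoom_samples, zoom) - 1, 0, len(zoom_samples) - 2))
    fi = int(np.clip(np.searchsorted(focus_samples, focus) - 1, 0, len(focus_samples) - 2))
    zt = (zoom - zoom_samples[zi]) / (zoom_samples[zi + 1] - zoom_samples[zi])
    ft = (focus - focus_samples[fi]) / (focus_samples[fi + 1] - focus_samples[fi])
    top = fov_table[zi, fi] * (1 - ft) + fov_table[zi, fi + 1] * ft
    bottom = fov_table[zi + 1, fi] * (1 - ft) + fov_table[zi + 1, fi + 1] * ft
    return float(top * (1 - zt) + bottom * zt)

print(fov_for(0.25, 0.5))  # field of view partway up the zoom range
```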
How has your workflow adapted to the increased need for remote collaboration?
HM: We currently have the ability for a camera operator or a DP or director to have an iPad wherever they're located, but have all the hardware running a session in our office. You can have a virtual camera system tech controlling the system from wherever they are. Then you can have a number of other people watching the session and discussing the set at the same time.
We