Incognito

We often only hear about the wins. But there is value in hearing about the "losses," too.

Virtual production breakdown of a show that didn't go to series.

Incognito was developed by Done and Dusted and Sequin AR. The show was sold to a buyer, and a technical proof-of-concept step was ordered. The show was not ordered to series. We had previously worked with Done and Dusted on the Mariah Carey Magical Christmas Special for Apple TV+, which was #1 in 100 countries for Apple.

The sizzle was shot over two days in Sequin's Glastonbury Studio using Unreal Engine and MetaHumans, Silverdraft supercomputers, Xsens motion capture, facial capture with Live Link Face, and Mo-Sys tracking.

MetaHumans were used for the avatars, which provided flexibility and cost savings and meant we would benefit from Epic's version releases and upgrades as they shipped. The Grandfather character is a standard MetaHuman, while the Cat is a MetaHuman face with custom facial paint and a custom body mesh. The team built the body to resemble a person inside an inflatable suit, modifying the body proportions and rig to fit. This required fine-tuning and iteration to compensate for the difference in step length between the actor and the character in the metaverse, a variation of about 30%.

The team created two environments, the jungle and the volcano, using Quixel and Marketplace assets, and built a custom levitating platform to travel between worlds. This allowed us to demonstrate the possibilities of shooting in locations impossible in the real world while maintaining a grounded look and feel.
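To give a sense of what that step-length compensation involves, here is a minimal, hypothetical sketch of the idea: scaling the horizontal root translation of each captured frame during retargeting so the character's shorter stride covers proportionally less ground. The names (`Frame`, `retarget_root_motion`) and the fixed scale factor are illustrative assumptions, not Sequin's actual pipeline, which iterated on the rig itself.

```python
# Hypothetical sketch: compensating for the ~30% step-length difference
# between the mocap actor and the inflatable-suit character by scaling
# per-frame root displacement during retargeting.

from dataclasses import dataclass


@dataclass
class Frame:
    time: float
    root_pos: tuple  # (x, y, z) world-space root position in metres


def retarget_root_motion(frames, stride_scale=0.7):
    """Scale the horizontal root displacement between frames so the
    character travels ~70% of the actor's distance; vertical motion
    (bounce of the walk cycle) is kept unchanged."""
    if not frames:
        return []
    out = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        dx = (cur.root_pos[0] - prev.root_pos[0]) * stride_scale
        dy = cur.root_pos[1] - prev.root_pos[1]  # keep vertical motion as-is
        dz = (cur.root_pos[2] - prev.root_pos[2]) * stride_scale
        last = out[-1].root_pos
        out.append(Frame(cur.time, (last[0] + dx, last[1] + dy, last[2] + dz)))
    return out
```

In practice this kind of uniform scaling is only a first pass; foot sliding and contact timing still need per-shot tuning, which matches the iteration described above.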

We used motion and facial capture to drive the MetaHuman avatars. We created a pipeline that records facial and motion capture data separately, which reduced costs and kept the contestants apart while shooting LIVE dialogue. Live Link Face on iPhones captured facial detail for the LIVE discussions between the contestants through their avatars, and Xsens suits handled the full-body motion capture (thank you, Xsens!).
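The pipeline above records two independent streams that later have to be lined up on one avatar. A minimal sketch of that merge step, assuming each stream is a timestamp-sorted list of frames and pairing each body frame with the nearest facial frame in time (the function name and frame shapes are illustrative, not Sequin's tooling):

```python
# Illustrative sketch: merging separately recorded body mocap and
# Live Link Face facial frames by timestamp. Each input is a list of
# (timestamp, data) tuples sorted by timestamp; face_frames must be
# non-empty. Output is a list of (timestamp, body, face) tuples.

import bisect


def merge_streams(body_frames, face_frames):
    face_times = [t for t, _ in face_frames]
    merged = []
    for t, body in body_frames:
        i = bisect.bisect_left(face_times, t)
        # consider the facial frames just before and after t,
        # and keep whichever is closer in time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(face_frames)]
        j = min(candidates, key=lambda k: abs(face_times[k] - t))
        merged.append((t, body, face_frames[j][1]))
    return merged
```

Nearest-neighbour pairing like this tolerates the two devices running at different frame rates, which is the usual situation when an iPhone and a mocap suit record independently.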

We mapped the virtual scene onto our physical studio and played out the merged scene. This allowed the DP, John Valkos, to walk around, find his shots, and shoot numerous takes with a standard camera rig inside a 3D scene, giving the footage a more cinematic look and feel.

Behind the scenes
Full technical proof of concept

Sequin's team works with production companies and networks to develop leading-edge content using the latest virtual production and augmented reality. See more about Sequin's virtual production. Drop us a line if you have any questions or need virtual production services: info@sequinar.com. We're happy to answer anything.

Make your content stand out

Work with the leading virtual production experts at Sequin AR to create impactful and memorable content that’ll keep audiences on the edge of their seats.

Get in touch