Zomborgs Official Trailer - Released! An AI + UE5 Short

Hey Y'all!

We're thrilled to unveil the Zomborgs Teaser Trailer, a project that reflects months of learning and experimentation with UE5 & AI technologies. It's been about 2 months since our AI Test with Metahumans & Stable Diffusion, and we're excited to showcase what we've achieved by building upon that foundation.

Our journey has been one of trials, errors, and a ton of learning. It's taught us new skills and reminded us that even with AI's remarkable ability to elevate our work, it takes a combination of expertise in script writing, performance, 3D design, animation, and sheer determination to create something worthy of sharing.



For those of you interested in the process, here are some of the details of what it took to make this happen.


🛠️ TOOLS & SETTINGS USED 🛠️


+ Unreal Engine 5 (on a GTX 1070)

Unreal Engine 5 became our mainstay for heavy lifting. Learning this software has been an ongoing journey, but the real-time animation workflows, asset libraries, and lighting capabilities have proved invaluable for elevating our storytelling.


+ Metahuman & Metahuman Animator w/ iPhone 12 mini

We continued our exploration of facial performance capture using Metahuman tools. Its accuracy and ease of use saved us significant time, making it an indispensable part of our toolkit.

Pro Tip 1 - Subscribing to iCloud is worth every penny for larger projects.

Pro Tip 2 - Matching your intended AI actor's head shape/features with a Metahuman character improves AI performance, and diverse head shapes help the AI produce the desired effects more easily.

+ Quixel Mixer (For custom Zombie)

While Substance Painter is a popular choice, Quixel Mixer shouldn't be underestimated. Our budget constraints led us to Mixer, which integrates with UE5 and offers quality Megascans assets. It's a smart alternative for those looking to maximize limited resources.


+ Move.AI (Experimental Multi-Camera MoCap)

Move.AI's multi-camera mocap trial served us well, even with minor hiccups. The credit system, while occasionally unpredictable, proved useful in the end. We used their experimental camera setup with a mix of devices (1 iPhone 12, 1 Galaxy Note 9, 1 Galaxy Note 10, and 2 GoPros) and captured the body movement for both characters in the piece.


We're also exploring options like Cascadeur, especially its new MoCap Alpha.


Other AI video mocap options we're keeping an eye on: Rokoko Vision just released a dual-camera version, and there are also projects like FreeMoCap.com and ReMoCap.com that we hope to test in the future.


*UPDATE* We are happy to announce that this piece won the Move.AI Discord #show-off competition and earned us 600 Move.AI credits, which we're pretty pumped about!


+ Stable Diffusion 1.5 w/ Automatic1111

I know, I know. SDXL is out! What the heck are we doing still on 1.5? Look, okay? We run a GTX 1070 with low VRAM. ComfyUI is cool, but it makes my brain melt, and batching with ControlNet wasn't an option there when we needed it. So while Automatic1111 works out its SDXL bugs, we did this one on 1.5, and it did pretty nicely.


Basic Settings Used:

Steps: 12 | Sampler: DPM++ 2M Karras | CFG: 25 | Denoise: 0.3

ControlNet (Lineart-Realistic | Normal Bae | Softedge-Hed)

Models: LifeLikeDiffusion model and the Detail Tweaker LoRA (weight 1.0)
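
If you'd rather script that pass than click through the web UI, here's a minimal sketch of how these settings map onto a single img2img call against Automatic1111's built-in API. This is a hypothetical single-frame example: it assumes the webui was launched with --api and the ControlNet extension is installed, the file paths, prompt, and ControlNet model names are placeholders, and the exact ControlNet field names can vary a bit between extension versions.

import base64
import requests

URL = "http://127.0.0.1:7860"

# Encode one rendered Metahuman frame for the img2img AI pass.
with open("frames/frame_0001.png", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode()

# One ControlNet unit per preprocessor (placeholder model names).
controlnet_units = [
    {"module": "lineart_realistic", "model": "control_v11p_sd15_lineart", "image": frame_b64},
    {"module": "normal_bae", "model": "control_v11p_sd15_normalbae", "image": frame_b64},
    {"module": "softedge_hed", "model": "control_v11p_sd15_softedge", "image": frame_b64},
]

payload = {
    "init_images": [frame_b64],
    "prompt": "photo of a woman, detailed skin <lora:add_detail:1.0>",  # placeholder prompt
    "steps": 12,
    "sampler_name": "DPM++ 2M Karras",
    "cfg_scale": 25,
    "denoising_strength": 0.3,
    # The ControlNet extension hooks into the API through alwayson_scripts.
    "alwayson_scripts": {"controlnet": {"args": controlnet_units}},
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

# The API returns base64-encoded images; write the first one back out.
with open("out/frame_0001.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))

Loop that over your rendered frame sequence and you have the batch AI pass.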


This round we followed a tutorial by HilarionOfficial that had some great tips for temporal consistency. It involved a higher CFG setting, which allowed us to use a higher Denoise in our img2img process.

The LifeLikeDiffusion model was really great to work with and does a great job with our diverse cast. The Detail Tweaker LoRA was used for Yuki's shots, where it added some nice skin texture and hair strands.


Edit: When we shared this process with some folks, I realized they didn't really understand WHY we had to include an AI pass at all. For reference, here are comparison shots of the Metahuman render and what the AI pass offered as an alternative.


Metahuman frame on top vs. AI Pass frame on bottom




To us, the AI pass looks more "realistic" than the Metahuman frames. It removes that Metahuman uncanny-valley thing and puts a layer of "human-looking" on top, upgrading our quality significantly considering we've only just started learning the inner workings of 3D design, 3D animation, and character design.


+ Midjourney + ControlNet + Kohya SS LoRA Training


For this project we trained a new LoRA for Yuki. Her original LoRA was trained on Midjourney v2 and v3 character concepts, which were prompted using Metahuman source images, so the training data had a bit of a CGI/concept-art feel.


By using her original LoRA, LifeLikeDiffusion, and ControlNet, we generated new dataset images that were more realistic and less "cartoony." This helped us get somewhat more consistent head turns and realistic skin textures in the final video AI pass.


In addition to a new Yuki LoRA, we had to build a custom Metahuman for our Zomborg. As mentioned above, we used Quixel Mixer to customize the Metahuman textures. We then rendered out some stills and prepped them as a LoRA dataset trained on Kohya (see the sketch below).
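
For anyone curious what that training step looks like, here's a minimal, hypothetical sketch of launching a LoRA run with the kohya-ss sd-scripts trainer from Python. Every path, name, and hyperparameter below is an illustrative placeholder, not our exact recipe.

import subprocess

# Hypothetical kohya-ss sd-scripts LoRA training launch; assumes the
# sd-scripts repo is the working directory and accelerate is configured.
subprocess.run([
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "models/lifeLikeDiffusion.safetensors",
    "--train_data_dir", "datasets/zomborg",  # image folders like 10_zomborg/
    "--output_dir", "output/loras",
    "--output_name", "zomborg_v1",
    "--network_module", "networks.lora",
    "--network_dim", "32",
    "--resolution", "512,512",
    "--learning_rate", "1e-4",
    "--max_train_steps", "1500",
    "--mixed_precision", "fp16",
], check=True)

The resulting .safetensors file then drops straight into the webui's Lora folder.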


Because the LifeLikeDiffusion model was having trouble zombie-fying our Zomborg, we also needed a general zombie style LoRA. We used Midjourney to create that dataset, using basic text-to-image prompting with our Metahuman zombie as an image reference, to get a variety of different zombies.


We then trained a style LoRA on this general zombie dataset. This made it possible to mix both the Zomborg LoRA and the zombie LoRA in a single prompt to get more intense "zombie features."
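
In Automatic1111, that mixing is just a matter of stacking LoRA tags in one prompt. A hypothetical example (the names and weights are placeholders, not our exact prompt):

photo of a zombie cyborg woman, decaying skin, exposed circuitry <lora:zomborg_v1:0.8> <lora:zombie_style_v1:0.6>

Each <lora:name:weight> tag loads that LoRA at the given strength, so the character likeness and the zombie styling can be dialed in independently.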


+ Audio Clips from: Freesound.org | Mixkit.co | Pexels.com (Mixed by Amber)

Sound was pivotal, and we used free resources from Freesound.org, Mixkit.co, and Pexels.com. I then mixed them in Adobe Audition.



👨🏽‍🏫 TUTORIALS THAT GUIDED US 👨🏽‍🏫

This is a collection of the tutorials used for this screen test.

+ Getting Started with Metahuman Animator - @UnrealEngine
A great, straightforward way to set up and use Metahuman Animator for facial performance. This one came in handy, and it's from the folks who made the app. Huzzah! (https://youtu.be/WWLF-a68-CE)

+ Move AI | Complete Guide to Getting Set Up - @JonathanWinbush
He goes over the iPhone setup. His approach was helpful even though we used the experimental workflow. (https://www.youtube.com/watch?v=MY7c6... )

+ Move AI - Set Up Playlist - @moveai
This is a great playlist for the Move.AI motion capture process. (https://www.youtube.com/watch?v=ZpvjD... )

+ 3D Animation w/ AI Pass - @promptmuse
This was the general workflow we took, but instead of Reallusion products we used UE5 & Metahumans. We did not use the multiscreen script for this one, but hope to in the future. (https://www.youtube.com/watch?v=T2nw9... )

+ Animation w/ ControlNet Only - @HilarionOfficial
This was a great tutorial that explained concepts and settings very well. It's what allowed us to really force our custom LoRA onto the characters. (https://www.youtube.com/watch?v=Z6pR7...)


Conclusion:

Our Zomborgs Teaser Trailer was a project-based learning experience. We harnessed disruptive technologies, pushed our creative boundaries, and used our limited budget wisely enough that we're looking forward to making another project very soon. We hope this peek behind the curtain offers insights into the journey.

If you haven't already, don't forget to download Zomborgs Episode 1 now!

Until the next adventure,
J-Wall & Amber
Oh Hey Void


