Nothing Like a Coke.

Style Transfer

Vid2Vid AI

Custom LoRA

3D Layout

AI Character Generation

Disillusioned with the state of text-prompted AI video models, we set out to create a nostalgic 15-second spec in a 1960s stop-motion style, using a custom video-to-video AI workflow that honors and retains filmmaker control throughout the entire process. In this method, you feed the AI a previsualization blueprint and use AI to do stylized finishing.

Process: We began by generating 2D images of a cute, approachable bear character, then converting those images into a 3D model and rigging the model for traditional 3D animation.

We then built the arctic beach scene in 3D using Unreal Engine, dialing in layout and lighting, the timing of the character animation, and camera movement. Because Unreal Engine was not delivering final pixels but rather a specific reference to be fed into an AI process, the output at this stage looked more like previs or a rough draft.

In the background, we collected a dataset of images from the 1964 Rudolph the Red-Nosed Reindeer film and trained a custom LoRA. In this context, a LoRA (Low-Rank Adaptation) acts effectively as a mini diffusion model trained on a specific style. In other words, you take the previs-quality visuals from Unreal Engine, run them through the AI, and, by applying the LoRA, tell the AI to transform this blueprint into the style of 1960s Rudolph.
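One way to picture the blueprint-plus-LoRA step is as an image-to-image pass with a strength knob: low strength preserves the previs layout and animation, high strength hands more of each frame over to the learned style. The sketch below is a toy illustration of that control knob only, not the actual diffusion pipeline used here (the real model, tooling, and parameter names are not specified in the post; `stylize_frame` and `style_target` are hypothetical stand-ins):

```python
def stylize_frame(previs_frame, style_target, strength):
    """Toy stand-in for an img2img diffusion pass guided by a style LoRA.

    previs_frame: per-pixel values rendered from the 3D blueprint.
    style_target: per-pixel values the trained style would pull toward.
    strength:     0.0 keeps the previs untouched; 1.0 fully restyles it.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    # Linear blend models the trade-off between blueprint and style.
    return [
        (1.0 - strength) * p + strength * s
        for p, s in zip(previs_frame, style_target)
    ]

def stylize_shot(frames, style_target, strength=0.45):
    """Apply the same strength to every frame so the look stays
    consistent across the shot (a 'video' is just a frame sequence)."""
    return [stylize_frame(f, style_target, strength) for f in frames]
```

In a real pipeline the blend is nonlinear and learned, but the filmmaker-control argument is the same: the lower the strength, the more the Unreal Engine blueprint dictates the final frame.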

Implications: As mentioned above, the real takeaway is that AI can be used as a finishing tool in VFX and animation pipelines. Eighty percent of the effort in post-production is spent taking a project the last 20% of the distance, and this is where we see AI playing a major role. By training your own datasets, you can achieve specificity of vision and produce work tailored to your brand and product. This enables brands to ethically train models on their own catalogs of footage, or to license the style of a specific artist, while filmmakers retain intention and control throughout the process.

BTS Breakdown: