Runway Releases Act-One: Generating Animation from Video and Voice Performances

Yesterday, Runway released Act-One, its latest tool for generating expressive character performances in Gen-3 Alpha (which I introduced earlier). Act-One generates engaging animated content from video and voice performances as input.

Check out the results first.

Capture the essence of the performance.

The traditional facial animation production process usually involves a complex multi-step workflow, including motion capture equipment, multiple video references, manual facial rigging, and other techniques. The goal is to transform an actor's performance into a 3D model suitable for the animation pipeline. The core challenge of traditional methods lies in how to preserve the emotion and subtle expressions from the reference video in the digital character.

Act-One takes a completely different approach. According to Runway, it uses a new pipeline driven solely by the actor's performance, with no additional equipment required: the performance is captured with a simple single-camera setup and then used to animate the generated character.

Animation motion capture.

Act-One can be applied to a variety of reference images. The model preserves authentic facial expressions while accurately translating the performance onto characters, even when those characters' proportions differ from the person in the source video. This versatility opens up new possibilities for character design and animation.

Live-action shooting.

The model also excels at generating cinematic, realistic output, maintaining high-fidelity facial animation across different camera angles. This lets creators design emotionally rich, believable characters, strengthening the connection between the audience and the content.

A brand-new creative path.

Runway is also exploring how Act-One can generate multi-turn, expressive dialogue scenes, something that has been quite challenging for previous generative video models. Now, with just a consumer-grade camera and a single actor reading and performing different roles in a script, narrative content can be created.

Try it? Not yet.

Act-One is not fully available yet; the official website says "coming soon." Fortunately, my Runway Premium subscription is still active, so I will try it as soon as it opens up.