RunwayML Act-One generates captivating animations from video and voice inputs, advancing the use of generative models for dynamic live-action and animated storytelling.
No Longer a Tedious Process
Classic facial animation pipelines are a high-stakes marathon for animators: motion-capture suits, endless video references, and tedious face rigging, all in service of mapping an actor's expressions onto a 3D model that doesn't end up looking robotic. The real trick is keeping every eyebrow raise and smirk intact along the way, a bit like transcribing an entire orchestra, note for note, for a kazoo. Challenging? Yes!
Act-One takes a different approach, driven solely by an actor's performance, with no extra equipment needed.
Live Action
The model shines at producing cinematic, realistic outputs with high-fidelity facial animation, and it adapts well to a range of camera angles. This lets users build lifelike characters that convey genuine emotion, deepening audience engagement.
We’re thrilled to see the new storytelling possibilities Act-One will unlock for animation and character performance. Act-One represents another leap in making advanced techniques accessible to more users. We can’t wait to see how creators and storytellers bring their visions to life in fresh, exciting ways.
Act-One is now available for Gen-3 Alpha.