Runway launches new video-to-video AI tool — here's what it can do

Runway's new video-to-video mechanism in Gen-3 is a significant step for AI filmmaking, giving more control over motion.


Leading AI video platform RunwayML has finally unveiled its video-to-video tool, allowing you to take a ‘real world’ video and adapt it using artificial intelligence. Runway launched Gen-3 Alpha, the latest version of its video model, in June and has gradually added new features to an already impressive platform that we gave 4 stars and named one of the best AI video generators. It started with text-to-video, added image-to-video soon after, and now it has added the ability to start with a video.

There was no video-to-video with Gen-2, so this is a significant upgrade for people wanting to customize a real video using AI. The company says the new version is available on the web interface for anyone on a paid plan and includes the ability to steer the generation with a text prompt in addition to the video upload. I put it to the test with a handful of example videos, and my favorite was a short clip of my son running around outside.



With video-to-video, I was able to transport him from the real world to an underwater kingdom and then on to a purple-hued alien world, all in a matter of minutes.

What is Runway Gen-3 video-to-video?

Starting an AI video prompt with a video is almost like flipping the script compared to starting with an image. It lets you determine the motion and then use AI for design and aesthetics.

When you start with an image, you’re defining the aesthetic and the AI sets the motion. Runway wrote on X: “Video to Video represents a new control mechanism for precise movement, expressiveness and intent within generations. To use Video to Video, simply upload your input video, prompt in any aesthetic direction you like.”

As well as being able to define your own prompt, there is a selection of preset styles. One turns the subject into a glass effect and another renders it as a line drawing. In its demo video we see a sweeping drone view of hills turn first into wool, then into an ocean view and finally into sand dunes or clay.

Another example shows a city first at night, then in daytime, then in a thunderstorm, and finally rendered in bright colors. Being able to film real footage and then use AI to apply either a new aesthetic or just specific effects (one example sets off an explosion in the background) is a significant step forward for generative AI video and adds a new level of usefulness to the feature.