
Gen-4.5

Runway has released Gen-4.5, a significant update to its video generation suite focusing on realistic movement and precise control. This release aims to reduce common AI artifacts while giving filmmakers more predictable results from text and image prompts.


Runway has officially released Gen-4.5, the latest iteration of its video generation model designed to address the persistent challenges of motion consistency and prompt accuracy. This update arrives as the industry shifts from simple experimentation toward production-ready assets, offering creators a more reliable way to generate high-fidelity video from text and image inputs. For filmmakers and editors, this means less time spent on trial-and-error and more predictable control over the final output.

What's new

The primary focus of Gen-4.5 is improved motion quality and visual fidelity. Previous models often struggled to render complex physics and tended to distort subjects during fast movements; this version significantly reduces those artifacts, producing more fluid and realistic sequences. The model also demonstrates stronger prompt adherence, following specific instructions for lighting, camera angles, and character actions with greater precision.

Beyond basic movement, the update enhances the overall texture and detail of generated frames. This reduces the "plastic" look often associated with AI video generation, moving the aesthetic closer to traditional cinematography. A full breakdown of these technical improvements, along with visual examples, is available in the provider's announcement on the Runway research blog.

How it fits your workflow

For directors and cinematographers, Runway Gen-4.5 functions as an advanced pre-visualization and b-roll tool. While previous versions were often limited to dreamlike or abstract visuals, the improved motion quality allows for the creation of realistic establishing shots or atmospheric inserts that can sit alongside live-action footage. It serves as a viable alternative to stock footage libraries, especially when a project requires a specific lighting setup or a niche location that is difficult to find or film.

Editors and VFX artists can use the model to extend shots or generate clean plates for compositing. Because the model adheres more closely to prompts, it is easier to match the visual style of an existing sequence compared to using tools like Luma Dream Machine or Kling AI, which may prioritize cinematic flair over technical accuracy. Animators can also benefit by using the model to generate reference footage for complex character movements, providing a foundation that can be refined in post-production. The ability to maintain consistency across multiple generations makes it a more practical choice for short-form content and social media advertising where quick turnarounds are necessary.

What it costs / how to try it

Runway Gen-4.5 is available through the Runway web platform and mobile app. Access typically follows Runway's credit-based subscription system, where different tiers provide varying levels of generation speed and priority. You can check your current plan or sign up for a new account at runwayml.com.

Read the original announcement on Runway ↗
