StochasticSplats: Stochastic Rasterization for Sorting-Free 3D Gaussian Splatting
A new research paper from Runway details StochasticSplats, a technique that integrates stochastic rasterization into 3D Gaussian Splatting. The approach addresses long-standing sorting and alpha-blending artifacts in complex 3D environments.
Runway researchers recently published a paper titled "StochasticSplats," introducing a method to improve 3D Gaussian Splatting (3DGS) by integrating stochastic rasterization. This technical shift addresses a fundamental bottleneck in real-time 3D rendering: the requirement to sort thousands of "splats" by depth before they can be displayed. For creators working in virtual production or digital twin creation, this means more stable visual results in scenes with complex overlapping geometry.
What's new
Traditional 3D Gaussian Splatting relies on alpha blending, which requires the renderer to sort every splat by depth relative to the camera before compositing. Because splats are typically ordered by their centers, the sort is only approximate; when elongated Gaussians intersect or the camera moves quickly, the ordering can flip between frames, producing visual popping or "flickering" artifacts. StochasticSplats replaces this rigid sorting with a probabilistic approach: instead of compositing every layer in order, the system uses stochastic sampling to estimate which splats contribute to each pixel.
This method yields blending that is correct in expectation, without the computational overhead of re-sorting every frame. The researchers demonstrate that the approach maintains high visual fidelity while avoiding the ordering errors typical of standard 3DGS implementations. By removing the sorting step, the rendering pipeline also becomes more predictable, especially in scenes with high depth complexity (see Runway's announcement).
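The intuition behind order-independent stochastic transparency can be shown with a toy, single-channel sketch (an illustrative simplification, not the paper's GPU implementation): each fragment survives a Bernoulli test with probability equal to its alpha, the nearest survivor wins the pixel, and averaging many such samples converges to the same value as a correctly depth-sorted alpha blend:

```python
import random

def sorted_alpha_blend(splats):
    # Classic "over" compositing: sort back-to-front by depth,
    # then blend each layer over the accumulated result.
    out = 0.0
    for color, alpha, _depth in sorted(splats, key=lambda s: -s[2]):
        out = alpha * color + (1.0 - alpha) * out
    return out

def stochastic_estimate(splats, n_samples=100_000, seed=0):
    # Stochastic transparency: no global sort. Each splat survives a
    # Bernoulli(alpha) test; the nearest surviving splat colors the
    # sample. Averaging samples is an unbiased estimate of the
    # sorted result, independent of the order splats are visited in.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        winner_color, winner_depth = 0.0, float("inf")
        for color, alpha, depth in splats:
            if rng.random() < alpha and depth < winner_depth:
                winner_color, winner_depth = color, depth
        total += winner_color
    return total / n_samples

# Two overlapping splats: (color, alpha, depth)
splats = [(1.0, 0.5, 1.0), (0.5, 0.5, 2.0)]
exact = sorted_alpha_blend(splats)        # 0.625
estimate = stochastic_estimate(splats)    # close to 0.625
```

Because the estimator never needs a sorted traversal, the splats can be rasterized in any order; the cost of eliminating sorting is Monte Carlo noise, which shrinks as the number of samples per pixel grows.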
How it fits your workflow
For filmmakers and VFX artists, StochasticSplats represents a move toward more reliable neural rendering. If you are using 3DGS to capture real-world locations for use as digital backdrops, you likely encounter issues where thin objects—like tree branches or chain-link fences—shimmer or disappear as the camera moves. This tool aims to stabilize those elements, making neural captures more viable for professional cinematography and background plates.
In a production environment, this technique could augment or replace traditional photogrammetry workflows. While tools like Luma AI and Nerfstudio have popularized Gaussian Splatting, the integration of stochastic rasterization brings the technology closer to the stability of the mesh-based rendering found in Unreal Engine or Blender. It also allows for better integration of moving subjects within a splatted environment, as the engine no longer has to resolve which object is "in front" at every frame of an animation.
Editors and environment designers will find this particularly useful when layering multiple 3D captures together. Because StochasticSplats handles intersections more gracefully than standard alpha blending, you can merge different 3DGS scans without the harsh edge artifacts that usually occur when two point clouds occupy the same space.
What it costs / how to try it
As this is currently a research paper from Runway's team, the code has not yet been integrated into the public Runway Gen-3 or 3D capture tools. Developers and researchers can review the methodology and implementation details on the arXiv pre-print server.
Read the original announcement on Runway ↗