Pikaformance model now available on web
Pika has launched its Pikaformance model on the web interface, focusing on realistic facial animation and audio-driven lip-sync. This update reduces wait times for image-to-video generation while improving how characters react to sound.
Pika has officially integrated its Pikaformance model into its web platform, moving the technology out of limited testing and into general availability. The update targets two of the most persistent hurdles in AI video generation: character expression and processing speed. For creators who rely on consistent human performances, this release offers a more dependable way to animate static portraits with accurate lip-syncing and emotional depth.
What's new
The Pikaformance model introduces a specialized architecture designed to map audio files directly to facial movements. Unlike previous iterations that often resulted in "uncanny valley" mouth movements or static eyes, this model synchronizes the entire face—including micro-expressions and eye contact—to the provided audio track.
Beyond the animation quality, the update significantly cuts generation latency. Image-to-video tasks that previously required several minutes of compute time now complete noticeably faster, allowing a more iterative creative process in which editors can test multiple takes without long interruptions. The model also adheres more closely to the original image's lighting and texture, reducing the flickering often seen in temporal upscaling (see the provider's announcement at pika.art).
How it fits your workflow
For filmmakers and animators, Pikaformance functions as a digital puppetry tool. Instead of manual keyframing or expensive motion capture setups, an editor can take a high-quality character concept from Midjourney or Photoshop and animate it using a voiceover track. This is particularly useful for pre-visualization, where a director needs to see a scene's timing and performance before committing to a live-action shoot or high-end VFX pipeline.
In the context of the current market, Pika is positioning itself against tools like HeyGen or Hedra. While those platforms focus heavily on talking-head avatars for corporate use, Pikaformance leans toward a cinematic aesthetic, maintaining the artistic style of the source image. It augments the traditional post-production workflow by allowing for "pick-up" shots—if a line of dialogue changes, an editor can re-animate the character's face to match the new audio without a reshoot. Documentary filmmakers can also use this to animate historical photos or archival portraits to bring a narrative to life.
What it costs / how to try it
Pikaformance is available now through the Pika web interface. Access depends on your subscription tier, with credit-based plans available for creators who need high-volume output. You can explore the new model and view current pricing directly on the Pika website.
Read the original announcement on Pika ↗