
Alex Mashrabov, founder of Higgsfield AI

Former Snap executive Alex Mashrabov has launched Higgsfield AI, a new generative video platform focused on cinematic camera movement in AI videos. Mashrabov, who previously led Snap’s generative AI efforts, says Higgsfield evolved from lessons learned with Diffuse, a viral app that let users create personalized AI clips. Though popular, the app revealed the creative and technical constraints of short-form, gag-driven content.
Mashrabov’s team shifted its focus to AI-generated storytelling—specifically, serialized short dramas for platforms like TikTok and YouTube Shorts, a category projected to grow to $24 billion by 2032. “We kept hearing the same thing from creators: AI video looks better, but it doesn’t feel like cinema,” Mashrabov said. “There’s no intention behind the camera.”

Higgsfield’s solution is a new control engine that lets users direct sophisticated camera movements—such as dolly-ins, crash zooms, overhead sweeps, and body-mounted rigs—from a single image and a simple text prompt. According to the company, these presets mimic techniques that typically require specialized equipment and experienced crews, putting cinematic language within reach of individual creators and small studios. The platform also addresses persistent challenges in generative video, including character and scene consistency over longer sequences.
“We’re not just solving style—we’re solving structure,” said Yerzat Dulat, Higgsfield’s Chief Research Officer. Filmmaker and creative technologist Jason Zada, known for Take This Lollipop and brand experiences with Intel and Lexus, created the demo video Night Out, featuring stylized neon visuals and rapid, fluid camera motion generated entirely through Higgsfield’s interface. “Tools like the Snorricam, which traditionally require complex rigging and choreography, are now accessible with a click,” Zada said.
“These shots are notoriously difficult to pull off, and seeing them as presets opens up a level of visual storytelling that’s both freeing and inspiring. Higgsfield gives creators fluid, stylized camera motion inside their generative productions. This unlocks a whole new visual palette that was previously out of reach.”

The platform has also drawn praise from John Gaeta, the Academy Award–winning visual effects artist behind The Matrix and a longtime pioneer of immersive and AI-driven media. “There are no limits on the future of virtual cinematography,” Gaeta said. “This moves us all closer to having a ‘God’s Eye’—total creative control over the camera and the scene.”

While companies like Runway, Pika Labs, and OpenAI continue to push visual fidelity, Higgsfield is carving out a distinct niche by focusing on the grammar of film—how a story is told through movement and perspective, not just pixels. Professional creators can request early access beginning today at www.higgsfield.ai. Whether Higgsfield will break through in a crowded field remains to be seen, but its emphasis on camera language suggests a new phase for generative video is already underway.