In our local testing, generating a 14-frame video took about 30 minutes on an Nvidia RTX 3060 graphics card, but users can experiment with running the models much faster in the cloud through services like Hugging Face and Replicate (some of which you may need to pay for). In our experiments, the generated animations typically keep a portion of the scene static, adding panning and zooming effects or animating smoke or fire. People depicted in photos often do not move, although we did get one Getty image of Steve Wozniak to come slightly to life.
Given these limitations, Stability emphasizes that the model is still early and is intended for research only. “While we eagerly update our models with the latest advancements and work to incorporate your feedback,” the company writes on its website, “this model is not intended for real-world or commercial applications at this stage. Your insights and feedback on safety and quality are important to refining this model for its eventual release.” Notably, but perhaps unsurprisingly, the Stable Video Diffusion research paper (PDF) does not reveal the source of the models’ training datasets, saying only that the research team used “a large video dataset comprising roughly 600 million samples” that they curated into the Large Video Dataset (LVD), which consists of 580 million annotated video clips totaling 212 years of content.