On May 11, Stability AI released the Stable Animation SDK - https://stability.ai/blog/stable-animation-sdk - which offers three main ways of generating animations:
- Text to animation: users input a text prompt (as with Stable Diffusion) and adjust various parameters to generate an animation.
- Text input + initial image input: users provide an initial image as the starting point for the animation; the text prompt is used together with the image to produce the final output animation.
- Text input + initial video input: users provide an initial video as the basis for the animation; by adjusting various parameters, and again guided by text prompts, they obtain a final output animation.
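If you later drive the SDK from Python rather than through the UI, the mode appears to be chosen via the AnimationArgs settings. A minimal sketch, assuming the field names animation_mode and video_init_path from the parameters docs (verify them against your SDK version):
from stability_sdk.animation import AnimationArgs

# Sketch of mode selection; the field names here are assumptions taken
# from the parameters docs - check them against your installed SDK version.
args = AnimationArgs()
args.animation_mode = "Video Input"  # other modes include "2D", "3D warp", "3D render"
args.video_init_path = "input.mp4"   # starting video, used only in Video Input mode
args.max_frames = 72                 # total number of frames to render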
Feature documentation: https://platform.stability.ai/docs/features/animation
Just run these two commands in a terminal:
pip install "stability_sdk[anim_ui]" # install the animation SDK
python3 -m stability_sdk animate --gui # launch the UI
You will be asked for an API key along the way; you can get one from your account page at https://beta.dreamstudio.ai/account.
Once the UI is running, open http://127.0.0.1:7860/ in your browser.
After creating a project on the project page, you can generate a video under the render tab.
I wrote a simple prompt:
{
0: "A BIRD IS FLYING"
}
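These prompts are just a Python dict mapping frame numbers to prompt strings. If you'd rather skip the web UI entirely, the official animation notebook drives the same pipeline from Python; the sketch below follows that notebook (the Context, Animator, and create_video_from_frames names come from it and may differ between SDK versions):
from stability_sdk import api
from stability_sdk.animation import AnimationArgs, Animator
from stability_sdk.utils import create_video_from_frames

# Connect to the API with your DreamStudio key (the same key the UI asks for)
context = api.Context("grpc.stability.ai:443", "sk-...")

args = AnimationArgs()
args.max_frames = 72  # render 72 frames, matching the example above

animator = Animator(
    api_context=context,
    animation_prompts={0: "A BIRD IS FLYING"},  # frame number -> prompt
    args=args,
    out_dir="bird_frames",
)
for _ in animator.render():  # renders and saves one frame per iteration
    pass

# Stitch the rendered frames into an mp4
create_video_from_frames(animator.out_dir, "bird.mp4", fps=24)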
It generated 72 frames for me, one by one, and stitched them into a video like this:
Partway through, I had to stop once because I ran out of credits and topped up my balance. 10 USD buys 1,000 credits, and this 72-frame video cost me 27 credits, i.e. about $0.27. For detailed pricing, see: https://platform.stability.ai/docs/features/animation/pricing
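The arithmetic is simple: at 1,000 credits per 10 USD, one credit costs a cent:
# Quick check of the cost figure above
credits_used = 27
usd_per_credit = 10 / 1000  # $0.01 per credit
print(f"${credits_used * usd_per_credit:.2f}")  # $0.27 for the 72-frame video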
Here's another one:
{
0: "a photo of a cute cat",
24: "a photo of a cute dog",
}
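Each key in the dict is a frame number, and frames in between transition from one keyframed prompt to the next. In the official notebook this blending is controlled by a couple of AnimationArgs settings; a hedged sketch (field names assumed from the notebook, so verify against your SDK version):
from stability_sdk.animation import AnimationArgs

# Settings behind a smooth two-prompt morph
args = AnimationArgs()
args.max_frames = 48             # e.g. render past the second keyframe at 24
args.interpolate_prompts = True  # blend smoothly between the keyframed prompts
args.locked_seed = True          # a fixed seed keeps consecutive frames coherent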
Starting from a cat in the first frame and morphing into a dog by the end, the resulting video looks like this:
If you want something more advanced, you can look up what each parameter means and adjust them: https://platform.stability.ai/docs/features/animation/parameters.
Or if you'd like to see how others write their animations, you can refer to: https://replicate.com/andreasjansson/stable-diffusion-animation/
For example, I took this one and made my own animation from it: https://replicate.com/andreasjansson/stable-diffusion-animation/examples#voxuinyafnbxrisoe3zysivkkm
{
0: "the face of tom cruise very angry, headshot",
24: "the face of tom cruise smiling a happy smile, headshot",
}
From an angry Tom Cruise to a happy Tom Cruise.
Everything above runs through the web UI launched locally from the terminal; you can also use Google Colab - the official notebook is at: https://colab.research.google.com/github/Stability-AI/stability-sdk/blob/animation/nbs/animation_gradio.ipynb.