StableDiffusion @lemmit.online

ComfyUI AnimateDiff Guide/Workflows Including Prompt Scheduling - An Inner-Reflections Guide (Including a Beginner Guide)

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/stablediffusion by /u/Inner-Reflections on 2023-09-30 13:59:18.


AnimateDiff in ComfyUI is an amazing way to generate AI videos. In this guide I will help you get started and give you some starting workflows to work with, the aim being a setup that serves as a jumping-off point for making your own videos.

WORKFLOWS ARE ON CIVITAI https://civitai.com/articles/2379 AS WELL AS THIS GUIDE WITH PICTURES

System Requirements

A Windows computer with an NVIDIA graphics card with at least 10GB of VRAM (you can do smaller resolutions, or the Txt2Vid workflows, with a minimum of 8GB VRAM). For anything else I will try to point you in the right direction, but I will not be able to help you troubleshoot. Please note that at the resolutions I am using I am hitting 9.9-10GB VRAM with 2 ControlNets, so that may become an issue if things are borderline.

Installing the Dependencies

These are things that you need in order to install and use ComfyUI.

  1. Git - this lets you download the extensions from GitHub and update your nodes as updates get pushed.
  2. FFmpeg (optional; the link was lost in this mirror, but the description matches FFmpeg) - this is what the combine nodes use to take the images and turn them into a GIF. Installing it is a guide in and of itself; I would search YouTube for how to install it to PATH. If you do not have it the node will give an error, BUT the workflows will still run and you will get the frames.
  3. 7zip - this is to extract the ComfyUI standalone.
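Once the three dependencies are in place, a quick way to confirm they are reachable from the command line is to probe the PATH for each one. This is a convenience sketch of my own, not part of the original guide; on Windows, run it in Git Bash, or use `where git` etc. in CMD instead.

```shell
# Check that each prerequisite resolves on PATH (informational only;
# the loop does not fail if a tool is missing).
for tool in git ffmpeg 7z; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT on PATH"
  fi
done
```

If FFmpeg reports `NOT on PATH`, the Video Combine node will error out, but as noted above the workflows will still produce frames.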

Installing ComfyUI and Animation Nodes

Now let's install ComfyUI and the nodes we need for AnimateDiff!

  1. Download ComfyUI either using this direct link: or by navigating on the webpage: (if you have a Mac or AMD GPU, there is a more complex install guide there).
  2. Extract it with the 7zip installed above. Please note that ComfyUI does not need to be installed per se, just extracted to a target folder.
  3. Navigate to ComfyUI's custom_nodes folder.
  4. In the Explorer address bar (i.e. the box pictured above), click, type CMD, and hit Enter; you should now have a command prompt open in that folder.
  5. You are going to type the following commands (you can copy/paste them one at a time). What we are doing here is using Git (installed above) to download the node repositories that we want (some can take a while):
    1. git clone
    2. git clone
    3. git clone
    4. git clone
    5. For the ControlNet preprocessors you cannot simply download them; you have to use the Manager we installed above. Start by running run_nvidia_gpu.bat in the ComfyUI_windows_portable folder. It will initialize some of the above nodes. Then hit the Manager button, then "Install Custom Nodes", then search for "Auxiliary Preprocessors" and install ComfyUI's ControlNet Auxiliary Preprocessors.
    6. Similar to the ControlNet preprocessors, search for "FizzNodes" and install them. This is what is used for prompt traveling in workflows 4/5. Then close the ComfyUI window and the command window; when you restart, it will load them.
  6. Download checkpoint(s) and put them in the checkpoints folder. You can choose any model based on Stable Diffusion 1.5. For my tutorial download: also . As an aside, realistic/mid-real models often struggle with AnimateDiff for some reason, but Epic Realism Natural Sin seems to work particularly well and not be blurry.
  7. Download a VAE and put it in the VAE folder. For my tutorial download . It is a good general VAE, and VAEs do not make a huge difference overall.
  8. Download motion modules (the original ones are here: ; the fine-tuned ones can be great, like , , or ). For my tutorial, download the original version 2 model and TemporalDiff (you could use just one, but your final results will be a bit different from mine). Note that motion models make a fairly big difference, especially to any new motion AnimateDiff creates, so try different ones. Put them in the AnimateDiff node's folder:
  9. Download ControlNets and put them in your controlnet folder. For my tutorials you need Lineart, Depth and OpenPose (download both the .pth and .yaml files).
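The download steps above each target a specific folder. As a rough map, assuming the standard portable layout (the AnimateDiff model folder name here is my assumption and may differ by node version), the expected destinations look like this:

```shell
# Sketch of where each download should land in the portable install.
ROOT=ComfyUI_windows_portable/ComfyUI
mkdir -p "$ROOT/models/checkpoints"    # SD 1.5 checkpoints (step 6)
mkdir -p "$ROOT/models/vae"            # VAE (step 7)
mkdir -p "$ROOT/models/controlnet"     # ControlNet .pth + .yaml files (step 9)
# Motion modules (step 8) go inside the AnimateDiff custom node itself:
mkdir -p "$ROOT/custom_nodes/ComfyUI-AnimateDiff-Evolved/models"
find "$ROOT" -type d | sort
```

If a model does not show up in a node's dropdown later, the first thing to check is that it landed in the right one of these folders.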

You should be all ready to start making your animations!

Making Videos with AnimateDiff

The basic workflows that I have are available for download in the top right of this article. The zip file contains frames from a pre-split video to get you started if you want to recreate my workflows exactly. There are basically two ways of doing it: Txt2Vid, which is great but whose motion is not always what you want, and Vid2Vid, which uses ControlNet to extract some of the motion from the source video to guide the transformation.

  1. If you are doing Vid2Vid, you want to split frames from the video (using an editing program or a site like ezgif.com) and reduce to the desired FPS (I usually delete/remove half the frames in a video and go for 12-15 fps). You can use the skip option in the Load Images node, noted below, instead of having to delete them. If you want to copy my workflows you can use the input frames I have provided (please note there were about 115, but I had to reduce to 90 due to file size restrictions).
  2. In the ComfyUI folder run run_nvidia_gpu.bat; if this is the first time, it may take a while to download and install a few things.
  3. To load a workflow, either click Load or drag the workflow onto Comfy (as an aside, any generated picture will have the Comfy workflow attached, so you can drag any generated image into Comfy and it will load the workflow that created it).
  4. I will explain the workflows below; if you want to start with something, I would start with the workflow labeled "1-Basic Vid2Vid 1 ControlNet". I will go through the nodes and what they mean.
  5. Run! (This step takes a while because it is making all the frames of the animation at once.)
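For step 1, if you installed FFmpeg earlier, it can do the frame splitting and FPS reduction in one command instead of an editing program or website. This is one way to do it, under the assumption that your source clip is `input.mp4` (name and target FPS are placeholders, pick your own):

```shell
# Extract frames from a clip at a reduced rate (12 fps) into PNGs.
mkdir -p frames
ffmpeg -i input.mp4 -vf fps=12 frames/%04d.png
```

The `fps=12` filter drops frames as needed to hit the target rate, which replaces the "delete half the frames" step, and the numbered PNGs in `frames/` can be pointed at directly from the Load Images node.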

Node Explanations

Some should be self-explanatory, but I will make a note on most.

Load Image Node

You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial).

image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which determines the length of the animation.

skip_first_images allows you to skip a number of frames at the beginning of a batch if you need to.

select_every_nth will take every frame at 1, every other frame at 2, every 3rd frame at 3, and so on, if you need it to skip some.
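To make the three options concrete, here is a hypothetical illustration of their net effect using plain shell arithmetic (the node applies these internally; this just shows which frame numbers would end up loaded). From 100 numbered frames: skip the first 10, keep every 2nd remaining frame, and cap at 8 loaded frames.

```shell
# skip first 10 -> every 2nd -> cap at 8 loaded frames
seq 1 100 | tail -n +11 | awk 'NR % 2 == 1' | head -n 8
# → 11 13 15 17 19 21 23 25 (one number per line)
```

So a 100-frame folder would yield an 8-frame animation built from frames 11, 13, ..., 25.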

Load Checkpoint/VAE/AnimateDiff/ControlNet Model

Each of the above nodes has a model associated with it. The names of your models are likely not exactly the same as mine in each example. You will need to click each model name and select what you have instead. If there is nothing there, then you have put the models in the wrong folder (see Installing ComfyUI above).

Green and Red Text Encode

Green is your positive Prompt

Red is your negative Prompt

They are this color not because they are special, but because they were set that way by right-clicking them, FYI.

Uniform Context Options

The Uniform Context Options node is new and is basically what enables unlimited context length. Without it, AnimateDiff can only do up to 24 (v1) or 36 (v2) frames at once. What it does is chain and overlap runs of AnimateDiff to smooth things out. The total length of the animation is determined by the number of frames fed to the loader, NOT the context length. The loader figures out what to do based on the options, which mean the following. The defaults are what I used and are pretty good.

context length - the length of each run of AnimateDiff. If you deviate too far from 16, your animation won't look good (a limitation of what AnimateDiff can do). The default is good here for now.

context overlap - how much each run of AnimateDiff overlaps the next (i.e. it runs frames 1-16 and then 13-28, with 4 frames overlapping to keep things consistent).
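The chaining can be sketched with simple sliding-window arithmetic. This assumes context_length 16 and context overlap 4 over a 48-frame animation; the node's actual scheduler may differ in details, but the windowing idea is the same:

```shell
# Print the chained AnimateDiff runs: each window is LEN frames long
# and starts LEN-OVERLAP frames after the previous one.
LEN=16; OVERLAP=4; TOTAL=48
start=0
while [ "$start" -lt "$TOTAL" ]; do
  end=$((start + LEN))
  [ "$end" -gt "$TOTAL" ] && end=$TOTAL
  echo "run: frames $((start+1))-$end"
  start=$((start + LEN - OVERLAP))
done
# → runs over frames 1-16, 13-28, 25-40, 37-48
```

The overlapping frames are what get blended between runs, which is why a larger overlap tends to look smoother at the cost of more compute.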

closed loop - selecting this will try to make AnimateDiff produce a looping video; it does not work with Vid2Vid.

context stride - this is harder to explain. At 1 it is off. Above that, it tries to make a single run of AD through the entire animation and then fill in the f...


Content cut off. Read original on https://old.reddit.com/r/StableDiffusion/comments/16w4zcc/guide_comfyui_animatediff_guideworkflows/
