    Reduce peak memory usage when changing models · b50ff4f4
    Josh Watzman authored
    A few tweaks to reduce peak memory usage, the biggest being that if we
    aren't using the checkpoint cache, we shouldn't duplicate the model
    state dict just to immediately throw it away.
    
    On my machine with 16 GB of RAM, this change means switching models
    usually succeeds, whereas before it would typically OOM.
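
    Roughly, the idea is to copy the loaded state dict only when the checkpoint
    cache will actually keep it around; otherwise the dict is handed straight to
    the model so only one full copy of the weights is resident at once. A minimal
    sketch of that pattern, using hypothetical names (load_model_weights,
    checkpoints_loaded, cache_enabled) rather than the actual sd_models.py code:

    import copy

    import torch

    # Hypothetical checkpoint cache, keyed by checkpoint path.
    checkpoints_loaded = {}

    def load_model_weights(model, checkpoint_path, cache_enabled):
        state_dict = torch.load(checkpoint_path, map_location="cpu")

        if cache_enabled:
            # The cache needs its own copy, since the live model may be
            # modified in place after loading.
            checkpoints_loaded[checkpoint_path] = copy.deepcopy(state_dict)

        # Without the cache there is no reason to duplicate the dict before
        # loading it; use it directly and let it be freed afterwards.
        model.load_state_dict(state_dict, strict=False)
        del state_dict
        return model

    Skipping that deepcopy in the no-cache case is what avoids the transient
    second copy of the weights that the commit message describes.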
sd_models.py