novelai-storage / Stable Diffusion Webui / Commits

Commit 3b51d239, authored Nov 09, 2022 by cluder (parent: 2f47724b)

- do not use ckpt cache, if disabled
- cache model after it has been loaded from file
Showing 1 changed file, with 17 additions and 10 deletions:

modules/sd_models.py (+17, -10)
```diff
@@ -163,13 +163,21 @@ def load_model_weights(model, checkpoint_info, vae_file="auto"):
     checkpoint_file = checkpoint_info.filename
     sd_model_hash = checkpoint_info.hash
 
-    if shared.opts.sd_checkpoint_cache > 0 and hasattr(model, "sd_checkpoint_info"):
+    cache_enabled = shared.opts.sd_checkpoint_cache > 0
+
+    if cache_enabled:
         sd_vae.restore_base_vae(model)
         checkpoints_loaded[model.sd_checkpoint_info] = model.state_dict().copy()
 
     vae_file = sd_vae.resolve_vae(checkpoint_file, vae_file=vae_file)
 
-    if checkpoint_info not in checkpoints_loaded:
+    if cache_enabled and checkpoint_info in checkpoints_loaded:
+        # use checkpoint cache
+        vae_name = sd_vae.get_filename(vae_file) if vae_file else None
+        vae_message = f" with {vae_name} VAE" if vae_name else ""
+        print(f"Loading weights [{sd_model_hash}]{vae_message} from cache")
+        model.load_state_dict(checkpoints_loaded[checkpoint_info])
+    else:
+        # load from file
         print(f"Loading weights [{sd_model_hash}] from {checkpoint_file}")
         pl_sd = torch.load(checkpoint_file, map_location=shared.weight_load_location)
```
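The hunk above replaces a single "not in cache" check with an explicit hit/else-load branch gated on `cache_enabled`. The control flow can be sketched in isolation; this is a minimal illustration, not the webui's actual API — `load_weights`, its parameters, and the simplified cache here are stand-ins for the real `load_model_weights`, `torch.load`, and module state:

```python
from collections import OrderedDict

# Illustrative stand-in for the webui's module-level cache,
# an OrderedDict keyed by checkpoint info.
checkpoints_loaded = OrderedDict()

def load_weights(disk_state, checkpoint_info, cache_enabled):
    """Return weights from the cache on a hit; otherwise 'load' from disk.

    `disk_state` plays the role of torch.load()'s result. When the cache
    is disabled, it is neither consulted nor populated.
    """
    if cache_enabled and checkpoint_info in checkpoints_loaded:
        # use checkpoint cache
        print(f"Loading weights [{checkpoint_info}] from cache")
        return checkpoints_loaded[checkpoint_info]
    # load from file
    print(f"Loading weights [{checkpoint_info}] from disk")
    if cache_enabled:
        # cache newly loaded model (shallow dict copy, as in the diff)
        checkpoints_loaded[checkpoint_info] = dict(disk_state)
    return disk_state
```

The first call for a given checkpoint misses and populates the cache; a second call for the same checkpoint returns the cached copy without touching disk, matching the two bullet points in the commit message.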
```diff
@@ -180,6 +188,10 @@ def load_model_weights(model, checkpoint_info, vae_file="auto"):
         del pl_sd
 
         model.load_state_dict(sd, strict=False)
         del sd
 
+        if cache_enabled:
+            # cache newly loaded model
+            checkpoints_loaded[checkpoint_info] = model.state_dict().copy()
+
         if shared.cmd_opts.opt_channelslast:
             model.to(memory_format=torch.channels_last)
```
@@ -199,13 +211,8 @@ def load_model_weights(model, checkpoint_info, vae_file="auto"):
model
.
first_stage_model
.
to
(
devices
.
dtype_vae
)
else
:
vae_name
=
sd_vae
.
get_filename
(
vae_file
)
if
vae_file
else
None
vae_message
=
f
" with {vae_name} VAE"
if
vae_name
else
""
print
(
f
"Loading weights [{sd_model_hash}]{vae_message} from cache"
)
model
.
load_state_dict
(
checkpoints_loaded
[
checkpoint_info
])
if
shared
.
opts
.
sd_checkpoint_cache
>
0
:
# clean up cache if limit is reached
if
cache_enabled
:
while
len
(
checkpoints_loaded
)
>
shared
.
opts
.
sd_checkpoint_cache
:
checkpoints_loaded
.
popitem
(
last
=
False
)
# LRU
...
...
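The eviction loop relies on `checkpoints_loaded` being an `OrderedDict`: `popitem(last=False)` removes the oldest-inserted entry first. A minimal sketch of that loop, with a plain `limit` parameter standing in for `shared.opts.sd_checkpoint_cache`:

```python
from collections import OrderedDict

def trim_cache(cache: OrderedDict, limit: int) -> None:
    """Evict oldest-inserted entries until the cache fits the limit.

    Mirrors the diff's eviction loop; `limit` stands in for
    shared.opts.sd_checkpoint_cache.
    """
    while len(cache) > limit:
        cache.popitem(last=False)  # drop the oldest entry first

cache = OrderedDict([("a", 1), ("b", 2), ("c", 3)])
trim_cache(cache, limit=2)
print(list(cache))  # ['b', 'c'] - 'a' was inserted first, so it is evicted
```

One design note: the hunk's comment says `# LRU`, but nothing in the diff moves an entry to the end of the `OrderedDict` on a cache hit, so eviction order here is insertion order (FIFO) rather than strict least-recently-used; a strict LRU would also call `cache.move_to_end(key)` whenever a cached checkpoint is reused.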