novelai-storage / Stable Diffusion Webui / Commits

Commit 2c11e900, authored Jul 24, 2023 by AUTOMATIC1111

    repair --medvram for SD2.x too after SDXL update
Parent: 1f26815d

Showing 2 changed files with 5 additions and 4 deletions:

    modules/lowvram.py              +4 -3
    modules/sd_hijack_open_clip.py  +1 -1
modules/lowvram.py

```diff
@@ -90,8 +90,12 @@ def setup_for_low_vram(sd_model, use_medvram):
         sd_model.conditioner.register_forward_pre_hook(send_me_to_gpu)
+    elif is_sd2:
+        sd_model.cond_stage_model.model.register_forward_pre_hook(send_me_to_gpu)
+        sd_model.cond_stage_model.model.token_embedding.register_forward_pre_hook(send_me_to_gpu)
+        parents[sd_model.cond_stage_model.model] = sd_model.cond_stage_model
+        parents[sd_model.cond_stage_model.model.token_embedding] = sd_model.cond_stage_model
     else:
         sd_model.cond_stage_model.transformer.register_forward_pre_hook(send_me_to_gpu)
+        parents[sd_model.cond_stage_model.transformer] = sd_model.cond_stage_model

     sd_model.first_stage_model.register_forward_pre_hook(send_me_to_gpu)
     sd_model.first_stage_model.encode = first_stage_model_encode_wrap
@@ -101,9 +105,6 @@ def setup_for_low_vram(sd_model, use_medvram):
     if sd_model.embedder:
         sd_model.embedder.register_forward_pre_hook(send_me_to_gpu)

-    if hasattr(sd_model, 'cond_stage_model'):
-        parents[sd_model.cond_stage_model.transformer] = sd_model.cond_stage_model
-
     if use_medvram:
         sd_model.model.register_forward_pre_hook(send_me_to_gpu)
     else:
```
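For context on what these registrations do: with --medvram, large submodules stay off the GPU, and a forward pre-hook pulls a module (or, via the `parents` map, the larger module that owns it) onto the GPU only when it is about to run. This is why the new `is_sd2` branch both hooks `cond_stage_model.model` and records its parent. A minimal sketch of the pattern, in plain Python with no torch dependency (`Module` here is a simplified stand-in, not the webui code):

```python
class Module:
    """Simplified stand-in for torch.nn.Module: tracks a device and
    runs registered pre-forward hooks before each call."""
    def __init__(self, name):
        self.name = name
        self.device = "cpu"
        self._pre_hooks = []

    def register_forward_pre_hook(self, hook):
        self._pre_hooks.append(hook)

    def to(self, device):
        self.device = device
        return self

    def __call__(self, x):
        for hook in self._pre_hooks:
            hook(self, (x,))
        return f"{self.name}({x})@{self.device}"


# maps a hooked submodule to the larger module that should be moved with it,
# mirroring the `parents[...] = sd_model.cond_stage_model` lines in this commit
parents = {}

def send_me_to_gpu(module, _inputs):
    """Pre-forward hook: move the module, or its registered parent, to the GPU.
    (In real torch, .to() on the parent would also carry its submodules along.)"""
    parents.get(module, module).to("cuda")

cond_stage_model = Module("cond_stage_model")
token_embedding = Module("token_embedding")
token_embedding.register_forward_pre_hook(send_me_to_gpu)
parents[token_embedding] = cond_stage_model

token_embedding("ids")
# cond_stage_model.device is now "cuda": calling the submodule moved its parent
```

The point of the `parents` indirection is that moving only the tiny hooked submodule would leave the rest of its owner on the CPU; registering the parent moves the whole text encoder in one step.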
modules/sd_hijack_open_clip.py

```diff
@@ -32,7 +32,7 @@ class FrozenOpenCLIPEmbedderWithCustomWords(sd_hijack_clip.FrozenCLIPEmbedderWit
     def encode_embedding_init_text(self, init_text, nvpt):
         ids = tokenizer.encode(init_text)
         ids = torch.asarray([ids], device=devices.device, dtype=torch.int)
-        embedded = self.wrapped.model.token_embedding.wrapped(ids.to(self.wrapped.model.token_embedding.wrapped.weight.device)).squeeze(0)
+        embedded = self.wrapped.model.token_embedding.wrapped(ids).squeeze(0)

         return embedded
```
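The line removed here reads like a workaround for a device mismatch: under --medvram the embedding weight can sit on the CPU while `ids` is built on `devices.device` (the GPU), and an embedding lookup across devices raises an error, so the old code moved `ids` to wherever the weight lived. With the lowvram.py hooks in this commit handling placement of `cond_stage_model`, the explicit move is dropped. A torch-free stand-in illustrating the failure mode (`FakeEmbedding` and the device strings are purely illustrative, not webui code):

```python
class FakeEmbedding:
    """Stand-in for nn.Embedding: the lookup fails when indices and weight
    live on different devices, as a real torch embedding lookup does."""
    def __init__(self, weight_device):
        self.weight_device = weight_device

    def __call__(self, ids, ids_device):
        if ids_device != self.weight_device:
            raise RuntimeError(
                f"indices on {ids_device!r} but weight on {self.weight_device!r}"
            )
        return [f"embedding[{i}]" for i in ids]


# under --medvram the weight may still be on the CPU while ids are built on the GPU
token_embedding = FakeEmbedding(weight_device="cpu")
ids, ids_device = [320, 1125], "cuda"

try:
    token_embedding(ids, ids_device)  # mismatched devices -> error
    mismatch_error = None
except RuntimeError as e:
    mismatch_error = str(e)

# the removed workaround: send ids to the weight's device before the lookup
embedded = token_embedding(ids, ids_device=token_embedding.weight_device)
```

Once the pre-hooks guarantee the encoder (and its embedding weight) is on the right device before it runs, both tensors agree and the defensive `.to(...)` becomes dead weight.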