novelai-storage / Stable Diffusion Webui

Commit 574c8e55
Authored Oct 11, 2022 by brkirch; committed by AUTOMATIC1111 on Oct 11, 2022

Add InvokeAI and lstein to credits, add back CUDA support

Parent: 98fd5cde

Showing 2 changed files with 14 additions and 0 deletions:
- README.md (+1, -0)
- modules/sd_hijack_optimizations.py (+13, -0)
README.md

@@ -123,6 +123,7 @@ The documentation was moved from this README over to the project's [wiki](https:
 - LDSR - https://github.com/Hafiidz/latent-diffusion
 - Ideas for optimizations - https://github.com/basujindal/stable-diffusion
 - Doggettx - Cross Attention layer optimization - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
+- InvokeAI, lstein - Cross Attention layer optimization - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
 - Rinon Gal - Textual Inversion - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
 - Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
 - Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
modules/sd_hijack_optimizations.py

@@ -173,7 +173,20 @@ def einsum_op_tensor_mem(q, k, v, max_tensor_mb):
             return einsum_op_slice_0(q, k, v, q.shape[0] // div)
         return einsum_op_slice_1(q, k, v, max(q.shape[1] // div, 1))
 
+def einsum_op_cuda(q, k, v):
+    stats = torch.cuda.memory_stats(q.device)
+    mem_active = stats['active_bytes.all.current']
+    mem_reserved = stats['reserved_bytes.all.current']
+    mem_free_cuda, _ = torch.cuda.mem_get_info(q.device)
+    mem_free_torch = mem_reserved - mem_active
+    mem_free_total = mem_free_cuda + mem_free_torch
+    # Divide by a safety factor since there's copying and fragmentation
+    return einsum_op_tensor_mem(q, k, v, mem_free_total / 3.3 / (1 << 20))
+
 def einsum_op(q, k, v):
+    if q.device.type == 'cuda':
+        return einsum_op_cuda(q, k, v)
     if q.device.type == 'mps':
         if mem_total_gb >= 32:
             return einsum_op_mps_v1(q, k, v)
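The CUDA path sizes its einsum budget from free device memory: bytes that are free on the GPU, plus bytes PyTorch has reserved but is not actively using, divided by a safety factor and converted to MiB. A minimal sketch of that arithmetic, with hypothetical byte counts standing in for the `torch.cuda.memory_stats` / `torch.cuda.mem_get_info` queries:

```python
# Sketch of the memory-budget arithmetic used by einsum_op_cuda, with
# made-up byte counts in place of the real CUDA allocator queries.
def einsum_budget_mb(mem_free_cuda, mem_active, mem_reserved, safety=3.3):
    # Memory PyTorch has reserved but is not actively using is also available.
    mem_free_torch = mem_reserved - mem_active
    mem_free_total = mem_free_cuda + mem_free_torch
    # Divide by a safety factor (copying, fragmentation), then bytes -> MiB.
    return mem_free_total / safety / (1 << 20)

# Example: 2 GiB free on the device, 1 GiB active, 1.5 GiB reserved.
budget = einsum_budget_mb(2 << 30, 1 << 30, 3 << 29)  # ≈ 775.8 MiB
```

The 3.3 divisor is deliberately conservative: the einsum produces intermediate copies, and allocator fragmentation means not all "free" bytes are usable as one contiguous tensor.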