Commit Graph

9 Commits

Author SHA1 Message Date
Aarni Koskela
49a55b410b Autofix Ruff W (not W605) (mostly whitespace) 2023-05-11 20:29:11 +03:00
AUTOMATIC
e334758ec2 repair #10266 2023-05-11 07:45:05 +03:00
Louis Del Valle
c8732dfa6f Update sub_quadratic_attention.py
1. Determine the number of query chunks.
2. Calculate the final shape of the res tensor.
3. Initialize the tensor with the calculated shape and dtype (usually the same dtype as the input tensors).

The tensor can be initialized zero-filled with the correct shape and dtype; the attention scores for each query chunk are then computed and written into the corresponding slice of the tensor (see the sketch after this entry).
2023-05-10 22:05:18 -05:00
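A minimal sketch of the steps this commit message describes, under assumed names (chunked_attention, query_chunk_size) and shapes; the actual code in sub_quadratic_attention.py differs in detail:

import math
import torch

def chunked_attention(query, key, value, query_chunk_size=1024):
    # Hypothetical helper illustrating the commit's steps; names and
    # chunk size are assumptions, not the repo's actual code.
    batch, q_tokens, _ = query.shape
    scale = query.shape[-1] ** -0.5

    # 1. Determine the number of query chunks.
    n_chunks = math.ceil(q_tokens / query_chunk_size)

    # 2.-3. Pre-allocate the result tensor zero-filled, with the final
    # shape and the same dtype/device as the inputs.
    res = torch.zeros(batch, q_tokens, value.shape[-1],
                      dtype=query.dtype, device=query.device)

    for i in range(n_chunks):
        start = i * query_chunk_size
        end = min(start + query_chunk_size, q_tokens)
        q_chunk = query[:, start:end]
        # Compute attention for this query chunk only...
        attn = torch.softmax(q_chunk @ key.transpose(-1, -2) * scale, dim=-1)
        # ...and fill the corresponding slice of the result tensor.
        res[:, start:end] = attn @ value
    return res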
brkirch
e3b53fd295 Add UI setting for upcasting attention to float32
Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers.

In order to make upcasting cross attention layer optimizations possible it is necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, unfortunately my findings were that most of the cross attention layer optimizations could not function unless v is upcast also.
2023-01-25 01:13:04 -05:00
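A minimal sketch of the upcasting idea, assuming a standalone attention helper; the real change instead threads an option through the existing optimizations in sd_hijack_optimizations.py:

import torch

def attention_upcast(q, k, v):
    # Sketch only, not the repo's code: upcast q and k (and, per the
    # commit's findings, v as well) to float32, with autocast disabled
    # so the intermediates are not silently re-lowered to half precision.
    out_dtype = q.dtype
    with torch.autocast(device_type=q.device.type, enabled=False):
        q, k, v = q.float(), k.float(), v.float()
        scale = q.shape[-1] ** -0.5
        attn = torch.softmax(q @ k.transpose(-1, -2) * scale, dim=-1)
        out = attn @ v
    # Cast the result back to the caller's original dtype.
    return out.to(out_dtype)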
AUTOMATIC
cdfcbd9959 Remove the fallback for the Protocol import, the Protocol import itself, and all uses of Protocol in code
Add some whitespace between functions to be in line with other code in the repo
2023-01-09 20:08:48 +03:00
ProGamerGov
984b86dd0a Add fallback for Protocol import 2023-01-07 13:08:21 -07:00
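For context, the import fallback this commit added (and that cdfcbd9959 above later removed) is conventionally written as below; this is a sketch of the usual pattern, not necessarily the commit's exact code. typing.Protocol has been in the standard library since Python 3.8, with typing_extensions as the usual backport:

try:
    from typing import Protocol  # Python 3.8+
except ImportError:
    # Assumed fallback pattern: older Pythons use the backport package.
    from typing_extensions import Protocol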
brkirch
c18add68ef Added license 2023-01-06 16:42:47 -05:00
brkirch
b119815333 Use narrow instead of dynamic_slice 2023-01-06 00:15:24 -05:00
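For illustration, torch.narrow(dim, start, length) returns a view into the same storage, which is why it can stand in for a hand-rolled dynamic_slice helper; a small sketch, not the repo's actual call site:

import torch

x = torch.arange(12).reshape(3, 4)

# narrow(dim, start, length) gives a view equivalent to x[:, 1:3]
# without constructing index tensors.
chunk = x.narrow(1, 1, 2)
assert torch.equal(chunk, x[:, 1:3])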
brkirch
d782a95967 Add Birch-san's sub-quadratic attention implementation 2023-01-06 00:14:13 -05:00
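Birch-san's implementation follows the memory-efficient attention scheme of Rabe and Staats ("Self-attention Does Not Need O(n²) Memory"), which also chunks the key/value sequence and keeps running softmax statistics so the full attention matrix is never materialized. A minimal sketch of that half of the idea, under assumed names and shapes (not the code from this commit):

import torch

def attn_kv_chunked(q, k, v, kv_chunk_size=1024):
    # Sketch of key/value chunking (query chunking is sketched further
    # up): keep per-query running softmax numerators, denominators, and
    # maxima so the full (q_tokens x k_tokens) score matrix never exists.
    scale = q.shape[-1] ** -0.5
    num = torch.zeros(*q.shape[:-1], v.shape[-1],
                      dtype=q.dtype, device=q.device)
    den = torch.zeros(*q.shape[:-1], 1, dtype=q.dtype, device=q.device)
    cur_max = torch.full((*q.shape[:-1], 1), float('-inf'),
                         dtype=q.dtype, device=q.device)
    for start in range(0, k.shape[-2], kv_chunk_size):
        k_c = k[..., start:start + kv_chunk_size, :]
        v_c = v[..., start:start + kv_chunk_size, :]
        scores = q @ k_c.transpose(-1, -2) * scale
        new_max = torch.maximum(cur_max, scores.amax(dim=-1, keepdim=True))
        # Rescale the previous accumulators to the new running max.
        correction = torch.exp(cur_max - new_max)
        weights = torch.exp(scores - new_max)
        num = num * correction + weights @ v_c
        den = den * correction + weights.sum(dim=-1, keepdim=True)
        cur_max = new_max
    return num / den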