zhaohu xing
5dcc22606d
add hash and fix undo hijack bug
...
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-12-06 16:04:50 +08:00
Zac Liu
a25dfebeed
Merge pull request #3 from 920232796/master
...
fix device support for MPS
update support for SD 2.0
2022-12-06 09:17:57 +08:00
Zac Liu
3ebf977a6e
Merge branch 'AUTOMATIC1111:master' into master
2022-12-06 09:16:15 +08:00
zhaohu xing
4929503258
fix bugs
...
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-12-06 09:03:55 +08:00
AUTOMATIC
0d21624cee
move #5216 to the extension
2022-12-03 18:16:19 +03:00
AUTOMATIC
89e1df013b
Merge remote-tracking branch 'wywywywy/autoencoder-hijack'
2022-12-03 18:08:10 +03:00
AUTOMATIC1111
a2feaa95fc
Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes
...
Use devices.autocast() and fix MPS randn issues
2022-12-03 09:58:08 +03:00
SmirkingFace
da698ca92e
Fixed AttributeError where openaimodel is not found
2022-12-02 13:47:02 +01:00
zhaohu xing
52cc83d36b
fix bugs
...
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-30 14:56:12 +08:00
zhaohu xing
0831ab476c
Merge branch 'master' into master
2022-11-30 10:13:17 +08:00
wywywywy
36c3613d16
Add autoencoder to sd_hijack
2022-11-29 17:40:02 +00:00
zhaohu xing
75c4511e6b
add AltDiffusion to webui
...
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-29 10:28:41 +08:00
brkirch
98ca437edf
Refactor to check if MPS is actually being used, not just available
2022-11-28 21:18:51 -05:00
AUTOMATIC
b48b7999c8
Merge remote-tracking branch 'flamelaw/master'
2022-11-27 12:19:59 +03:00
Billy Cao
349f0461ec
Merge branch 'master' into support_any_resolution
2022-11-27 12:39:31 +08:00
AUTOMATIC
64c7b7975c
restore hypernetworks to seemingly working state
2022-11-26 16:45:57 +03:00
AUTOMATIC
ce6911158b
Add support for Stable Diffusion 2.0
2022-11-26 16:10:46 +03:00
Billy Cao
adb6cb7619
Patch UNet Forward to support resolutions that are not multiples of 64
...
Also modified the UI so it no longer steps in increments of 64
2022-11-23 18:11:24 +08:00
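For context on the resolution patch above: the usual way to make a UNet accept sizes that are not multiples of its downsampling factor is to pad the latent up to the next valid size and crop the result. A minimal sketch of that idea (the wrapper, the factor of 8, and the reflect padding are assumptions, not the commit's actual code):

```python
import torch.nn.functional as F

def padded_unet_forward(unet, x, *args, **kwargs):
    # Hypothetical wrapper: pad the latent so its spatial dims divide the
    # UNet's total downsampling factor (assumed 8), run the model, then
    # crop the output back to the original size.
    factor = 8
    h, w = x.shape[-2:]
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    x = F.pad(x, (0, pad_w, 0, pad_h), mode="reflect")
    return unet(x, *args, **kwargs)[..., :h, :w]
```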
flamelaw
bd68e35de3
Gradient accumulation, autocast fix, new latent sampling method, etc
2022-11-20 12:35:26 +09:00
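The gradient accumulation mentioned above follows the standard pattern: scale each batch loss down, accumulate gradients across several backward passes, and only then step the optimizer. A generic sketch (names and the loss function are illustrative, not the commit's training code):

```python
import torch
import torch.nn.functional as F

def train_with_accumulation(model, optimizer, data_loader, accum_steps=4):
    # Step the optimizer every `accum_steps` batches so the effective
    # batch size grows without extra VRAM.
    optimizer.zero_grad()
    for i, (x, target) in enumerate(data_loader):
        loss = F.mse_loss(model(x), target) / accum_steps  # average over steps
        loss.backward()  # gradients accumulate across backward() calls
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```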
killfrenzy96
17e4432820
cleanly undo circular hijack #4818
2022-11-18 21:22:55 +11:00
AUTOMATIC
c62d17aee3
use the new devices.has_mps() function in register_buffer for DDIM/PLMS fix for OSX
2022-11-12 10:00:22 +03:00
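The DDIM/PLMS fix referenced here works around MPS lacking float64 support: the samplers' register_buffer is hijacked so tensor buffers are downcast before being moved onto the device. A sketch of the idea, assuming a has_mps() helper like the one in modules.devices:

```python
import torch

def has_mps() -> bool:
    # Assumed helper mirroring modules.devices.has_mps().
    mps = getattr(torch.backends, "mps", None)
    return mps is not None and torch.backends.mps.is_available()

def register_buffer(self, name, attr):
    # MPS has no float64 support, so downcast tensor buffers
    # to float32 before moving them onto the device.
    if isinstance(attr, torch.Tensor) and has_mps():
        attr = attr.to(device="mps", dtype=torch.float32)
    setattr(self, name, attr)
```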
AUTOMATIC
7ba3923d5b
move DDIM/PLMS fix for OSX out of the file with inpainting code.
2022-11-11 18:20:18 +03:00
Jairo Correa
af758e97fa
Unload sd_model before loading the other
2022-11-01 04:01:49 -03:00
AUTOMATIC
2b91251637
removed aesthetic gradients as built-in
...
added support for extensions
2022-10-22 12:23:58 +03:00
AUTOMATIC
9286fe53de
make aesthetic embedding compatible with prompts longer than 75 tokens
2022-10-21 16:38:06 +03:00
AUTOMATIC
7d6b388d71
Merge branch 'ae'
2022-10-21 13:35:01 +03:00
C43H66N12O12S2
73b5dbf72a
Update sd_hijack.py
2022-10-18 11:53:04 +03:00
C43H66N12O12S2
786ed49922
use legacy attnblock
2022-10-18 11:53:04 +03:00
MalumaDev
9324cdaa31
UI fix, reorganization of the code
2022-10-16 17:53:56 +02:00
MalumaDev
e4f8b5f00d
ui fix
2022-10-16 10:28:21 +02:00
MalumaDev
523140d780
ui fix
2022-10-16 10:23:30 +02:00
MalumaDev
b694bba39a
Merge remote-tracking branch 'origin/test_resolve_conflicts' into test_resolve_conflicts
2022-10-16 00:24:05 +02:00
MalumaDev
9325c85f78
fixed dropbox update
2022-10-16 00:23:47 +02:00
MalumaDev
97ceaa23d0
Merge branch 'master' into test_resolve_conflicts
2022-10-16 00:06:36 +02:00
C43H66N12O12S2
529afbf4d7
Update sd_hijack.py
2022-10-15 20:25:27 +03:00
MalumaDev
37d7ffb415
fix token length, add embeddings generator, add new features to edit the embedding before generation using text
2022-10-15 15:59:37 +02:00
MalumaDev
bb57f30c2d
init
2022-10-14 10:56:41 +02:00
AUTOMATIC
429442f4a6
fix iterator bug for #2295
2022-10-12 13:38:03 +03:00
hentailord85ez
80f3cf2bb2
Account for mismatched lines
2022-10-12 11:38:41 +03:00
brkirch
98fd5cde72
Add check for psutil
2022-10-11 17:24:00 +03:00
brkirch
c0484f1b98
Add cross-attention optimization from InvokeAI
...
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
2022-10-11 17:24:00 +03:00
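The InvokeAI optimization above (the psutil check one commit up is related: available memory is used to size slices on CPU/MPS) computes attention over slices of the query rather than all at once, which cuts peak memory sharply. An illustrative sketch of sliced attention; shapes and the fixed slice size are assumptions:

```python
import torch

def sliced_attention(q, k, v, slice_size=1024):
    # Compute softmax(q @ k^T) @ v one chunk of query rows at a time,
    # trading a little speed for a much smaller peak memory footprint.
    # Shapes assumed: (batch*heads, tokens, dim), same dim for q/k/v.
    out = torch.empty_like(q)
    scale = q.shape[-1] ** -0.5
    for i in range(0, q.shape[1], slice_size):
        s = slice(i, i + slice_size)
        attn = torch.softmax(q[:, s] @ k.transpose(-2, -1) * scale, dim=-1)
        out[:, s] = attn @ v
    return out
```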
AUTOMATIC
873efeed49
rename hypernetwork dir to hypernetworks to prevent a clash with an old filename that people who use zip instead of git clone will have
2022-10-11 15:51:30 +03:00
AUTOMATIC
5de806184f
Merge branch 'master' into hypernetwork-training
2022-10-11 11:14:36 +03:00
hentailord85ez
5e2627a1a6
Comma backtrack padding (#2192)
...
Comma backtrack padding
2022-10-11 09:55:28 +03:00
C43H66N12O12S2
623251ce2b
allow Pascal onwards
2022-10-10 19:54:07 +03:00
hentailord85ez
d5c14365fd
Add back in output hidden states parameter
2022-10-10 18:54:48 +03:00
hentailord85ez
460bbae587
Pad beginning of textual inversion embedding
2022-10-10 18:54:48 +03:00
hentailord85ez
b340439586
Unlimited Token Works
...
Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
2022-10-10 18:54:48 +03:00
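The mechanism behind "Unlimited Token Works" is chunking: CLIP's context is 77 tokens (BOS + 75 content tokens + EOS), so a long prompt is split into 75-token chunks, each chunk is encoded separately, and the embeddings are concatenated. A simplified sketch; the encode_unlimited wrapper and the clip callable are hypothetical:

```python
import torch

CHUNK = 75  # CLIP context is 77 tokens: BOS + 75 content tokens + EOS

def encode_unlimited(clip, tokens, bos, eos):
    # `clip` is any callable mapping (1, 77) token ids to (1, 77, dim).
    chunks = [tokens[i:i + CHUNK] for i in range(0, len(tokens), CHUNK)] or [[]]
    encoded = []
    for chunk in chunks:
        # Wrap each chunk in BOS/EOS and pad to the full 77-token context.
        padded = [bos] + chunk + [eos] * (CHUNK + 1 - len(chunk))
        encoded.append(clip(torch.tensor([padded])))
    return torch.cat(encoded, dim=1)  # (1, 77 * n_chunks, dim)
```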
Fampai
1824e9ee3a
Removed unnecessary tmp variable
2022-10-09 22:31:23 +03:00
Fampai
ad3ae44108
Updated code for legibility
2022-10-09 22:31:23 +03:00
Fampai
e59c66c008
Optimized code for ignoring last CLIP layers
2022-10-09 22:31:23 +03:00
Fampai
1371d7608b
Added ability to ignore last n layers in FrozenCLIPEmbedder
2022-10-08 22:10:37 +03:00
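Ignoring the last n CLIP layers ("CLIP skip") means taking the hidden state from an earlier transformer block instead of the final output, then applying the final layer norm as usual; the "Add back in output hidden states parameter" commit is part of the same mechanism. A sketch, assuming a Hugging Face CLIPTextModel-style module that exposes hidden_states:

```python
import torch

def clip_skip_embedding(text_model, input_ids, skip_layers=1):
    # Take the hidden state `skip_layers` blocks before the last one
    # instead of the final output, then apply the final layer norm.
    outputs = text_model(input_ids=input_ids, output_hidden_states=True)
    if skip_layers > 0:
        hidden = outputs.hidden_states[-(skip_layers + 1)]
        hidden = text_model.text_model.final_layer_norm(hidden)
    else:
        hidden = outputs.last_hidden_state
    return hidden
```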
AUTOMATIC
3061cdb7b6
add --force-enable-xformers option and also add messages to console regarding cross attention optimizations
2022-10-08 19:22:15 +03:00
C43H66N12O12S2
cc0258aea7
check for Ampere without destroying the optimizations. Again.
2022-10-08 17:54:16 +03:00
C43H66N12O12S2
017b6b8744
check for Ampere
2022-10-08 17:54:16 +03:00
AUTOMATIC
cfc33f99d4
why did you do this
2022-10-08 17:29:06 +03:00
AUTOMATIC
27032c47df
restore old opt_split_attention/disable_opt_split_attention logic
2022-10-08 17:10:05 +03:00
AUTOMATIC
dc1117233e
simplify xformers options: --xformers to enable and that's it
2022-10-08 17:02:18 +03:00
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
...
xformers attention
2022-10-08 16:29:59 +03:00
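The --xformers path replaces the hand-rolled attention with xformers' memory-efficient kernel. A minimal sketch of what the hijacked forward delegates to; the surrounding projection and reshape steps are elided:

```python
import torch
import xformers.ops

def xformers_attention(q, k, v):
    # q, k, v: (batch, tokens, heads, dim_per_head); the real hijack also
    # handles the in/out projections and reshapes around this call.
    return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
```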
C43H66N12O12S2
970de9ee68
Update sd_hijack.py
2022-10-08 16:29:43 +03:00
C43H66N12O12S2
26b459a379
default to split attention if CUDA is available and xformers is not
2022-10-08 16:20:04 +03:00
MrCheeze
5f85a74b00
fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped
2022-10-08 15:48:04 +03:00
AUTOMATIC
77f4237d1c
fix bugs related to variable prompt lengths
2022-10-08 15:25:59 +03:00
AUTOMATIC
4999eb2ef9
do not let user choose his own prompt token count limit
2022-10-08 14:25:47 +03:00
AUTOMATIC
706d5944a0
let user choose his own prompt token count limit
2022-10-08 13:38:57 +03:00
C43H66N12O12S2
91d66f5520
use new attnblock for xformers path
2022-10-08 11:56:01 +03:00
C43H66N12O12S2
b70eaeb200
delete broken and unnecessary aliases
2022-10-08 04:10:35 +03:00
AUTOMATIC
12c4d5c6b5
hypernetwork training mk1
2022-10-07 23:22:22 +03:00
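Hypernetworks here are small per-layer modules applied to the cross-attention context before the key/value projections. A rough sketch of the module shape; the dimensions and the residual MLP structure are assumptions, not the trained format:

```python
import torch.nn as nn

class HypernetworkModule(nn.Module):
    # Sketch: a small residual MLP applied to the cross-attention
    # context before the k/v projections (sizes are illustrative).
    def __init__(self, dim=768, mult=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * mult),
            nn.ReLU(),
            nn.Linear(dim * mult, dim),
        )

    def forward(self, context):
        return context + self.net(context)
```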
AUTOMATIC
f7c787eb7c
make it possible to use hypernetworks without opt split attention
2022-10-07 16:39:51 +03:00
C43H66N12O12S2
5e3ff846c5
Update sd_hijack.py
2022-10-07 06:38:01 +03:00
C43H66N12O12S2
5303df2428
Update sd_hijack.py
2022-10-07 06:01:14 +03:00
C43H66N12O12S2
35d6b23162
Update sd_hijack.py
2022-10-07 05:31:53 +03:00
C43H66N12O12S2
2eb911b056
Update sd_hijack.py
2022-10-07 05:22:28 +03:00
Jairo Correa
ad0cc85d1f
Merge branch 'master' into stable
2022-10-02 18:31:19 -03:00
AUTOMATIC
88ec0cf557
fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!)
...
add option to input initialization text for embeddings
2022-10-02 19:40:51 +03:00
AUTOMATIC
820f1dc96b
initial support for training textual inversion
2022-10-02 15:03:39 +03:00
Jairo Correa
ad1fbbae93
Merge branch 'master' into fix-vram
2022-09-30 18:58:51 -03:00
AUTOMATIC
98cc6c6e74
add embeddings dir
2022-09-30 14:16:26 +03:00
AUTOMATIC
c715ef04d1
fix for incorrect model weight loading for #814
2022-09-29 15:40:28 +03:00
AUTOMATIC
c1c27dad3b
new implementation for attention/emphasis
2022-09-29 11:31:48 +03:00
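The attention/emphasis syntax scales token embeddings by parsed weights; to keep the overall magnitude stable, the batch is rescaled to its original mean afterwards. A sketch of that weighting step (function name and shapes are illustrative):

```python
import torch

def apply_emphasis(z: torch.Tensor, multipliers: torch.Tensor) -> torch.Tensor:
    # z: (batch, tokens, dim) token embeddings;
    # multipliers: (batch, tokens) weights parsed from (word:1.1) syntax.
    original_mean = z.mean()
    z = z * multipliers.unsqueeze(-1)      # emphasize / de-emphasize tokens
    return z * (original_mean / z.mean())  # restore the original mean
```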
Jairo Correa
c2d5b29040
Move silu to sd_hijack
2022-09-29 01:16:25 -03:00
Liam
e5707b66d6
switched the token counter to use hidden buttons instead of an API call
2022-09-27 19:29:53 -04:00
Liam
5034f7d759
added token counter next to txt2img and img2img prompts
2022-09-27 15:56:18 -04:00
AUTOMATIC
073f6eac22
potential fix for embeddings not loading on AMD cards
2022-09-25 15:04:39 +03:00
guaneec
615b2fc9ce
Fix token max length
2022-09-25 09:30:02 +03:00
AUTOMATIC
254da5d127
--opt-split-attention now on by default for torch.cuda, off for others (CPU and MPS, where the option reportedly does not work)
2022-09-21 09:49:02 +03:00
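The default described here amounts to a device check: split attention is enabled automatically on CUDA and left off elsewhere unless the user forces it. A sketch using the repo's --opt-split-attention/--disable-opt-split-attention flags (the helper itself is hypothetical):

```python
import torch

def use_split_attention(cmd_opts) -> bool:
    # Explicit flags win; otherwise default on for CUDA, off for CPU/MPS.
    if cmd_opts.opt_split_attention:
        return True
    if cmd_opts.disable_opt_split_attention:
        return False
    return torch.cuda.is_available()
```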
AUTOMATIC
1578859305
fix for too large embeddings causing an error
2022-09-21 00:20:11 +03:00
AUTOMATIC
90401d96a6
fix an off-by-one error with embeddings at the start of the sentence
2022-09-20 12:12:31 +03:00
AUTOMATIC
ab38392119
add the part that was missing for word textual inversion checksums
2022-09-20 09:53:29 +03:00
AUTOMATIC
cae5c5fa8d
Making opt split attention the default. Are you upset about this? Sorry.
2022-09-18 20:55:46 +03:00
C43H66N12O12S2
18d6fe4346
.....
2022-09-18 01:21:50 +03:00
C43H66N12O12S2
d63dbb3acc
Move scale multiplication to the front
2022-09-18 01:05:31 +03:00
C43H66N12O12S2
72d7f8c761
fix typo
2022-09-15 14:14:27 +03:00
C43H66N12O12S2
7ec6282ec2
pass dtype to torch.zeros as well
2022-09-15 14:14:27 +03:00
C43H66N12O12S2
3b1b1444d4
Complete cross attention update
2022-09-13 14:29:56 +03:00
C43H66N12O12S2
aaea8b4494
Update cross attention to the newest version
2022-09-12 16:48:21 +03:00
AUTOMATIC
06fadd2dc5
added --opt-split-attention-v1
2022-09-11 00:29:10 +03:00
AUTOMATIC
c92f2ff196
Update to cross attention from https://github.com/Doggettx/stable-diffusion #219
2022-09-10 12:06:19 +03:00
AUTOMATIC
62ce77e245
support for sd-concepts as alternatives for textual inversion #151
2022-09-08 15:36:50 +03:00
xeonvs
ba1124b326
directly convert list to tensor
2022-09-07 20:40:32 +02:00