Commit Graph

741 Commits

Author SHA1 Message Date
DepFA
db71290d26
remove old caption method 2022-10-11 19:55:54 +01:00
DepFA
61788c0538
shift embedding logic out of textual_inversion 2022-10-11 19:50:50 +01:00
DepFA
e5fbf5c755
remove embedding related image functions from images 2022-10-11 19:46:33 +01:00
DepFA
c080f52cea
move embedding logic to separate file 2022-10-11 19:37:58 +01:00
DepFA
1eaad95533
Merge branch 'master' into embed-embeddings-in-images 2022-10-11 15:15:09 +01:00
AUTOMATIC
66b7d7584f become even stricter with pickles
no pickle shall pass
thank you again, RyotaK
2022-10-11 17:03:16 +03:00
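The "become even stricter with pickles" commit refers to hardening checkpoint loading against malicious pickle payloads. A minimal sketch of the general restricted-unpickler pattern (the class name and allow-list below are illustrative, not the webui's actual code, which covers the globals that torch checkpoints legitimately need):

```python
import io
import pickle

# Illustrative allow-list; the real check admits more entries.
ALLOWED_GLOBALS = {("collections", "OrderedDict")}

class StrictUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on the allow-list, so a crafted
    pickle cannot smuggle in os.system or similar callables."""
    def find_class(self, module, name):
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(
                f"global '{module}.{name}' is forbidden")
        return super().find_class(module, name)

def strict_loads(data: bytes):
    """Load a pickle stream through the restricted unpickler."""
    return StrictUnpickler(io.BytesIO(data)).load()
```

Overriding `find_class` is the standard hook for this: every `GLOBAL` opcode in the stream passes through it, so nothing outside the allow-list can be instantiated.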
AUTOMATIC
b0583be088 more renames 2022-10-11 15:54:34 +03:00
AUTOMATIC
873efeed49 rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have 2022-10-11 15:51:30 +03:00
JamnedZ
a004d1a855 Added new line at the end of ngrok.py 2022-10-11 15:38:53 +03:00
JamnedZ
5992564448 Cleaned ngrok integration 2022-10-11 15:38:53 +03:00
Ben
861297cefe add a space holder 2022-10-11 15:37:04 +03:00
Ben
87b77cad5f Layout fix 2022-10-11 15:37:04 +03:00
Martin Cairns
eacc03b167 Fix typo in comments 2022-10-11 15:36:29 +03:00
Martin Cairns
1eae307607 Remove debug code for checking that first sigma value is same after code cleanup 2022-10-11 15:36:29 +03:00
Martin Cairns
92d7a13885 Handle different parameters for DPM fast & adaptive 2022-10-11 15:36:29 +03:00
AUTOMATIC
530103b586 fixes related to merge 2022-10-11 14:53:02 +03:00
AUTOMATIC
5de806184f Merge branch 'master' into hypernetwork-training 2022-10-11 11:14:36 +03:00
AUTOMATIC
948533950c replace duplicate code with a function 2022-10-11 11:10:17 +03:00
hentailord85ez
5e2627a1a6
Comma backtrack padding (#2192)
2022-10-11 09:55:28 +03:00
Kenneth
8617396c6d Added slider for deepbooru score threshold in settings 2022-10-11 09:43:16 +03:00
Jairo Correa
8b7d3f1bef Make the ctrl+enter shortcut use the generate button on the current tab 2022-10-11 09:32:03 +03:00
DepFA
7aa8fcac1e
use simple lcg in xor 2022-10-11 04:17:36 +01:00
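The "use simple lcg in xor" commit belongs to the embed-embeddings-in-images branch, where embedded byte data is XOR-obfuscated with a keystream from a linear congruential generator. A sketch of that idea, assuming the common Numerical Recipes LCG constants (the branch's actual constants and function names may differ):

```python
def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    """Yield one keystream byte per step from a linear congruential
    generator. The a/c/m constants here are an assumption."""
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state & 0xFF  # low byte of the LCG state

def xor_bytes(data: bytes, seed: int) -> bytes:
    """XOR data against the LCG keystream; applying it twice with the
    same seed recovers the original bytes."""
    stream = lcg(seed)
    return bytes(b ^ next(stream) for b in data)
```

XOR with a seeded deterministic stream is cheap and self-inverse, which suits hiding a payload in pixel data without a real cipher.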
DepFA
e0fbe6d27e
colour depth conversion fix 2022-10-10 23:26:24 +01:00
DepFA
767202a4c3
add dependency 2022-10-10 23:20:52 +01:00
DepFA
315d5a8ed9
update data display style 2022-10-10 23:14:44 +01:00
AUTOMATIC
f98338faa8 add an option to not add watermark to created images 2022-10-10 23:15:48 +03:00
AUTOMATIC
727e4d1086 no to different messages plus fix using != to compare to None 2022-10-10 20:46:55 +03:00
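The fix above replaces a `!= None` comparison with an identity check. A small self-contained example (the class here is hypothetical, not webui code) of why `is not None` is the robust form in Python:

```python
class AlwaysEqual:
    """A pathological __eq__ makes equality comparison with None lie."""
    def __eq__(self, other):
        return True  # Python 3 derives != from this by negation

obj = AlwaysEqual()
# Equality-based check is fooled: obj != None evaluates to False.
fooled = (obj != None)
# Identity-based check is not: obj is a real object, not None.
correct = (obj is not None)
```

Because `!=` dispatches to `__eq__`/`__ne__`, any object can override the result; `is not None` compares object identity and cannot be subverted.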
AUTOMATIC1111
b3d3b335cf
Merge pull request #2131 from ssysm/upstream-master
Add VAE Path Arguments
2022-10-10 20:45:14 +03:00
AUTOMATIC
39919c40dd add eta noise seed delta option 2022-10-10 20:32:44 +03:00
ssysm
af62ad4d25 change vae loading method 2022-10-10 13:25:28 -04:00
C43H66N12O12S2
ed769977f0 add swinir v2 support 2022-10-10 19:54:57 +03:00
C43H66N12O12S2
ece27fe989 Add files via upload 2022-10-10 19:54:57 +03:00
C43H66N12O12S2
3e7a981194 remove functorch 2022-10-10 19:54:07 +03:00
C43H66N12O12S2
623251ce2b allow pascal onwards 2022-10-10 19:54:07 +03:00
Vladimir Repin
9d33baba58 Always show previous mask and fix extras_send dest 2022-10-10 19:39:24 +03:00
hentailord85ez
d5c14365fd Add back in output hidden states parameter 2022-10-10 18:54:48 +03:00
hentailord85ez
460bbae587 Pad beginning of textual inversion embedding 2022-10-10 18:54:48 +03:00
hentailord85ez
b340439586 Unlimited Token Works
Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
2022-10-10 18:54:48 +03:00
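The "Unlimited Token Works" commit lifts CLIP's 77-token prompt limit by processing the prompt in chunks. A sketch of the general chunking step, assuming the standard CLIP begin/end token ids (49406/49407) and 75 content tokens per chunk; the webui's real implementation handles weights and embeddings on top of this:

```python
def chunk_tokens(token_ids, chunk_size=75, bos=49406, eos=49407):
    """Split a long token stream into CLIP-sized chunks, each wrapped
    with begin/end markers and padded with EOS to length chunk_size + 2.
    Each chunk is then encoded separately and the embeddings are
    concatenated."""
    chunks = []
    for i in range(0, len(token_ids), chunk_size):
        chunk = list(token_ids[i:i + chunk_size])
        padded = [bos] + chunk + [eos] * (chunk_size + 1 - len(chunk))
        chunks.append(padded)
    return chunks
```

Every chunk comes out at exactly 77 tokens, so each one fits the text encoder's fixed input length.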
RW21
f347ddfd80 Remove max_batch_count from ui.py 2022-10-10 18:53:40 +03:00
DepFA
df6d0d9286
convert back to rgb as some hosts add alpha 2022-10-10 15:43:09 +01:00
DepFA
707a431100
add pixel data footer 2022-10-10 15:34:49 +01:00
DepFA
ce2d7f7eac
Merge branch 'master' into embed-embeddings-in-images 2022-10-10 15:13:48 +01:00
alg-wiki
7a20f914ed Custom Width and Height 2022-10-10 17:05:12 +03:00
alg-wiki
6ad3a53e36 Fixed progress bar output for epoch 2022-10-10 17:05:12 +03:00
alg-wiki
ea00c1624b Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:05:12 +03:00
AUTOMATIC
8f1efdc130 --no-half-vae pt2 2022-10-10 17:03:45 +03:00
AUTOMATIC
7349088d32 --no-half-vae 2022-10-10 16:16:29 +03:00
brkirch
8acc901ba3 Newer versions of PyTorch use TypedStorage instead
PyTorch 1.13 and later rename _TypedStorage to TypedStorage, so check for TypedStorage and fall back to _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
2022-10-10 08:04:52 +03:00
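The commit body above describes a name-compatibility shim. A sketch of that lookup using a stand-in namespace (the real code resolves the class on `torch.storage`; the helper name here is hypothetical):

```python
import types

def resolve_typed_storage(storage_module):
    """Prefer the public TypedStorage (PyTorch 1.13+ / nightlies),
    fall back to the older private _TypedStorage."""
    cls = getattr(storage_module, "TypedStorage", None)
    return cls if cls is not None else storage_module._TypedStorage

# Stand-ins for old and new torch.storage modules:
old_torch_storage = types.SimpleNamespace(_TypedStorage=type("Old", (), {}))
new_torch_storage = types.SimpleNamespace(TypedStorage=type("New", (), {}))
```

`getattr` with a default keeps one code path working across both PyTorch versions instead of branching on version strings.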
ssysm
6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 2022-10-09 23:20:39 -04:00
ssysm
cc92dc1f8d add vae path args 2022-10-09 23:17:29 -04:00