Commit Graph

760 Commits

Author SHA1 Message Date
Fampai
2536ecbb17 Refactored learning rate code 2022-10-10 17:10:29 -04:00
AUTOMATIC
f98338faa8 add an option to not add watermark to created images 2022-10-10 23:15:48 +03:00
alg-wiki
f0ab972f85 Merge branch 'master' into textual__inversion 2022-10-11 03:35:28 +08:00
alg-wiki
bc3e183b73 Textual Inversion: Preprocess and Training will only pick up image files 2022-10-11 04:30:13 +09:00
Justin Maier
1d64976dbc Simplify crop logic 2022-10-10 12:04:21 -06:00
AUTOMATIC
727e4d1086 no to different messages plus fix using != to compare to None 2022-10-10 20:46:55 +03:00
AUTOMATIC1111
b3d3b335cf Merge pull request #2131 from ssysm/upstream-master
Add VAE Path Arguments
2022-10-10 20:45:14 +03:00
AUTOMATIC
39919c40dd add eta noise seed delta option 2022-10-10 20:32:44 +03:00
ssysm
af62ad4d25 change vae loading method 2022-10-10 13:25:28 -04:00
C43H66N12O12S2
ed769977f0 add swinir v2 support 2022-10-10 19:54:57 +03:00
C43H66N12O12S2
ece27fe989 Add files via upload 2022-10-10 19:54:57 +03:00
C43H66N12O12S2
3e7a981194 remove functorch 2022-10-10 19:54:07 +03:00
C43H66N12O12S2
623251ce2b allow pascal onwards 2022-10-10 19:54:07 +03:00
Vladimir Repin
9d33baba58 Always show previous mask and fix extras_send dest 2022-10-10 19:39:24 +03:00
hentailord85ez
d5c14365fd Add back in output hidden states parameter 2022-10-10 18:54:48 +03:00
hentailord85ez
460bbae587 Pad beginning of textual inversion embedding 2022-10-10 18:54:48 +03:00
hentailord85ez
b340439586 Unlimited Token Works
Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
2022-10-10 18:54:48 +03:00
RW21
f347ddfd80 Remove max_batch_count from ui.py 2022-10-10 18:53:40 +03:00
alg-wiki
7a20f914ed Custom Width and Height 2022-10-10 17:05:12 +03:00
alg-wiki
6ad3a53e36 Fixed progress bar output for epoch 2022-10-10 17:05:12 +03:00
alg-wiki
ea00c1624b Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:05:12 +03:00
AUTOMATIC
8f1efdc130 --no-half-vae pt2 2022-10-10 17:03:45 +03:00
alg-wiki
04c745ea4f Custom Width and Height 2022-10-10 22:35:35 +09:00
AUTOMATIC
7349088d32 --no-half-vae 2022-10-10 16:16:29 +03:00
JC_Array
2f94331df2 removed change in last commit; simplified to adding the visible argument to process_caption_deepbooru and setting it to False if the deepdanbooru argument is not set 2022-10-10 03:34:00 -05:00
alg-wiki
4ee7519fc2 Fixed progress bar output for epoch 2022-10-10 17:31:33 +09:00
JC_Array
8ec069e64d removed duplicate run_preprocess.click by creating run_preprocess_inputs list and appending deepbooru variable to input list if in scope 2022-10-10 03:23:24 -05:00
alg-wiki
3110f895b2 Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:07:46 +09:00
brkirch
8acc901ba3 Newer versions of PyTorch use TypedStorage instead
PyTorch 1.13 and later rename _TypedStorage to TypedStorage, so check for TypedStorage and use _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
2022-10-10 08:04:52 +03:00
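
A minimal sketch of the compatibility fallback that commit describes, placed wherever the storage class is referenced (an illustration of the pattern, not the repository's exact code):

```python
import torch.storage

# PyTorch 1.13+ exposes TypedStorage; older releases only have the private
# _TypedStorage, so pick whichever name the installed version provides.
if hasattr(torch.storage, "TypedStorage"):
    TypedStorage = torch.storage.TypedStorage
else:
    TypedStorage = torch.storage._TypedStorage
```
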
JC_Array
1f92336be7 refactored the deepbooru module to improve speed on running multiple interrogations in a row. Added the option to generate deepbooru tags for textual inversion preprocessing. 2022-10-09 23:58:18 -05:00
ssysm
6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 2022-10-09 23:20:39 -04:00
ssysm
cc92dc1f8d add vae path args 2022-10-09 23:17:29 -04:00
Justin Maier
6435691bb1 Add "Scale to" option to Extras 2022-10-09 19:26:52 -06:00
AUTOMATIC
a65476718f add DoubleStorage to list of allowed classes for pickle 2022-10-09 23:38:49 +03:00
AUTOMATIC
8d340cfb88 do not add clip skip to parameters if it's 1 or 0 2022-10-09 22:31:35 +03:00
Fampai
1824e9ee3a Removed unnecessary tmp variable 2022-10-09 22:31:23 +03:00
Fampai
ad3ae44108 Updated code for legibility 2022-10-09 22:31:23 +03:00
Fampai
ec2bd9be75 Fix issues with CLIP ignore option name change 2022-10-09 22:31:23 +03:00
Fampai
a14f7bf113 Corrected CLIP Layer Ignore description and updated its range to the max possible 2022-10-09 22:31:23 +03:00
Fampai
e59c66c008 Optimized code for Ignoring last CLIP layers 2022-10-09 22:31:23 +03:00
AUTOMATIC
6c383d2e82 show model selection setting on top of page 2022-10-09 22:24:07 +03:00
Artem Zagidulin
9ecea0a8d6 fix missing png info when Extras Batch Process 2022-10-09 18:35:25 +03:00
AUTOMATIC
875ddfeecf added guard for torch.load to prevent loading pickles with unknown content 2022-10-09 17:58:43 +03:00
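
The guard named in that commit is, in general form, an allowlist-based unpickler; a rough sketch follows, with an illustrative allowed set rather than the repository's actual list:

```python
import pickle

# Only these (module, name) pairs may be constructed while unpickling;
# the entries below are examples, not the project's real allowlist.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
    ("torch._utils", "_rebuild_tensor_v2"),
    ("torch", "FloatStorage"),
    ("torch", "DoubleStorage"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")
```

Routing torch.load's unpickling through something like this rejects any checkpoint that tries to construct classes outside the known-safe set, which is why later commits (such as the DoubleStorage one above) extend the allowlist.
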
AUTOMATIC
9d1138e294 fix typo in filename for ESRGAN arch 2022-10-09 15:08:27 +03:00
AUTOMATIC
e6e8cabe0c change up #2056 to make it work how I want it to, plus make XY plot write correct values to images 2022-10-09 14:57:48 +03:00
William Moorehouse
594cbfd8fb Sanitize infotext output (for now) 2022-10-09 14:49:15 +03:00
William Moorehouse
006791c13d Fix grabbing the model name for infotext 2022-10-09 14:49:15 +03:00
William Moorehouse
d6d10a37bf Added extended model details to infotext 2022-10-09 14:49:15 +03:00
AUTOMATIC
542a3d3a4a fix broken hypernetworks in XY plot 2022-10-09 14:33:22 +03:00
AUTOMATIC
77a719648d fix logic error in #1832 2022-10-09 13:48:04 +03:00
AUTOMATIC
f4578b343d fix model switching not working properly if there is a different yaml config 2022-10-09 13:23:30 +03:00
AUTOMATIC
bd833409ac additional changes for saving pnginfo for #1803 2022-10-09 13:10:15 +03:00
Milly
0609ce06c0 Removed duplicate definition model_path 2022-10-09 12:46:07 +03:00
AUTOMATIC
6f6798ddab prevent a possible code execution error (thanks, RyotaK) 2022-10-09 12:33:37 +03:00
AUTOMATIC
0241d811d2 Revert "Fix for Prompts_from_file showing extra textbox."
This reverts commit e2930f9821.
2022-10-09 12:04:44 +03:00
AUTOMATIC
ab4fe4f44c hide filenames for save button by default 2022-10-09 11:59:41 +03:00
Tony Beeman
cbf6dad02d Handle case where on_show returns the wrong number of arguments 2022-10-09 11:16:38 +03:00
Tony Beeman
86cb16886f Pull Request Code Review Fixes 2022-10-09 11:16:38 +03:00
Tony Beeman
e2930f9821 Fix for Prompts_from_file showing extra textbox. 2022-10-09 11:16:38 +03:00
Nicolas Noullet
1ffeb42d38 Fix typo 2022-10-09 11:10:13 +03:00
frostydad
ef93acdc73 remove line break 2022-10-09 11:09:17 +03:00
frostydad
03e570886f Fix incorrect sampler name in output 2022-10-09 11:09:17 +03:00
Fampai
122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 2022-10-09 11:08:11 +03:00
AUTOMATIC1111
e00b4df7c6 Merge pull request #1752 from Greendayle/dev/deepdanbooru
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
aoirusann
14192c5b20 Support Download for txt files. 2022-10-09 10:49:11 +03:00
aoirusann
5ab7e88d9b Add Download & Download as zip 2022-10-09 10:49:11 +03:00
AUTOMATIC
4e569fd888 fixed incorrect message about loading config; thanks anon! 2022-10-09 10:31:47 +03:00
AUTOMATIC
c77c89cc83 make main model loading and model merger use the same code 2022-10-09 10:23:31 +03:00
AUTOMATIC
050a6a798c support loading .yaml config with same name as model
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
Aidan Holland
432782163a chore: Fix typos 2022-10-08 22:42:30 +03:00
Edouard Leurent
610a7f4e14 Break after finding the local directory of stable diffusion
Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.

Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
2022-10-08 22:35:04 +03:00
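
A sketch of the search loop that fix targets, with assumed candidate paths and an assumed marker directory; the essential change is the break, which stops '.' or '..' from overriding an sd_path that was already found:

```python
import os

# Illustrative candidates and marker; the real list in the launcher differs.
possible_sd_paths = [os.path.join("repositories", "stable-diffusion"), ".", ".."]

sd_path = None
for candidate in possible_sd_paths:
    if os.path.isdir(os.path.join(candidate, "ldm")):
        sd_path = os.path.abspath(candidate)
        break  # without this, a later '.' or '..' match could overwrite sd_path
```
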
AUTOMATIC
3b2141c5fb add 'Ignore last layers of CLIP model' option as a parameter to the infotext 2022-10-08 22:21:15 +03:00
AUTOMATIC
e6e42f98df make --force-enable-xformers work without needing --xformers 2022-10-08 22:12:23 +03:00
Fampai
1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
DepFA
b458fa48fe Update ui.py 2022-10-08 20:38:35 +03:00
DepFA
15c4278f1a TI preprocess wording
I had to check the code to work out what splitting was 🤷🏿
2022-10-08 20:38:35 +03:00
Greendayle
0ec80f0125 Merge branch 'master' into dev/deepdanbooru 2022-10-08 18:28:22 +02:00
AUTOMATIC
3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
AUTOMATIC
f9c5da1592 add fallback for xformers_attnblock_forward 2022-10-08 19:05:19 +03:00
Greendayle
01f8cb4447 made deepdanbooru optional, added to readme, automatic download of deepbooru model 2022-10-08 18:02:56 +02:00
Artem Zagidulin
a5550f0213 alternate prompt 2022-10-08 18:12:19 +03:00
C43H66N12O12S2
cc0258aea7 check for ampere without destroying the optimizations. again. 2022-10-08 17:54:16 +03:00
C43H66N12O12S2
017b6b8744 check for ampere 2022-10-08 17:54:16 +03:00
Greendayle
5329d0aba0 Merge branch 'master' into dev/deepdanbooru 2022-10-08 16:30:28 +02:00
AUTOMATIC
cfc33f99d4 why did you do this 2022-10-08 17:29:06 +03:00
Greendayle
2e8ba0fa47 fix conflicts 2022-10-08 16:27:48 +02:00
Milly
4f33289d0f Fixed typo 2022-10-08 17:15:30 +03:00
AUTOMATIC
27032c47df restore old opt_split_attention/disable_opt_split_attention logic 2022-10-08 17:10:05 +03:00
AUTOMATIC
dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC
7ff1170a2e emergency fix for xformers (continue + shared) 2022-10-08 16:33:39 +03:00
AUTOMATIC1111
48feae37ff Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2
970de9ee68 Update sd_hijack.py 2022-10-08 16:29:43 +03:00
C43H66N12O12S2
69d0053583 update sd_hijack_opt to respect new env variables 2022-10-08 16:21:40 +03:00
C43H66N12O12S2
ddfa9a9786 add xformers_available shared variable 2022-10-08 16:20:41 +03:00
C43H66N12O12S2
26b459a379 default to split attention if cuda is available and xformers is not 2022-10-08 16:20:04 +03:00
MrCheeze
5f85a74b00 fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped 2022-10-08 15:48:04 +03:00
ddPn08
772db721a5 fix glob path in hypernetwork.py 2022-10-08 15:46:54 +03:00
AUTOMATIC
7001bffe02 fix AND broken for long prompts 2022-10-08 15:43:25 +03:00
AUTOMATIC
77f4237d1c fix bugs related to variable prompt lengths 2022-10-08 15:25:59 +03:00
AUTOMATIC
4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00