Commit Graph

114 Commits

Author  SHA1  Message  Date
Kuma  fda04e620d  typo in TI  2023-01-05 18:44:19 +01:00
AUTOMATIC1111  eeb1de4388  Merge branch 'master' into gradient-clipping  2023-01-04 19:56:35 +03:00
AUTOMATIC  525cea9245  use shared function from processing for creating dummy mask when training inpainting model  2023-01-04 17:58:07 +03:00
AUTOMATIC  184e670126  fix the merge  2023-01-04 17:45:01 +03:00
AUTOMATIC1111  da5c1e8a73  Merge branch 'master' into inpaint_textual_inversion  2023-01-04 17:40:19 +03:00
AUTOMATIC1111  7bbd984dda  Merge pull request #6253 from Shondoit/ti-optim  2023-01-04 14:09:13 +03:00
    Save Optimizer next to TI embedding
Vladimir Mandic  192ddc04d6  add job info to modules  2023-01-03 10:34:51 -05:00
Shondoit  bddebe09ed  Save Optimizer next to TI embedding  2023-01-03 13:30:24 +01:00
    Also add check to load only .PT and .BIN files as embeddings. (since we add .optim files in the same directory)
Philpax  c65909ad16  feat(api): return more data for embeddings  2023-01-02 12:21:48 +11:00
AUTOMATIC  311354c0bb  fix the issue with training on SD2.0  2023-01-02 00:38:09 +03:00
AUTOMATIC  bdbe09827b  changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149  2022-12-31 22:49:09 +03:00
Vladimir Mandic  f55ac33d44  validate textual inversion embeddings  2022-12-31 11:27:02 -05:00
Yuval Aboulafia  3bf5591efe  fix F541 f-string without any placeholders  2022-12-24 21:35:29 +02:00
Jim Hays  c0355caefe  Fix various typos  2022-12-14 21:01:32 -05:00
AUTOMATIC1111  c9a2cfdf2a  Merge branch 'master' into racecond_fix  2022-12-03 10:19:51 +03:00
brkirch  4d5f1691dd  Use devices.autocast instead of torch.autocast  2022-11-30 10:33:42 -05:00
AUTOMATIC  b48b7999c8  Merge remote-tracking branch 'flamelaw/master'  2022-11-27 12:19:59 +03:00
flamelaw  755df94b2a  set TI AdamW default weight decay to 0  2022-11-27 00:35:44 +09:00
AUTOMATIC  ce6911158b  Add support Stable Diffusion 2.0  2022-11-26 16:10:46 +03:00
flamelaw  89d8ecff09  small fixes  2022-11-23 02:49:01 +09:00
flamelaw  5b57f61ba4  fix pin_memory with different latent sampling method  2022-11-21 10:15:46 +09:00
flamelaw  bd68e35de3  Gradient accumulation, autocast fix, new latent sampling method, etc  2022-11-20 12:35:26 +09:00
AUTOMATIC  cdc8020d13  change StableDiffusionProcessing to internally use sampler name instead of sampler index  2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur  bb832d7725  Simplify grad clip  2022-11-05 11:48:38 +07:00
Fampai  39541d7725  Fixes race condition in training when VAE is unloaded  2022-11-04 04:50:22 -04:00
    set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
Muhammad Rizqi Nur  237e79c77d  Merge branch 'master' into gradient-clipping  2022-11-02 20:48:58 +07:00
Nerogar  cffc240a73  fixed textual inversion training with inpainting models  2022-11-01 21:02:07 +01:00
Fampai  890e68aaf7  Fixed minor bug  2022-10-31 10:07:12 -04:00
    when unloading vae during TI training, generating images after training will error out
Fampai  3b0127e698  Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations  2022-10-31 09:54:51 -04:00
Fampai  006756f9cd  Added TI training optimizations  2022-10-31 07:26:08 -04:00
    option to use xattention optimizations when training
    option to unload vae when training
Muhammad Rizqi Nur  cd4d59c0de  Merge master  2022-10-30 18:57:51 +07:00
Muhammad Rizqi Nur  3d58510f21  Fix dataset still being loaded even when training will be skipped  2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur  a07f054c86  Add missing info on hypernetwork/embedding model log  2022-10-30 00:49:29 +07:00
    Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513
    Also group the saving into one
Muhammad Rizqi Nur  ab05a74ead  Revert "Add cleanup after training"  2022-10-30 00:32:02 +07:00
    This reverts commit 3ce2bfdf95.
Muhammad Rizqi Nur  3ce2bfdf95  Add cleanup after training  2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur  ab27c111d0  Add input validations before loading dataset for training  2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur  05e2e40537  Merge branch 'master' into gradient-clipping  2022-10-29 15:04:21 +07:00
Muhammad Rizqi Nur  9ceef81f77  Fix log off by 1  2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur  16451ca573  Learning rate sched syntax support for grad clipping  2022-10-28 17:16:23 +07:00
Muhammad Rizqi Nur  1618df41ba  Gradient clipping for textual embedding  2022-10-28 10:31:27 +07:00
DepFA  737eb28fac  typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir  2022-10-26 17:38:08 +03:00
timntorres  f4e1464217  Implement PR #3625 but for embeddings.  2022-10-26 10:14:35 +03:00
timntorres  4875a6c217  Implement PR #3309 but for embeddings.  2022-10-26 10:14:35 +03:00
timntorres  c2dc9bfa89  Implement PR #3189 but for embeddings.  2022-10-26 10:14:35 +03:00
AUTOMATIC  cbb857b675  enable creating embedding with --medvram  2022-10-26 09:44:02 +03:00
AUTOMATIC  7d6b388d71  Merge branch 'ae'  2022-10-21 13:35:01 +03:00
DepFA  0087079c2d  allow overwrite old embedding  2022-10-20 00:10:59 +01:00
MalumaDev  1997ccff13  Merge branch 'master' into test_resolve_conflicts  2022-10-18 08:55:08 +02:00
DepFA  62edfae257  print list of embeddings on reload  2022-10-17 08:42:17 +03:00
MalumaDev  ae0fdad64a  Merge branch 'master' into test_resolve_conflicts  2022-10-16 17:55:58 +02:00