Philpax | c65909ad16 | feat(api): return more data for embeddings | 2023-01-02 12:21:48 +11:00
AUTOMATIC | 311354c0bb | fix the issue with training on SD2.0 | 2023-01-02 00:38:09 +03:00
AUTOMATIC | bdbe09827b | changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 | 2022-12-31 22:49:09 +03:00
Vladimir Mandic | f55ac33d44 | validate textual inversion embeddings | 2022-12-31 11:27:02 -05:00
Yuval Aboulafia | 3bf5591efe | fix F541 f-string without any placeholders | 2022-12-24 21:35:29 +02:00
Jim Hays | c0355caefe | Fix various typos | 2022-12-14 21:01:32 -05:00
AUTOMATIC1111 | c9a2cfdf2a | Merge branch 'master' into racecond_fix | 2022-12-03 10:19:51 +03:00
brkirch | 4d5f1691dd | Use devices.autocast instead of torch.autocast | 2022-11-30 10:33:42 -05:00
AUTOMATIC | b48b7999c8 | Merge remote-tracking branch 'flamelaw/master' | 2022-11-27 12:19:59 +03:00
flamelaw | 755df94b2a | set TI AdamW default weight decay to 0 | 2022-11-27 00:35:44 +09:00
AUTOMATIC | ce6911158b | Add support Stable Diffusion 2.0 | 2022-11-26 16:10:46 +03:00
flamelaw | 89d8ecff09 | small fixes | 2022-11-23 02:49:01 +09:00
flamelaw | 5b57f61ba4 | fix pin_memory with different latent sampling method | 2022-11-21 10:15:46 +09:00
flamelaw | bd68e35de3 | Gradient accumulation, autocast fix, new latent sampling method, etc | 2022-11-20 12:35:26 +09:00
AUTOMATIC | cdc8020d13 | change StableDiffusionProcessing to internally use sampler name instead of sampler index | 2022-11-19 12:01:51 +03:00
Fampai | 39541d7725 | Fixes race condition in training when VAE is unloaded (set_current_image can attempt to use the VAE when it is unloaded to the CPU while training) | 2022-11-04 04:50:22 -04:00
Fampai | 890e68aaf7 | Fixed minor bug (when unloading vae during TI training, generating images after training will error out) | 2022-10-31 10:07:12 -04:00
Fampai | 3b0127e698 | Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations | 2022-10-31 09:54:51 -04:00
Fampai | 006756f9cd | Added TI training optimizations (option to use xattention optimizations when training; option to unload vae when training) | 2022-10-31 07:26:08 -04:00
Muhammad Rizqi Nur | 3d58510f21 | Fix dataset still being loaded even when training will be skipped | 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur | a07f054c86 | Add missing info on hypernetwork/embedding model log (mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513; also group the saving into one) | 2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur | ab05a74ead | Revert "Add cleanup after training" (this reverts commit 3ce2bfdf95) | 2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur | 3ce2bfdf95 | Add cleanup after training | 2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur | ab27c111d0 | Add input validations before loading dataset for training | 2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur | 9ceef81f77 | Fix log off by 1 | 2022-10-28 20:48:08 +07:00
DepFA | 737eb28fac | typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir | 2022-10-26 17:38:08 +03:00
timntorres | f4e1464217 | Implement PR #3625 but for embeddings. | 2022-10-26 10:14:35 +03:00
timntorres | 4875a6c217 | Implement PR #3309 but for embeddings. | 2022-10-26 10:14:35 +03:00
timntorres | c2dc9bfa89 | Implement PR #3189 but for embeddings. | 2022-10-26 10:14:35 +03:00
AUTOMATIC | cbb857b675 | enable creating embedding with --medvram | 2022-10-26 09:44:02 +03:00
AUTOMATIC | 7d6b388d71 | Merge branch 'ae' | 2022-10-21 13:35:01 +03:00
DepFA | 0087079c2d | allow overwrite old embedding | 2022-10-20 00:10:59 +01:00
MalumaDev | 1997ccff13 | Merge branch 'master' into test_resolve_conflicts | 2022-10-18 08:55:08 +02:00
DepFA | 62edfae257 | print list of embeddings on reload | 2022-10-17 08:42:17 +03:00
MalumaDev | ae0fdad64a | Merge branch 'master' into test_resolve_conflicts | 2022-10-16 17:55:58 +02:00
AUTOMATIC | 0c5fa9a681 | do not reload embeddings from disk when doing textual inversion | 2022-10-16 09:09:04 +03:00
MalumaDev | 97ceaa23d0 | Merge branch 'master' into test_resolve_conflicts | 2022-10-16 00:06:36 +02:00
DepFA | b6e3b96dab | Change vector size footer label | 2022-10-15 17:23:39 +03:00
DepFA | ddf6899df0 | generalise to popular lossless formats | 2022-10-15 17:23:39 +03:00
DepFA | 9a1dcd78ed | add webp for embed load | 2022-10-15 17:23:39 +03:00
DepFA | 939f16529a | only save 1 image per embedding | 2022-10-15 17:23:39 +03:00
DepFA | 9e846083b7 | add vector size to embed text | 2022-10-15 17:23:39 +03:00
MalumaDev | 7b7561f6e4 | Merge branch 'master' into test_resolve_conflicts | 2022-10-15 16:20:17 +02:00
AUTOMATIC | c7a86f7fe9 | add option to use batch size for training | 2022-10-15 09:24:59 +03:00
AUTOMATIC | 03d62538ae | remove duplicate code for log loss, add step, make it read from options rather than gradio input | 2022-10-14 22:43:55 +03:00
AUTOMATIC | 326fe7d44b | Merge remote-tracking branch 'Melanpan/master' | 2022-10-14 22:14:50 +03:00
AUTOMATIC | c344ba3b32 | add option to read generation params for learning previews from txt2img | 2022-10-14 20:31:49 +03:00
MalumaDev | bb57f30c2d | init | 2022-10-14 10:56:41 +02:00
Melan | 8636b50aea | Add learn_rate to csv and removed a left-over debug statement | 2022-10-13 12:37:58 +02:00
Melan | 1cfc2a1898 | Save a csv containing the loss while training | 2022-10-12 23:36:29 +02:00