Shondoit | edb10092de | Add ability to choose using weighted loss or not | 2023-02-15 10:03:59 +01:00
Shondoit | bc50936745 | Call weighted_forward during training | 2023-02-15 10:03:59 +01:00
AUTOMATIC | aa6e55e001 | do not display the message for TI unless the list of loaded embeddings changed | 2023-01-29 11:53:05 +03:00
Alex "mcmonkey" Goodwin | e179b6098a | allow symlinks in the textual inversion embeddings folder | 2023-01-25 08:48:40 -08:00
AUTOMATIC | 40ff6db532 | extra networks UI; rework of hypernets: rather than via settings, hypernets are added directly to prompt as <hypernet:name:weight> | 2023-01-21 08:36:07 +03:00
AUTOMATIC | 924e222004 | add option to show/hide warnings; removed hiding warnings from LDSR; fixed/reworked a few places that produced warnings | 2023-01-18 23:04:24 +03:00
AUTOMATIC | d8b90ac121 | big rework of the progressbar/preview system to allow multiple users to run prompts at the same time without getting previews of each other | 2023-01-15 18:51:04 +03:00
AUTOMATIC | a95f135308 | change hash to sha256 | 2023-01-14 09:56:59 +03:00
AUTOMATIC | 82725f0ac4 | fix a bug caused by merge | 2023-01-13 15:04:37 +03:00
AUTOMATIC1111 | 9cd7716753 | Merge branch 'master' into tensorboard | 2023-01-13 14:57:38 +03:00
AUTOMATIC | a176d89487 | print bucket sizes for training without resizing images #6620; fix an error when generating a picture with embedding in it | 2023-01-13 14:32:15 +03:00
Shondoit | d52a80f7f7 | Allow creation of zero vectors for TI | 2023-01-12 09:22:29 +01:00
Vladimir Mandic | 3f43d8a966 | set descriptions | 2023-01-11 10:28:55 -05:00
Lee Bousfield | f9706acf43 | Support loading textual inversion embeddings from safetensors files | 2023-01-10 18:40:34 -07:00
AUTOMATIC | 1fbb6f9ebe | make a dropdown for prompt template selection | 2023-01-09 23:35:40 +03:00
AUTOMATIC | 43bb5190fc | remove/simplify some changes from #6481 | 2023-01-09 22:52:23 +03:00
AUTOMATIC1111 | 18c001792a | Merge branch 'master' into varsize | 2023-01-09 22:45:39 +03:00
AUTOMATIC | 085427de0e | make it possible for extensions/scripts to add their own embedding directories | 2023-01-08 09:37:33 +03:00
AUTOMATIC | a0c87f1fdf | skip images in embeddings dir if they have a second .preview extension | 2023-01-08 08:52:26 +03:00
dan | 669fb18d52 | Add checkbox for variable training dims | 2023-01-08 02:31:40 +08:00
dan | 448b9cedab | Allow variable img size | 2023-01-08 02:14:36 +08:00
AUTOMATIC | 79e39fae61 | CLIP hijack rework | 2023-01-07 01:46:13 +03:00
AUTOMATIC | 683287d87f | rework saving training params to file #6372 | 2023-01-06 08:52:06 +03:00
AUTOMATIC1111 | 88e01b237e | Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised; Save hypernet and textual inversion settings to text file, revised. | 2023-01-06 07:59:44 +03:00
Faber | 81133d4168 | allow loading embeddings from subdirectories | 2023-01-06 03:38:37 +07:00
Kuma | fda04e620d | typo in TI | 2023-01-05 18:44:19 +01:00
timntorres | b6bab2f052 | Include model in log file. Exclude directory. | 2023-01-05 09:14:56 -08:00
timntorres | b85c2b5cf4 | Clean up ti, add same behavior to hypernetwork. | 2023-01-05 08:14:38 -08:00
timntorres | eea8fc40e1 | Add option to save ti settings to file. | 2023-01-05 07:24:22 -08:00
AUTOMATIC1111 | eeb1de4388 | Merge branch 'master' into gradient-clipping | 2023-01-04 19:56:35 +03:00
AUTOMATIC | 525cea9245 | use shared function from processing for creating dummy mask when training inpainting model | 2023-01-04 17:58:07 +03:00
AUTOMATIC | 184e670126 | fix the merge | 2023-01-04 17:45:01 +03:00
AUTOMATIC1111 | da5c1e8a73 | Merge branch 'master' into inpaint_textual_inversion | 2023-01-04 17:40:19 +03:00
AUTOMATIC1111 | 7bbd984dda | Merge pull request #6253 from Shondoit/ti-optim; Save Optimizer next to TI embedding | 2023-01-04 14:09:13 +03:00
Vladimir Mandic | 192ddc04d6 | add job info to modules | 2023-01-03 10:34:51 -05:00
Shondoit | bddebe09ed | Save Optimizer next to TI embedding; also add a check to load only .PT and .BIN files as embeddings (since we add .optim files in the same directory) | 2023-01-03 13:30:24 +01:00
Philpax | c65909ad16 | feat(api): return more data for embeddings | 2023-01-02 12:21:48 +11:00
AUTOMATIC | 311354c0bb | fix the issue with training on SD2.0 | 2023-01-02 00:38:09 +03:00
AUTOMATIC | bdbe09827b | changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 | 2022-12-31 22:49:09 +03:00
Vladimir Mandic | f55ac33d44 | validate textual inversion embeddings | 2022-12-31 11:27:02 -05:00
Yuval Aboulafia | 3bf5591efe | fix F541 f-string without any placeholders | 2022-12-24 21:35:29 +02:00
Jim Hays | c0355caefe | Fix various typos | 2022-12-14 21:01:32 -05:00
AUTOMATIC1111 | c9a2cfdf2a | Merge branch 'master' into racecond_fix | 2022-12-03 10:19:51 +03:00
brkirch | 4d5f1691dd | Use devices.autocast instead of torch.autocast | 2022-11-30 10:33:42 -05:00
AUTOMATIC | b48b7999c8 | Merge remote-tracking branch 'flamelaw/master' | 2022-11-27 12:19:59 +03:00
flamelaw | 755df94b2a | set TI AdamW default weight decay to 0 | 2022-11-27 00:35:44 +09:00
AUTOMATIC | ce6911158b | Add support for Stable Diffusion 2.0 | 2022-11-26 16:10:46 +03:00
flamelaw | 89d8ecff09 | small fixes | 2022-11-23 02:49:01 +09:00
flamelaw | 5b57f61ba4 | fix pin_memory with different latent sampling method | 2022-11-21 10:15:46 +09:00
flamelaw | bd68e35de3 | Gradient accumulation, autocast fix, new latent sampling method, etc. | 2022-11-20 12:35:26 +09:00