Commit Graph

40 Commits

Author SHA1 Message Date
Aarni Koskela
49a55b410b Autofix Ruff W (not W605) (mostly whitespace) 2023-05-11 20:29:11 +03:00
Aarni Koskela
3ba6c3c83c Fix up string formatting/concatenation to f-strings where feasible 2023-05-09 22:25:39 +03:00
AUTOMATIC
11183b4d90 fix for #6700 2023-02-19 12:44:56 +03:00
Shondoit
edb10092de Add ability to choose using weighted loss or not 2023-02-15 10:03:59 +01:00
Shondoit
21642000b3 Add PNG alpha channel as weight maps to data entries 2023-02-15 10:03:59 +01:00
AUTOMATIC
a176d89487 print bucket sizes for training without resizing images #6620
fix an error when generating a picture with embedding in it
2023-01-13 14:32:15 +03:00
dan
6be644fa04 Enable batch_size>1 for mixed-sized training 2023-01-11 05:31:58 +08:00
AUTOMATIC
43bb5190fc remove/simplify some changes from #6481 2023-01-09 22:52:23 +03:00
dan
72497895b9 Move batchsize check 2023-01-08 02:57:36 +08:00
dan
669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
dan
448b9cedab Allow variable img size 2023-01-08 02:14:36 +08:00
Jim Hays
c0355caefe Fix various typos 2022-12-14 21:01:32 -05:00
brkirch
4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
AUTOMATIC1111
39827a3998 Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords
resolve [name] after resolving [filewords] in training
2022-11-27 22:46:49 +03:00
flamelaw
5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
flamelaw
2d22d72cda fix random sampling with pin_memory 2022-11-20 16:14:27 +09:00
flamelaw
a4a5735d0a remove unnecessary comment 2022-11-20 12:38:18 +09:00
flamelaw
bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
parasi
9a1aff645a resolve [name] after resolving [filewords] in training 2022-11-13 13:49:28 -06:00
KyuSeok Jung
a1e271207d Update dataset.py 2022-11-11 10:56:53 +09:00
KyuSeok Jung
b19af67d29 Update dataset.py 2022-11-11 10:54:19 +09:00
KyuSeok Jung
13a2f1dca3 adding tag drop out option 2022-11-11 10:29:55 +09:00
TinkTheBoush
821e2b883d change option position to Training setting 2022-11-04 19:39:03 +09:00
TinkTheBoush
467cae167a append_tag_shuffle 2022-11-01 23:29:12 +09:00
Muhammad Rizqi Nur
a27d19de2e Additional assert on dataset 2022-10-29 19:44:05 +07:00
FlameLaw
a0a7024c67 Fix random dataset shuffle on TI 2022-10-28 02:13:48 +09:00
guaneec
b69c37d25e Allow datasets with only 1 image in TI 2022-10-21 09:54:09 +03:00
AUTOMATIC1111
ea8aa1701a Merge branch 'master' into master 2022-10-15 10:13:16 +03:00
AUTOMATIC
c7a86f7fe9 add option to use batch size for training 2022-10-15 09:24:59 +03:00
Melan
4d19f3b7d4 Raise an assertion error if no training images have been found. 2022-10-14 22:45:26 +02:00
AUTOMATIC
c3c8eef9fd train: change filename processing to be more simple and configurable
train: make it possible to make text files with prompts
train: rework scheduler so that there's less repeating code in textual inversion and hypernets
train: move epochs setting to options
2022-10-12 20:49:47 +03:00
AUTOMATIC
d4ea5f4d86 add an option to unload models during hypernetwork training to save VRAM 2022-10-11 19:03:08 +03:00
alg-wiki
b2368a3bce Switched to exception handling 2022-10-11 17:32:46 +09:00
alg-wiki
907a88b2d0 Added .webp .bmp 2022-10-11 06:35:07 +09:00
alg-wiki
bc3e183b73 Textual Inversion: Preprocess and Training will only pick-up image files 2022-10-11 04:30:13 +09:00
alg-wiki
04c745ea4f Custom Width and Height 2022-10-10 22:35:35 +09:00
alg-wiki
3110f895b2 Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:07:46 +09:00
AUTOMATIC
5ef0baf5ea add support for gelbooru tags in filenames for textual inversion 2022-10-04 08:52:27 +03:00
AUTOMATIC
6785331e22 keep textual inversion dataset latents in CPU memory to save a bit of VRAM 2022-10-02 22:59:01 +03:00
AUTOMATIC
820f1dc96b initial support for training textual inversion 2022-10-02 15:03:39 +03:00
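
Note: the weighted-loss commits above (21642000b3, edb10092de) refer to using a PNG's alpha channel as a per-pixel weight map for training. The sketch below is only an illustration of that idea, assuming a simple weighted MSE; the function names (load_weight_map, weighted_mse) are hypothetical and are not taken from the repository's code.

import numpy as np
import torch
from PIL import Image

def load_weight_map(path, size):
    """Read the alpha channel of an RGBA PNG and scale it to [0, 1]."""
    # size is (width, height); pixels with alpha 0 contribute nothing to the loss
    img = Image.open(path).convert("RGBA").resize(size)
    alpha = np.asarray(img)[:, :, 3].astype(np.float32) / 255.0
    return torch.from_numpy(alpha)

def weighted_mse(pred, target, weight):
    """Mean-squared error where each pixel's contribution is scaled by its weight."""
    return ((pred - target) ** 2 * weight).sum() / weight.sum().clamp(min=1e-8)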