Author | Commit | Message | Date
AUTOMATIC1111 | eeb1de4388 | Merge branch 'master' into gradient-clipping | 2023-01-04 19:56:35 +03:00
Vladimir Mandic | 192ddc04d6 | add job info to modules | 2023-01-03 10:34:51 -05:00
AUTOMATIC1111 | b12de850ae | Merge pull request #5992 from yuvalabou/F541 (Fix F541: f-string without any placeholders) | 2022-12-25 09:16:08 +03:00
Vladimir Mandic | 5f1dfbbc95 | implement train api | 2022-12-24 18:02:22 -05:00
Yuval Aboulafia | 3bf5591efe | fix F541 f-string without any placeholders | 2022-12-24 21:35:29 +02:00
AUTOMATIC1111 | c9a2cfdf2a | Merge branch 'master' into racecond_fix | 2022-12-03 10:19:51 +03:00
brkirch | 4d5f1691dd | Use devices.autocast instead of torch.autocast | 2022-11-30 10:33:42 -05:00
flamelaw | 1bd57cc979 | last_layer_dropout default to False | 2022-11-23 20:21:52 +09:00
flamelaw | d2c97fc3fe | fix dropout, implement train/eval mode | 2022-11-23 20:00:00 +09:00
flamelaw | 89d8ecff09 | small fixes | 2022-11-23 02:49:01 +09:00
flamelaw | 5b57f61ba4 | fix pin_memory with different latent sampling method | 2022-11-21 10:15:46 +09:00
flamelaw | bd68e35de3 | Gradient accumulation, autocast fix, new latent sampling method, etc | 2022-11-20 12:35:26 +09:00
AUTOMATIC | cdc8020d13 | change StableDiffusionProcessing to internally use sampler name instead of sampler index | 2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur | cabd4e3b3b | Merge branch 'master' into gradient-clipping | 2022-11-07 22:43:38 +07:00
AUTOMATIC | 62e3d71aa7 | rework the code to not use the walrus operator because colab's 3.7 does not support it | 2022-11-05 17:09:42 +03:00
AUTOMATIC1111 | cb84a304f0 | Merge pull request #4273 from Omegastick/ordered_hypernetworks (Sort hypernetworks list) | 2022-11-05 16:16:18 +03:00
Muhammad Rizqi Nur | bb832d7725 | Simplify grad clip | 2022-11-05 11:48:38 +07:00
Isaac Poulton | 08feb4c364 | Sort straight out of the glob | 2022-11-04 20:53:11 +07:00
Muhammad Rizqi Nur | 3277f90e93 | Merge branch 'master' into gradient-clipping | 2022-11-04 18:47:28 +07:00
Isaac Poulton | fd62727893 | Sort hypernetworks | 2022-11-04 18:34:35 +07:00
Fampai | 39541d7725 | Fixes race condition in training when VAE is unloaded (set_current_image can attempt to use the VAE when it is unloaded to the CPU while training) | 2022-11-04 04:50:22 -04:00
aria1th | 1ca0bcd3a7 | only save if option is enabled | 2022-11-04 16:09:19 +09:00
aria1th | f5d394214d | split before declaring file name | 2022-11-04 16:04:03 +09:00
aria1th | 283249d239 | apply | 2022-11-04 15:57:17 +09:00
AngelBottomless | 179702adc4 | Merge branch 'AUTOMATIC1111:master' into force-push-patch-13 | 2022-11-04 15:51:09 +09:00
AngelBottomless | 0d07cbfa15 | I blame code autocomplete | 2022-11-04 15:50:54 +09:00
aria1th | 0abb39f461 | resolve conflict - first revert | 2022-11-04 15:47:19 +09:00
AUTOMATIC1111 | 4918eb6ce4 | Merge branch 'master' into hn-activation | 2022-11-04 09:02:15 +03:00
aria1th | 1764ac3c8b | use hash to check valid optim | 2022-11-03 14:49:26 +09:00
aria1th | 0b143c1163 | Separate .optim file from model | 2022-11-03 14:30:53 +09:00
Muhammad Rizqi Nur | d5ea878b2a | Fix merge conflicts | 2022-10-31 13:54:40 +07:00
Muhammad Rizqi Nur | 4123be632a | Fix merge conflicts | 2022-10-31 13:53:22 +07:00
Muhammad Rizqi Nur | cd4d59c0de | Merge master | 2022-10-30 18:57:51 +07:00
aria1th | 9d96d7d0a0 | resolve conflicts | 2022-10-30 20:40:59 +09:00
AngelBottomless | 20194fd975 | We have duplicate linear now | 2022-10-30 20:40:59 +09:00
AUTOMATIC1111 | 17a2076f72 | Merge pull request #3928 from R-N/validate-before-load (Optimize training a little) | 2022-10-30 09:51:36 +03:00
Muhammad Rizqi Nur | 3d58510f21 | Fix dataset still being loaded even when training will be skipped | 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur | a07f054c86 | Add missing info on hypernetwork/embedding model log (mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513; also group the saving into one) | 2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur | ab05a74ead | Revert "Add cleanup after training" (this reverts commit 3ce2bfdf95) | 2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur | 3ce2bfdf95 | Add cleanup after training | 2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur | ab27c111d0 | Add input validations before loading dataset for training | 2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur | 05e2e40537 | Merge branch 'master' into gradient-clipping | 2022-10-29 15:04:21 +07:00
timntorres | e98f72be33 | Merge branch 'AUTOMATIC1111:master' into 3825-save-hypernet-strength-to-info | 2022-10-29 00:31:23 -07:00
AUTOMATIC1111 | 810e6a407d | Merge pull request #3858 from R-N/log-csv (Fix log off by 1 #3847) | 2022-10-29 07:55:20 +03:00
AUTOMATIC1111 | d3b4b9d7ec | Merge pull request #3717 from benkyoujouzu/master (Add missing support for linear activation in hypernetwork) | 2022-10-29 07:30:14 +03:00
AngelBottomless | f361e804eb | Re enable linear | 2022-10-29 08:36:50 +09:00
Muhammad Rizqi Nur | 9ceef81f77 | Fix log off by 1 | 2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur | 16451ca573 | Learning rate sched syntax support for grad clipping | 2022-10-28 17:16:23 +07:00
timntorres | db5a354c48 | Always ignore "None.pt" in the hypernet directory. | 2022-10-28 01:41:57 -07:00
benkyoujouzu | b2a8b263b2 | Add missing support for linear activation in hypernetwork | 2022-10-28 12:54:59 +08:00
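
Several of the commits above track the gradient-clipping work for hypernetwork/embedding training (e.g. "Simplify grad clip" and "Learning rate sched syntax support for grad clipping"). For context, below is a minimal, generic sketch of how per-step gradient clipping is typically wired into a PyTorch training loop; the model, optimizer, and clip value are illustrative assumptions, not the webui's actual training code.

    # Generic sketch of gradient clipping before the optimizer step
    # (illustrative only; not taken from stable-diffusion-webui).
    import torch

    model = torch.nn.Linear(768, 768)        # stand-in for a trainable module
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    clip_value = 1.0                          # hypothetical max gradient norm

    for step in range(100):
        x = torch.randn(8, 768)
        loss = model(x).pow(2).mean()         # dummy loss for illustration
        loss.backward()
        # Clip the global gradient norm before applying the update.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_value)
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)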