AUTOMATIC1111
3019452927
Merge pull request #3803 from FlameLaw/master
...
Fixed dataset shuffling to be properly random
2022-10-29 07:52:51 +03:00
AUTOMATIC1111
86e19fe873
Merge pull request #3669 from random-thoughtss/master
...
Added option to use unmasked conditioning image for inpainting model.
2022-10-29 07:49:48 +03:00
AUTOMATIC1111
1fba573d24
Merge pull request #3874 from cobryan05/extra_tweak
...
Extras Tab - Option to upscale before face fix, caching improvements
2022-10-29 07:44:17 +03:00
AUTOMATIC1111
2338ed9554
Merge pull request #3755 from M-art-ucci/master
...
Adding pt_BR (portuguese - Brazil) to localizations folder
2022-10-29 07:38:49 +03:00
AUTOMATIC
bce5adcd6d
change default hypernet activation function to linear
2022-10-29 07:37:06 +03:00
AUTOMATIC1111
f3685281e2
Merge pull request #3877 from Yaiol/master
...
Filename tags wrongly reference the process size instead of the image size
2022-10-29 07:32:11 +03:00
AUTOMATIC1111
d3b4b9d7ec
Merge pull request #3717 from benkyoujouzu/master
...
Add missing support for linear activation in hypernetwork
2022-10-29 07:30:14 +03:00
AUTOMATIC1111
fc89495df3
Merge pull request #3771 from aria1th/patch-12
...
Disable unavailable or duplicate options for Activation functions
2022-10-29 07:29:02 +03:00
AUTOMATIC1111
d5f31f1e14
Merge pull request #3511 from bamarillo/master
...
[API][Feature] Add extras endpoints
2022-10-29 07:24:37 +03:00
Bruno Seoane
0edf100d83
Merge branch 'AUTOMATIC1111:master' into master
2022-10-28 22:03:49 -03:00
AngelBottomless
f361e804eb
Re-enable linear
2022-10-29 08:36:50 +09:00
Yaiol
539c0f51e4
Update images.py
...
Filename tags [height] and [width] wrongly reference the process size instead of the resulting image size, causing all upscaled files to be named incorrectly.
2022-10-29 01:07:01 +02:00
Chris OBryan
d8b3661467
extras: upscaler blending should not be considered in cache key
2022-10-28 16:55:02 -05:00
Chris OBryan
5732c0282d
extras-tweaks: autoformat changed lines
2022-10-28 16:36:25 -05:00
Chris OBryan
1f1b327959
extras: Make image cache LRU
...
This changes the extras image cache into a least-recently-used (LRU)
cache, which allows more experimentation with different upscalers
without missing the cache.
The maximum cache size is increased to 5, and the cache is cleared
when the source image is updated.
2022-10-28 16:14:21 -05:00
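The LRU behaviour described in the commit above can be sketched with Python's `OrderedDict`; the class and method names here are illustrative, not the project's actual API.

```python
from collections import OrderedDict

class LRUImageCache:
    """Illustrative least-recently-used cache for processed images."""

    def __init__(self, max_size=5):
        self.max_size = max_size
        self._entries = OrderedDict()

    def get(self, key):
        if key not in self._entries:
            return None
        self._entries.move_to_end(key)  # mark as most recently used
        return self._entries[key]

    def put(self, key, value):
        self._entries[key] = value
        self._entries.move_to_end(key)
        while len(self._entries) > self.max_size:
            self._entries.popitem(last=False)  # evict least recently used

    def clear(self):
        # called whenever the source image changes
        self._entries.clear()
```

With a capacity of 5, switching between a handful of upscalers keeps all their results cached, while older results are evicted in access order rather than insertion order.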
Chris OBryan
bde4731f1d
extras: Rework image cache
...
A bit of a refactor of the image cache to make it easier to extend.
It also takes into account the entire image instead of just a cropped portion.
2022-10-28 14:44:25 -05:00
Chris OBryan
26d0819384
extras: Add option to run upscaling before face fixing
...
Face restoration can look much better if run after upscaling, as it
allows the restoration to fix upscaling artifacts. This patch adds
an option to choose the order in which upscaling and face fixing run.
2022-10-28 13:33:49 -05:00
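The user-selectable ordering described above boils down to choosing which of two post-processing steps runs first; a minimal sketch (function and parameter names are hypothetical, not the webui's actual code):

```python
def postprocess(image, upscale, fix_faces, upscale_first):
    """Run the two extras steps in a user-chosen order (illustrative)."""
    steps = [upscale, fix_faces] if upscale_first else [fix_faces, upscale]
    for step in steps:
        image = step(image)
    return image
```

Running the upscaler first lets the face restorer clean up any artifacts the upscaler introduced, which is the motivation given in the commit message.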
Muhammad Rizqi Nur
9ceef81f77
Fix off-by-one error in log
2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur
16451ca573
Learning rate sched syntax support for grad clipping
2022-10-28 17:16:23 +07:00
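The commit above extends the learning-rate schedule syntax (comma-separated `value:until_step` pairs, with a bare value running to the end) to gradient-clipping values as well. A hedged sketch of such a parser, written from the syntax's general shape rather than the project's actual implementation:

```python
def parse_schedule(spec, total_steps):
    """Parse a 'value:until_step, value:until_step, value' schedule string
    (hypothetical parser mirroring the LR schedule syntax)."""
    pieces = []
    for part in spec.split(","):
        part = part.strip()
        if ":" in part:
            value, until = part.split(":")
            pieces.append((float(value), int(until)))
        else:
            # a bare value applies for the remainder of training
            pieces.append((float(part), total_steps))
    return pieces

def value_at(pieces, step):
    """Return the scheduled value in effect at a given training step."""
    for value, until in pieces:
        if step <= until:
            return value
    return pieces[-1][0]
```

Reusing one parser means learning rate and clipping threshold can be scheduled with identical strings, e.g. a clip value of `"0.05:10, 0.01:100, 0.001"`.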
timntorres
db5a354c48
Always ignore "None.pt" in the hypernet directory.
2022-10-28 01:41:57 -07:00
timntorres
c0677b3316
Explicitly state when Hypernet is none.
2022-10-27 23:31:45 -07:00
timntorres
d4a069a23c
Read hypernet strength from PNG info.
2022-10-27 23:16:27 -07:00
timntorres
9e465c8aa5
Add strength to textinfo.
2022-10-27 23:03:34 -07:00
benkyoujouzu
b2a8b263b2
Add missing support for linear activation in hypernetwork
2022-10-28 12:54:59 +08:00
Antonio
5d5dc64064
Natural sorting for dropdown checkpoint list
...
Example:
Before                      After
11.ckpt                     11.ckpt
ab.ckpt                     ab.ckpt
ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
ade_step_1000.ckpt          ade_step_500.ckpt
ade_step_1500.ckpt          ade_step_1000.ckpt
ade_step_2000.ckpt          ade_step_1500.ckpt
ade_step_2500.ckpt          ade_step_2000.ckpt
ade_step_3000.ckpt          ade_step_2500.ckpt
ade_step_500.ckpt           ade_step_3000.ckpt
atp_step_5500.ckpt          atp_step_5500.ckpt
model1.ckpt                 model1.ckpt
model10.ckpt                model10.ckpt
model1000.ckpt              model33.ckpt
model33.ckpt                model50.ckpt
model400.ckpt               model400.ckpt
model50.ckpt                model1000.ckpt
moo44.ckpt                  moo44.ckpt
v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
v1-5-pruned.ckpt            v1-5-pruned.ckpt
v1-5-vae.ckpt               v1-5-vae.ckpt
2022-10-28 05:49:39 +02:00
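Natural sorting of the kind shown above is commonly implemented by splitting names into alternating text and digit chunks so the numeric parts compare as integers; a minimal sketch (not necessarily the commit's exact code):

```python
import re

def natural_key(name):
    """Split a filename into text and integer chunks so that embedded
    numbers sort numerically rather than lexicographically."""
    return [int(chunk) if chunk.isdigit() else chunk.lower()
            for chunk in re.split(r"(\d+)", name)]

checkpoints = ["model1000.ckpt", "model33.ckpt", "model50.ckpt", "model10.ckpt"]
ordered = sorted(checkpoints, key=natural_key)
```

This yields `model10.ckpt` before `model33.ckpt` before `model1000.ckpt`, matching the "After" column in the example.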
Muhammad Rizqi Nur
1618df41ba
Gradient clipping for textual embedding
2022-10-28 10:31:27 +07:00
Muhammad Rizqi Nur
a133042c66
Forgot to remove this from train_embedding
2022-10-28 10:01:46 +07:00
benlisquare
ccde874974
adjustments to zh_TW localisation per suggestions by snowmeow2
2022-10-28 13:51:54 +11:00
Muhammad Rizqi Nur
2a25729623
Gradient clipping in train tab
2022-10-28 09:44:56 +07:00
Bruno Seoane
21cbba34f5
Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui
2022-10-27 22:06:17 -03:00
Florian Horn
403c5dba86
hide save btn for other tabs than txt2img and img2img
2022-10-28 00:58:18 +02:00
Martucci
d814db1c25
Update pt_BR.json
2022-10-27 18:09:45 -03:00
Martucci
4ca4900bd4
Update pt_BR.json
2022-10-27 18:04:08 -03:00
Josh Watzman
b50ff4f4e4
Reduce peak memory usage when changing models
...
A few tweaks to reduce peak memory usage, the biggest being that if we
aren't using the checkpoint cache, we shouldn't duplicate the model
state dict just to immediately throw it away.
On my machine with 16GB of RAM, this change means I can typically change
models, whereas before it would typically OOM.
2022-10-27 22:01:06 +01:00
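The memory saving described above comes from copying the state dict only when it must survive in a cache; the sketch below illustrates the idea with a stand-in model class, since the project's real code loads PyTorch checkpoints (all names here are illustrative).

```python
import copy

class TinyModel:
    """Stand-in for a torch module; stores whatever weights it is given."""
    def __init__(self):
        self.weights = {}

    def load_state_dict(self, state_dict):
        self.weights = state_dict

def load_weights(model, state_dict, use_cache, cache):
    # Only duplicate the state dict when it needs to survive in the
    # checkpoint cache; skipping the deep copy in the uncached path
    # avoids holding two full copies of the weights at peak.
    if use_cache:
        cache["last"] = copy.deepcopy(state_dict)
    model.load_state_dict(state_dict)
```

For multi-gigabyte checkpoints, avoiding that one transient duplicate is the difference between fitting in 16 GB of RAM and hitting an OOM, as the commit message notes.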
Martucci
5e6344261d
Compromise with other PR for this fork
2022-10-27 17:24:20 -03:00
Roy Shilkrot
bdc9083798
Add a barebones interrogate API
2022-10-27 15:20:15 -04:00
random_thoughtss
b68c7c437e
Updated name and hover text.
2022-10-27 11:45:35 -07:00
random_thoughtss
a38496c1de
Moved mask weight config to SD section
2022-10-27 11:31:31 -07:00
random_thoughtss
26a3fd2fe9
Highres fix works with unmasked latent.
...
Also refactor the mask creation to make it more accessible.
2022-10-27 11:27:59 -07:00
random-thoughtss
f3f2ffd448
Merge branch 'AUTOMATIC1111:master' into master
2022-10-27 11:19:12 -07:00
FlameLaw
a0a7024c67
Fix random dataset shuffle on TI
2022-10-28 02:13:48 +09:00
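A proper per-epoch shuffle, of the kind this textual-inversion fix restores, reshuffles the index order at the start of every pass over the data instead of shuffling once; a minimal sketch (function name and signature are illustrative):

```python
import random

def epoch_batches(dataset, batch_size, rng=random):
    """Yield batches for one epoch, reshuffling the sample order each call
    so every epoch visits the data in a fresh random order."""
    indices = list(range(len(dataset)))
    rng.shuffle(indices)  # a fresh shuffle per epoch, not a one-time shuffle
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]
```

Calling `epoch_batches` once per epoch guarantees every sample appears exactly once per epoch while the order still varies between epochs.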
xmodar
68760a48cb
Add forced LTR for training progress
2022-10-27 17:46:00 +03:00
Florian Horn
bf25b51c31
fixed position to be in line with the other icons
2022-10-27 16:38:55 +02:00
Florian Horn
268159cfe3
fixed indentation
2022-10-27 16:32:10 +02:00
Florian Horn
0995e879ce
added save button and shortcut (s) to Modal View
2022-10-27 16:20:01 +02:00
Martucci
3d38416352
More translation adjustments
2022-10-27 10:25:54 -03:00
Dynamic
a668444110
Attention editing hotkey fix part 2
2022-10-27 22:24:29 +09:00
Dynamic
9358a421cf
Remove files that shouldn't be here
2022-10-27 22:24:05 +09:00
Dynamic
6e10078b2b
Attention editing with hotkeys should work with KR now
...
Added the word "Prompt" to the placeholders to pass the check in edit-attention.js
2022-10-27 22:21:56 +09:00
Dynamic
96da2e0c33
Merge branch 'AUTOMATIC1111:master' into kr-localization
2022-10-27 22:19:55 +09:00