AUTOMATIC
02d7abf514
helpful error message when trying to load 2.0 without config
...
failing to load model weights from settings won't break generation for currently loaded model anymore
2023-01-04 12:35:07 +03:00
AUTOMATIC
8f96f92899
call script callbacks for reloaded model after loading embeddings
2023-01-03 18:39:14 +03:00
AUTOMATIC
311354c0bb
fix the issue with training on SD2.0
2023-01-02 00:38:09 +03:00
Vladimir Mandic
f55ac33d44
validate textual inversion embeddings
2022-12-31 11:27:02 -05:00
Nicolas Patry
5ba04f9ec0
Attempting to solve slow loads for safetensors.
...
Fixes #5893
2022-12-27 11:27:19 +01:00
Yuval Aboulafia
3bf5591efe
fix F541 f-string without any placeholders
2022-12-24 21:35:29 +02:00
linuxmobile ( リナックス )
5a650055de
Removed length check in sd_model at line 115
...
Commit eba60a4 is what is causing this error; deleting the length check in sd_model starting at line 115 fixes it.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 09:25:35 -03:00
AUTOMATIC1111
eba60a42eb
Merge pull request #5627 from deanpress/patch-1
...
fix: fallback model_checkpoint if it's empty
2022-12-24 12:20:31 +03:00
MrCheeze
ec0a48826f
unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False)
2022-12-11 11:18:34 -05:00
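The `use_ema` commit above amounts to a small config-normalization step. A minimal sketch of the idea, assuming a plain-dict config; the function name and dict layout here are illustrative, not the repository's actual OmegaConf-based code:

```python
def normalize_model_config(config):
    """Force use_ema to False when the config did not set it explicitly."""
    params = config.setdefault("model", {}).setdefault("params", {})
    if "use_ema" not in params:
        # True never worked in this code path, and every shipped config
        # except v1-inpainting-inference.yaml already sets it to False
        params["use_ema"] = False
    return config
```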
Dean van Dugteren
59c6511494
fix: fallback model_checkpoint if it's empty
...
This fixes the following error when SD attempts to start with a deleted checkpoint:
```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
2022-12-11 17:08:51 +01:00
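The crash above happens because the saved setting can deserialize as a list, which is unhashable and cannot be used as a dict key. A rough sketch of the fallback idea (names hypothetical; the real `select_checkpoint` in `modules/sd_models.py` differs):

```python
def select_checkpoint(model_checkpoint, checkpoints_list):
    """Pick a checkpoint entry, falling back when the saved setting is unusable."""
    # the saved setting can deserialize as a list (unhashable) or point at a
    # deleted file; normalize it before using it as a dict key
    if not isinstance(model_checkpoint, str):
        model_checkpoint = ""
    info = checkpoints_list.get(model_checkpoint)
    if info is None and checkpoints_list:
        # fall back to the first known checkpoint instead of crashing
        info = next(iter(checkpoints_list.values()))
    return info
```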
MrCheeze
bd81a09eac
fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model
2022-12-10 11:29:26 -05:00
AUTOMATIC1111
ec5e072124
Merge pull request #4841 from R-N/vae-fix-none
...
Fix None option of VAE selector
2022-12-10 09:58:20 +03:00
Jay Smith
1ed4f0e228
Depth2img model support
2022-12-08 20:50:08 -06:00
AUTOMATIC
0376da180c
make it possible to save nai model using safetensors
2022-11-28 08:39:59 +03:00
AUTOMATIC
dac9b6f15d
add safetensors support for model merging #4869
2022-11-27 15:51:29 +03:00
AUTOMATIC
6074175faa
add safetensors to requirements
2022-11-27 14:46:40 +03:00
AUTOMATIC1111
f108782e30
Merge pull request #4930 from Narsil/allow_to_load_safetensors_file
...
Supporting `*.safetensors` format.
2022-11-27 14:36:55 +03:00
MrCheeze
1e506657e1
no-half support for SD 2.0
2022-11-26 13:28:44 -05:00
Nicolas Patry
0efffbb407
Supporting *.safetensors format.
...
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
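The safetensors support above boils down to dispatching on the file extension. A minimal sketch, assuming lazy imports so neither library is a hard dependency (function names are illustrative, not the repository's actual code):

```python
import os

def uses_safetensors(checkpoint_path):
    # decide the loader from the file extension
    return os.path.splitext(checkpoint_path)[1].lower() == ".safetensors"

def read_state_dict(checkpoint_path, device="cpu"):
    """Load a checkpoint, preferring the pickle-free safetensors format."""
    if uses_safetensors(checkpoint_path):
        # safetensors files hold raw tensor data: no pickle, so loading a
        # malicious file cannot execute arbitrary code
        import safetensors.torch  # imported lazily; optional dependency
        return safetensors.torch.load_file(checkpoint_path, device=device)
    import torch
    # legacy .ckpt route: pickle-based torch checkpoint, mapped to CPU
    return torch.load(checkpoint_path, map_location=device)
```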
Muhammad Rizqi Nur
8662b5e57f
Merge branch 'a1111' into vae-fix-none
2022-11-19 16:38:21 +07:00
Muhammad Rizqi Nur
2c5ca706a7
Remove no longer necessary parts and add vae_file safeguard
2022-11-19 12:01:41 +07:00
Muhammad Rizqi Nur
c7be83bf02
Misc
...
Misc
2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur
abc1e79a5d
Fix base VAE caching being done after loading VAE; also add safeguard
2022-11-19 11:41:41 +07:00
cluder
eebf49592a
restore #4035 behavior
...
- if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
2022-11-09 07:17:09 +01:00
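The cache behavior described above ("cache set to 1 keeps 2 models: the current one plus 1 more") is a small LRU policy. A minimal sketch using an `OrderedDict`, with hypothetical names rather than the repository's actual code:

```python
from collections import OrderedDict

checkpoints_loaded = OrderedDict()  # checkpoint title -> state dict

def cache_state_dict(title, state_dict, sd_checkpoint_cache=1):
    """Keep the current model plus `sd_checkpoint_cache` extra entries."""
    checkpoints_loaded[title] = state_dict
    checkpoints_loaded.move_to_end(title)  # most recently used goes last
    # with cache=1 we retain two models: the current one and one more
    while len(checkpoints_loaded) > sd_checkpoint_cache + 1:
        checkpoints_loaded.popitem(last=False)  # evict least recently used
```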
cluder
3b51d239ac
- do not use ckpt cache, if disabled
...
- cache model after it has been loaded from file
2022-11-09 05:43:57 +01:00
AUTOMATIC
99043f3360
fix one of previous merges breaking the program
2022-11-04 11:20:42 +03:00
AUTOMATIC1111
24fc05cf57
Merge branch 'master' into fix-ckpt-cache
2022-11-04 10:54:17 +03:00
digburn
3780ad3ad8
fix: loading models without vae from cache
2022-11-04 00:43:00 +00:00
Muhammad Rizqi Nur
fb3b564801
Merge branch 'master' into fix-ckpt-cache
2022-11-02 20:53:41 +07:00
AUTOMATIC
f2a5cbe6f5
fix #3986 breaking --no-half-vae
2022-11-02 14:41:29 +03:00
Muhammad Rizqi Nur
056f06d373
Reload VAE without reloading sd checkpoint
2022-11-02 12:51:46 +07:00
Muhammad Rizqi Nur
f8c6468d42
Merge branch 'master' into vae-picker
2022-11-02 00:25:08 +07:00
Jairo Correa
af758e97fa
Unload sd_model before loading the other
2022-11-01 04:01:49 -03:00
Muhammad Rizqi Nur
bf7a699845
Fix #4035 for real now
2022-10-31 16:27:27 +07:00
Muhammad Rizqi Nur
36966e3200
Fix #4035
2022-10-31 15:38:58 +07:00
Muhammad Rizqi Nur
726769da35
Checkpoint cache by combination key of checkpoint and vae
2022-10-31 15:22:03 +07:00
Muhammad Rizqi Nur
cb31abcf58
Settings to select VAE
2022-10-30 21:54:31 +07:00
AUTOMATIC1111
9553a7e071
Merge pull request #3818 from jwatzman/master
...
Reduce peak memory usage when changing models
2022-10-29 09:16:00 +03:00
Antonio
5d5dc64064
Natural sorting for dropdown checkpoint list
...
Example:
Before                      After
11.ckpt                     11.ckpt
ab.ckpt                     ab.ckpt
ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
ade_step_1000.ckpt          ade_step_500.ckpt
ade_step_1500.ckpt          ade_step_1000.ckpt
ade_step_2000.ckpt          ade_step_1500.ckpt
ade_step_2500.ckpt          ade_step_2000.ckpt
ade_step_3000.ckpt          ade_step_2500.ckpt
ade_step_500.ckpt           ade_step_3000.ckpt
atp_step_5500.ckpt          atp_step_5500.ckpt
model1.ckpt                 model1.ckpt
model10.ckpt                model10.ckpt
model1000.ckpt              model33.ckpt
model33.ckpt                model50.ckpt
model400.ckpt               model400.ckpt
model50.ckpt                model1000.ckpt
moo44.ckpt                  moo44.ckpt
v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
v1-5-pruned.ckpt            v1-5-pruned.ckpt
v1-5-vae.ckpt               v1-5-vae.ckpt
2022-10-28 05:49:39 +02:00
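Natural sorting as shown in the table above is typically done by splitting names into digit and non-digit runs and comparing digit runs numerically. A minimal sketch of that key function (illustrative, not the commit's exact code):

```python
import re

def natural_sort_key(title):
    # split into digit and non-digit runs so "model50" sorts before "model400";
    # runs alternate text/number, so compared positions always share a type
    return [int(chunk) if chunk.isdigit() else chunk.lower()
            for chunk in re.split(r"(\d+)", title)]

names = ["model1000.ckpt", "model50.ckpt", "model1.ckpt", "model400.ckpt"]
ordered = sorted(names, key=natural_sort_key)
```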
Josh Watzman
b50ff4f4e4
Reduce peak memory usage when changing models
...
A few tweaks to reduce peak memory usage, the biggest being that if we
aren't using the checkpoint cache, we shouldn't duplicate the model
state dict just to immediately throw it away.
On my machine with 16GB of RAM, this change means I can typically change
models, whereas before it would typically OOM.
2022-10-27 22:01:06 +01:00
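The memory optimization above hinges on one decision: only duplicate the state dict when a cached copy is actually needed. A minimal sketch of that choice, with hypothetical names:

```python
import copy

def state_dict_for_loading(pl_sd, use_checkpoint_cache):
    """Return a state dict to feed into model.load_state_dict()."""
    if use_checkpoint_cache:
        # keep a pristine copy for the cache, hand a duplicate to the model
        return copy.deepcopy(pl_sd)
    # no cache: let load_state_dict consume the only copy, avoiding a
    # second full-size state dict in RAM at the moment of peak usage
    return pl_sd
```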
AUTOMATIC
321bacc6a9
call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that
2022-10-22 20:15:12 +03:00
MrCheeze
0df94d3fcf
fix aesthetic gradients doing nothing after loading a different model
2022-10-22 20:14:18 +03:00
AUTOMATIC
2b91251637
removed aesthetic gradients as built-in
...
added support for extensions
2022-10-22 12:23:58 +03:00
AUTOMATIC
ac0aa2b18e
loading SD VAE, see PR #3303
2022-10-21 17:35:51 +03:00
AUTOMATIC
df57064093
do not load aesthetic clip model until it's needed
...
add refresh button for aesthetic embeddings
add aesthetic params to images' infotext
2022-10-21 16:10:51 +03:00
AUTOMATIC
7d6b388d71
Merge branch 'ae'
2022-10-21 13:35:01 +03:00
random_thoughtss
49533eed9e
XY grid correctly re-assigns model when config changes
2022-10-20 16:01:27 -07:00
random_thoughtss
708c3a7bd8
Added PLMS hijack and made sure to always replace methods
2022-10-20 13:28:43 -07:00
random_thoughtss
8e7097d06a
Added support for RunwayML inpainting model
2022-10-19 13:47:45 -07:00
AUTOMATIC
f894dd552f
fix for broken checkpoint merger
2022-10-19 12:45:42 +03:00
MalumaDev
2362d5f00e
Merge branch 'master' into test_resolve_conflicts
2022-10-19 10:22:39 +02:00
AUTOMATIC
10aca1ca3e
more careful loading of model weights (eliminates some issues with checkpoints that have weird cond_stage_model layer names)
2022-10-19 08:42:22 +03:00
MalumaDev
9324cdaa31
UI fix, reorganization of the code
2022-10-16 17:53:56 +02:00
AUTOMATIC1111
af144ebdc7
Merge branch 'master' into ckpt-cache
2022-10-15 10:35:18 +03:00
Rae Fu
e21f01f645
add checkpoint cache option to UI for faster model switching
...
switching time reduced from ~1500ms to ~280ms
2022-10-14 14:09:23 -06:00
AUTOMATIC
bb295f5478
rework the code for lowram a bit
2022-10-14 20:03:41 +03:00
Ljzd-PRO
4a216ded43
load models to VRAM when using --lowram param
...
load models to VRAM instead of RAM (for machines that have more VRAM than RAM, such as the free Google Colab server)
2022-10-14 19:57:23 +03:00
AUTOMATIC
727e4d1086
no to different messages plus fix using != to compare to None
2022-10-10 20:46:55 +03:00
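The `!=` fix above matters because equality operators dispatch to `__eq__`/`__ne__`, which a class can override; only `is`/`is not` test identity reliably. A small illustration (the class here is contrived, not from the codebase):

```python
class AlwaysEqual:
    # an object whose __eq__ answers True for everything,
    # like some tensor/container types that overload comparison
    def __eq__(self, other):
        return True

obj = AlwaysEqual()
# `==`/`!=` go through __eq__ and can lie; `is` checks identity
equality_says_none = (obj == None)   # misleadingly True
identity_says_none = (obj is None)   # correctly False
```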
AUTOMATIC1111
b3d3b335cf
Merge pull request #2131 from ssysm/upstream-master
...
Add VAE Path Arguments
2022-10-10 20:45:14 +03:00
ssysm
af62ad4d25
change vae loading method
2022-10-10 13:25:28 -04:00
AUTOMATIC
7349088d32
--no-half-vae
2022-10-10 16:16:29 +03:00
ssysm
6fdad291bd
Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master
2022-10-09 23:20:39 -04:00
ssysm
cc92dc1f8d
add vae path args
2022-10-09 23:17:29 -04:00
AUTOMATIC
e6e8cabe0c
change up #2056 to make it work how i want it to plus make xy plot write correct values to images
2022-10-09 14:57:48 +03:00
William Moorehouse
d6d10a37bf
Added extended model details to infotext
2022-10-09 14:49:15 +03:00
AUTOMATIC
f4578b343d
fix model switching not working properly if there is a different yaml config
2022-10-09 13:23:30 +03:00
AUTOMATIC
4e569fd888
fixed incorrect message about loading config; thanks anon!
2022-10-09 10:31:47 +03:00
AUTOMATIC
c77c89cc83
make main model loading and model merger use the same code
2022-10-09 10:23:31 +03:00
AUTOMATIC
050a6a798c
support loading .yaml config with same name as model
...
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
Aidan Holland
432782163a
chore: Fix typos
2022-10-08 22:42:30 +03:00
leko
616b7218f7
fix: handles when state_dict does not exist
2022-10-08 12:38:50 +03:00
AUTOMATIC
d15b3ec001
support loading VAE
2022-10-07 10:40:22 +03:00
AUTOMATIC
852fd90c0d
emergency fix for disabling SD model download after multiple complaints
2022-10-02 21:22:20 +03:00
AUTOMATIC
a1cde7e646
disabled SD model download after multiple complaints
2022-10-02 21:09:10 +03:00
AUTOMATIC
0758f6e641
fix --ckpt option breaking model selection
2022-10-02 17:24:50 +03:00
AUTOMATIC
820f1dc96b
initial support for training textual inversion
2022-10-02 15:03:39 +03:00
AUTOMATIC
2b03f0bbda
if --ckpt option is specified, load that model
2022-09-30 22:16:03 +03:00
AUTOMATIC
cef838a6ab
revert the annotation not supported by old pythons
2022-09-30 12:15:29 +03:00
AUTOMATIC
d1f098540a
remove unwanted formatting/functionality from the PR
2022-09-30 11:42:40 +03:00
AUTOMATIC
8f1b315318
fix bugs in the PR
2022-09-30 09:46:52 +03:00
AUTOMATIC1111
25414bcd05
Merge pull request #1109 from d8ahazard/ModelLoader
...
Model Loader, Fixes
2022-09-30 09:35:58 +03:00
DepFA
ebd2c48115
return shortest checkpoint title match
2022-09-30 07:37:05 +03:00
DepFA
642b7e333e
add get_closet_checkpoint_match
2022-09-30 07:37:05 +03:00
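The two commits above implement "shortest superstring wins" matching: among checkpoint titles containing the search string, the shortest is the most specific match. A minimal sketch (the repository's actual function is spelled `get_closet_checkpoint_match`; the name and signature here are illustrative):

```python
def get_closest_checkpoint_match(search_string, titles):
    # among titles containing the search string, return the shortest;
    # the shortest superstring is the most specific match
    matches = [title for title in titles if search_string in title]
    if not matches:
        return None
    return min(matches, key=len)
```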
d8ahazard
d73741794d
Merge remote-tracking branch 'upstream/master' into ModelLoader
2022-09-29 19:59:36 -05:00
d8ahazard
0dce0df1ee
Holy $hit.
...
Yep.
Fix gfpgan_model_arch requirement(s).
Add Upscaler base class, move from images.
Add a lot of methods to Upscaler.
Re-work all the child upscalers to be proper classes.
Add BSRGAN scaler.
Add ldsr_model_arch class, removing the dependency for another repo that just uses regular latent-diffusion stuff.
Add one universal method that will always find and load new upscaler models without having to add new "setup_model" calls. Still need to add command line params, but that could probably be automated.
Add a "self.scale" property to all Upscalers so the scalers themselves can do "things" in response to the requested upscaling size.
Ensure LDSR doesn't get stuck in a longer loop of "upscale/downscale/upscale" as we try to reach the target upscale size.
Add typehints for IDE sanity.
PEP-8 improvements.
Moar.
2022-09-29 17:46:23 -05:00
AUTOMATIC
c715ef04d1
fix for incorrect model weight loading for #814
2022-09-29 15:40:28 +03:00
AUTOMATIC
29ce8a687d
remove unneeded debug print
2022-09-29 08:03:23 +03:00
AUTOMATIC
7acfaca05a
update lists of models after merging them in checkpoints tab
...
support saving as half
2022-09-29 00:59:44 +03:00
Bernard Maltais
591c138e32
-Add gradio dropdown list to select checkpoints to merge
...
-Update the name of the model fields
-Update the associated variable names
2022-09-27 21:08:07 -04:00
d8ahazard
11875f5863
Use model loader with stable-diffusion too.
...
Hook the model loader into the SD_models file.
Add default url/download if checkpoint is not found.
Add matching stablediffusion-models-path argument.
Add message that --ckpt-dir will be removed in the future, but have it pipe to stablediffusion-models-path for now.
Update help strings for models-path args so they're more or less uniform.
Move sd_model "setup" call to webUI with the others.
Ensure "cleanup_models" method moves existing models to the new locations, including SD, and that we aren't deleting folders that still have stuff in them.
2022-09-27 11:01:13 -05:00
AUTOMATIC
7ae3dc2866
display a more informative message when a checkpoint is not found
2022-09-18 23:52:01 +03:00
AUTOMATIC
304222ef94
X/Y plot support for switching checkpoints.
2022-09-17 13:49:36 +03:00
AUTOMATIC
247f58a5e7
add support for switching model checkpoints at runtime
2022-09-17 12:05:18 +03:00