space-nuko
d132481058
Embed model merge metadata in .safetensors file
2023-04-02 17:41:55 -05:00
papuSpartan
a609bd56b4
Transition to using settings through UI instead of cmd line args. Added feature to only apply to hr-fix. Install package using requirements_versions.txt
2023-04-01 22:18:35 -05:00
papuSpartan
26ab018253
delay import
2023-04-01 03:31:22 -05:00
papuSpartan
56680cd84a
first
2023-04-01 02:07:08 -05:00
AUTOMATIC
1b63afbedc
sort hypernetworks and checkpoints by name
2023-03-28 20:03:57 +03:00
AUTOMATIC1111
f1db987e6a
Merge pull request #8958 from MrCheeze/variations-model
...
Add support for the unclip (Variations) models, unclip-h and unclip-l
2023-03-28 19:39:20 +03:00
MrCheeze
1f08600345
overwrite xformers in the unclip model config if not available
2023-03-26 16:55:29 -04:00
MrCheeze
8a34671fe9
Add support for the Variations models (unclip-h and unclip-l)
2023-03-25 21:03:07 -04:00
AUTOMATIC1111
956ed9a737
Merge pull request #8780 from Brawlence/master
...
Unload and re-load checkpoint to VRAM on request (API & Manual)
2023-03-25 12:03:26 +03:00
carat-johyun
92e173d414
fix variable typo
2023-03-23 14:28:08 +09:00
Φφ
4cbbb881ee
Unload checkpoints on Request
...
…to free VRAM.
New Action buttons in the settings to manually free and reload checkpoints, essentially
juggling models between RAM and VRAM.
2023-03-21 09:28:50 +03:00
AUTOMATIC
6a04a7f20f
fix an error loading Lora with empty values in metadata
2023-03-14 11:22:29 +03:00
AUTOMATIC
c19530f1a5
Add view metadata button for Lora cards.
2023-03-14 09:10:26 +03:00
w-e-w
014e7323f6
when exists
2023-02-19 20:49:07 +09:00
w-e-w
c77f01ff31
fix auto sd download issue
2023-02-19 20:37:40 +09:00
missionfloyd
c4ea16a03f
Add ".vae.ckpt" to ext_blacklist
2023-02-15 19:47:30 -07:00
missionfloyd
1615f786ee
Download model if none are found
2023-02-14 20:54:02 -07:00
AUTOMATIC
668d7e9b9a
make it possible to load SD1 checkpoints without CLIP
2023-02-05 11:21:00 +03:00
AUTOMATIC
3e0f9a7543
fix issue with switching back to checkpoint that had its checksum calculated during runtime mentioned in #7506
2023-02-04 15:23:16 +03:00
AUTOMATIC1111
c0e0b5844d
Merge pull request #7470 from cbrownstein-lambda/update-error-message-no-checkpoint
...
Update error message WRT missing checkpoint file
2023-02-04 12:07:12 +03:00
AUTOMATIC
81823407d9
add --no-hashing
2023-02-04 11:38:56 +03:00
Cody Brownstein
fb97acef63
Update error message WRT missing checkpoint file
...
The Safetensors format is also supported.
2023-02-01 14:51:06 -08:00
AUTOMATIC
f6b7768f84
support for searching subdirectory names for extra networks
2023-01-29 10:20:19 +03:00
AUTOMATIC
5d14f282c2
fixed a bug where after switching to a checkpoint with unknown hash, you'd get empty space instead of checkpoint name in UI
...
fixed a bug where if you update a selected checkpoint on disk and then restart the program, a different checkpoint loads, but the name shown is the old one.
2023-01-28 16:23:49 +03:00
Max Audron
5eee2ac398
add data-dir flag and set all user data directories based on it
2023-01-27 14:44:30 +01:00
AUTOMATIC
6f31d2210c
support detecting midas model
...
fix broken api for checkpoint list
2023-01-27 11:54:19 +03:00
AUTOMATIC
d2ac95fa7b
remove the need to place configs near models
2023-01-27 11:28:12 +03:00
AUTOMATIC1111
1574e96729
Merge pull request #6510 from brkirch/unet16-upcast-precision
...
Add upcast options, full precision sampling from float16 UNet and upcasting attention for inference using SD 2.1 models without --no-half
2023-01-25 19:12:29 +03:00
Kyle
ee0a0da324
Add instruct-pix2pix hijack
...
Allows loading instruct-pix2pix models via same method as inpainting models in sd_models.py and sd_hijack_ip2p.py
Adds ddpm_edit.py necessary for instruct-pix2pix
2023-01-25 08:53:23 -05:00
brkirch
84d9ce30cb
Add option for float32 sampling with float16 UNet
...
This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS) so in sd_models.py model.depth_model is removed for model.half().
2023-01-25 01:13:02 -05:00
AUTOMATIC
c1928cdd61
bring back short hashes to sd checkpoint selection
2023-01-19 18:58:08 +03:00
AUTOMATIC
a5bbcd2153
fix bug with "Ignore selected VAE for..." option completely disabling VAE selection
...
rework VAE resolving code to be more simple
2023-01-14 19:56:09 +03:00
AUTOMATIC
08c6f009a5
load hashes from cache for checkpoints that have them
...
add checkpoint hash to footer
2023-01-14 15:55:40 +03:00
AUTOMATIC
febd2b722e
update key to use with checkpoints' sha256 in cache
2023-01-14 13:37:55 +03:00
AUTOMATIC
f9ac3352cb
change hypernets to use sha256 hashes
2023-01-14 10:25:37 +03:00
AUTOMATIC
a95f135308
change hash to sha256
2023-01-14 09:56:59 +03:00
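The hash switch above replaces short hashes with full SHA-256 digests of the checkpoint files. A minimal stdlib sketch of hashing a large file in chunks (the function name is illustrative, not the actual webui code):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so multi-gigabyte checkpoints
    # never need to fit in memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```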
AUTOMATIC
4bd490727e
fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType
2023-01-11 18:54:13 +03:00
AUTOMATIC
1a23dc32ac
possible fix for fallback for fast model creation from config, attempt 2
2023-01-11 10:34:36 +03:00
AUTOMATIC
4fdacd31e4
possible fix for fallback for fast model creation from config
2023-01-11 10:24:56 +03:00
AUTOMATIC
0f8603a559
add support for transformers==4.25.1
...
add fallback for when quick model creation fails
2023-01-10 17:46:59 +03:00
AUTOMATIC
ce3f639ec8
add more stuff to ignore when creating model from config
...
prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 16:51:04 +03:00
AUTOMATIC
0c3feb202c
disable torch weight initialization and CLIP downloading/reading checkpoint to speedup creating sd model from config
2023-01-10 14:08:29 +03:00
Vladimir Mandic
552d7b90bf
allow model load if previous model failed
2023-01-09 18:34:26 -05:00
AUTOMATIC
642142556d
use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything
2023-01-04 15:09:53 +03:00
AUTOMATIC
68fbf4558f
Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed'
2023-01-04 14:53:03 +03:00
AUTOMATIC
0cd6399b8b
fix broken inpainting model
2023-01-04 14:29:13 +03:00
AUTOMATIC
8d8a05a3bb
find configs for models at runtime rather than when starting
2023-01-04 12:47:42 +03:00
AUTOMATIC
02d7abf514
helpful error message when trying to load 2.0 without config
...
failing to load model weights from settings won't break generation for currently loaded model anymore
2023-01-04 12:35:07 +03:00
AUTOMATIC
8f96f92899
call script callbacks for reloaded model after loading embeddings
2023-01-03 18:39:14 +03:00
AUTOMATIC
311354c0bb
fix the issue with training on SD2.0
2023-01-02 00:38:09 +03:00
Vladimir Mandic
f55ac33d44
validate textual inversion embeddings
2022-12-31 11:27:02 -05:00
Nicolas Patry
5ba04f9ec0
Attempting to solve slow loads for safetensors.
...
Fixes #5893
2022-12-27 11:27:19 +01:00
Yuval Aboulafia
3bf5591efe
fix F541 f-string without any placeholders
2022-12-24 21:35:29 +02:00
linuxmobile ( リナックス )
5a650055de
Removed length check in sd_model at line 115
...
Commit eba60a4 is what is causing this error; delete the length check in sd_model starting at line 115 and it's fine.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 09:25:35 -03:00
AUTOMATIC1111
eba60a42eb
Merge pull request #5627 from deanpress/patch-1
...
fix: fallback model_checkpoint if it's empty
2022-12-24 12:20:31 +03:00
MrCheeze
ec0a48826f
unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False)
2022-12-11 11:18:34 -05:00
Dean van Dugteren
59c6511494
fix: fallback model_checkpoint if it's empty
...
This fixes the following error when SD attempts to start with a deleted checkpoint:
```
Traceback (most recent call last):
File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
start()
File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
webui.webui()
File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
initialize()
File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
modules.sd_models.load_model()
File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
checkpoint_info = checkpoint_info or select_checkpoint()
File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
2022-12-11 17:08:51 +01:00
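The traceback above ends in `TypeError: unhashable type: 'list'` because a stale, non-string value was used as a dict key once the configured checkpoint file was gone. A hedged sketch of the fallback idea (hypothetical function, not the real `select_checkpoint`): if the configured checkpoint is missing or malformed, fall back to the first available one instead of crashing.

```python
def select_checkpoint(checkpoints, configured):
    # checkpoints: dict mapping title -> checkpoint info.
    # Only use `configured` as a key if it is a valid, present string;
    # anything else (deleted file, stale list-typed setting) falls back.
    if isinstance(configured, str) and configured in checkpoints:
        return checkpoints[configured]
    return next(iter(checkpoints.values()), None)
```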
MrCheeze
bd81a09eac
fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model
2022-12-10 11:29:26 -05:00
AUTOMATIC1111
ec5e072124
Merge pull request #4841 from R-N/vae-fix-none
...
Fix None option of VAE selector
2022-12-10 09:58:20 +03:00
Jay Smith
1ed4f0e228
Depth2img model support
2022-12-08 20:50:08 -06:00
AUTOMATIC
0376da180c
make it possible to save nai model using safetensors
2022-11-28 08:39:59 +03:00
AUTOMATIC
dac9b6f15d
add safetensors support for model merging #4869
2022-11-27 15:51:29 +03:00
AUTOMATIC
6074175faa
add safetensors to requirements
2022-11-27 14:46:40 +03:00
AUTOMATIC1111
f108782e30
Merge pull request #4930 from Narsil/allow_to_load_safetensors_file
...
Supporting `*.safetensors` format.
2022-11-27 14:36:55 +03:00
MrCheeze
1e506657e1
no-half support for SD 2.0
2022-11-26 13:28:44 -05:00
Nicolas Patry
0efffbb407
Supporting *.safetensors format.
...
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
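The safety claim above comes from the file format itself: a `.safetensors` file begins with an 8-byte little-endian length followed by that many bytes of JSON metadata (tensor dtypes, shapes, byte offsets), with raw tensor data after it, so loading never unpickles arbitrary code the way `.ckpt` files can. A stdlib-only sketch of reading that header (the real loader uses the `safetensors` library, not this):

```python
import json
import struct

def read_safetensors_header(fp):
    # First 8 bytes: little-endian uint64 giving the JSON header size.
    (n,) = struct.unpack("<Q", fp.read(8))
    # Next n bytes: JSON describing each tensor; raw data follows after.
    return json.loads(fp.read(n))
```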
Muhammad Rizqi Nur
8662b5e57f
Merge branch 'a1111' into vae-fix-none
2022-11-19 16:38:21 +07:00
Muhammad Rizqi Nur
2c5ca706a7
Remove no longer necessary parts and add vae_file safeguard
2022-11-19 12:01:41 +07:00
Muhammad Rizqi Nur
c7be83bf02
Misc
2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur
abc1e79a5d
Fix base VAE caching being done after loading VAE, also add safeguard
2022-11-19 11:41:41 +07:00
cluder
eebf49592a
restore #4035 behavior
...
- if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
2022-11-09 07:17:09 +01:00
cluder
3b51d239ac
- do not use ckpt cache, if disabled
...
- cache model after it has been loaded from file
2022-11-09 05:43:57 +01:00
AUTOMATIC
99043f3360
fix one of previous merges breaking the program
2022-11-04 11:20:42 +03:00
AUTOMATIC1111
24fc05cf57
Merge branch 'master' into fix-ckpt-cache
2022-11-04 10:54:17 +03:00
digburn
3780ad3ad8
fix: loading models without vae from cache
2022-11-04 00:43:00 +00:00
Muhammad Rizqi Nur
fb3b564801
Merge branch 'master' into fix-ckpt-cache
2022-11-02 20:53:41 +07:00
AUTOMATIC
f2a5cbe6f5
fix #3986 breaking --no-half-vae
2022-11-02 14:41:29 +03:00
Muhammad Rizqi Nur
056f06d373
Reload VAE without reloading sd checkpoint
2022-11-02 12:51:46 +07:00
Muhammad Rizqi Nur
f8c6468d42
Merge branch 'master' into vae-picker
2022-11-02 00:25:08 +07:00
Jairo Correa
af758e97fa
Unload sd_model before loading the other
2022-11-01 04:01:49 -03:00
Muhammad Rizqi Nur
bf7a699845
Fix #4035 for real now
2022-10-31 16:27:27 +07:00
Muhammad Rizqi Nur
36966e3200
Fix #4035
2022-10-31 15:38:58 +07:00
Muhammad Rizqi Nur
726769da35
Checkpoint cache by combination key of checkpoint and vae
2022-10-31 15:22:03 +07:00
Muhammad Rizqi Nur
cb31abcf58
Settings to select VAE
2022-10-30 21:54:31 +07:00
AUTOMATIC1111
9553a7e071
Merge pull request #3818 from jwatzman/master
...
Reduce peak memory usage when changing models
2022-10-29 09:16:00 +03:00
Antonio
5d5dc64064
Natural sorting for dropdown checkpoint list
...
Example:

Before                      After
11.ckpt                     11.ckpt
ab.ckpt                     ab.ckpt
ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
ade_step_1000.ckpt          ade_step_500.ckpt
ade_step_1500.ckpt          ade_step_1000.ckpt
ade_step_2000.ckpt          ade_step_1500.ckpt
ade_step_2500.ckpt          ade_step_2000.ckpt
ade_step_3000.ckpt          ade_step_2500.ckpt
ade_step_500.ckpt           ade_step_3000.ckpt
atp_step_5500.ckpt          atp_step_5500.ckpt
model1.ckpt                 model1.ckpt
model10.ckpt                model10.ckpt
model1000.ckpt              model33.ckpt
model33.ckpt                model50.ckpt
model400.ckpt               model400.ckpt
model50.ckpt                model1000.ckpt
moo44.ckpt                  moo44.ckpt
v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
v1-5-pruned.ckpt            v1-5-pruned.ckpt
v1-5-vae.ckpt               v1-5-vae.ckpt
2022-10-28 05:49:39 +02:00
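The ordering shown above is what a standard natural-sort key produces: numeric runs inside a filename compare as integers rather than character by character. A minimal sketch (not the exact webui implementation):

```python
import re

def natural_key(name):
    # Split "ade_step_1000.ckpt" into ["ade_step_", 1000, ".ckpt"] so
    # that 500 sorts before 1000 instead of "1000" before "500".
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

names = ["model1000.ckpt", "model33.ckpt", "model50.ckpt"]
print(sorted(names, key=natural_key))
# → ['model33.ckpt', 'model50.ckpt', 'model1000.ckpt']
```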
Josh Watzman
b50ff4f4e4
Reduce peak memory usage when changing models
...
A few tweaks to reduce peak memory usage, the biggest being that if we
aren't using the checkpoint cache, we shouldn't duplicate the model
state dict just to immediately throw it away.
On my machine with 16GB of RAM, this change means I can typically change
models, whereas before it would typically OOM.
2022-10-27 22:01:06 +01:00
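The tweak described above can be illustrated with plain dictionaries: only duplicate the state dict when a copy actually has to survive in the cache (names here are illustrative, not the real webui functions):

```python
def get_state_dict(checkpoint, cache, use_cache):
    sd = checkpoint["state_dict"]
    if use_cache:
        # Keep a pristine copy in the cache and hand the model its own
        # copy, so the live model can mutate its dict freely.
        cache[checkpoint["title"]] = dict(sd)
        return dict(sd)
    # Cache disabled: hand back the dict itself instead of duplicating
    # a multi-gigabyte mapping only to throw the original away.
    return sd
```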
AUTOMATIC
321bacc6a9
call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that
2022-10-22 20:15:12 +03:00
MrCheeze
0df94d3fcf
fix aesthetic gradients doing nothing after loading a different model
2022-10-22 20:14:18 +03:00
AUTOMATIC
2b91251637
removed aesthetic gradients as built-in
...
added support for extensions
2022-10-22 12:23:58 +03:00
AUTOMATIC
ac0aa2b18e
loading SD VAE, see PR #3303
2022-10-21 17:35:51 +03:00
AUTOMATIC
df57064093
do not load aesthetic clip model until it's needed
...
add refresh button for aesthetic embeddings
add aesthetic params to images' infotext
2022-10-21 16:10:51 +03:00
AUTOMATIC
7d6b388d71
Merge branch 'ae'
2022-10-21 13:35:01 +03:00
random_thoughtss
49533eed9e
XY grid correctly re-assigns model when config changes
2022-10-20 16:01:27 -07:00
random_thoughtss
708c3a7bd8
Added PLMS hijack and made sure to always replace methods
2022-10-20 13:28:43 -07:00
random_thoughtss
8e7097d06a
Added support for RunwayML inpainting model
2022-10-19 13:47:45 -07:00
AUTOMATIC
f894dd552f
fix for broken checkpoint merger
2022-10-19 12:45:42 +03:00
MalumaDev
2362d5f00e
Merge branch 'master' into test_resolve_conflicts
2022-10-19 10:22:39 +02:00
AUTOMATIC
10aca1ca3e
more careful loading of model weights (eliminates some issues with checkpoints that have weird cond_stage_model layer names)
2022-10-19 08:42:22 +03:00
MalumaDev
9324cdaa31
UI fix, reorganization of the code
2022-10-16 17:53:56 +02:00