w-e-w
a2e213bc7b
separate Extra options
2023-08-14 18:50:22 +09:00
AUTOMATIC1111
6bfd4dfecf
add second_order to samplers that mistakenly didn't have it
2023-08-14 12:07:38 +03:00
Robert Barron
99ab3d43a7
hires prompt timeline: merge to latest, slightly simplify diff
2023-08-14 00:43:27 -07:00
AUTOMATIC1111
353c876172
fix API always using -1 as seed
2023-08-14 10:43:18 +03:00
Robert Barron
d61e31bae6
Merge remote-tracking branch 'auto1111/dev' into shared-hires-prompt-test
2023-08-14 00:35:17 -07:00
AUTOMATIC1111
f3b96d4998
return seed controls UI to how it was before
2023-08-14 10:22:52 +03:00
AUTOMATIC1111
abbecb3e73
further repair the /docs page to not break styles with the attempted fix
2023-08-14 10:15:10 +03:00
whitebell
b39d9364d8
Fix typo in shared_options.py
unperdictable -> unpredictable
2023-08-14 15:58:38 +09:00
AUTOMATIC1111
c7c16f805c
repair /docs page
2023-08-14 09:49:51 +03:00
AUTOMATIC1111
f37cc5f5e1
Merge pull request #12542 from AUTOMATIC1111/res-sampler
Add RES sampler and reorder the sampler list
2023-08-14 09:02:10 +03:00
AUTOMATIC1111
3a4bee1096
Merge pull request #12543 from AUTOMATIC1111/extra-norm-module
Fix MHA error with ex_bias and support ex_bias for layers which don't have bias
2023-08-14 09:01:34 +03:00
AUTOMATIC1111
c1a31ec9f7
revert to applying mask before denoising for k-diffusion, like it was before
2023-08-14 08:59:15 +03:00
Kohaku-Blueleaf
f70ded8936
remove "if bias exist" check
2023-08-14 13:53:40 +08:00
Kohaku-Blueleaf
aa26f8eb40
Put frequently used sampler back
2023-08-14 13:50:53 +08:00
AUTOMATIC1111
cda2f0a162
make on_before_component/on_after_component possible earlier
2023-08-14 08:49:39 +03:00
AUTOMATIC1111
aeb76ef174
repair DDIM/PLMS/UniPC batches
2023-08-14 08:49:02 +03:00
Kohaku-Blueleaf
e7c03ccdce
Merge branch 'dev' into extra-norm-module
2023-08-14 13:34:51 +08:00
Kohaku-Blueleaf
d9cc27cb29
Fix MHA updown err and support ex-bias for no-bias layer
2023-08-14 13:32:51 +08:00
Kohaku-Blueleaf
0ea61a74be
add res(dpmdd 2m sde heun) and reorder the sampler list
2023-08-14 11:46:36 +08:00
AUTOMATIC1111
007ecfbb29
also use setup callback for the refiner instead of before_process
2023-08-13 21:01:13 +03:00
AUTOMATIC1111
9cd0475c08
Merge pull request #12526 from brkirch/mps-adjust-sub-quad
Fixes for `git checkout`, MPS/macOS fixes and optimizations
2023-08-13 20:28:49 +03:00
AUTOMATIC1111
8452708560
Merge pull request #12530 from eltociear/eltociear-patch-1
Fix typo in launch_utils.py
2023-08-13 20:27:17 +03:00
AUTOMATIC1111
16781ba09a
fix 2 for git code botched by previous PRs
2023-08-13 20:15:20 +03:00
Ikko Eltociear Ashimine
09ff5b5416
Fix typo in launch_utils.py
existance -> existence
2023-08-14 01:03:49 +09:00
AUTOMATIC1111
f093c9d39d
fix broken XYZ plot seeds
add new callback for scripts to be used before processing
2023-08-13 17:31:10 +03:00
brkirch
2035cbbd5d
Fix DDIM and PLMS samplers on MPS
2023-08-13 10:07:52 -04:00
brkirch
5df535b7c2
Remove duplicate code for torchsde randn
2023-08-13 10:07:52 -04:00
brkirch
232c931f40
Mac k-diffusion workarounds are no longer needed
2023-08-13 10:07:52 -04:00
brkirch
f4dbb0c820
Change the repositories origin URLs when necessary
2023-08-13 10:07:52 -04:00
brkirch
9058620cec
`git checkout` with commit hash
2023-08-13 10:07:14 -04:00
brkirch
2489252099
`torch.empty` can create issues; use `torch.zeros`
For MPS, using a tensor created with `torch.empty()` can cause `torch.baddbmm()` to include NaNs in the tensor it returns, even though `beta=0`. However, with a tensor of shape [1,1,1], there should be a negligible performance difference between `torch.empty()` and `torch.zeros()` anyway, so it's better to just use `torch.zeros()` for this and avoid unnecessarily creating issues.
2023-08-13 10:06:25 -04:00
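The commit message above describes the MPS workaround concretely; a minimal sketch of the pattern it fixes follows (the function name and shapes here are illustrative, not the webui's actual code). With `beta=0` the `input` tensor's values are mathematically ignored, but an uninitialized tensor from `torch.empty()` could still leak NaNs into `torch.baddbmm()`'s output on MPS, so a zeroed `[1, 1, 1]` placeholder is used instead.

```python
import torch

def attention_scores(query, key, scale):
    # A [1, 1, 1] tensor broadcasts against the batched matmul result, so
    # zeroing it costs essentially nothing compared to torch.empty().
    placeholder = torch.zeros(1, 1, 1, dtype=query.dtype, device=query.device)
    # beta=0 means the placeholder's contents are ignored mathematically;
    # using torch.zeros() just guarantees no uninitialized memory (NaNs)
    # can influence the result on backends like MPS.
    return torch.baddbmm(placeholder, query, key.transpose(-1, -2),
                         beta=0, alpha=scale)

q = torch.randn(2, 4, 8)
k = torch.randn(2, 4, 8)
scores = attention_scores(q, k, scale=8 ** -0.5)
print(scores.shape)  # torch.Size([2, 4, 4])
```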
brkirch
87dd685224
Make sub-quadratic the default for MPS
2023-08-13 10:06:25 -04:00
brkirch
abfa4ad8bc
Use fixed size for sub-quadratic chunking on MPS
Even if this causes chunks to be much smaller, performance isn't significantly impacted. This will usually reduce memory usage but should also help with poor performance when free memory is low.
2023-08-13 10:06:25 -04:00
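To illustrate the kind of chunking the commit above tunes, here is a minimal sketch of attention computed in fixed-size query chunks (the chunk size and function name are assumptions for illustration; the webui's sub-quadratic implementation also chunks the key/value sequence, which this sketch omits).

```python
import torch

def chunked_attention(q, k, v, chunk_size=128):
    # Process queries in fixed-size chunks so the [chunk, seq] score matrix
    # stays small regardless of how much free memory is available.
    scale = q.shape[-1] ** -0.5
    out = []
    for i in range(0, q.shape[1], chunk_size):
        q_chunk = q[:, i:i + chunk_size]
        scores = torch.softmax(q_chunk @ k.transpose(-1, -2) * scale, dim=-1)
        out.append(scores @ v)
    return torch.cat(out, dim=1)

# Chunking over queries is exact: each softmax row only depends on one query.
q = torch.randn(1, 10, 8)
k = torch.randn(1, 6, 8)
v = torch.randn(1, 6, 8)
full = torch.softmax(q @ k.transpose(-1, -2) * (8 ** -0.5), dim=-1) @ v
chunked = chunked_attention(q, k, v, chunk_size=4)
print(torch.allclose(chunked, full, atol=1e-5))  # True
```

Because each query row's softmax is independent, a smaller fixed chunk trades a little loop overhead for a bounded peak memory footprint, which matches the commit's observation that performance is not significantly impacted.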
AUTOMATIC1111
3163d1269a
fix for the broken run_git calls
2023-08-13 16:51:21 +03:00
AUTOMATIC1111
1c6ca09992
Merge pull request #12510 from catboxanon/feat/extnet/hashes
Support search and display of hashes for all extra network items
2023-08-13 16:46:32 +03:00
AUTOMATIC1111
d73db17ee3
Merge pull request #12515 from catboxanon/fix/gc1
Clear sampler and garbage collect before decoding images to reduce VRAM
2023-08-13 16:45:38 +03:00
AUTOMATIC1111
127ab9114f
Merge pull request #12514 from catboxanon/feat/batch-encode
Encode batch items individually to significantly reduce VRAM
2023-08-13 16:41:07 +03:00
AUTOMATIC1111
d53f3b5596
Merge pull request #12520 from catboxanon/eta
Update description of eta setting
2023-08-13 16:40:17 +03:00
AUTOMATIC1111
d41a5bb97d
Merge pull request #12521 from catboxanon/feat/more-s-noise
Add `s_noise` param to more samplers
2023-08-13 16:39:25 +03:00
AUTOMATIC1111
551d2fabcc
Merge pull request #12522 from catboxanon/fix/extra_params
Restore `extra_params` that was lost in merge
2023-08-13 16:38:27 +03:00
AUTOMATIC1111
db40d26d08
linter
2023-08-13 16:38:10 +03:00
catboxanon
525b55b1e9
Restore extra_params that was lost in merge
2023-08-13 09:08:34 -04:00
catboxanon
ce0829d711
Merge branch 'feat/dpmpp3msde' into feat/more-s-noise
2023-08-13 08:46:58 -04:00
catboxanon
ac790fc49b
Discard penultimate sigma for DPM-Solver++(3M) SDE
2023-08-13 08:46:07 -04:00
catboxanon
f4757032e7
Fix s_noise description
2023-08-13 08:24:28 -04:00
catboxanon
d1a70c3f05
Add s_noise param to more samplers
2023-08-13 08:22:24 -04:00
AUTOMATIC1111
d8419762c1
Lora: output warnings in UI rather than fail for unfitting loras; switch to logging for error output in console
2023-08-13 15:07:37 +03:00
catboxanon
60a7405165
Update description of eta setting
2023-08-13 08:06:40 -04:00
catboxanon
1ae9dacb4b
Add DPM-Solver++(3M) SDE
2023-08-13 07:57:29 -04:00
catboxanon
69f49c8d39
Clear sampler before decoding images
More significant VRAM reduction.
2023-08-13 04:40:34 -04:00