Commit Graph

26 Commits

Author SHA1 Message Date
AUTOMATIC1111 eee46a5094 Merge pull request #14981 from wangshuai09/gpu_info_for_ascend (Add training support and change lspci for Ascend NPU) 2024-03-04 20:06:54 +03:00
Aarni Koskela e3fa46f26f Fix various typos with crate-ci/typos 2024-03-04 08:42:07 +02:00
wangshuai09 ba66cf8d69 update 2024-02-22 20:17:10 +08:00
AUTOMATIC1111 e2b19900ec add infotext entry for emphasis; put emphasis into a separate file, add an option to parse but still ignore emphasis 2024-02-11 09:39:51 +03:00
hako-mikan c3c88ca8b4 Update sd_hijack_clip.py 2024-02-10 00:18:08 +09:00
hako-mikan 6b3f7039b6 add option 2024-02-09 23:57:46 +09:00
w-e-w f56a309432 fix missing TI hash 2023-08-03 18:46:49 +09:00
AUTOMATIC1111 6f0abbb71a textual inversion support for SDXL 2023-07-29 15:15:06 +03:00
AUTOMATIC1111 89e6dfff71 repair SDXL 2023-07-26 15:07:56 +03:00
AUTOMATIC1111 8284ebd94c fix autograd which i broke for no good reason when implementing SDXL 2023-07-26 13:03:52 +03:00
AUTOMATIC1111 0198eaec45 Merge pull request #11757 from AUTOMATIC1111/sdxl (SD XL support) 2023-07-16 12:04:53 +03:00
AUTOMATIC1111 2b1bae0d75 add textual inversion hashes to infotext 2023-07-15 08:41:22 +03:00
AUTOMATIC1111 594c8e7b26 fix CLIP doing the unneeded normalization; revert SD2.1 back to use the original repo; add SDXL's force_zero_embeddings to negative prompt 2023-07-13 11:35:52 +03:00
AUTOMATIC1111 da464a3fb3 SDXL support 2023-07-12 23:52:43 +03:00
Aarni Koskela 51864790fd Simplify a bunch of `len(x) > 0`/`len(x) == 0` style expressions 2023-06-02 15:07:10 +03:00
AUTOMATIC 3ec7b705c7 suggestions and fixes from the PR 2023-05-10 21:21:32 +03:00
AUTOMATIC a5121e7a06 fixes for B007 2023-05-10 11:37:18 +03:00
AUTOMATIC 8e2aeee4a1 add BREAK keyword to end current text chunk and start the next 2023-01-15 22:29:53 +03:00
brkirch df3b31eb55 In-place operations can break gradient calculation 2023-01-07 07:04:59 -05:00
AUTOMATIC 1740c33547 more comments 2023-01-07 07:48:44 +03:00
AUTOMATIC 08066676a4 make it not break on empty inputs; thank you tarded, we are 2023-01-07 07:22:07 +03:00
AUTOMATIC 79e39fae61 CLIP hijack rework 2023-01-07 01:46:13 +03:00
AUTOMATIC 210449b374 fix 'RuntimeError: Expected all tensors to be on the same device' error preventing models from loading on lowvram/medvram. 2023-01-01 02:41:15 +03:00
AUTOMATIC f34c734172 alt-diffusion integration 2022-12-31 18:06:35 +03:00
zhaohu xing 52cc83d36b fix bugs (Signed-off-by: zhaohu xing <920232796@qq.com>) 2022-11-30 14:56:12 +08:00
AUTOMATIC ce6911158b Add support for Stable Diffusion 2.0 2022-11-26 16:10:46 +03:00
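
Commit df3b31eb55 above notes that in-place operations can break gradient calculation. The following is a minimal PyTorch sketch of that general failure mode, not code from this repository: once a tensor has been saved for the backward pass, editing it in place invalidates the saved value and backward() raises a RuntimeError, while the out-of-place equivalent works.

```python
import torch

# In-place edit after the tensor was saved for backward: autograd fails.
x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.sin()        # sin() saves y; its backward needs cos(y)
y += 1             # in-place edit invalidates the saved tensor
try:
    z.sum().backward()
except RuntimeError as err:
    print("autograd error:", err)

# Out-of-place equivalent keeps the graph intact.
x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.sin()
y = y + 1          # new tensor; the y saved by sin() is untouched
z.sum().backward()
print(x.grad)      # d/dx sum(sin(2x)) = 2 * cos(2) per element
```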
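
Commit 51864790fd above replaces `len(x) > 0` / `len(x) == 0` checks with plain truthiness tests. A small illustration of the idiom, with a hypothetical function not taken from the repository: empty Python containers are falsy, so the explicit length comparisons are redundant.

```python
# Empty list/str/dict/tuple are falsy, so truthiness replaces length checks.
def describe(tokens: list[str]) -> str:
    # Before: if len(tokens) == 0: ...
    if not tokens:
        return "empty prompt"
    # Before: if len(tokens) > 0: ...
    return f"{len(tokens)} tokens"


print(describe([]))            # -> "empty prompt"
print(describe(["a", "cat"]))  # -> "2 tokens"
```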