diff --git a/README.md b/README.md
index 24f8e7998..b67e2296a 100644
--- a/README.md
+++ b/README.md
@@ -13,9 +13,9 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Prompt Matrix
 - Stable Diffusion Upscale
 - Attention, specify parts of text that the model should pay more attention to
-    - a man in a ((tuxedo)) - will pay more attention to tuxedo
-    - a man in a (tuxedo:1.21) - alternative syntax
-    - select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
+    - a man in a `((tuxedo))` - will pay more attention to tuxedo
+    - a man in a `(tuxedo:1.21)` - alternative syntax
+    - select text and press `Ctrl+Up` or `Ctrl+Down` to automatically adjust attention to selected text (code contributed by anonymous user)
 - Loopback, run img2img processing multiple times
 - X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
 - Textual Inversion
@@ -28,7 +28,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 - CodeFormer, face restoration tool as an alternative to GFPGAN
 - RealESRGAN, neural network upscaler
 - ESRGAN, neural network upscaler with a lot of third party models
-- SwinIR and Swin2SR([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
+- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
 - LDSR, Latent diffusion super resolution upscaling
 - Resizing aspect ratio options
 - Sampling method selection
@@ -46,7 +46,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 - drag and drop an image/text-parameters to promptbox
 - Read Generation Parameters Button, loads parameters in promptbox to UI
 - Settings page
-- Running arbitrary python code from UI (must run with --allow-code to enable)
+- Running arbitrary python code from UI (must run with `--allow-code` to enable)
 - Mouseover hints for most UI elements
 - Possible to change defaults/mix/max/step values for UI elements via text config
 - Tiling support, a checkbox to create images that can be tiled like textures
@@ -69,7 +69,7 @@ A browser interface based on Gradio library for Stable Diffusion.
     - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
 - No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
 - DeepDanbooru integration, creates danbooru style tags for anime prompts
-- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args)
+- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
 - via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
 - Generate forever option
 - Training tab
@@ -78,11 +78,11 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Clip skip
 - Hypernetworks
 - Loras (same as Hypernetworks but more pretty)
-- A sparate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt. 
+- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
 - Can select to load a different VAE from settings screen
 - Estimated completion time in progress bar
 - API
-- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML.
+- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
 - via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
 - [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
 - [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
@@ -91,7 +91,6 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Eased resolution restriction: generated image's domension must be a multiple of 8 rather than 64
 - Now with a license!
 - Reorder elements in the UI from settings screen
--
 
 ## Installation and Running
 Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
@@ -101,7 +100,7 @@ Alternatively, use online services (like Google Colab):
 - [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
 
 ### Automatic Installation on Windows
-1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH"
+1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH".
 2. Install [git](https://git-scm.com/download/win).
 3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
 4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
@@ -159,4 +158,4 @@
 - Security advice - RyotaK
 - UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
 - Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
-- (You)
+- (You)
\ No newline at end of file
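A note on the emphasis syntax reformatted in the README hunk above: `(text)` multiplies the attention given to `text` by 1.1, stacking per parenthesis, and `(text:1.21)` sets the multiplier explicitly. Below is a toy Python sketch of the idea, not the webui's real parser (that is parse_prompt_attention() in modules/prompt_parser.py, which also handles nesting, `[...]` de-emphasis and escapes):

    import re

    def parse_attention(prompt):
        # Illustration only: split a prompt into (text, weight) pairs.
        result = []
        for m in re.finditer(r"\(([^():]+):([\d.]+)\)|\(([^()]+)\)|([^()]+)", prompt):
            explicit_text, weight, paren_text, plain = m.groups()
            if explicit_text is not None:
                result.append((explicit_text, float(weight)))  # (tuxedo:1.21)
            elif paren_text is not None:
                result.append((paren_text, 1.1))               # (tuxedo) -> 1.1x
            else:
                result.append((plain, 1.0))                    # unweighted text
        return result

    print(parse_attention("a man in a (tuxedo:1.21)"))
    # [('a man in a ', 1.0), ('tuxedo', 1.21)]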
diff --git a/extensions-builtin/Lora/lora.py b/extensions-builtin/Lora/lora.py
index 7c371deb7..d3eb0d3bc 100644
--- a/extensions-builtin/Lora/lora.py
+++ b/extensions-builtin/Lora/lora.py
@@ -2,20 +2,34 @@ import glob
 import os
 import re
 import torch
+from typing import Union
 
 from modules import shared, devices, sd_models, errors
 
 metadata_tags_order = {"ss_sd_model_name": 1, "ss_resolution": 2, "ss_clip_skip": 3, "ss_num_train_images": 10, "ss_tag_frequency": 20}
 
 re_digits = re.compile(r"\d+")
-re_unet_down_blocks = re.compile(r"lora_unet_down_blocks_(\d+)_attentions_(\d+)_(.+)")
-re_unet_mid_blocks = re.compile(r"lora_unet_mid_block_attentions_(\d+)_(.+)")
-re_unet_up_blocks = re.compile(r"lora_unet_up_blocks_(\d+)_attentions_(\d+)_(.+)")
-re_text_block = re.compile(r"lora_te_text_model_encoder_layers_(\d+)_(.+)")
+re_x_proj = re.compile(r"(.*)_([qkv]_proj)$")
+re_compiled = {}
+
+suffix_conversion = {
+    "attentions": {},
+    "resnets": {
+        "conv1": "in_layers_2",
+        "conv2": "out_layers_3",
+        "time_emb_proj": "emb_layers_1",
+        "conv_shortcut": "skip_connection",
+    }
+}
 
 
-def convert_diffusers_name_to_compvis(key):
-    def match(match_list, regex):
+def convert_diffusers_name_to_compvis(key, is_sd2):
+    def match(match_list, regex_text):
+        regex = re_compiled.get(regex_text)
+        if regex is None:
+            regex = re.compile(regex_text)
+            re_compiled[regex_text] = regex
+
         r = re.match(regex, key)
         if not r:
             return False
@@ -26,16 +40,33 @@ def convert_diffusers_name_to_compvis(key):
 
     m = []
 
-    if match(m, re_unet_down_blocks):
-        return f"diffusion_model_input_blocks_{1 + m[0] * 3 + m[1]}_1_{m[2]}"
+    if match(m, r"lora_unet_down_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"):
+        suffix = suffix_conversion.get(m[1], {}).get(m[3], m[3])
+        return f"diffusion_model_input_blocks_{1 + m[0] * 3 + m[2]}_{1 if m[1] == 'attentions' else 0}_{suffix}"
 
-    if match(m, re_unet_mid_blocks):
-        return f"diffusion_model_middle_block_1_{m[1]}"
+    if match(m, r"lora_unet_mid_block_(attentions|resnets)_(\d+)_(.+)"):
+        suffix = suffix_conversion.get(m[0], {}).get(m[2], m[2])
+        return f"diffusion_model_middle_block_{1 if m[0] == 'attentions' else m[1] * 2}_{suffix}"
 
-    if match(m, re_unet_up_blocks):
-        return f"diffusion_model_output_blocks_{m[0] * 3 + m[1]}_1_{m[2]}"
+    if match(m, r"lora_unet_up_blocks_(\d+)_(attentions|resnets)_(\d+)_(.+)"):
+        suffix = suffix_conversion.get(m[1], {}).get(m[3], m[3])
+        return f"diffusion_model_output_blocks_{m[0] * 3 + m[2]}_{1 if m[1] == 'attentions' else 0}_{suffix}"
+
+    if match(m, r"lora_unet_down_blocks_(\d+)_downsamplers_0_conv"):
+        return f"diffusion_model_input_blocks_{3 + m[0] * 3}_0_op"
+
+    if match(m, r"lora_unet_up_blocks_(\d+)_upsamplers_0_conv"):
+        return f"diffusion_model_output_blocks_{2 + m[0] * 3}_{2 if m[0]>0 else 1}_conv"
+
+    if match(m, r"lora_te_text_model_encoder_layers_(\d+)_(.+)"):
+        if is_sd2:
+            if 'mlp_fc1' in m[1]:
+                return f"model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc1', 'mlp_c_fc')}"
+            elif 'mlp_fc2' in m[1]:
+                return f"model_transformer_resblocks_{m[0]}_{m[1].replace('mlp_fc2', 'mlp_c_proj')}"
+            else:
+                return f"model_transformer_resblocks_{m[0]}_{m[1].replace('self_attn', 'attn')}"
 
-    if match(m, re_text_block):
         return f"transformer_text_model_encoder_layers_{m[0]}_{m[1]}"
 
     return key
@@ -101,15 +132,22 @@ def load_lora(name, filename):
 
     sd = sd_models.read_state_dict(filename)
 
-    keys_failed_to_match = []
+    keys_failed_to_match = {}
+    is_sd2 = 'model_transformer_resblocks' in shared.sd_model.lora_layer_mapping
 
     for key_diffusers, weight in sd.items():
-        fullkey = convert_diffusers_name_to_compvis(key_diffusers)
-        key, lora_key = fullkey.split(".", 1)
+        key_diffusers_without_lora_parts, lora_key = key_diffusers.split(".", 1)
+        key = convert_diffusers_name_to_compvis(key_diffusers_without_lora_parts, is_sd2)
 
         sd_module = shared.sd_model.lora_layer_mapping.get(key, None)
+
         if sd_module is None:
-            keys_failed_to_match.append(key_diffusers)
+            m = re_x_proj.match(key)
+            if m:
+                sd_module = shared.sd_model.lora_layer_mapping.get(m.group(1), None)
+
+        if sd_module is None:
+            keys_failed_to_match[key_diffusers] = key
             continue
 
         lora_module = lora.modules.get(key, None)
@@ -123,15 +161,21 @@ def load_lora(name, filename):
 
         if type(sd_module) == torch.nn.Linear:
             module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
+        elif type(sd_module) == torch.nn.modules.linear.NonDynamicallyQuantizableLinear:
+            module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
+        elif type(sd_module) == torch.nn.MultiheadAttention:
+            module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
         elif type(sd_module) == torch.nn.Conv2d:
            module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], (1, 1), bias=False)
         else:
+            print(f'Lora layer {key_diffusers} matched a layer with unsupported type: {type(sd_module).__name__}')
+            continue
             assert False, f'Lora layer {key_diffusers} matched a layer with unsupported type: {type(sd_module).__name__}'
 
         with torch.no_grad():
             module.weight.copy_(weight)
 
-        module.to(device=devices.device, dtype=devices.dtype)
+        module.to(device=devices.cpu, dtype=devices.dtype)
 
         if lora_key == "lora_up.weight":
             lora_module.up = module
@@ -177,29 +221,120 @@ def load_loras(names, multipliers=None):
         loaded_loras.append(lora)
 
 
-def lora_forward(module, input, res):
-    input = devices.cond_cast_unet(input)
-    if len(loaded_loras) == 0:
-        return res
+def lora_calc_updown(lora, module, target):
+    with torch.no_grad():
+        up = module.up.weight.to(target.device, dtype=target.dtype)
+        down = module.down.weight.to(target.device, dtype=target.dtype)
 
-    lora_layer_name = getattr(module, 'lora_layer_name', None)
-    for lora in loaded_loras:
-        module = lora.modules.get(lora_layer_name, None)
-        if module is not None:
-            if shared.opts.lora_apply_to_outputs and res.shape == input.shape:
-                res = res + module.up(module.down(res)) * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
+        if up.shape[2:] == (1, 1) and down.shape[2:] == (1, 1):
+            updown = (up.squeeze(2).squeeze(2) @ down.squeeze(2).squeeze(2)).unsqueeze(2).unsqueeze(3)
+        else:
+            updown = up @ down
+
+        updown = updown * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
+
+        return updown
+
+
+def lora_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
+    """
+    Applies the currently selected set of Loras to the weights of torch layer self.
+    If weights already have this particular set of loras applied, does nothing.
+    If not, restores original weights from backup and alters weights according to loras.
+    """
+
+    lora_layer_name = getattr(self, 'lora_layer_name', None)
+    if lora_layer_name is None:
+        return
+
+    current_names = getattr(self, "lora_current_names", ())
+    wanted_names = tuple((x.name, x.multiplier) for x in loaded_loras)
+
+    weights_backup = getattr(self, "lora_weights_backup", None)
+    if weights_backup is None:
+        if isinstance(self, torch.nn.MultiheadAttention):
+            weights_backup = (self.in_proj_weight.to(devices.cpu, copy=True), self.out_proj.weight.to(devices.cpu, copy=True))
+        else:
+            weights_backup = self.weight.to(devices.cpu, copy=True)
+
+        self.lora_weights_backup = weights_backup
+
+    if current_names != wanted_names:
+        if weights_backup is not None:
+            if isinstance(self, torch.nn.MultiheadAttention):
+                self.in_proj_weight.copy_(weights_backup[0])
+                self.out_proj.weight.copy_(weights_backup[1])
             else:
-                res = res + module.up(module.down(input)) * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
+                self.weight.copy_(weights_backup)
 
-    return res
+        for lora in loaded_loras:
+            module = lora.modules.get(lora_layer_name, None)
+            if module is not None and hasattr(self, 'weight'):
+                self.weight += lora_calc_updown(lora, module, self.weight)
+                continue
+
+            module_q = lora.modules.get(lora_layer_name + "_q_proj", None)
+            module_k = lora.modules.get(lora_layer_name + "_k_proj", None)
+            module_v = lora.modules.get(lora_layer_name + "_v_proj", None)
+            module_out = lora.modules.get(lora_layer_name + "_out_proj", None)
+
+            if isinstance(self, torch.nn.MultiheadAttention) and module_q and module_k and module_v and module_out:
+                updown_q = lora_calc_updown(lora, module_q, self.in_proj_weight)
+                updown_k = lora_calc_updown(lora, module_k, self.in_proj_weight)
+                updown_v = lora_calc_updown(lora, module_v, self.in_proj_weight)
+                updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
+
+                self.in_proj_weight += updown_qkv
+                self.out_proj.weight += lora_calc_updown(lora, module_out, self.out_proj.weight)
+                continue
+
+            if module is None:
+                continue
+
+            print(f'failed to calculate lora weights for layer {lora_layer_name}')
+
+        setattr(self, "lora_current_names", wanted_names)
+
+
+def lora_reset_cached_weight(self: Union[torch.nn.Conv2d, torch.nn.Linear]):
+    setattr(self, "lora_current_names", ())
+    setattr(self, "lora_weights_backup", None)
 
 
 def lora_Linear_forward(self, input):
-    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
+    lora_apply_weights(self)
+
+    return torch.nn.Linear_forward_before_lora(self, input)
+
+
+def lora_Linear_load_state_dict(self, *args, **kwargs):
+    lora_reset_cached_weight(self)
+
+    return torch.nn.Linear_load_state_dict_before_lora(self, *args, **kwargs)
 
 
 def lora_Conv2d_forward(self, input):
-    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
+    lora_apply_weights(self)
+
+    return torch.nn.Conv2d_forward_before_lora(self, input)
+
+
+def lora_Conv2d_load_state_dict(self, *args, **kwargs):
+    lora_reset_cached_weight(self)
+
+    return torch.nn.Conv2d_load_state_dict_before_lora(self, *args, **kwargs)
+
+
+def lora_MultiheadAttention_forward(self, *args, **kwargs):
+    lora_apply_weights(self)
+
+    return torch.nn.MultiheadAttention_forward_before_lora(self, *args, **kwargs)
+
+
+def lora_MultiheadAttention_load_state_dict(self, *args, **kwargs):
+    lora_reset_cached_weight(self)
+
+    return torch.nn.MultiheadAttention_load_state_dict_before_lora(self, *args, **kwargs)
 
 
 def list_available_loras():
@@ -212,7 +347,7 @@ def list_available_loras():
                  glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.safetensors'), recursive=True) + \
                  glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.ckpt'), recursive=True)
 
-    for filename in sorted(candidates):
+    for filename in sorted(candidates, key=str.lower):
         if os.path.isdir(filename):
             continue
diff --git a/extensions-builtin/Lora/scripts/lora_script.py b/extensions-builtin/Lora/scripts/lora_script.py
index 2e860160e..0adab2254 100644
--- a/extensions-builtin/Lora/scripts/lora_script.py
+++ b/extensions-builtin/Lora/scripts/lora_script.py
@@ -9,7 +9,11 @@ from modules import script_callbacks, ui_extra_networks, extra_networks, shared
 
 def unload():
     torch.nn.Linear.forward = torch.nn.Linear_forward_before_lora
+    torch.nn.Linear._load_from_state_dict = torch.nn.Linear_load_state_dict_before_lora
     torch.nn.Conv2d.forward = torch.nn.Conv2d_forward_before_lora
+    torch.nn.Conv2d._load_from_state_dict = torch.nn.Conv2d_load_state_dict_before_lora
+    torch.nn.MultiheadAttention.forward = torch.nn.MultiheadAttention_forward_before_lora
+    torch.nn.MultiheadAttention._load_from_state_dict = torch.nn.MultiheadAttention_load_state_dict_before_lora
 
 
 def before_ui():
@@ -20,11 +24,27 @@ def before_ui():
 
 if not hasattr(torch.nn, 'Linear_forward_before_lora'):
     torch.nn.Linear_forward_before_lora = torch.nn.Linear.forward
 
+if not hasattr(torch.nn, 'Linear_load_state_dict_before_lora'):
+    torch.nn.Linear_load_state_dict_before_lora = torch.nn.Linear._load_from_state_dict
+
 if not hasattr(torch.nn, 'Conv2d_forward_before_lora'):
     torch.nn.Conv2d_forward_before_lora = torch.nn.Conv2d.forward
 
+if not hasattr(torch.nn, 'Conv2d_load_state_dict_before_lora'):
+    torch.nn.Conv2d_load_state_dict_before_lora = torch.nn.Conv2d._load_from_state_dict
+
+if not hasattr(torch.nn, 'MultiheadAttention_forward_before_lora'):
+    torch.nn.MultiheadAttention_forward_before_lora = torch.nn.MultiheadAttention.forward
+
+if not hasattr(torch.nn, 'MultiheadAttention_load_state_dict_before_lora'):
+    torch.nn.MultiheadAttention_load_state_dict_before_lora = torch.nn.MultiheadAttention._load_from_state_dict
+
 torch.nn.Linear.forward = lora.lora_Linear_forward
+torch.nn.Linear._load_from_state_dict = lora.lora_Linear_load_state_dict
 torch.nn.Conv2d.forward = lora.lora_Conv2d_forward
+torch.nn.Conv2d._load_from_state_dict = lora.lora_Conv2d_load_state_dict
+torch.nn.MultiheadAttention.forward = lora.lora_MultiheadAttention_forward
+torch.nn.MultiheadAttention._load_from_state_dict = lora.lora_MultiheadAttention_load_state_dict
 
 script_callbacks.on_model_loaded(lora.assign_lora_names_to_compvis_modules)
 script_callbacks.on_script_unloaded(unload)
@@ -33,6 +53,4 @@ script_callbacks.on_before_ui(before_ui)
 
 shared.options_templates.update(shared.options_section(('extra_networks', "Extra Networks"), {
     "sd_lora": shared.OptionInfo("None", "Add Lora to prompt", gr.Dropdown, lambda: {"choices": [""] + [x for x in lora.available_loras]}, refresh=lora.list_available_loras),
-    "lora_apply_to_outputs": shared.OptionInfo(False, "Apply Lora to outputs rather than inputs when possible (experimental)"),
-
 }))
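The hooks installed above follow a guarded-monkeypatch pattern; reduced to a single method it looks like the sketch below. The hasattr guard matters: on a script reload it prevents stashing the already-patched wrapper as the "original".

    import torch

    # Stash the pristine method exactly once.
    if not hasattr(torch.nn, 'Linear_forward_before_lora'):
        torch.nn.Linear_forward_before_lora = torch.nn.Linear.forward

    def patched_forward(self, input):
        # ...bake LoRA deltas into self.weight here (lora_apply_weights)...
        return torch.nn.Linear_forward_before_lora(self, input)

    torch.nn.Linear.forward = patched_forward
    # unload() undoes this by assigning the stashed attribute back.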
diff --git a/javascript/aspectRatioOverlay.js b/javascript/aspectRatioOverlay.js
index 0f164b82c..a8278cca2 100644
--- a/javascript/aspectRatioOverlay.js
+++ b/javascript/aspectRatioOverlay.js
@@ -12,7 +12,7 @@ function dimensionChange(e, is_width, is_height){
         currentHeight = e.target.value*1.0
     }
 
-    var inImg2img = Boolean(gradioApp().querySelector("button.rounded-t-lg.border-gray-200"))
+    var inImg2img = gradioApp().querySelector("#tab_img2img").style.display == "block";
 
     if(!inImg2img){
         return;
@@ -22,7 +22,7 @@ function dimensionChange(e, is_width, is_height){
 
     var tabIndex = get_tab_index('mode_img2img')
     if(tabIndex == 0){ // img2img
-        targetElement = gradioApp().querySelector('div[data-testid=image] img');
+        targetElement = gradioApp().querySelector('#img2img_image div[data-testid=image] img');
     } else if(tabIndex == 1){ //Sketch
         targetElement = gradioApp().querySelector('#img2img_sketch div[data-testid=image] img');
     } else if(tabIndex == 2){ // Inpaint
@@ -30,7 +30,7 @@ function dimensionChange(e, is_width, is_height){
     } else if(tabIndex == 3){ // Inpaint sketch
         targetElement = gradioApp().querySelector('#inpaint_sketch div[data-testid=image] img');
     }
-    
+
     if(targetElement){
@@ -38,7 +38,7 @@ function dimensionChange(e, is_width, is_height){
         if(!arPreviewRect){
             arPreviewRect = document.createElement('div')
             arPreviewRect.id = "imageARPreview";
-            gradioApp().getRootNode().appendChild(arPreviewRect)
+            gradioApp().appendChild(arPreviewRect)
         }
@@ -91,23 +91,26 @@ onUiUpdate(function(){
     if(arPreviewRect){
         arPreviewRect.style.display = 'none';
     }
-    var inImg2img = Boolean(gradioApp().querySelector("button.rounded-t-lg.border-gray-200"))
-    if(inImg2img){
-        let inputs = gradioApp().querySelectorAll('input');
-        inputs.forEach(function(e){
-            var is_width = e.parentElement.id == "img2img_width"
-            var is_height = e.parentElement.id == "img2img_height"
+    var tabImg2img = gradioApp().querySelector("#tab_img2img");
+    if (tabImg2img) {
+        var inImg2img = tabImg2img.style.display == "block";
+        if(inImg2img){
+            let inputs = gradioApp().querySelectorAll('input');
+            inputs.forEach(function(e){
+                var is_width = e.parentElement.id == "img2img_width"
+                var is_height = e.parentElement.id == "img2img_height"
 
-            if((is_width || is_height) && !e.classList.contains('scrollwatch')){
-                e.addEventListener('input', function(e){dimensionChange(e, is_width, is_height)} )
-                e.classList.add('scrollwatch')
-            }
-            if(is_width){
-                currentWidth = e.value*1.0
-            }
-            if(is_height){
-                currentHeight = e.value*1.0
-            }
-        })
-    }
+                if((is_width || is_height) && !e.classList.contains('scrollwatch')){
+                    e.addEventListener('input', function(e){dimensionChange(e, is_width, is_height)} )
+                    e.classList.add('scrollwatch')
+                }
+                if(is_width){
+                    currentWidth = e.value*1.0
+                }
+                if(is_height){
+                    currentHeight = e.value*1.0
+                }
+            })
+        }
+    }
 });
diff --git a/javascript/hints.js b/javascript/hints.js
index b3f3d08d8..f48a0eb69 100644
--- a/javascript/hints.js
+++ b/javascript/hints.js
@@ -21,8 +21,7 @@ titles = {
     "\u{1f5d1}\ufe0f": "Clear prompt",
     "\u{1f4cb}": "Apply selected styles to current prompt",
     "\u{1f4d2}": "Paste available values into the field",
-    "\u{1f3b4}": "Show extra networks",
-
+    "\u{1f3b4}": "Show/hide extra networks",
 
     "Inpaint a part of image": "Draw a mask over an image, and the script will regenerate the masked area with content according to prompt",
     "SD upscale": "Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back",
diff --git a/javascript/imageviewer.js b/javascript/imageviewer.js
index 7547e7711..d64835622 100644
--- a/javascript/imageviewer.js
+++ b/javascript/imageviewer.js
@@ -32,13 +32,7 @@ function negmod(n, m) {
 function updateOnBackgroundChange() {
     const modalImage = gradioApp().getElementById("modalImage")
     if (modalImage && modalImage.offsetParent) {
-        let allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2")
-        let currentButton = null
-        allcurrentButtons.forEach(function(elem) {
-            if (elem.parentElement.offsetParent) {
-                currentButton = elem;
-            }
-        })
+        let currentButton = selected_gallery_button();
 
         if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
             modalImage.src = currentButton.children[0].src;
@@ -50,22 +44,10 @@ function updateOnBackgroundChange() {
 }
 
 function modalImageSwitch(offset) {
-    var allgalleryButtons = gradioApp().querySelectorAll(".gradio-gallery .thumbnail-item")
-    var galleryButtons = []
-    allgalleryButtons.forEach(function(elem) {
-        if (elem.parentElement.offsetParent) {
-            galleryButtons.push(elem);
-        }
-    })
+    var galleryButtons = all_gallery_buttons();
 
     if (galleryButtons.length > 1) {
-        var allcurrentButtons = gradioApp().querySelectorAll(".gradio-gallery .thumbnail-item.selected")
-        var currentButton = null
-        allcurrentButtons.forEach(function(elem) {
-            if (elem.parentElement.offsetParent) {
-                currentButton = elem;
-            }
-        })
+        var currentButton = selected_gallery_button();
 
         var result = -1
         galleryButtons.forEach(function(v, i) {
diff --git a/javascript/notification.js b/javascript/notification.js
index 5ae6df24d..8ddd4c5d9 100644
--- a/javascript/notification.js
+++ b/javascript/notification.js
@@ -15,7 +15,7 @@ onUiUpdate(function(){
         }
     }
 
-    const galleryPreviews = gradioApp().querySelectorAll('div[id^="tab_"][style*="display: block"] div[id$="_results"] img.h-full.w-full.overflow-hidden');
+    const galleryPreviews = gradioApp().querySelectorAll('div[id^="tab_"][style*="display: block"] div[id$="_results"] .thumbnail-item > img');
 
     if (galleryPreviews == null) return;
diff --git a/javascript/ui.js b/javascript/ui.js
index 7aa30dc19..a73eeaa29 100644
--- a/javascript/ui.js
+++ b/javascript/ui.js
@@ -7,9 +7,31 @@ function set_theme(theme){
     }
 }
 
+function all_gallery_buttons() {
+    var allGalleryButtons = gradioApp().querySelectorAll('[style="display: block;"].tabitem div[id$=_gallery].gradio-gallery .thumbnails > .thumbnail-item.thumbnail-small');
+    var visibleGalleryButtons = [];
+    allGalleryButtons.forEach(function(elem) {
+        if (elem.parentElement.offsetParent) {
+            visibleGalleryButtons.push(elem);
+        }
+    })
+    return visibleGalleryButtons;
+}
+
+function selected_gallery_button() {
+    var allCurrentButtons = gradioApp().querySelectorAll('[style="display: block;"].tabitem div[id$=_gallery].gradio-gallery .thumbnail-item.thumbnail-small.selected');
+    var visibleCurrentButton = null;
+    allCurrentButtons.forEach(function(elem) {
+        if (elem.parentElement.offsetParent) {
+            visibleCurrentButton = elem;
+        }
+    })
+    return visibleCurrentButton;
+}
+
 function selected_gallery_index(){
-    var buttons = gradioApp().querySelectorAll('[style="display: block;"].tabitem div[id$=_gallery] .thumbnails > .thumbnail-item')
-    var button = gradioApp().querySelector('[style="display: block;"].tabitem div[id$=_gallery] .thumbnails > .thumbnail-item.selected')
+    var buttons = all_gallery_buttons();
+    var button = selected_gallery_button();
 
     var result = -1
     buttons.forEach(function(v, i){ if(v==button) { result = i } })
@@ -18,14 +40,18 @@ function selected_gallery_index(){
 }
 
 function extract_image_from_gallery(gallery){
-    if(gallery.length == 1){
-        return [gallery[0]]
+    if (gallery.length == 0){
+        return [null];
+    }
+    if (gallery.length == 1){
+        return [gallery[0]];
     }
 
     index = selected_gallery_index()
 
     if (index < 0 || index >= gallery.length){
-        return [null]
+        // Use the first image in the gallery as the default
+        index = 0;
     }
 
     return [gallery[index]];
diff --git a/modules/api/api.py b/modules/api/api.py
index 13af9ed61..518b2a61f 100644
--- a/modules/api/api.py
+++ b/modules/api/api.py
@@ -3,6 +3,7 @@ import io
 import time
 import datetime
 import uvicorn
+import gradio as gr
 from threading import Lock
 from io import BytesIO
 from gradio.processing_utils import decode_base64_to_file
@@ -197,6 +198,9 @@ class Api:
         self.add_api_route("/sdapi/v1/reload-checkpoint", self.reloadapi, methods=["POST"])
         self.add_api_route("/sdapi/v1/scripts", self.get_scripts_list, methods=["GET"], response_model=ScriptsList)
 
+        self.default_script_arg_txt2img = []
+        self.default_script_arg_img2img = []
+
     def add_api_route(self, path: str, endpoint, **kwargs):
         if shared.cmd_opts.api_auth:
             return self.app.add_api_route(path, endpoint, dependencies=[Depends(self.auth)], **kwargs)
@@ -230,7 +234,7 @@ class Api:
         script_idx = script_name_to_index(script_name, script_runner.scripts)
         return script_runner.scripts[script_idx]
 
-    def init_script_args(self, request, selectable_scripts, selectable_idx, script_runner):
+    def init_default_script_args(self, script_runner):
         #find max idx from the scripts in runner and generate a none array to init script_args
         last_arg_index = 1
         for script in script_runner.scripts:
@@ -238,13 +242,24 @@ class Api:
                 last_arg_index = script.args_to
         # None everywhere except position 0 to initialize script args
         script_args = [None]*last_arg_index
+        script_args[0] = 0
+
+        # get default values
+        with gr.Blocks(): # will throw errors calling ui function without this
+            for script in script_runner.scripts:
+                if script.ui(script.is_img2img):
+                    ui_default_values = []
+                    for elem in script.ui(script.is_img2img):
+                        ui_default_values.append(elem.value)
+                    script_args[script.args_from:script.args_to] = ui_default_values
+        return script_args
+
+    def init_script_args(self, request, default_script_args, selectable_scripts, selectable_idx, script_runner):
+        script_args = default_script_args.copy()
         # position 0 in script_arg is the idx+1 of the selectable script that is going to be run when using scripts.scripts_*2img.run()
         if selectable_scripts:
             script_args[selectable_scripts.args_from:selectable_scripts.args_to] = request.script_args
             script_args[0] = selectable_idx + 1
-        else:
-            # when [0] = 0 no selectable script to run
-            script_args[0] = 0
 
         # Now check for always on scripts
         if request.alwayson_scripts and (len(request.alwayson_scripts) > 0):
@@ -265,6 +280,8 @@ class Api:
         if not script_runner.scripts:
             script_runner.initialize_scripts(False)
             ui.create_ui()
+        if not self.default_script_arg_txt2img:
+            self.default_script_arg_txt2img = self.init_default_script_args(script_runner)
         selectable_scripts, selectable_script_idx = self.get_selectable_script(txt2imgreq.script_name, script_runner)
 
         populate = txt2imgreq.copy(update={  # Override __init__ params
@@ -280,7 +297,7 @@ class Api:
         args.pop('script_args', None) # will refeed them to the pipeline directly after initializing them
         args.pop('alwayson_scripts', None)
 
-        script_args = self.init_script_args(txt2imgreq, selectable_scripts, selectable_script_idx, script_runner)
+        script_args = self.init_script_args(txt2imgreq, self.default_script_arg_txt2img, selectable_scripts, selectable_script_idx, script_runner)
 
         send_images = args.pop('send_images', True)
         args.pop('save_images', None)
@@ -317,6 +334,8 @@ class Api:
         if not script_runner.scripts:
             script_runner.initialize_scripts(True)
             ui.create_ui()
+        if not self.default_script_arg_img2img:
+            self.default_script_arg_img2img = self.init_default_script_args(script_runner)
         selectable_scripts, selectable_script_idx = self.get_selectable_script(img2imgreq.script_name, script_runner)
 
         populate = img2imgreq.copy(update={  # Override __init__ params
@@ -334,7 +353,7 @@ class Api:
         args.pop('script_args', None) # will refeed them to the pipeline directly after initializing them
         args.pop('alwayson_scripts', None)
 
-        script_args = self.init_script_args(img2imgreq, selectable_scripts, selectable_script_idx, script_runner)
+        script_args = self.init_script_args(img2imgreq, self.default_script_arg_img2img, selectable_scripts, selectable_script_idx, script_runner)
 
         send_images = args.pop('send_images', True)
         args.pop('save_images', None)
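How the flat script_args list built by init_default_script_args() above is laid out, using invented slot numbers: index 0 selects the runnable script (0 means none), and every script owns the half-open slice [args_from, args_to) of the list.

    class FakeScript:  # stand-in for modules.scripts.Script, for illustration only
        def __init__(self, args_from, args_to):
            self.args_from, self.args_to = args_from, args_to

    scripts = [FakeScript(1, 3), FakeScript(3, 6)]

    last_arg_index = max(s.args_to for s in scripts)
    script_args = [None] * last_arg_index
    script_args[0] = 0  # no selectable script chosen

    # Each script's UI defaults fill its own slice, e.g. the first script:
    script_args[scripts[0].args_from:scripts[0].args_to] = ["defA", "defB"]
    print(script_args)  # [0, 'defA', 'defB', None, None, None]

init_script_args() then copies this template and overwrites only the slices named in the request, so scripts the caller did not mention run with their UI defaults instead of None.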
diff --git a/modules/cmd_args.py b/modules/cmd_args.py
index 0af872519..81c0b82a3 100644
--- a/modules/cmd_args.py
+++ b/modules/cmd_args.py
@@ -4,6 +4,7 @@ from modules.paths_internal import models_path, script_path, data_path, extensio
 
 parser = argparse.ArgumentParser()
 
+parser.add_argument("-f", action='store_true', help=argparse.SUPPRESS)  # allows running as root; implemented outside of webui
 parser.add_argument("--update-all-extensions", action='store_true', help="launch.py argument: download updates for all extensions when starting the program")
 parser.add_argument("--skip-python-version-check", action='store_true', help="launch.py argument: do not check python version")
 parser.add_argument("--skip-torch-cuda-test", action='store_true', help="launch.py argument: do not check if CUDA is able to work properly")
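Why help=argparse.SUPPRESS in the hunk above: the flag still parses normally, it is just hidden from the --help listing. A standalone demonstration:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("-f", action='store_true', help=argparse.SUPPRESS)
    parser.add_argument("--visible", action='store_true', help="shown in --help")

    args = parser.parse_args(["-f"])
    print(args.f)        # True: accepted, just undocumented
    parser.print_help()  # lists --visible but not -f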
diff --git a/modules/extensions.py b/modules/extensions.py
index a14ffbf04..0d34b89ab 100644
--- a/modules/extensions.py
+++ b/modules/extensions.py
@@ -5,13 +5,14 @@ import traceback
 import time
 import git
 
-from modules import paths, shared
+from modules import shared
 from modules.paths_internal import extensions_dir, extensions_builtin_dir
 
 extensions = []
 
-if not os.path.exists(paths.extensions_dir):
-    os.makedirs(paths.extensions_dir)
+if not os.path.exists(extensions_dir):
+    os.makedirs(extensions_dir)
+
 
 def active():
     return [x for x in extensions if x.enabled]
@@ -26,21 +27,29 @@ class Extension:
         self.can_update = False
         self.is_builtin = is_builtin
         self.version = ''
+        self.remote = None
+        self.have_info_from_repo = False
+
+    def read_info_from_repo(self):
+        if self.have_info_from_repo:
+            return
+
+        self.have_info_from_repo = True
 
         repo = None
         try:
-            if os.path.exists(os.path.join(path, ".git")):
-                repo = git.Repo(path)
+            if os.path.exists(os.path.join(self.path, ".git")):
+                repo = git.Repo(self.path)
         except Exception:
-            print(f"Error reading github repository info from {path}:", file=sys.stderr)
+            print(f"Error reading github repository info from {self.path}:", file=sys.stderr)
             print(traceback.format_exc(), file=sys.stderr)
 
         if repo is None or repo.bare:
             self.remote = None
         else:
             try:
-                self.remote = next(repo.remote().urls, None)
                 self.status = 'unknown'
+                self.remote = next(repo.remote().urls, None)
                 head = repo.head.commit
                 ts = time.asctime(time.gmtime(repo.head.commit.committed_date))
                 self.version = f'{head.hexsha[:8]} ({ts})'
@@ -85,11 +94,11 @@ class Extension:
 def list_extensions():
     extensions.clear()
 
-    if not os.path.isdir(paths.extensions_dir):
+    if not os.path.isdir(extensions_dir):
        return
 
     extension_paths = []
-    for dirname in [paths.extensions_dir, paths.extensions_builtin_dir]:
+    for dirname in [extensions_dir, extensions_builtin_dir]:
         if not os.path.isdir(dirname):
             return
@@ -98,7 +107,7 @@ def list_extensions():
             if not os.path.isdir(path):
                 continue
 
-            extension_paths.append((extension_dirname, path, dirname == paths.extensions_builtin_dir))
+            extension_paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
 
     for dirname, path, is_builtin in extension_paths:
         extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
diff --git a/modules/images.py b/modules/images.py
index 7030aaaaf..b3535070b 100644
--- a/modules/images.py
+++ b/modules/images.py
@@ -261,9 +261,12 @@ def resize_image(resize_mode, im, width, height, upscaler_name=None):
 
         if scale > 1.0:
             upscalers = [x for x in shared.sd_upscalers if x.name == upscaler_name]
-            assert len(upscalers) > 0, f"could not find upscaler named {upscaler_name}"
+            if len(upscalers) == 0:
+                upscaler = shared.sd_upscalers[0]
+                print(f"could not find upscaler named {upscaler_name or ''}, using {upscaler.name} as a fallback")
+            else:
+                upscaler = upscalers[0]
 
-            upscaler = upscalers[0]
             im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
 
         if im.width != w or im.height != h:
diff --git a/modules/scripts.py b/modules/scripts.py
index d661be4fa..4d0bbd665 100644
--- a/modules/scripts.py
+++ b/modules/scripts.py
@@ -553,3 +553,15 @@ def IOComponent_init(self, *args, **kwargs):
 
 original_IOComponent_init = gr.components.IOComponent.__init__
 gr.components.IOComponent.__init__ = IOComponent_init
+
+
+def BlockContext_init(self, *args, **kwargs):
+    res = original_BlockContext_init(self, *args, **kwargs)
+
+    add_classes_to_gradio_component(self)
+
+    return res
+
+
+original_BlockContext_init = gr.blocks.BlockContext.__init__
+gr.blocks.BlockContext.__init__ = BlockContext_init
diff --git a/modules/shared.py b/modules/shared.py
index 11be3985d..3ad0862b2 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -640,7 +640,7 @@ mem_mon.start()
 
 
 def listfiles(dirname):
-    filenames = [os.path.join(dirname, x) for x in sorted(os.listdir(dirname)) if not x.startswith(".")]
+    filenames = [os.path.join(dirname, x) for x in sorted(os.listdir(dirname), key=str.lower) if not x.startswith(".")]
     return [file for file in filenames if os.path.isfile(file)]
diff --git a/modules/ui_common.py b/modules/ui_common.py
index 0f3427c8f..3b11dcc85 100644
--- a/modules/ui_common.py
+++ b/modules/ui_common.py
@@ -145,8 +145,7 @@ Requested path was: {f}
             )
 
             if tabname != "extras":
-                with gr.Row():
-                    download_files = gr.File(None, file_count="multiple", interactive=False, show_label=False, visible=False, elem_id=f'download_files_{tabname}')
+                download_files = gr.File(None, file_count="multiple", interactive=False, show_label=False, visible=False, elem_id=f'download_files_{tabname}')
 
                 with gr.Group():
                     html_info = gr.HTML(elem_id=f'html_info_{tabname}', elem_classes="infotext")
diff --git a/modules/ui_extensions.py b/modules/ui_extensions.py
index da7e79f07..b4a0d6ecf 100644
--- a/modules/ui_extensions.py
+++ b/modules/ui_extensions.py
@@ -63,6 +63,9 @@ def check_updates(id_task, disable_list):
 
         try:
             ext.check_updates()
+        except FileNotFoundError as e:
+            if 'FETCH_HEAD' not in str(e):
+                raise
         except Exception:
             print(f"Error checking updates for {ext.name}:", file=sys.stderr)
             print(traceback.format_exc(), file=sys.stderr)
@@ -87,6 +90,8 @@ def extension_table():
     """
 
     for ext in extensions.extensions:
+        ext.read_info_from_repo()
+
         remote = f"""<a href="{html.escape(ext.remote or '')}" target="_blank">{html.escape("built-in" if ext.is_builtin else ext.remote or '')}</a>"""
 
         if ext.can_update:
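The extensions.py change above turns eager startup work into lazy initialization: git metadata is no longer read for every extension when the program starts, only when a caller (extension_table(), in the ui_extensions.py hunk just above) first asks for it, and a guard flag makes repeat calls free. The generic shape, as a sketch:

    class LazyRepoInfo:
        def __init__(self):
            self.have_info_from_repo = False
            self.version = ''

        def read_info_from_repo(self):
            if self.have_info_from_repo:  # second and later calls are no-ops
                return
            self.have_info_from_repo = True
            self.version = self._expensive_lookup()

        def _expensive_lookup(self):
            return "abc12345"  # placeholder for the git.Repo(...) work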
diff --git a/modules/ui_extra_networks.py b/modules/ui_extra_networks.py
index daea03d62..25eb464b9 100644
--- a/modules/ui_extra_networks.py
+++ b/modules/ui_extra_networks.py
@@ -2,8 +2,10 @@ import glob
 import os.path
 import urllib.parse
 from pathlib import Path
+from PIL import PngImagePlugin
 
 from modules import shared
+from modules.images import read_info_from_image
 import gradio as gr
 import json
 import html
@@ -252,10 +254,10 @@ def create_ui(container, button, tabname):
 
     def toggle_visibility(is_visible):
         is_visible = not is_visible
-        return is_visible, gr.update(visible=is_visible)
+        return is_visible, gr.update(visible=is_visible), gr.update(variant=("secondary-down" if is_visible else "secondary"))
 
     state_visible = gr.State(value=False)
-    button.click(fn=toggle_visibility, inputs=[state_visible], outputs=[state_visible, container])
+    button.click(fn=toggle_visibility, inputs=[state_visible], outputs=[state_visible, container, button])
 
     def refresh():
         res = []
@@ -290,6 +292,7 @@ def setup_ui(ui, gallery):
 
         img_info = images[index if index >= 0 else 0]
         image = image_from_url_text(img_info)
+        geninfo, items = read_info_from_image(image)
 
         is_allowed = False
         for extra_page in ui.stored_extra_pages:
@@ -299,7 +302,12 @@ def setup_ui(ui, gallery):
 
         assert is_allowed, f'writing to {filename} is not allowed'
 
-        image.save(filename)
+        if geninfo:
+            pnginfo_data = PngImagePlugin.PngInfo()
+            pnginfo_data.add_text('parameters', geninfo)
+            image.save(filename, pnginfo=pnginfo_data)
+        else:
+            image.save(filename)
 
         return [page.create_html(ui.tabname) for page in ui.stored_extra_pages]
diff --git a/scripts/loopback.py b/scripts/loopback.py
index 9c388aa8d..d3065fe6b 100644
--- a/scripts/loopback.py
+++ b/scripts/loopback.py
@@ -54,15 +54,12 @@ class Script(scripts.Script):
                 return strength
 
             progress = loop / (loops - 1)
-            match denoising_curve:
-                case "Aggressive":
-                    strength = math.sin((progress) * math.pi * 0.5)
-
-                case "Lazy":
-                    strength = 1 - math.cos((progress) * math.pi * 0.5)
-
-                case _:
-                    strength = progress
+            if denoising_curve == "Aggressive":
+                strength = math.sin((progress) * math.pi * 0.5)
+            elif denoising_curve == "Lazy":
+                strength = 1 - math.cos((progress) * math.pi * 0.5)
+            else:
+                strength = progress
 
             change = (final_denoising_strength - initial_denoising_strength) * strength
             return initial_denoising_strength + change
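The three denoising curves from the loopback hunk above, evaluated over five loops: "Aggressive" (sine) ramps strength up quickly in the early loops, "Lazy" (1 - cosine) holds back early, and the default is linear in loop progress.

    import math

    def curve_strength(curve, loop, loops):
        progress = loop / (loops - 1)
        if curve == "Aggressive":
            return math.sin(progress * math.pi * 0.5)
        elif curve == "Lazy":
            return 1 - math.cos(progress * math.pi * 0.5)
        return progress

    for loop in range(5):
        print(round(curve_strength("Aggressive", loop, 5), 3),
              round(curve_strength("Lazy", loop, 5), 3))
    # Aggressive: 0.0 0.383 0.707 0.924 1.0
    # Lazy:       0.0 0.076 0.293 0.617 1.0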
diff --git a/style.css b/style.css
index 844056223..75c26b407 100644
--- a/style.css
+++ b/style.css
@@ -7,7 +7,7 @@
     --block-background-fill: transparent;
 }
 
-.block.padded{
+.block.padded:not(.gradio-accordion) {
     padding: 0 !important;
 }
 
@@ -54,10 +54,6 @@ div.compact{
     gap: 1em;
 }
 
-.gradio-dropdown ul.options{
-    z-index: 3000;
-}
-
 .gradio-dropdown label span:not(.has-info),
 .gradio-textbox label span:not(.has-info),
 .gradio-number label span:not(.has-info)
@@ -65,11 +61,30 @@ div.compact{
     margin-bottom: 0;
 }
 
+.gradio-dropdown ul.options{
+    z-index: 3000;
+    min-width: fit-content;
+    max-width: inherit;
+    white-space: nowrap;
+}
+
+.gradio-dropdown ul.options li.item {
+    padding: 0.05em 0;
+}
+
+.gradio-dropdown ul.options li.item.selected {
+    background-color: var(--neutral-100);
+}
+
+.dark .gradio-dropdown ul.options li.item.selected {
+    background-color: var(--neutral-900);
+}
+
 .gradio-dropdown div.wrap.wrap.wrap.wrap{
     box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);
 }
 
-.gradio-dropdown .wrap-inner.wrap-inner.wrap-inner{
+.gradio-dropdown:not(.multiselect) .wrap-inner.wrap-inner.wrap-inner{
     flex-wrap: unset;
 }
 
@@ -123,6 +138,18 @@ div.gradio-html.min{
     border-radius: 0.5em;
 }
 
+.gradio-button.secondary-down{
+    background: var(--button-secondary-background-fill);
+    color: var(--button-secondary-text-color);
+}
+.gradio-button.secondary-down, .gradio-button.secondary-down:hover{
+    box-shadow: 1px 1px 1px rgba(0,0,0,0.25) inset, 0px 0px 3px rgba(0,0,0,0.15) inset;
+}
+.gradio-button.secondary-down:hover{
+    background: var(--button-secondary-background-fill-hover);
+    color: var(--button-secondary-text-color-hover);
+}
+
 .checkboxes-row{
     margin-bottom: 0.5em;
     margin-left: 0em;
@@ -507,6 +534,17 @@ div.dimensions-tools{
     background-color: rgba(0, 0, 0, 0.8);
 }
 
+#imageARPreview {
+    position: absolute;
+    top: 0px;
+    left: 0px;
+    border: 2px solid red;
+    background: rgba(255, 0, 0, 0.3);
+    z-index: 900;
+    pointer-events: none;
+    display: none;
+}
+
 /* context menu (ie for the generate button) */
 
 #context-menu{
diff --git a/webui.py b/webui.py
index 30f3e4a1f..b570895fb 100644
--- a/webui.py
+++ b/webui.py
@@ -265,9 +265,6 @@ def webui():
             inbrowser=cmd_opts.autolaunch,
             prevent_thread_lock=True
         )
-        for dep in shared.demo.dependencies:
-            dep['show_progress'] = False  # disable gradio css animation on component update
-
         # after initial launch, disable --autolaunch for subsequent restarts
         cmd_opts.autolaunch = False