Fix various typos with crate-ci/typos

Aarni Koskela 2024-03-04 08:37:23 +02:00
parent e2a8745abc
commit e3fa46f26f
36 changed files with 76 additions and 71 deletions

@@ -14,7 +14,7 @@
 * Add support for DAT upscaler models ([#14690](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14690), [#15039](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15039))
 * Extra Networks Tree View ([#14588](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14588), [#14900](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14900))
 * NPU Support ([#14801](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14801))
-* Propmpt comments support
+* Prompt comments support
 ### Minor:
 * Allow pasting in WIDTHxHEIGHT strings into the width/height fields ([#14296](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14296))
@@ -59,7 +59,7 @@
 * modules/api/api.py: add api endpoint to refresh embeddings list ([#14715](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14715))
 * set_named_arg ([#14773](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14773))
 * add before_token_counter callback and use it for prompt comments
-* ResizeHandleRow - allow overriden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
+* ResizeHandleRow - allow overridden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
 ### Performance
 * Massive performance improvement for extra networks directories with a huge number of files in them in an attempt to tackle #14507 ([#14528](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14528))
@@ -101,7 +101,7 @@
 * Gracefully handle mtime read exception from cache ([#14933](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14933))
 * Only trigger interrupt on `Esc` when interrupt button visible ([#14932](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14932))
 * Disable prompt token counters option actually disables token counting rather than just hiding results.
-* avoid doble upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
+* avoid double upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
 * Fix #14591 using translated content to do categories mapping ([#14995](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14995))
 * fix: the `split_threshold` parameter does not work when running Split oversized images ([#15006](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15006))
 * Fix resize-handle for mobile ([#15010](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15010), [#15065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15065))
@@ -171,7 +171,7 @@
 * infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page
 * add FP32 fallback support on sd_vae_approx ([#14046](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046))
 * support XYZ scripts / split hires path from unet ([#14126](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14126))
-* allow use of mutiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125))
+* allow use of multiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125))
 * make extra network card description plaintext by default, with an option (Treat card description as HTML) to re-enable HTML as it was (originally by [#13241](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13241))
 ### Extensions and API:
@@ -308,7 +308,7 @@
 * new samplers: Restart, DPM++ 2M SDE Exponential, DPM++ 2M SDE Heun, DPM++ 2M SDE Heun Karras, DPM++ 2M SDE Heun Exponential, DPM++ 3M SDE, DPM++ 3M SDE Karras, DPM++ 3M SDE Exponential ([#12300](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12300), [#12519](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12519), [#12542](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12542))
 * rework DDIM, PLMS, UniPC to use CFG denoiser same as in k-diffusion samplers:
   * makes all of them work with img2img
-  * makes prompt composition posssible (AND)
+  * makes prompt composition possible (AND)
   * makes them available for SDXL
 * always show extra networks tabs in the UI ([#11808](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11808))
 * use less RAM when creating models ([#11958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11958), [#12599](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12599))
@@ -484,7 +484,7 @@
 * user metadata system for custom networks
 * extended Lora metadata editor: set activation text, default weight, view tags, training info
 * Lora extension rework to include other types of networks (all that were previously handled by LyCORIS extension)
-* show github stars for extenstions
+* show github stars for extensions
 * img2img batch mode can read extra stuff from png info
 * img2img batch works with subdirectories
 * hotkeys to move prompt elements: alt+left/right
@@ -703,7 +703,7 @@
 * do not wait for Stable Diffusion model to load at startup
 * add filename patterns: `[denoising]`
 * directory hiding for extra networks: dirs starting with `.` will hide their cards on extra network tabs unless specifically searched for
-* LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metdata of the file, if present, instead of filename (both can be used to activate LoRA)
+* LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metadata of the file, if present, instead of filename (both can be used to activate LoRA)
 * LoRA: read infotext params from kohya-ss's extension parameters if they are present and if his extension is not active
 * LoRA: fix some LoRAs not working (ones that have 3x3 convolution layer)
 * LoRA: add an option to use old method of applying LoRAs (producing same results as with kohya-ss)
@@ -733,7 +733,7 @@
 * fix gamepad navigation
 * make the lightbox fullscreen image function properly
 * fix squished thumbnails in extras tab
-* keep "search" filter for extra networks when user refreshes the tab (previously it showed everthing after you refreshed)
+* keep "search" filter for extra networks when user refreshes the tab (previously it showed everything after you refreshed)
 * fix webui showing the same image if you configure the generation to always save results into same file
 * fix bug with upscalers not working properly
 * fix MPS on PyTorch 2.0.1, Intel Macs
@@ -751,7 +751,7 @@
 * switch to PyTorch 2.0.0 (except for AMD GPUs)
 * visual improvements to custom code scripts
 * add filename patterns: `[clip_skip]`, `[hasprompt<>]`, `[batch_number]`, `[generation_number]`
-* add support for saving init images in img2img, and record their hashes in infotext for reproducability
+* add support for saving init images in img2img, and record their hashes in infotext for reproducibility
 * automatically select current word when adjusting weight with ctrl+up/down
 * add dropdowns for X/Y/Z plot
 * add setting: Stable Diffusion/Random number generator source: makes it possible to make images generated from a given manual seed consistent across different GPUs

_typos.toml (new file)
@@ -0,0 +1,5 @@
+[default.extend-words]
+# Part of "RGBa" (Pillow's pre-multiplied alpha RGB mode)
+Ba = "Ba"
+# HSA is something AMD uses for their GPUs
+HSA = "HSA"
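
The `_typos.toml` added above is the checker's exception list: entries under `[default.extend-words]` are spellings the tool would otherwise flag but should accept. For reference, here is a minimal sketch of running the checker locally, assuming the `typos` binary from the typos-cli crate (the tool behind the crate-ci/typos action; these commands are not part of this commit):

# Install the checker; typos-cli provides the `typos` binary (assumes a Rust toolchain).
cargo install typos-cli

# Run from the repository root. _typos.toml is picked up automatically,
# so "Ba" and "HSA" above are accepted rather than reported as typos.
typos

# Optionally write the suggested corrections in place.
typos --write-changes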

@@ -301,7 +301,7 @@ class DDPMV1(pl.LightningModule):
         elif self.parameterization == "x0":
             target = x_start
         else:
-            raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
+            raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
         loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
@@ -880,7 +880,7 @@ class LatentDiffusionV1(DDPMV1):
     def apply_model(self, x_noisy, t, cond, return_ids=False):
         if isinstance(cond, dict):
-            # hybrid case, cond is exptected to be a dict
+            # hybrid case, cond is expected to be a dict
             pass
         else:
             if not isinstance(cond, list):
@@ -916,7 +916,7 @@ class LatentDiffusionV1(DDPMV1):
                 cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
             elif self.cond_stage_key == 'coordinates_bbox':
-                assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
+                assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
                 # assuming padding of unfold is always 0 and its dilation is always 1
                 n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
@@ -926,7 +926,7 @@ class LatentDiffusionV1(DDPMV1):
                 num_downs = self.first_stage_model.encoder.num_resolutions - 1
                 rescale_latent = 2 ** (num_downs)
-                # get top left postions of patches as conforming for the bbbox tokenizer, therefore we
+                # get top left positions of patches as conforming for the bbbox tokenizer, therefore we
                 # need to rescale the tl patch coordinates to be in between (0,1)
                 tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
                                          rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)

@@ -30,7 +30,7 @@ def factorization(dimension: int, factor:int=-1) -> tuple[int, int]:
     In LoRA with Kroneckor Product, first value is a value for weight scale.
     secon value is a value for weight.
-    Becuase of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different.
+    Because of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different.
     examples)
     factor

@@ -355,7 +355,7 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
     """
     Applies the currently selected set of networks to the weights of torch layer self.
     If weights already have this particular set of networks applied, does nothing.
-    If not, restores orginal weights from backup and alters weights according to networks.
+    If not, restores original weights from backup and alters weights according to networks.
     """
     network_layer_name = getattr(self, 'network_layer_name', None)

@@ -292,7 +292,7 @@ onUiLoaded(async() => {
     // Create tooltip
     function createTooltip() {
-        const toolTipElemnt =
+        const toolTipElement =
             targetElement.querySelector(".image-container");
         const tooltip = document.createElement("div");
         tooltip.className = "canvas-tooltip";
@@ -355,7 +355,7 @@ onUiLoaded(async() => {
         tooltip.appendChild(tooltipContent);
         // Add a hint element to the target element
-        toolTipElemnt.appendChild(tooltip);
+        toolTipElement.appendChild(tooltip);
     }
     //Show tool tip if setting enable

@@ -8,8 +8,8 @@ shared.options_templates.update(shared.options_section(('canvas_hotkey', "Canvas
     "canvas_hotkey_grow_brush": shared.OptionInfo("W", "Enlarge the brush size"),
     "canvas_hotkey_move": shared.OptionInfo("F", "Moving the canvas").info("To work correctly in firefox, turn off 'Automatically search the page text when typing' in the browser settings"),
     "canvas_hotkey_fullscreen": shared.OptionInfo("S", "Fullscreen Mode, maximizes the picture so that it fits into the screen and stretches it to its full width "),
-    "canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas positon"),
-    "canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, neededs for testing"),
+    "canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas position"),
+    "canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, needed for testing"),
     "canvas_show_tooltip": shared.OptionInfo(True, "Enable tooltip on the canvas"),
     "canvas_auto_expand": shared.OptionInfo(True, "Automatically expands an image that does not fit completely in the canvas area, similar to manually pressing the S and R buttons"),
     "canvas_blur_prompt": shared.OptionInfo(False, "Take the focus off the prompt when working with a canvas"),

@@ -104,7 +104,7 @@ def latent_blend(settings, a, b, t):
 def get_modified_nmask(settings, nmask, sigma):
     """
-    Converts a negative mask representing the transparency of the original latent vectors being overlayed
+    Converts a negative mask representing the transparency of the original latent vectors being overlaid
     to a mask that is scaled according to the denoising strength for this step.
     Where:

@@ -50,17 +50,17 @@ function dimensionChange(e, is_width, is_height) {
         var scaledx = targetElement.naturalWidth * viewportscale;
         var scaledy = targetElement.naturalHeight * viewportscale;
-        var cleintRectTop = (viewportOffset.top + window.scrollY);
-        var cleintRectLeft = (viewportOffset.left + window.scrollX);
-        var cleintRectCentreY = cleintRectTop + (targetElement.clientHeight / 2);
-        var cleintRectCentreX = cleintRectLeft + (targetElement.clientWidth / 2);
+        var clientRectTop = (viewportOffset.top + window.scrollY);
+        var clientRectLeft = (viewportOffset.left + window.scrollX);
+        var clientRectCentreY = clientRectTop + (targetElement.clientHeight / 2);
+        var clientRectCentreX = clientRectLeft + (targetElement.clientWidth / 2);
         var arscale = Math.min(scaledx / currentWidth, scaledy / currentHeight);
         var arscaledx = currentWidth * arscale;
         var arscaledy = currentHeight * arscale;
-        var arRectTop = cleintRectCentreY - (arscaledy / 2);
-        var arRectLeft = cleintRectCentreX - (arscaledx / 2);
+        var arRectTop = clientRectCentreY - (arscaledy / 2);
+        var arRectLeft = clientRectCentreX - (arscaledx / 2);
         var arRectWidth = arscaledx;
         var arRectHeight = arscaledy;

@@ -290,7 +290,7 @@ function extraNetworksTreeProcessDirectoryClick(event, btn, tabname, extra_netwo
  * Processes `onclick` events when user clicks on directories in tree.
  *
  * Here is how the tree reacts to clicks for various states:
- * unselected unopened directory: Diretory is selected and expanded.
+ * unselected unopened directory: Directory is selected and expanded.
  * unselected opened directory: Directory is selected.
  * selected opened directory: Directory is collapsed and deselected.
  * chevron is clicked: Directory is expanded or collapsed. Selected state unchanged.

@@ -411,7 +411,7 @@ function switchWidthHeight(tabname) {
 var onEditTimers = {};
-// calls func after afterMs milliseconds has passed since the input elem has beed enited by user
+// calls func after afterMs milliseconds has passed since the input elem has been edited by user
 function onEdit(editId, elem, afterMs, func) {
     var edited = function() {
         var existingTimer = onEditTimers[editId];

@@ -360,7 +360,7 @@ class Api:
         return script_args
     def apply_infotext(self, request, tabname, *, script_runner=None, mentioned_script_args=None):
-        """Processes `infotext` field from the `request`, and sets other fields of the `request` accoring to what's in infotext.
+        """Processes `infotext` field from the `request`, and sets other fields of the `request` according to what's in infotext.
         If request already has a field set, and that field is encountered in infotext too, the value from infotext is ignored.
@@ -409,8 +409,8 @@ class Api:
         if request.override_settings is None:
             request.override_settings = {}
-        overriden_settings = infotext_utils.get_override_settings(params)
-        for _, setting_name, value in overriden_settings:
+        overridden_settings = infotext_utils.get_override_settings(params)
+        for _, setting_name, value in overridden_settings:
             if setting_name not in request.override_settings:
                 request.override_settings[setting_name] = value

@@ -100,8 +100,8 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
         sys_pct = sys_peak/max(sys_total, 1) * 100
         toltip_a = "Active: peak amount of video memory used during generation (excluding cached data)"
-        toltip_r = "Reserved: total amout of video memory allocated by the Torch library "
-        toltip_sys = "System: peak amout of video memory allocated by all running programs, out of total capacity"
+        toltip_r = "Reserved: total amount of video memory allocated by the Torch library "
+        toltip_sys = "System: peak amount of video memory allocated by all running programs, out of total capacity"
         text_a = f"<abbr title='{toltip_a}'>A</abbr>: <span class='measurement'>{active_peak/1024:.2f} GB</span>"
         text_r = f"<abbr title='{toltip_r}'>R</abbr>: <span class='measurement'>{reserved_peak/1024:.2f} GB</span>"

@@ -259,7 +259,7 @@ def test_for_nans(x, where):
 def first_time_calculation():
     """
     just do any calculation with pytorch layers - the first time this is done it allocaltes about 700MB of memory and
-    spends about 2.7 seconds doing that, at least wih NVidia.
+    spends about 2.7 seconds doing that, at least with NVidia.
     """
     x = torch.zeros((1, 1)).to(device, dtype)

@@ -60,7 +60,7 @@ class ExtraNetwork:
         Where name matches the name of this ExtraNetwork object, and arg1:arg2:arg3 are any natural number of text arguments
         separated by colon.
-        Even if the user does not mention this ExtraNetwork in his prompt, the call will stil be made, with empty params_list -
+        Even if the user does not mention this ExtraNetwork in his prompt, the call will still be made, with empty params_list -
         in this case, all effects of this extra networks should be disabled.
         Can be called multiple times before deactivate() - each new call should override the previous call completely.

@@ -139,7 +139,7 @@ def initialize_rest(*, reload_script_modules=False):
        """
        Accesses shared.sd_model property to load model.
        After it's available, if it has been loaded before this access by some extension,
-       its optimization may be None because the list of optimizaers has neet been filled
+       its optimization may be None because the list of optimizers has not been filled
        by that time, so we apply optimization again.
        """
        from modules import devices

@@ -12,7 +12,7 @@ log = logging.getLogger(__name__)
 # before torch version 1.13, has_mps is only available in nightly pytorch and macOS 12.3+,
 # use check `getattr` and try it for compatibility.
-# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availabilty,
+# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availability,
 # since torch 2.0.1+ nightly build, getattr(torch, 'has_mps', False) was deprecated, see https://github.com/pytorch/pytorch/pull/103279
 def check_for_mps() -> bool:
     if version.parse(torch.__version__) <= version.parse("2.0.1"):

@@ -110,7 +110,7 @@ def load_upscalers():
         except Exception:
             pass
-    datas = []
+    data = []
     commandline_options = vars(shared.cmd_opts)
     # some of upscaler classes will not go away after reloading their modules, and we'll end
@@ -129,10 +129,10 @@ def load_upscalers():
         scaler = cls(commandline_model_path)
         scaler.user_path = commandline_model_path
         scaler.model_download_path = commandline_model_path or scaler.model_path
-        datas += scaler.scalers
+        data += scaler.scalers
     shared.sd_upscalers = sorted(
-        datas,
+        data,
         # Special case for UpscalerNone keeps it at the beginning of the list.
         key=lambda x: x.name.lower() if not isinstance(x.scaler, (UpscalerNone, UpscalerLanczos, UpscalerNearest)) else ""
     )

@@ -341,7 +341,7 @@ class DDPM(pl.LightningModule):
         elif self.parameterization == "x0":
             target = x_start
         else:
-            raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
+            raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
         loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
@@ -901,7 +901,7 @@ class LatentDiffusion(DDPM):
     def apply_model(self, x_noisy, t, cond, return_ids=False):
         if isinstance(cond, dict):
-            # hybrid case, cond is exptected to be a dict
+            # hybrid case, cond is expected to be a dict
             pass
         else:
             if not isinstance(cond, list):
@@ -937,7 +937,7 @@ class LatentDiffusion(DDPM):
                 cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
             elif self.cond_stage_key == 'coordinates_bbox':
-                assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
+                assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
                 # assuming padding of unfold is always 0 and its dilation is always 1
                 n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
@@ -947,7 +947,7 @@ class LatentDiffusion(DDPM):
                 num_downs = self.first_stage_model.encoder.num_resolutions - 1
                 rescale_latent = 2 ** (num_downs)
-                # get top left postions of patches as conforming for the bbbox tokenizer, therefore we
+                # get top left positions of patches as conforming for the bbbox tokenizer, therefore we
                 # need to rescale the tl patch coordinates to be in between (0,1)
                 tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
                                          rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)

@@ -34,7 +34,7 @@ def randn_local(seed, shape):
 def randn_like(x):
-    """Generate a tensor with random numbers from a normal distribution using the previously initialized genrator.
+    """Generate a tensor with random numbers from a normal distribution using the previously initialized generator.
     Use either randn() or manual_seed() to initialize the generator."""
@@ -48,7 +48,7 @@ def randn_like(x):
 def randn_without_seed(shape, generator=None):
-    """Generate a tensor with random numbers from a normal distribution using the previously initialized genrator.
+    """Generate a tensor with random numbers from a normal distribution using the previously initialized generator.
     Use either randn() or manual_seed() to initialize the generator."""

@@ -92,7 +92,7 @@ class Script:
     """If true, the script setup will only be run in Gradio UI, not in API"""
     controls = None
-    """A list of controls retured by the ui()."""
+    """A list of controls returned by the ui()."""
     def title(self):
         """this function should return the title of the script. This is what will be displayed in the dropdown menu."""
@@ -109,7 +109,7 @@ class Script:
     def show(self, is_img2img):
         """
-        is_img2img is True if this function is called for the img2img interface, and Fasle otherwise
+        is_img2img is True if this function is called for the img2img interface, and False otherwise
         This function should return:
         - False if the script should not be shown in UI at all

@@ -35,7 +35,7 @@ class EmphasisIgnore(Emphasis):
 class EmphasisOriginal(Emphasis):
     name = "Original"
-    description = "the orginal emphasis implementation"
+    description = "the original emphasis implementation"
     def after_transformers(self):
         original_mean = self.z.mean()
@@ -48,7 +48,7 @@ class EmphasisOriginal(Emphasis):
 class EmphasisOriginalNoNorm(EmphasisOriginal):
     name = "No norm"
-    description = "same as orginal, but without normalization (seems to work better for SDXL)"
+    description = "same as original, but without normalization (seems to work better for SDXL)"
     def after_transformers(self):
         self.z = self.z * self.multipliers.reshape(self.multipliers.shape + (1,)).expand(self.z.shape)

@@ -23,7 +23,7 @@ class PromptChunk:
 PromptChunkFix = namedtuple('PromptChunkFix', ['offset', 'embedding'])
 """An object of this type is a marker showing that textual inversion embedding's vectors have to placed at offset in the prompt
-chunk. Thos objects are found in PromptChunk.fixes and, are placed into FrozenCLIPEmbedderWithCustomWordsBase.hijack.fixes, and finally
+chunk. Those objects are found in PromptChunk.fixes and, are placed into FrozenCLIPEmbedderWithCustomWordsBase.hijack.fixes, and finally
 are applied by sd_hijack.EmbeddingsWithFixes's forward function."""
@@ -66,7 +66,7 @@ class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
     def encode_with_transformers(self, tokens):
         """
-        converts a batch of token ids (in python lists) into a single tensor with numeric respresentation of those tokens;
+        converts a batch of token ids (in python lists) into a single tensor with numeric representation of those tokens;
         All python lists with tokens are assumed to have same length, usually 77.
         if input is a list with B elements and each element has T tokens, expected output shape is (B, T, C), where C depends on
         model - can be 768 and 1024.
@@ -136,7 +136,7 @@ class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
             if token == self.comma_token:
                 last_comma = len(chunk.tokens)
-            # this is when we are at the end of alloted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack
+            # this is when we are at the end of allotted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack
             # is a setting that specifies that if there is a comma nearby, the text after the comma should be moved out of this chunk and into the next.
             elif opts.comma_padding_backtrack != 0 and len(chunk.tokens) == self.chunk_length and last_comma != -1 and len(chunk.tokens) - last_comma <= opts.comma_padding_backtrack:
                 break_location = last_comma + 1
@@ -206,7 +206,7 @@ class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
         be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, for SD2 it's 1024, and for SDXL it's 1280.
         An example shape returned by this function can be: (2, 77, 768).
         For SDXL, instead of returning one tensor avobe, it returns a tuple with two: the other one with shape (B, 1280) with pooled values.
-        Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one elemenet
+        Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one element
         is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream"
         """

@@ -784,7 +784,7 @@ def reuse_model_from_already_loaded(sd_model, checkpoint_info, timer):
     If it is loaded, returns that (moving it to GPU if necessary, and moving the currently loadded model to CPU if necessary).
     If not, returns the model that can be used to load weights from checkpoint_info's file.
     If no such model exists, returns None.
-    Additionaly deletes loaded models that are over the limit set in settings (sd_checkpoints_limit).
+    Additionally deletes loaded models that are over the limit set in settings (sd_checkpoints_limit).
     """
     already_loaded = None

@@ -43,7 +43,7 @@ restricted_opts = None
 sd_model: sd_models_types.WebuiSdModel = None
 settings_components = None
-"""assinged from ui.py, a mapping on setting names to gradio components repsponsible for those settings"""
+"""assigned from ui.py, a mapping on setting names to gradio components repsponsible for those settings"""
 tab_names = []

@@ -213,7 +213,7 @@ options_templates.update(options_section(('optimizations', "Optimizations", "sd"
     "pad_cond_uncond": OptionInfo(False, "Pad prompt/negative prompt", infotext='Pad conds').info("improves performance when prompt and negative prompt have different lengths; changes seeds"),
     "pad_cond_uncond_v0": OptionInfo(False, "Pad prompt/negative prompt (v0)", infotext='Pad conds v0').info("alternative implementation for the above; used prior to 1.6.0 for DDIM sampler; overrides the above if set; WARNING: truncates negative prompt if it's too long; changes seeds"),
     "persistent_cond_cache": OptionInfo(True, "Persistent cond cache").info("do not recalculate conds from prompts if prompts have not changed since previous calculation"),
-    "batch_cond_uncond": OptionInfo(True, "Batch cond/uncond").info("do both conditional and unconditional denoising in one batch; uses a bit more VRAM during sampling, but improves speed; previously this was controlled by --always-batch-cond-uncond comandline argument"),
+    "batch_cond_uncond": OptionInfo(True, "Batch cond/uncond").info("do both conditional and unconditional denoising in one batch; uses a bit more VRAM during sampling, but improves speed; previously this was controlled by --always-batch-cond-uncond commandline argument"),
     "fp8_storage": OptionInfo("Disable", "FP8 weight", gr.Radio, {"choices": ["Disable", "Enable for SDXL", "Enable"]}).info("Use FP8 to store Linear/Conv layers' weight. Require pytorch>=2.1.0."),
     "cache_fp16_weight": OptionInfo(False, "Cache FP16 weight for LoRA").info("Cache fp16 weight when enabling FP8, will increase the quality of LoRA. Use more system ram."),
 }))
@@ -370,7 +370,7 @@ options_templates.update(options_section(('sampler-params', "Sampler parameters"
     'rho': OptionInfo(0.0, "rho", gr.Number, infotext='Schedule rho').info("0 = default (7 for karras, 1 for polyexponential); higher values result in a steeper noise schedule (decreases faster)"),
     'eta_noise_seed_delta': OptionInfo(0, "Eta noise seed delta", gr.Number, {"precision": 0}, infotext='ENSD').info("ENSD; does not improve anything, just produces different results for ancestral samplers - only useful for reproducing images"),
     'always_discard_next_to_last_sigma': OptionInfo(False, "Always discard next-to-last sigma", infotext='Discard penultimate sigma').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/6044"),
-    'sgm_noise_multiplier': OptionInfo(False, "SGM noise multiplier", infotext='SGM noise multplier').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12818").info("Match initial noise to official SDXL implementation - only useful for reproducing images"),
+    'sgm_noise_multiplier': OptionInfo(False, "SGM noise multiplier", infotext='SGM noise multiplier').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12818").info("Match initial noise to official SDXL implementation - only useful for reproducing images"),
     'uni_pc_variant': OptionInfo("bh1", "UniPC variant", gr.Radio, {"choices": ["bh1", "bh2", "vary_coeff"]}, infotext='UniPC variant'),
     'uni_pc_skip_type': OptionInfo("time_uniform", "UniPC skip type", gr.Radio, {"choices": ["time_uniform", "time_quadratic", "logSNR"]}, infotext='UniPC skip type'),
     'uni_pc_order': OptionInfo(3, "UniPC order", gr.Slider, {"minimum": 1, "maximum": 50, "step": 1}, infotext='UniPC order').info("must be < sampling steps"),

@@ -157,7 +157,7 @@ class State:
             self.current_image_sampling_step = self.sampling_step
         except Exception:
-            # when switching models during genration, VAE would be on CPU, so creating an image will fail.
+            # when switching models during generation, VAE would be on CPU, so creating an image will fail.
             # we silently ignore this error
             errors.record_exception()

@@ -65,7 +65,7 @@ def crop_image(im, settings):
             rect[3] -= 1
             d.rectangle(rect, outline=GREEN)
             results.append(im_debug)
-            if settings.destop_view_image:
+            if settings.desktop_view_image:
                 im_debug.show()
     return results
@@ -341,5 +341,5 @@ class Settings:
         self.entropy_points_weight = entropy_points_weight
         self.face_points_weight = face_points_weight
         self.annotate_image = annotate_image
-        self.destop_view_image = False
+        self.desktop_view_image = False
         self.dnn_model_path = dnn_model_path

@@ -193,11 +193,11 @@ if __name__ == '__main__':
     embedded_image = insert_image_data_embed(cap_image, test_embed)
-    retrived_embed = extract_image_data_embed(embedded_image)
-    assert str(retrived_embed) == str(test_embed)
-    embedded_image2 = insert_image_data_embed(cap_image, retrived_embed)
+    retrieved_embed = extract_image_data_embed(embedded_image)
+    assert str(retrieved_embed) == str(test_embed)
+    embedded_image2 = insert_image_data_embed(cap_image, retrieved_embed)
     assert embedded_image == embedded_image2

@@ -172,7 +172,7 @@ class EmbeddingDatabase:
             if data:
                 name = data.get('name', name)
             else:
-                # if data is None, means this is not an embeding, just a preview image
+                # if data is None, means this is not an embedding, just a preview image
                 return
         elif ext in ['.BIN', '.PT']:
             data = torch.load(path, map_location="cpu")

@@ -105,7 +105,7 @@ def save_files(js_data, images, do_make_zip, index):
     logfile_path = os.path.join(shared.opts.outdir_save, "log.csv")
     # NOTE: ensure csv integrity when fields are added by
-    # updating headers and padding with delimeters where needed
+    # updating headers and padding with delimiters where needed
     if os.path.exists(logfile_path):
         update_logfile(logfile_path, fields)

@@ -88,7 +88,7 @@ class DropdownEditable(FormComponent, gr.Dropdown):
 class InputAccordion(gr.Checkbox):
     """A gr.Accordion that can be used as an input - returns True if open, False if closed.
-    Actaully just a hidden checkbox, but creates an accordion that follows and is followed by the state of the checkbox.
+    Actually just a hidden checkbox, but creates an accordion that follows and is followed by the state of the checkbox.
     """
     global_index = 0

@@ -380,7 +380,7 @@ def install_extension_from_url(dirname, url, branch_name=None):
     except OSError as err:
         if err.errno == errno.EXDEV:
             # Cross device link, typical in docker or when tmp/ and extensions/ are on different file systems
-            # Since we can't use a rename, do the slower but more versitile shutil.move()
+            # Since we can't use a rename, do the slower but more versatile shutil.move()
             shutil.move(tmpdir, target_dir)
         else:
             # Something else, not enough free space, permissions, etc. rethrow it so that it gets handled.

@@ -67,7 +67,7 @@ class UiPromptStyles:
         with gr.Row():
             self.selection = gr.Dropdown(label="Styles", elem_id=f"{tabname}_styles_edit_select", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info="Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.")
             ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {"choices": list(shared.prompt_styles.styles)}, f"refresh_{tabname}_styles")
-            self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply_dialog", tooltip="Apply all selected styles from the style selction dropdown in main UI to the prompt.")
+            self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply_dialog", tooltip="Apply all selected styles from the style selection dropdown in main UI to the prompt.")
             self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f"{tabname}_style_copy", tooltip="Copy main UI prompt to style.")
         with gr.Row():

@@ -102,7 +102,7 @@ def get_matched_noise(_np_src_image, np_mask_rgb, noise_q=1, color_variation=0.0
     shaped_noise_fft = _fft2(noise_rgb)
     shaped_noise_fft[:, :, :] = np.absolute(shaped_noise_fft[:, :, :]) ** 2 * (src_dist ** noise_q) * src_phase  # perform the actual shaping
-    brightness_variation = 0.  # color_variation  # todo: temporarily tieing brightness variation to color variation for now
+    brightness_variation = 0.  # color_variation  # todo: temporarily tying brightness variation to color variation for now
     contrast_adjusted_np_src = _np_src_image[:] * (brightness_variation + 1.) - brightness_variation * 2.
     # scikit-image is used for histogram matching, very convenient!

@@ -45,7 +45,7 @@ def apply_prompt(p, x, xs):
 def apply_order(p, x, xs):
     token_order = []
-    # Initally grab the tokens from the prompt, so they can be replaced in order of earliest seen
+    # Initially grab the tokens from the prompt, so they can be replaced in order of earliest seen
     for token in x:
         token_order.append((p.prompt.find(token), token))