deepbooru: added option to quote (\) in tags (see the escaping sketch after this list)
deepbooru/BLIP: write caption to file instead of image filename
deepbooru/BLIP: now possible to use both for captions
deepbooru: process is stopped even if an exception occurs
train: make it possible to create text files with prompts
train: rework scheduler so there's less duplicated code between textual inversion and hypernets
train: move epochs setting to options
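A minimal sketch of what the tag-quoting option above could do, assuming the intent is to backslash-escape characters that the prompt parser would otherwise treat as attention syntax; the helper name and regex are illustrative, not the actual webui code:

```python
import re

# Hypothetical helper: backslash-escape characters the prompt parser
# treats as attention syntax, so a tag like "smile_(happy)" is passed
# through literally instead of being reweighted.
re_special = re.compile(r'([\\()])')

def quote_tag(tag: str, escape_special: bool = True) -> str:
    return re_special.sub(r'\\\1', tag) if escape_special else tag

print(quote_tag("smile_(happy)"))   # smile_\(happy\)
```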
What:
* Update wrap_gradio_call to add a limit on the maximum amount of text output
Why:
* wrap_gradio_call currently prints out a list of the arguments provided to the failing function.
* If that function is save_image, this causes the entire image to be printed to stderr.
* If the image is large, this can cause the service to lock up while attempting to print all the text.
* It is easy to generate large images using the x/y plot script.
* It is easy to encounter image save exceptions, including if the output directory does not exist / cannot be written to, or if the file is too big.
* The huge amount of log spam is confusing and not particularly helpful, so the argument text is now capped (a sketch of the truncation idea follows this list).
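A rough sketch of the truncation, assuming a per-argument cap; the constant and helper names are illustrative, not the exact code in the PR:

```python
import sys

MAX_ARG_TEXT = 768  # assumed cap on how much of each argument gets printed

def format_arg(arg) -> str:
    text = repr(arg)
    if len(text) > MAX_ARG_TEXT:
        return text[:MAX_ARG_TEXT] + f"... ({len(text) - MAX_ARG_TEXT} more characters truncated)"
    return text

def report_exception(func_name: str, args) -> None:
    # Print the failing call's arguments, but never let a multi-megapixel
    # image's bytes flood stderr.
    print(f"Error in {func_name}:", file=sys.stderr)
    for arg in args:
        print("  " + format_arg(arg), file=sys.stderr)
```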
Since the UI also allows users to specify ranks, it can be useful to show people what ranks are being returned by interrogate.
This can also give much better results when feeding the interrogate results back into either img2img or txt2img, especially when trying to generate a specific character or scene for which you have a similar concept image.
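A minimal sketch of how interrogate tags could be rendered with their ranks so the text can be pasted straight back into a prompt; the function name, threshold, and (tag:weight) formatting are assumptions for illustration, not the merged code:

```python
def format_tags(scores: dict[str, float], threshold: float = 0.5, show_ranks: bool = True) -> str:
    """Render DeepBooru-style tag scores as a prompt fragment."""
    kept = sorted(((t, s) for t, s in scores.items() if s >= threshold),
                  key=lambda item: item[1], reverse=True)
    if show_ranks:
        return ", ".join(f"({tag}:{score:.3f})" for tag, score in kept)
    return ", ".join(tag for tag, _ in kept)

print(format_tags({"1girl": 0.99, "smile": 0.87, "hat": 0.42}))
# (1girl:0.990), (smile:0.870)
```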
Testing Steps:
Launch Webui with command line arg: --deepdanbooru
Navigate to the img2img tab, use "Interrogate DeepBooru", verify tags appear as before. Use "Interrogate CLIP", verify the prompt appears as before.
Navigate to the Settings tab, enable the new option, click "Apply settings".
Navigate to img2img, use "Interrogate DeepBooru" again, verify that weights appear and are properly formatted. Note that the "Interrogate CLIP" prompt is still unchanged.
In my testing, this change has no effect on "Interrogate CLIP", as it seems to generate a sentence-structured caption rather than a set of tags.
(reproduce changes from 6ed4faac46)
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS); see the sketch after this list
* Add command line option for it
* Make it default when CUDA is unavailable
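A rough sketch of the general idea behind this kind of memory-efficient cross-attention, computing attention over query slices so the full similarity matrix never has to be materialized at once; this is an illustration under that assumption, not the code taken from InvokeAI:

```python
import torch

def sliced_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                     slice_size: int = 1024) -> torch.Tensor:
    # q, k, v: (batch, tokens, dim), with q and v sharing the feature dim.
    # Process query tokens in chunks so only a (slice_size x tokens)
    # similarity block exists at any one time.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for start in range(0, q.shape[1], slice_size):
        end = start + slice_size
        sim = torch.einsum('b i d, b j d -> b i j', q[:, start:end] * scale, k)
        out[:, start:end] = torch.einsum('b i j, b j d -> b i d', sim.softmax(dim=-1), v)
    return out
```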