This tries to execute interpolate with FP32 if it fails in half precision.
The background is that on some environments, such as Apple Silicon (Mx chip) macOS devices, we get an error like the following:
```
"torch/nn/functional.py", line 3931, in interpolate
return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
```
In this case, ```--no-half``` does not help, so this commit adds an FP32 fallback to solve it.
Note that ```upsample_nearest2d``` is called from ```torch.nn.functional.interpolate```, and the fallback for ```torch.nn.functional.interpolate``` is needed in:
- ```VAEApprox.forward``` in ```modules/sd_vae_approx.py```
- ```Upsample.forward``` in ```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py```
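A minimal sketch of what the fallback could look like on the webui side, in ```VAEApprox.forward``` (the surrounding layers are omitted and the structure here is illustrative, not the exact code in ```modules/sd_vae_approx.py```; only the try/except pattern matters):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEApprox(nn.Module):
    # ..snip: convolution layers omitted for brevity..

    def forward(self, x):
        try:
            # Normal path: interpolate in the tensor's current dtype (may be FP16).
            x = F.interpolate(x, (x.shape[2] * 2, x.shape[3] * 2))
        except RuntimeError:
            # FP32 fallback: some backends have no half-precision kernel for
            # upsample_nearest2d, so retry in FP32 and cast back afterwards.
            x = F.interpolate(x.to(torch.float32), (x.shape[2] * 2, x.shape[3] * 2)).to(x.dtype)
        # ..snip: padding and convolution layers follow..
        return x
```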
autocrop.download_and_cache_models
In OpenCV >= 4.8 the face detection model was updated, so the function downloads the model matching the installed OpenCV version.
It returns the model path or raises an exception.
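A rough sketch of that behavior, assuming a version check on ```cv2.__version__``` and a simple download-and-cache helper (the parameter, model file names, and URL here are placeholders, not the webui's actual ones):
```python
import os
import urllib.request
import cv2

def download_and_cache_models(dirname):
    """Pick the face detection model matching the installed OpenCV version,
    download it into dirname if missing, and return its path (or raise)."""
    major, minor = (int(part) for part in cv2.__version__.split(".")[:2])

    # Placeholder file names / URL for illustration only.
    model_file = "face_detection_2023.onnx" if (major, minor) >= (4, 8) else "face_detection_2022.onnx"
    url = f"https://example.com/models/{model_file}"

    model_path = os.path.join(dirname, model_file)
    if not os.path.exists(model_path):
        os.makedirs(dirname, exist_ok=True)
        urllib.request.urlretrieve(url, model_path)

    if not os.path.exists(model_path):
        raise RuntimeError(f"unable to download face detection model to {model_path}")

    return model_path
```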
Fixes an issue where ```--use-cpu all``` properly makes SD run on the CPU but leaves ControlNet (and presumably other extensions) pointed at the GPU, causing a crash in ControlNet due to a device mismatch between SD and CN:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14097
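One plausible shape for this kind of fix, sketched with illustrative names (the webui's actual helpers and option handling may differ): route every per-task device lookup through one helper that also honors ```--use-cpu all```, so extensions end up on the same device as SD.
```python
import torch

cpu = torch.device("cpu")

def get_optimal_device():
    # Illustrative default: prefer CUDA when available, otherwise CPU.
    return torch.device("cuda") if torch.cuda.is_available() else cpu

def get_device_for(task, use_cpu_tasks):
    # use_cpu_tasks mirrors the value passed to the --use-cpu command-line option.
    # Honoring "all" here keeps extensions such as ControlNet on the same
    # device as SD instead of defaulting to the GPU.
    if task in use_cpu_tasks or "all" in use_cpu_tasks:
        return cpu
    return get_optimal_device()

# Example: with --use-cpu all, ControlNet gets the CPU device too.
print(get_device_for("controlnet", use_cpu_tasks=["all"]))  # -> cpu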
Note that the submodule may require a matching modification for the FP32 fallback above. The following is an example change in the other submodule.
In ```repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py```:
```python
class Upsample(nn.Module):
    # ..snip..
    def forward(self, x):
        assert x.shape[1] == self.channels
        if self.dims == 3:
            x = F.interpolate(
                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
            )
        else:
            try:
                x = F.interpolate(x, scale_factor=2, mode="nearest")
            except RuntimeError:
                # FP32 fallback for backends without a half-precision
                # upsample_nearest2d kernel; cast back to the original dtype.
                x = F.interpolate(x.to(th.float32), scale_factor=2, mode="nearest").to(x.dtype)
        if self.use_conv:
            x = self.conv(x)
        return x
    # ..snip..
```
This is the same FP32 fallback pattern as in ```sd_vae_approx.py```.
shared.opts.img2img_batch_show_results_limit
Limits the number of images returned to the UI for batch img2img.
- default limit: 32
- 0: no images are shown
- -1: unlimited, all images are shown
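A small sketch of how such a limit could be applied when collecting batch results (the function name and call site are illustrative, not the webui's actual code):
```python
def apply_show_results_limit(images, limit):
    """Return the subset of batch images to send back to the UI."""
    if limit == 0:
        return []           # 0: show nothing
    if limit < 0:
        return images       # -1 (or any negative value): show everything
    return images[:limit]   # otherwise show at most `limit` images

# Example: with the default limit of 32, only the first 32 results are shown.
shown = apply_show_results_limit(list(range(100)), limit=32)
assert len(shown) == 32
```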