LocalAI/backend
B4ckslash 9dddd1134d
fix: move python header comments below shebang in some backends (#1321)
* Fix python header comments for some extra gRPC backends

When a Python script is executed directly via exec(3), either the platform
must already know how to run the file itself (which requires special
configuration), or the first line must contain a shebang (#!) naming the
interpreter to run it with (as with shell scripts).

The shebang MUST be on the first line for the script to work on all platforms,
so any header comments need to be in the lines following it. Otherwise
executing these scripts as extra backends will yield an "exec format
error" message.

Changes:
* Move introductory comments below the shebang line
* Change header comment in transformers.py to refer to the correct
  Python module

Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>

* Make header comment in ttsbark.py more specific

Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>

---------

Signed-off-by: Marcus Köhler <khler.marcus@gmail.com>
2023-11-23 15:22:37 +01:00
cpp refactor: move backends into the backends directory (#1279) 2023-11-13 22:40:16 +01:00
go refactor: rename llama-stable to llama-ggml (#1287) 2023-11-18 08:18:43 +01:00
python fix: move python header comments below shebang in some backends (#1321) 2023-11-23 15:22:37 +01:00
backend.proto refactor: move backends into the backends directory (#1279) 2023-11-13 22:40:16 +01:00