Merge branch 'master' into master

This commit is contained in:
fyears 2024-04-05 10:44:03 +08:00 committed by GitHub
commit 0adc3948e5
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
130 changed files with 7776 additions and 3406 deletions


@ -31,13 +31,30 @@ body:
- type: dropdown
id: service
attributes:
label: What remote cloud services are you using?
label: What remote cloud services are you using? (Please choose the specified one if it's in the list)
multiple: true
options:
- S3
- S3 (Cloudflare R2)
- S3 (BackBlaze B2)
- S3 (腾讯云 COS Tencent Cloud COS)
- S3 (阿里云 OSS Alibaba Cloud OSS)
- S3 (MinIO)
- S3 (Wasabi)
- S3 (Storj)
- OneDrive for personal
- OneDrive for business
- Dropbox
- webdav
- webdav (ownCloud)
- webdav (InfiniCloud (formerly TeraCLOUD))
- webdav (AList)
- webdav (Cloudreve)
- webdav (坚果云 JianGuoYun/NutStore)
- webdav (NextCloud)
- webdav (FastMail)
- webdav (rclone webdav)
- webdav (nginx)
- others
validations:
required: true


@ -21,9 +21,26 @@ body:
multiple: true
options:
- S3
- S3 (Cloudflare R2)
- S3 (BackBlaze B2)
- S3 (腾讯云 COS Tencent Cloud COS)
- S3 (阿里云 OSS Alibaba Cloud OSS)
- S3 (MinIO)
- S3 (Wasabi)
- S3 (Storj)
- OneDrive for personal
- OneDrive for business
- Dropbox
- webdav
- webdav (ownCloud)
- webdav (InfiniCloud (formerly TeraCLOUD))
- webdav (AList)
- webdav (Cloudreve)
- webdav (坚果云 JianGuoYun/NutStore)
- webdav (NextCloud)
- webdav (FastMail)
- webdav (rclone webdav)
- webdav (nginx)
- others
validations:
required: false

1
.gitignore vendored

@ -5,6 +5,7 @@
# npm
node_modules
package-lock.json
pnpm-lock.yaml
# build
main.js

3
.gitmodules vendored

@ -1,3 +0,0 @@
[submodule "src/langs"]
path = src/langs
url = https://github.com/remotely-save/langs.git


@ -17,22 +17,22 @@ This is yet another unofficial sync plugin for Obsidian. If you like it or find
## Features
- Supports:
- Amazon S3 or S3-compatible
- Amazon S3 or S3-compatible (Cloudflare R2 / BackBlaze B2 / MinIO / ...)
- Dropbox
- OneDrive for personal
- Webdav
- [Here](./docs/services_connectable_or_not.md) shows more connectable (or not-connectable) services in detail.
- **Obsidian Mobile supported.** Vaults can be synced across mobile and desktop devices with the cloud service as the "broker".
- **[End-to-end encryption](./docs/encryption.md) supported.** Files would be encrypted using openssl format before being sent to the cloud **if** the user specifies a password.
- **[End-to-end encryption](./docs/encryption/README.md) supported.** Files would be encrypted using openssl format before being sent to the cloud **if** the user specifies a password.
- **Scheduled auto sync supported.** You can also manually trigger the sync using sidebar ribbon, or using the command from the command palette (or even bind the hot key combination to the command then press the hot key combination).
- **[Minimal Intrusive](./docs/minimal_intrusive_design.md).**
- **Skip Large files** and **skip paths** by custom regex conditions!
- **Fully open source under [Apache-2.0 License](./LICENSE).**
- **[Sync Algorithm open](./docs/sync_algorithm_v2.md) for discussion.**
- **[Sync Algorithm open](./docs/sync_algorithm/v3/intro.md) for discussion.**
- **[Basic Conflict Detection And Handling](./docs/sync_algorithm/v3/intro.md)** now, more to come!
## Limitations
- **To support deletion sync, extra metadata will also be uploaded.** See [Minimal Intrusive](./docs/minimal_intrusive_design.md).
- **No conflict resolution. No content-diff-and-patch algorithm.** All files and folders are compared using their local and remote "last modified time", and those with the later "last modified time" win.
- **Cloud services cost you money.** Always be aware of the costs and pricing. Specifically, all the operations, including but not limited to downloading, uploading, listing all files, calling any api, storage sizes, may or may not cost you money.
- **Some limitations from the browser environment.** More technical details are [in the doc](./docs/browser_env.md).
- **You should protect your `data.json` file.** The file contains sensitive information.
@ -60,15 +60,21 @@ Additionally, the plugin author may occasionally visit Obsidian official forum a
### S3
- Tutorials / Examples:
- [Cloudflare R2](./docs/remote_services/s3_cloudflare_r2/README.md)
- [BackBlaze B2](./docs/remote_services/s3_backblaze_b2/README.md)
- [Storj](./docs/remote_services/s3_storj_io/README.md)
- [腾讯云 COS](./docs/remote_services/s3_tencent_cloud_cos/README.zh-cn.md) | [Tencent Cloud COS](./docs/remote_services/s3_tencent_cloud_cos/README.md)
- [MinIO](./docs/remote_services/s3_minio/README.md)
- Prepare your S3 (-compatible) service information: [endpoint, region](https://docs.aws.amazon.com/general/latest/gr/s3.html), [access key id, secret access key](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-your-credentials.html), bucket name. The bucket should be empty and solely for syncing a vault.
- About CORS:
- If you are using Obsidian desktop >= 0.13.25 or mobile >= 1.1.1, you can skip this CORS part.
- If you are using Obsidian desktop < 0.13.25 or mobile < 1.1.1, you need to configure (enable) [CORS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enabling-cors-examples.html) for requests from `app://obsidian.md` and `capacitor://localhost` and `http://localhost`, and add at least `ETag` into exposed headers. A full example is [here](./docs/s3_cors_configure.md). It's unfortunately required, because the plugin sends requests from a browser-like environment. Those addresses were tested and found on desktop, iOS, and Android.
- If you are using AWS S3, create [policy and user](./docs/remote_services/s3_general/s3_user_policy.md).
- Very old version of Obsidian needs [configuring CORS](./docs/remote_services/s3_general/s3_cors_configure.md).
- Download and enable this plugin.
- Enter your information to the settings of this plugin.
- If you want to enable end-to-end encryption, also set a password in the settings. If you do not specify a password, the files and folders are synced to the cloud as plain, original content.
- Click the new "circle arrow" icon on the ribbon (the left sidebar), **every time** you want to sync your vault between local and remote. (Or, you could configure auto sync in the settings panel (See next chapter).) While syncing, the icon becomes "two half-circle arrows". Besides clicking the icon on the sidebar ribbon, you can also activate the corresponding command in the command palette.
- **Be patient while syncing.** Especially during the first-time sync.
- If you want to sync the files across multiple devices, **your vault name should be the same** while using default settings.
### Dropbox
@ -76,6 +82,7 @@ Additionally, the plugin author may occasionally visit Obsidian official forum a
- After the authorization, the plugin can read your name and email (which cannot be unselected in the Dropbox API), and read and write files in your Dropbox's `/Apps/remotely-save` folder.
- If you decide to authorize this plugin to connect to Dropbox, please go to plugin's settings, and choose Dropbox then follow the instructions. [More with screenshot is here](./docs/dropbox_review_material/README.md).
- Password-based end-to-end encryption is also supported. But please be aware that **the vault name itself is not encrypted**.
- If you want to sync the files across multiple devices, **your vault name should be the same** while using default settings.
### OneDrive for personal
@ -84,19 +91,21 @@ Additionally, the plugin author may occasionally visit Obsidian official forum a
- After the authorization, the plugin can read your name and email, and read and write files in your OneDrive's `/Apps/remotely-save` folder.
- If you decide to authorize this plugin to connect to OneDrive, please go to plugin's settings, and choose OneDrive then follow the instructions.
- Password-based end-to-end encryption is also supported. But please be aware that **the vault name itself is not encrypted**.
- If you want to sync the files across multiple devices, **your vault name should be the same** while using default settings.
- You might also want to checkout [faq for OneDrive](./docs/remote_services/onedrive/README.md).
### webdav
- About CORS:
- If you are using Obsidian desktop >= 0.13.25 or iOS >= 1.1.1, you can skip this CORS part.
- If you are using Obsidian desktop < 0.13.25 or iOS < 1.1.1 or any Android version:
- The webdav server has to enable CORS for requests from `app://obsidian.md` and `capacitor://localhost` and `http://localhost`, **AND** all webdav HTTP methods, **AND** all webdav headers. These are required, because Obsidian mobile works like a browser and mobile plugins are limited by CORS policies unless under an upgraded Obsidian version.
- Popular software NextCloud, OwnCloud, and `rclone serve webdav` do **NOT** enable CORS by default. If you are using any of them, you should evaluate the risk and find a way to enable CORS before using this plugin, or use an upgraded Obsidian version.
- **Unofficial** workaround: NextCloud users can **evaluate the risk by themselves**, and if they decide to accept the risk, they can install the [WebAppPassword](https://apps.nextcloud.com/apps/webapppassword) app, and add `app://obsidian.md`, `capacitor://localhost`, `http://localhost` to `Allowed origins`.
- **Unofficial** workaround: OwnCloud users can **evaluate the risk by themselves**, and if they decide to accept the risk, they can download the `.tar.gz` of `WebAppPassword` above and manually install and configure it on their instances.
- The plugin is tested successfully under python package [`wsgidav` (version 4.0)](https://github.com/mar10/wsgidav). See [this issue](https://github.com/mar10/wsgidav/issues/239) for some details.
- Tutorials / Examples:
- [ownCloud](./docs/remote_services/webdav_owncloud/README.md)
- [InfiniCloud](./docs/remote_services/webdav_infinicloud_teracloud/README.md)
- [Synology webdav server](./docs/remote_services/webdav_synology_webdav_server/README.md) | [群晖 webdav server](./docs/remote_services/webdav_synology_webdav_server/README.zh-cn.md)
- [AList中文](./docs/remote_services/webdav_alist/README.zh-cn.md) | [AList (English)](./docs/remote_services/webdav_alist/README.md)
- [坚果云](./docs/remote_services/webdav_jianguoyun/README.zh-cn.md) | [JianGuoYun/NutStore](./docs/remote_services/webdav_jianguoyun/README.md)
- Very old version of Obsidian needs [configuring CORS](./docs/remote_services/webdav_general/webav_cors.md).
- Your data would be synced to a `${vaultName}` sub folder on your webdav server.
- Password-based end-to-end encryption is also supported. But please be aware that **the vault name itself is not encrypted**.
- If you want to sync the files across multiple devices, **your vault name should be the same** while using default settings.
## Scheduled Auto Sync

25
debugServer.js Normal file

@ -0,0 +1,25 @@
// Importing the http module
const http = require("http");
const requestHandler = (req, res) => {
let body = [];
req
.on("data", (chunk) => {
body.push(chunk);
})
.on("end", () => {
const parsed = JSON.parse(Buffer.concat(body).toString());
const prettyParsed = JSON.stringify(parsed, null, 2);
console.log(prettyParsed);
res.setHeader("Content-Type", "application/json");
res.end(prettyParsed);
});
};
const server = http.createServer(requestHandler);
const addr = "0.0.0.0";
const port = 3000;
server.listen(port, addr, undefined, () => {
console.log(`Server is Running on ${addr}:${port}`);
});
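The server above can be exercised from the terminal. Below is a self-contained sketch that starts an inline copy of the same echo logic via `node -e` (so nothing has to be saved to disk first) and POSTs a sample JSON body to it; it assumes `node` and `curl` are available:

```shell
# start an inline copy of the echo server above on port 3000
node -e '
const http = require("http");
http
  .createServer((req, res) => {
    let body = [];
    req.on("data", (c) => body.push(c)).on("end", () => {
      const pretty = JSON.stringify(JSON.parse(Buffer.concat(body).toString()), null, 2);
      res.setHeader("Content-Type", "application/json");
      res.end(pretty);
    });
  })
  .listen(3000, "127.0.0.1");
' &
NODE_PID=$!
sleep 1
# POST a sample JSON body; the server echoes it back pretty-printed
RESP=$(curl -s -X POST -H "Content-Type: application/json" \
  -d '{"hello":"world"}' http://127.0.0.1:3000/)
echo "$RESP"
kill "$NODE_PID"
```

If you have saved the file as `debugServer.js`, `node debugServer.js` followed by the same `curl` call behaves equivalently.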


@ -0,0 +1,8 @@
# Encryption
Currently (March 2024), Remotely Save supports two end-to-end encryption formats:
1. [RClone Crypt](./rclone.md) format, which is the recommended way now.
2. [OpenSSL enc](./openssl.md) format.
Here is also the [comparison](./comparation.md).


@ -0,0 +1,23 @@
# Comparison Between Encryption Formats
## Warning
**ALWAYS BACKUP YOUR VAULT MANUALLY!!!**
If you switch between RClone Crypt format and OpenSSL enc format, you have to delete the cloud vault files **manually** and **fully**, so that the plugin can re-sync (i.e. re-upload) the newly encrypted versions to the cloud.
## The feature table
| | RClone Crypt | OpenSSL enc | comments |
| ------------------------ | ------------------------------------------------------------------------------------------ | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| key generation | scrypt with fixed salt | PBKDF2 with dynamic salt | scrypt is better than PBKDF2 from the algorithm aspect. But RClone uses fixed salt by default. Also the parameters might affect the result. |
| content encryption | XSalsa20Poly1305 on chunks | AES-256-CBC | XSalsa20Poly1305 is way better than AES-256-CBC. And encryption by chunks should require fewer resources. |
| file name encryption | EME on each segment of the path | AES-256-CBC on the whole path | RClone has the benefit as well as pitfall that the path structure is preserved. Maybe it's more of a design decision difference? No comment on EME and AES-256-CBC. |
| viewing decrypted result | RClone has a command that can mount the encrypted vault as if the encryption were transparent. | No convenient way that we are aware of, except writing some scripts. | RClone is way more convenient. |
## Some notes
1. Anyway, security is a hard problem. The author of Remotely Save doesn't have sufficient knowledge to "judge" which one is the better format. **Use them at your own risk.**
2. Currently the RClone Crypt format is recommended by default in Remotely Save, simply because of the taste of the Remotely Save author, who likes RClone.
3. **Always use a long password.**
4. Both algorithms are selected deliberately to **be compatible with some well-known third-party tools** (instead of some home-made methods) and **have many tests to ensure the correctness**.
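As a concrete illustration of point 4, an OpenSSL-style round trip can be performed entirely with the standard `openssl` CLI. This is only a sketch: the flags below (e.g. `-pbkdf2`, `-salt`) are generic choices for demonstration and may not match the plugin's exact parameters.

```shell
# encrypt a sample file with a password, then decrypt it back
echo "hello world" > sometext.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:somepassword \
  -in sometext.txt -out sometext.txt.enc
# the salt is stored in the file header, so only the password is needed
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:somepassword \
  -in sometext.txt.enc -out decrypted.txt
cat decrypted.txt
```

Being able to round-trip files with off-the-shelf tools is exactly the interoperability property both formats aim for.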


@ -1,10 +1,22 @@
# Encryption
# OpenSSL enc format
If a password is set, the files are encrypted before being sent to the cloud.
The encryption algorithm is deliberately designed to be aligned with the openssl format.
## Warning
1. The encryption algorithm is implemented using web-crypto.
**ALWAYS BACKUP YOUR VAULT MANUALLY!!!**
If you switch between RClone Crypt format and OpenSSL enc format, you have to delete the cloud vault files **manually** and **fully**, so that the plugin can re-sync (i.e. re-upload) the newly encrypted versions to the cloud.
## Comparison between encryption formats
See the doc [Comparison](./comparation.md).
## Interoperability with official OpenSSL
This encryption algorithm is deliberately designed to be aligned with the openssl format.
1. The encryption algorithm is implemented using web-crypto. Using AES-256-CBC.
2. The file content is encrypted using openssl format. Assuming a file named `sometext.txt`, a password `somepassword`, then the encryption is equivalent to the following command:
```bash

46
docs/encryption/rclone.md Normal file

@ -0,0 +1,46 @@
# RClone Crypt format
The encryption is compatible with RClone Crypt with **base64** name encryption format.
It's developed based on another js project by the same author of Remotely Save: [`@fyears/rclone-crypt`](https://github.com/fyears/rclone-crypt), which is NOT an official library from RClone, and is NOT affiliated with RClone.
Reasonable tests are also ported from official RClone code, to ensure the compatibility and correctness of the encryption.
## Warning
**ALWAYS BACKUP YOUR VAULT MANUALLY!!!**
If you switch between RClone Crypt format and OpenSSL enc format, you have to delete the cloud vault files **manually** and **fully**, so that the plugin can re-sync (i.e. re-upload) the newly encrypted versions to the cloud.
## Comparison between encryption formats
See the doc [Comparison](./comparation.md).
## Interoperability with official RClone
Please pay attention that the plugin uses **base64** encoding of encrypted file names, while official RClone by default uses **base32** file names. The intention is purely to potentially support longer file names.
You could set up the RClone profile by calling `rclone config`. You need to create two profiles, one for your original connection and the other for RClone Crypt.
Finally, a working config file should look like this:
```ini
[webdav1]
type = webdav
url = https://example.com/sharefolder1/subfolder1 # the same as the web address in Remotely Save settings.
vendor = other
user = <some webdav username>
pass = <some webdav password, obfuscated>
[webdav1crypt]
type = crypt
remote = webdav1:vaultname # points at the first profile; the folder is the same as your "Remote Base Directory" (usually the vault name) in Remotely Save settings
password = <some encryption password, obfuscated>
filename_encoding = base64 # don't forget this!!!
```
You can use the `mount` command to view and see the files in the file explorer! On Windows, the command should look like this (the remote vault is mounted to drive `X:`):
```bash
rclone mount webdav1crypt: X: --network-mode
```


@ -12,8 +12,8 @@ See [here](./export_sync_plans.md).
See [here](./check_console_output.md).
## Advanced: Save Console Output Then Read Them Later
## Advanced: Use `Logstravaganza` to export logs
This method works for desktop and mobile devices (iOS, Android).
This method works for desktop and mobile devices (iOS, Android), especially useful for iOS.
See [here](./save_console_output_and_export.md).
See [here](./use_logstravaganza.md).


@ -1,25 +0,0 @@
# Save Console Output And Read Them Later
## Disable Auto Sync Firstly
You should disable auto sync to avoid any unexpected running.
## Set The Output Level To Debug
Go to the plugin settings, scroll down to the section "Debug" -> "alter console log level", and change it from "info" to "debug".
## Enable Saving The Output To DB
Go to the plugin settings, scroll down to the section "Debug" -> "Save Console Logs Into DB", and change it from "disable" to "enable". **This setting has some performance cost, so do NOT always turn this on when not necessary!**
## Run The Sync
Trigger the sync manually (by clicking the icon on the ribbon sidebar). Something (hopefully) helpful should show up in the console. The console logs are also saved into the DB now.
## Export The Output And Read The Logs
Go to the plugin settings, scroll down to the section "Debug" -> "Export Console Logs From DB", and click the button. A new file `log_hist_exported_on_....md` should be created inside the special folder `_debug_remotely_save/`. You could read it and hopefully find something useful.
## Disable Saving The Output To DB
After debugging, go to the plugin settings, scroll down to the section "Debug" -> "Save Console Logs Into DB", and change it from "enable" to "disable".


@ -0,0 +1,14 @@
# Use `Logstravaganza`
On iOS, it's quite hard to directly check the console logs.
Luckily, there is a third-party plugin: [`Logstravaganza`](https://obsidian.md/plugins?search=Logstravaganza#), by Carlo Zottmann, that can redirect the output to a note.
You can just:
1. Install it.
2. Enable it.
3. Do something, to trigger some console logs.
4. Checkout `LOGGING-NOTE (device name).md` in the root of your vault.
See more on its site: <https://github.com/czottmann/obsidian-logstravaganza>.

56
docs/linux.md Normal file

@ -0,0 +1,56 @@
# How to receive `obsidian://` in Linux
## Background
For example, when we are authorizing OneDrive, we have to jump back to Obsidian automatically using `obsidian://`.
## Short Desc From Official Obsidian Doc
Official doc has some explanation:
<https://help.obsidian.md/Extending+Obsidian/Obsidian+URI#Register+Obsidian+URI>
## Long Desc
Assuming the username is `somebody`, and the `.AppImage` file is downloaded to `~/Desktop`.
1. Download and **extract** the app image file in terminal
```bash
cd /home/somebody/Desktop
chmod +x Obsidian-x.y.z.AppImage
./Obsidian-x.y.z.AppImage --appimage-extract
# you should have the folder squashfs-root
# we want to rename it
mv squashfs-root Obsidian
```
2. Create a `.desktop` file
```bash
# copy and paste the following MULTI-LINE command
# remember to adjust the path to your own setup
cat > ~/Desktop/obsidian.desktop <<EOF
[Desktop Entry]
Name=Obsidian
Comment=obsidian
Exec=/home/somebody/Desktop/Obsidian/obsidian %u
Keywords=obsidian
StartupNotify=true
Terminal=false
Type=Application
Icon=/home/somebody/Desktop/Obsidian/obsidian.png
MimeType=x-scheme-handler/obsidian;
EOF
# yeah we can check out the output
cat ~/Desktop/obsidian.desktop
## [Desktop Entry]
## ...
```
3. Right click the `obsidian.desktop` file on the Desktop, and click "Allow launching"
4. Double click the `obsidian.desktop` file.


@ -1,8 +1,10 @@
# Minimal Intrusive Design
Before version 0.3.0, the plugin did not upload additional meta data to the remote.
~~Before version 0.3.0, the plugin did not upload additional meta data to the remote.~~
From and after version 0.3.0, the plugin just uploads minimal extra necessary meta data to the remote.
~~From version 0.3.0 ~ 0.3.40, the plugin just uploads minimal extra necessary meta data to the remote.~~
From version 0.4.1 and above, the plugin doesn't need uploading meta data due to the sync algorithm upgrade.
## Benefits
@ -12,10 +14,14 @@ For example, it's possible for a user to manually upload a file to s3, and next
And it's also possible to combine another "sync-to-s3" solution (like, another software) on desktops, and this plugin on mobile devices, together.
## Necessity Of Uploading Extra Metadata
## ~~Necessity Of Uploading Extra Metadata from 0.3.0 ~ 0.3.40~~
The main issue comes from deletions (and renamings which is actually interpreted as "deletion-then-creation").
~~The main issue comes from deletions (and renamings which is actually interpreted as "deletion-then-creation").~~
If we don't upload any extra info to the remote, there's usually no way for the second device to know what files / folders have been deleted on the first device.
~~If we don't upload any extra info to the remote, there's usually no way for the second device to know what files / folders have been deleted on the first device.~~
To overcome this issue, from and after version 0.3.0, the plugin uploads extra metadata files `_remotely-save-metadata-on-remote.{json,bin}` to users' configured cloud services. Those files contain some info about what has been deleted on the first device, so that the second device can read the list to apply the deletions to itself. Some other necessary meta info would also be written into the extra files.
~~To overcome this issue, from and after version 0.3.0, the plugin uploads extra metadata files `_remotely-save-metadata-on-remote.{json,bin}` to users' configured cloud services. Those files contain some info about what has been deleted on the first device, so that the second device can read the list to apply the deletions to itself. Some other necessary meta info would also be written into the extra files.~~
## No uploading extra metadata from 0.4.1
Some information, including previous successful sync status of each file, is kept locally.


@ -0,0 +1,23 @@
# OneDrive
- **This plugin is NOT an official Microsoft / OneDrive product.** The plugin just uses Microsoft's [OneDrive's public API](https://docs.microsoft.com/en-us/onedrive/developer/rest-api).
- After the authorization, the plugin can read your name and email, and read and write files in your OneDrive's `/Apps/remotely-save` folder.
- If you decide to authorize this plugin to connect to OneDrive, please go to plugin's settings, and choose OneDrive then follow the instructions.
- Password-based end-to-end encryption is also supported. But please be aware that **the vault name itself is not encrypted**.
- If you want to sync the files across multiple devices, **your vault name should be the same** while using default settings.
## FAQ
### How about OneDrive for Business?
This plugin only works with "OneDrive for personal", and does not work with "OneDrive for Business" (yet). See [#11](https://github.com/fyears/remotely-save/issues/11) for further details.
### I cannot find `/Apps/remotely-save` folder
Mysteriously, some users report that their OneDrive generates `/Application/Graph` instead of `/Apps/remotely-save`. See [#517](https://github.com/remotely-save/remotely-save/issues/517).
The solution is simple:
1. Backup your vault manually.
2. Go to the OneDrive website (<https://onedrive.live.com/>), and rename `/Application/Graph` to `/Application/remotely-save` (right click on the folder and you will see the rename option)
3. Come back to Obsidian and try to sync!


@ -0,0 +1,53 @@
# Backblaze B2
## Links
https://www.backblaze.com/cloud-storage
## Steps
1. Create a Backblaze account [on this page](https://www.backblaze.com/cloud-storage). Credit card info _is not_ required. Backblaze B2 offers 10 GB of free storage.
2. Please be aware that, though B2 provides some free quota, **it may still cost you money if the usage of storage or api requests exceeds a certain value!!!** Especially pay attention to the api requests!!!
3. Create a **bucket**. You can leave the default settings, or you can enable encryption (which is different from what you can set in Remotely Save):
![](./s3_backblaze_b2-1-bucket.png)
![](./s3_backblaze_b2-2-create_bucket.png)
4. Copy `Endpoint`, eg. `s3.us-east-005.backblazeb2.com` — it'll be used later.
5. Copy `bucketname` near the 🪣 icon (the "bucket icon") — it'll be used later.
![](./s3_backblaze_b2-3-copy.png)
6. Go to **Application Keys**:
![](./s3_backblaze_b2-4-app_keys.png)
7. **Add a new key**:
![](./s3_backblaze_b2-5-add_new_app_keys.png)
![](./s3_backblaze_b2-6-app_keys_copy.png)
8. Save `keyID` and `applicationKey` — they will be used later.
9. Go to Remotely Save settings in Obsidian and:
- Choose `S3 or compatible` in **Remote Service**:
- Copy `Endpoint` from Backblaze (see 4. above) to `Endpoint` in Remotely Save
- From the `Endpoint` take the `region` (eg. `us-east-005`) and paste it into `Region` in Remotely Save
- Copy `keyID` (see 8. above) to `Access Key ID` in Remotely Save
- Copy `applicationKey` (see 8. above) to `Secret Access Key` in Remotely Save
- Copy `bucketname` (see 5. above) to `Bucket Name` in Remotely Save
![](./s3_backblaze_b2-7-copy_paste.png)
10. **Enable Bypass CORS**:
![](./s3_backblaze_b2-8-cors.png)
11. Click **Check** in _Check Connectivity_ to see if you can connect to the B2 bucket:
![](./s3_backblaze_b2-9-check_connectionpng.png)
12. Sync!
![](./s3_backblaze_b2-10-sync.png)



@ -0,0 +1,27 @@
# Cloudflare R2
## Links
<https://www.cloudflare.com/developer-platform/r2/>
## Steps
1. **Be aware that it may cost you money.**
2. Create a Cloudflare account and enable R2 feature. **Credit card info might be required by Cloudflare**, though Cloudflare provides generous free tier and zero egress fee.
3. Create a bucket.
![](./s3_cloudflare_r2_create_bucket.png)
4. Create an Access Key with "Object Read & Write" permission, and scope it to your created bucket. During the creation, you will also get the auto-generated secret key, and the endpoint address.
![](./s3_cloudflare_r2_create_api_token.png)
5. In the remotely-save settings page, input the address / bucket / access key / secret key. **Setting Region to `us-east-1` is sufficient.** Enable "Bypass CORS", because usually that's what you want.
Click "check connectivity". (If you encounter an issue and are sure the info is correct, please upgrade remotely-save to **version >= 0.3.29** and try again.)
![](./s3_cloudflare_r2_rs_settings.png)
6. Sync!
## An Issue Related To "Check Connectivity"
If you encounter an issue and are sure the info is correct, please upgrade remotely-save to **version >= 0.3.29** and try again.
Cloudflare doesn't allow `HeadBucket` for access keys with "Object Read & Write". So it may be possible that checking connectivity fails but actual syncing is ok. Versions >= 0.3.29 of the plugin fix this problem by using `ListObjects` instead of `HeadBucket`.



@ -0,0 +1,79 @@
# AWS S3 Bucket: How to configure user's policy
## Attention
Please read the doc carefully and adjust the optional fields accordingly. The doc is not fully tested and contributions are welcome.
## AWS Official Docs
- <https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html>
- <https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html>
- <https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html>
## Prerequisites
Using the principle of least privilege is crucial for security when allowing a third party system to access your AWS resources.
**Prerequisites**: Ensure you have an AWS account and administrative access to manage IAM policies.
## Step 1: Create a new IAM Policy
1. Log in to your AWS Management Console.
1. Navigate to the IAM Policies section.
1. Create a new policy with the following configuration.
**Note**: `my-bucket` is a placeholder. For example, if your bucket's name is `obsidian-data`, the resource line should read `arn:aws:s3:::obsidian-data`.
```JSON
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ObsidianObjects",
"Effect": "Allow",
"Action": [
"s3:HeadObject",
"s3:ListBucket",
"s3:PutObject",
"s3:CopyObject",
"s3:UploadPart",
"s3:UploadPartCopy",
"s3:ListMultipartUploads",
"s3:AbortMultipartUpload",
"s3:CompleteMultipartUpload",
"s3:ListObjects",
"s3:ListObjectsV2",
"s3:ListParts",
"s3:GetObject",
"s3:GetObjectAttributes",
"s3:DeleteObject",
"s3:DeleteObjects"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
```
> The policy allows the Obsidian plugin to list, add, retrieve, and delete objects in the specified S3 bucket.
## Step 2: Attach the Policy to Obsidian user
1. Create a new user in the IAM console. (Never use your own root user, as it would have full access to your AWS account).
1. When creating the user, select "Attach policy directly" and select the policy created.
1. Edit the recently created user and go to the "Security Credentials" tab to create your access key.
1. Create an Access Key. If asked for a "use case", select "other".
1. Use the credentials in the plugin settings. (NEVER share these credentials)
> PS. The bucket doesn't need to have a policy, only the user.
## Verifying the Policy
After attaching the policy, test it by trying to access the S3 bucket through the Obsidian plugin. Ensure that all intended actions can be performed without errors.
## Troubleshooting
If you encounter permission errors, check the policy for typos in the bucket name or actions. Ensure the policy is attached to the correct user.


@ -0,0 +1,27 @@
# MinIO
## Links
<https://min.io/>
## Steps
1. Configure your MinIO instance and obtain an account.
2. Create an Access Key (during the creation, you will also get the auto-generated secret key).
![](./minio_access_key.png)
3. Check or set the region.
![](./minio_region.png)
4. Create a bucket.
![](./minio_create_bucket.png)
5. In the remotely-save settings page, input the address / bucket / access key / secret key. **MinIO instances usually need "S3 URL style"="Path Style".** Enable "Bypass CORS", because usually that's what you want.
![](./minio_rs_settings.png)
6. Sync!
![](./minio_sync_success.png)
## Ports In Address
Just type the full address with `http(s)://` and `:port` into the remotely-save settings, for example `http://192.168.31.198:9000`.
This is verified to work.
![](./minio_custom_port.png)

BIN
docs/remote_services/s3_minio/minio_access_key.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_minio/minio_create_bucket.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_minio/minio_custom_port.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_minio/minio_region.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_minio/minio_rs_settings.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_minio/minio_sync_success.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,17 @@
# Storj
## Links
<https://www.storj.io/>
## Steps
1. Register an account and log in.
2. Create a bucket.
3. Create S3 credentials in Access Management. Allow all permissions for the bucket. Note down the access key, the secret key, and the endpoint. The endpoint is likely to be [`https://gateway.storjshare.io`](https://docs.storj.io/dcs/api/s3/s3-compatible-gateway).
![](./storj_create_s3_cred_1.png)
![](./storj_create_s3_cred_2.png)
4. Input your credentials into the remotely-save settings. The region [should be `global`](https://docs.storj.io/dcs/api/s3/s3-compatibility).
![](storj_remotely_save_settings.png)
5. Check connectivity.
6. Sync!
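Put together, the settings correspond to an S3-compatible client configuration roughly like the following (a sketch with placeholder credentials; the option names follow the AWS SDK v3 convention, which the plugin uses internally):

```typescript
// Sketch of the S3-compatible settings for Storj.
// The access key / secret key values are placeholders; fill in your own.
const storjS3Config = {
  endpoint: "https://gateway.storjshare.io",
  region: "global",
  credentials: {
    accessKeyId: "<your-access-key>",
    secretAccessKey: "<your-secret-key>",
  },
};

console.log(storjS3Config.endpoint, storjS3Config.region);
```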

BIN
docs/remote_services/s3_storj_io/storj_create_s3_cred_1.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_storj_io/storj_create_s3_cred_2.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,23 @@
# Tencent Cloud COS
English | [中文](./README.zh-cn.md)
## Link
- International: <https://console.tencentcloud.com/cos>
## Steps
This example shows the steps for the China version; the international version should be similar.
1. Create a bucket with **private read-write permissions**; enabling server-side encryption is recommended.
2. In the bucket list page, enter the overview page of the bucket you just created. You should see the bucket name (your chosen name followed by your account ID number), the region, and the access address.
![](./cos_bucket_info.png)
3. On the CAM page, create an API key, and note down the SecretID and SecretKey.
![](./cos_create_secret.png)
4. **Remove the bucket name from your access address to obtain your endpoint address! If your access address on the website is `https://<bucket-name-with-number>.cos.<region>.myqcloud.com`, then the endpoint address you are going to use is `https://cos.<region>.myqcloud.com`.**
5. In the remotely-save settings page, enter your endpoint address, SecretID, SecretKey, and bucket name.
![](./cos_setting.png)
6. Check Connectivity.
![](./cos_connection.png)
7. Sync!
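Step 4 above amounts to dropping the leading bucket label from the hostname; a minimal sketch (the helper name and the example bucket/region are ours, for illustration only):

```typescript
// Derive the COS endpoint address from an access address by removing
// the bucket name, which is the first label of the hostname.
function accessAddressToEndpoint(accessAddress: string): string {
  const u = new URL(accessAddress);
  const labels = u.hostname.split(".");
  return `${u.protocol}//${labels.slice(1).join(".")}`;
}

console.log(
  accessAddressToEndpoint(
    "https://mybucket-1250000000.cos.ap-guangzhou.myqcloud.com"
  )
);
// "https://cos.ap-guangzhou.myqcloud.com"
```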

View File

@ -0,0 +1,23 @@
# 腾讯云 COS
[English](./README.md) | 中文
## 链接
- 中国区 <https://console.cloud.tencent.com/cos>
## 步骤
注意这里用中国区示例,国际区配置应该类似。
1. 在“存储桶列表”页,[“创建存储桶”](https://console.cloud.tencent.com/cos/bucket?action=create)。注意创建**私有读写**,建议打开服务端加密。
2. 在桶列表页,点击刚刚存储的桶,进入概览页。可以见到桶名称(一般来说是之前指定的英文加账号数字),地域,访问域名。记录下来。
![](./cos_bucket_info.png)
3. 在[“访问管理页”](https://console.cloud.tencent.com/cam/capi) “API 密钥管理”,“创建密钥”,要记录 SecretID 和 SecretKey。
![](./cos_create_secret.png)
4. **把桶名称从访问域名移除,才是你即将输入的服务地址!假如你在腾讯云网站看到访问域名是 `https://<bucket-name-with-number>.cos.<region>.myqcloud.com`,那么“服务地址”是 `https://cos.<region>.myqcloud.com`.**
5. 在 remotely-save 设置输入服务地址SecretIDSecretKey和 桶名称。
![](./cos_setting.png)
6. 检查连接。
![](./cos_connection.png)
7. 可以同步了!

BIN
docs/remote_services/s3_tencent_cloud_cos/cos_connection.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/s3_tencent_cloud_cos/cos_setting.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,23 @@
# AList
English | [中文](./README.zh-cn.md)
## Links
- English official website: <https://alist.nn.ci/> and <https://alist.nn.ci/guide/webdav.html>
## Steps
1. Install and run AList. Obtain the account and password, then log in via the web page.
2. Add a new storage. Pay attention to the mount path. The screenshot shows the mount path `/alisttest davpath`.
![](./alist_mount_path.zh.png)
![](./alist_mount_path.en.png)
3. Construct the webdav address as **http(s)://domain** + **port** + **`/dav`** + **mount path**, with any space inside the mount path replaced by `%20`:
```
http[s]://domain:port/dav/[mountpath url encoded]
http://127.0.0.1:5244/dav/alisttest%20davpath
```
4. In the remotely-save settings page, select the webdav type, then input the **full address with mount path**, account, and password.
![](./alist_rs_settings.en.png)
5. In remotely-save setting page, click "Check Connectivity".
6. Sync!
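The address construction in step 3 can be sketched as a small helper that percent-encodes each mount-path segment (the function name is ours, not part of AList or the plugin):

```typescript
// Build the AList webdav address: base + "/dav" + URL-encoded mount path.
// Each path segment is encoded separately so the "/" separators survive.
function buildWebdavAddress(base: string, mountPath: string): string {
  const encoded = mountPath
    .split("/")
    .map((segment) => encodeURIComponent(segment))
    .join("/");
  return `${base}/dav${encoded}`;
}

console.log(buildWebdavAddress("http://127.0.0.1:5244", "/alisttest davpath"));
// "http://127.0.0.1:5244/dav/alisttest%20davpath"
```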

View File

@ -0,0 +1,23 @@
# AList
[English](./README.md) | 中文
## 链接
- 中文官网:<https://alist.nn.ci/zh/><https://alist.nn.ci/zh/guide/webdav.html>
## 步骤
1. 安装和使用 AList。获取账号名和密码。在网页上登录。
2. 新建挂载,检查挂载路径。如图所示是 `/alisttest davpath`
![](./alist_mount_path.zh.png)
![](./alist_mount_path.en.png)
3. 从而构建 webdav 网址如下,**http(s)://域名** + **端口** + **`/dav`** + **挂载路径**,其中挂载路径中假如有空格,换成 `%20`
```
http[s]://domain:port/dav/[mountpath url encoded]
http://127.0.0.1:5244/dav/alisttest%20davpath
```
4. 在 remotely-save 设置,输入**带域名端口`/dav`和挂载路径的网址**、账号、密码。
![](./alist_rs_settings.en.png)
5. 在 remotely-save 设置,检查连接。
6. 同步文件!

BIN
docs/remote_services/webdav_alist/alist_mount_path.en.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/webdav_alist/alist_mount_path.zh.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/webdav_alist/alist_rs_settings.en.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,10 @@
If you are using Obsidian desktop >= 0.13.25 or iOS >= 1.1.1, you can skip this CORS part.
If you are using Obsidian desktop < 0.13.25 or iOS < 1.1.1 or any Android version:
- The webdav server must have CORS enabled for requests from `app://obsidian.md`, `capacitor://localhost`, and `http://localhost`, **AND** for all webdav HTTP methods, **AND** for all webdav headers. These are required because Obsidian mobile works like a browser, and mobile plugins are limited by CORS policies unless running under an upgraded Obsidian version.
- Popular software such as NextCloud, OwnCloud, and `rclone serve webdav` does **NOT** enable CORS by default. If you are using any of them, you should evaluate the risk and find a way to enable CORS before using this plugin, or use an upgraded Obsidian version.
  - **Unofficial** workaround: NextCloud users can **evaluate the risk by themselves**, and if they decide to accept the risk, they can install the [WebAppPassword](https://apps.nextcloud.com/apps/webapppassword) app and add `app://obsidian.md`, `capacitor://localhost`, and `http://localhost` to `Allowed origins`.
  - **Unofficial** workaround: OwnCloud users can **evaluate the risk by themselves**, and if they decide to accept the risk, they can download the `.tar.gz` of `WebAppPassword` above and manually install and configure it on their instances.
- [Apache is also possible](./webdav_apache_cors.md).
- The plugin has been tested successfully with the Python package [`wsgidav` (version 4.0)](https://github.com/mar10/wsgidav). See [this issue](https://github.com/mar10/wsgidav/issues/239) for some details.
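The required origins can be collected in one allowlist; a sketch of a server-side origin check (a hypothetical helper, not part of any of the servers listed above):

```typescript
// Origins Obsidian clients may send; a CORS-enabled webdav server
// must accept all of them (plus all webdav methods and headers).
const ALLOWED_ORIGINS = [
  "app://obsidian.md",
  "capacitor://localhost",
  "http://localhost",
];

function isAllowedOrigin(origin: string): boolean {
  return ALLOWED_ORIGINS.includes(origin);
}

console.log(isAllowedOrigin("app://obsidian.md")); // true
```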

View File

@ -0,0 +1,15 @@
# InfiniCLOUD (formerly TeraCLOUD) Webdav
## Link
<https://infini-cloud.net/en/>
## Steps
1. Register an account.
2. Go to <https://infini-cloud.net/en/modules/mypage/usage/>, and in the "Apps Connection" section, enable "Turn on Apps Connection". Here you get the address, the account, and the webdav password (different from your account password):
![](./infinicloud_account.png)
3. In the remotely-save settings page, select the webdav type, then input the address, account, and **webdav password** (not your account password).
![](./infinicloud_rs_setting.png)
4. In remotely-save setting page, click "Check Connectivity".
5. Sync!

View File

@ -0,0 +1,22 @@
# JianGuoYun/NutStore
English | [中文](./README.zh-cn.md)
## Link
<https://www.jianguoyun.com/>
## Attentions!!!
JianGuoYun/NutStore has API limits. The plugin may generate many queries, so with many files it is possible to hit the API limits, after which syncing stops working properly. This is not a bug, and there is no way to fix this situation.
## Steps
1. **Be aware that JianGuoYun/NutStore has API limits, and the plugin may not work properly because of this.**
2. Register an account.
3. Go to "Settings" -> "Security", click "Add Application", then obtain the WebDAV account (an email) and the WebDAV password (a string different from the website password).
![](./webdav_jianguoyun.cn.png)
4. Input the WebDAV address, account, password, and **Depth Header Sent To Servers = "only supports depth='1'"** in the remotely-save settings.
![](./webdav_jianguoyun_rs_settting.cn.png)
5. In remotely-save setting page, click "Check Connectivity".
6. Sync!

View File

@ -0,0 +1,22 @@
# 坚果云
[English](./README.md) | 中文
## 链接
<https://www.jianguoyun.com/>
## 注意!!!
坚果云有限制 api 数量等设定。本插件会产生若干查询,如果文件较多很容易触发 api 上限,从而工作不正常。这不是插件 bug也没有办法解决。
## 步骤
1. **知悉坚果云有 api 限制,本插件可能因此工作不正常。**
2. 注册账号,登录。
3. 去“个人信息”->“安全”,“添加应用”,从而获取了 webDAV 账号(应该是 email和 WebDAV 密码(一串特殊的字符,不等于网站密码)。
![](./webdav_jianguoyun.cn.png)
4. 在 remotely-save 设置,输入网址、账号、密码、**“发送到服务器的 Depth Header”设置为“只支持 depth='1'”**。
![](./webdav_jianguoyun_rs_settting.cn.png)
5. 在 remotely-save 设置,检查连接。
6. 同步文件!

View File

@ -0,0 +1,17 @@
# ownCloud Webdav
## Link
<https://owncloud.com/>
## Steps
1. Create an account.
2. Login.
3. In Settings, enable "Show hidden files" and find the WebDAV address.
![](./owncloud_address.png)
4. Input the WebDAV address, account, password, and **Depth Header Sent To Servers = "only supports depth='1'"** in the remotely-save settings.
![](./owncloud_rs_settings.png)
5. In remotely-save setting page, click "Check Connectivity".
6. Sync!
![](./owncloud_files.png)

BIN
docs/remote_services/webdav_owncloud/owncloud_address.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
docs/remote_services/webdav_owncloud/owncloud_files.png (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,55 @@
# Synology Webdav Server
English | [中文](./README.zh-cn.md)
## Link
<https://kb.synology.com/en-global/DSM/tutorial/How_to_access_files_on_Synology_NAS_with_WebDAV>
## Attention
The tutorial author (the author of Remotely Save) is NOT an expert on NAS or Synology. Please read the doc carefully and adapt it to your own needs.
**It's dangerous to expose your NAS into public Internet if you don't know how to set up firewalls and other protections.**
## Steps
Synology DSM 7 is used in this tutorial.
1. Create a new shared folder if you don't have one. For this tutorial, a new shared folder `share2` is created. You should assign a proper user with read / write access to the shared folder.
![](./synology_create_shared_folder.png)
2. Assuming you want to sync your vault into the sub folder `哈哈哈/sub folder`, create the corresponding sub folder(s) inside the shared folder `share2`.
3. Install WebDAV Server from the Package Center.
![](./synology_install_webdav_server.png)
4. Enter the WebDAV Server settings.
5. If you know how to configure HTTPS certificates correctly, you are strongly recommended to enable HTTPS.
For demonstration purposes, this tutorial enables the HTTP server for the later steps.
Also enable "Enable DavDepthInfinity", which can greatly speed up the plugin.
Click "Apply".
![](./synology_webdav_server_settings.png)
6. In the Remotely Save settings, input your address as:
`http(s)://<your synology ip or domain>:<port>/<shared folder>/<sub folders>`
For example, in this tutorial, the proper URL is:
`http://<ip>:5000/share2/哈哈哈/sub folder`
The username and password are those of the user you configured earlier with read / write permissions on `share2`.
The Depth header should be "supports depth="infinity"".
Check connectivity!
![](./synology_remotely_save_settings.png)
7. Sync!

View File

@ -0,0 +1,56 @@
# 群晖 Webdav Server
[English](./README.md) | 中文
## 链接
<https://kb.synology.cn/zh-cn/DSM/tutorial/How_to_access_files_on_Synology_NAS_with_WebDAV>
## 注意
教程作者Remotely Save 作者)**不是** NAS、群晖专家。请仔细阅读文档并自行改动以适应您自身需求。
**没有设置防火墙和其他保护措施的话,将 NAS 暴露到公网上非常危险。**
## 步骤
本教程有用到群晖 DSM 7。
1. 创建共享文件夹。本教程示例创建了 `share2`。你需要允许某个账号对此的读写权限。
![](./synology_create_shared_folder.png)
2. 假设之后你想同步你的库到子文件夹,`哈哈哈/sub folder`,请先在共享文件夹 `share2` 底下创建好。
3. 从套件中心安装 webdav server 。
![](./synology_install_webdav_server.png)
4. 进入 webdav server 设置。
5. 如果你知道如何正确配置 https 证书的话,强烈建议开启 https。
本教程简化示例,开启了 http。
也设置“Enable DavDepthInfinity”这可以加速插件连接速度。
“Apply”。
![](./synology_webdav_server_settings.png)
6. 在 Remotely Save 设置页,你的地址应如下格式输入:
`http(s)://<your synology ip or domain>:<port>/<shared folder>/<sub folders>`
比如说,本教程里,正确的地址类似于:
`http://<ip>:5000/share2/哈哈哈/sub folder`
用户名和密码是你之前配置了允许读写 `share2` 的那个账号。
Depth 设置应为“supports depth="infinity"”。
检查连接!
![](./synology_remotely_save_settings.png)
7. 同步!

View File

@ -18,7 +18,7 @@ The list is for information purposes only.
| [MinIO](https://min.io/) | ? | ? | | | | |
| [WsgiDAV](https://github.com/mar10/wsgidav) | Yes | | Yes | | Yes | CORS rules can be set. |
| [Nginx `ngx_http_dav_module`](http://nginx.org/en/docs/http/ngx_http_dav_module.html) | Yes? | | Yes? | | Yes? | ? |
| NextCloud | Yes? | | Yes? | | Yes? | No CORS config by default. |
| NextCloud | Yes | | Yes | | Yes? | No CORS config by default. |
| OwnCloud | Yes? | | Yes? | | Yes? | No CORS config by default. |
| Seafile | Yes | | Yes | | Yes? | No CORS config by default. |
| `rclone serve webdav` | Yes | | Yes | | Yes | No CORS support. |
@ -26,7 +26,7 @@ The list is for information purposes only.
| [TeraCLOUD](https://teracloud.jp/en/) | Yes | | Yes | | Yes | No CORS support. |
| Dropbox | Yes | | | Yes | | |
| OneDrive for personal | Yes | | | Yes | | |
| OneDrive for Business | In the plan | | | ? | | |
| OneDrive for Business | Yes | | | ? | | |
| Google Drive | In the plan | | | ? | | |
| [Box](https://www.box.com/) | ? | | | May be possible but needs further development. | | |
| Google Cloud Storage | ? | | | May be possible but needs further development. | | |

View File

@ -0,0 +1,7 @@
# Sync Algorithm
- [v1](./v1/README.md)
- [v2](./v2/README.md)
- v3
- [intro doc for end users](./v3/intro.md)
- [design doc](./v3/design.md)

View File

@ -0,0 +1,4 @@
# Sync Algorithm V3
- [intro doc for end users](./intro.md)
- [design doc](./design.md)

View File

@ -0,0 +1,71 @@
# Sync Algorithm V3
Drafted on 20240117.
A substantially better sync algorithm: better at tracking deletions and better suited for sub-branching.
## Huge Thanks
Basically a combination of algorithm v2 + [synclone](https://github.com/Jwink3101/syncrclone/blob/master/docs/algorithm.md) + [rsinc](https://github.com/ConorWilliams/rsinc) + (parts of rclone [bisync](https://rclone.org/bisync/)). All of the latter three are released under the MIT License, so there are no licensing concerns.
## Features
Must have
1. true deletion detection
2. deletion protection (blocking) with a setting
3. transaction from the old algorithm
4. a user warning shows up; **the new algorithm needs all clients to be updated!** (deliberately corrupt the metadata file??)
5. filters
6. conflict warning
7. partial sync
Nice to have
1. true time and hash
2. conflict rename
## Description
We have _five_ input sources:
1. local all files
2. remote all files
3. _local previous succeeded sync history_
4. local deletions
5. remote deletions.
On the initial run, which consumes remote deletions:
change the history data into the _local previous succeeded sync history_.
On later runs, use the first, second, and third sources **only**.
The bidirectional table is adapted from synclone and rsinc. The incremental push-only / pull-only tables are further derived from the bidirectional table. The number inside each table cell is the decision branch in the code.
Bidirectional:
| local\remote | remote unchanged | remote modified | remote deleted | remote created |
| --------------- | ------------------ | ------------------------- | ------------------ | ------------------------- |
| local unchanged | (02/21) do nothing | (09) pull | (07) delete local | (??) conflict |
| local modified | (10) push | (16/17/18/19/20) conflict | (08) push | (??) conflict |
| local deleted | (04) delete remote | (05) pull | (01) clean history | (03) pull |
| local created | (??) conflict | (??) conflict | (06) push | (11/12/13/14/15) conflict |
Incremental push only:
| local\remote | remote unchanged | remote modified | remote deleted | remote created |
| --------------- | ---------------------------- | ---------------------------- | ---------------------- | ---------------------------- |
| local unchanged | (02/21) do nothing | **(26) conflict push** | **(32) conflict push** | (??) conflict |
| local modified | (10) push | **(25) conflict push** | (08) push | (??) conflict |
| local deleted | **(29) conflict do nothing** | **(30) conflict do nothing** | (01) clean history | **(28) conflict do nothing** |
| local created | (??) conflict | (??) conflict | (06) push | **(23) conflict push** |
Incremental pull only:
| local\remote | remote unchanged | remote modified | remote deleted | remote created |
| --------------- | ---------------------- | ---------------------- | ---------------------------- | ---------------------- |
| local unchanged | (02/21) do nothing | (09) pull | **(33) conflict do nothing** | (??) conflict |
| local modified | **(27) conflict pull** | **(24) conflict pull** | **(34) conflict do nothing** | (??) conflict |
| local deleted | **(35) conflict pull** | (05) pull | (01) clean history | (03) pull |
| local created | (??) conflict | (??) conflict | **(31) conflict do nothing** | **(22) conflict pull** |
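The bidirectional table can be read as a pure function from the two change states to an action; a condensed sketch (the names are ours, conflict resolution and the numbered branches elided):

```typescript
type ChangeState = "unchanged" | "modified" | "deleted" | "created";
type Action =
  | "do_nothing"
  | "pull"
  | "push"
  | "delete_local"
  | "delete_remote"
  | "clean_history"
  | "conflict";

// Condensed form of the bidirectional decision table above:
// rows are the local state, columns the remote state.
function decideBidirectional(local: ChangeState, remote: ChangeState): Action {
  const table: Record<ChangeState, Record<ChangeState, Action>> = {
    unchanged: { unchanged: "do_nothing", modified: "pull", deleted: "delete_local", created: "conflict" },
    modified: { unchanged: "push", modified: "conflict", deleted: "push", created: "conflict" },
    deleted: { unchanged: "delete_remote", modified: "pull", deleted: "clean_history", created: "pull" },
    created: { unchanged: "conflict", modified: "conflict", deleted: "push", created: "conflict" },
  };
  return table[local][remote];
}

console.log(decideBidirectional("unchanged", "modified")); // "pull"
```

The push-only and pull-only variants replace some of these cells with the "conflict push" / "conflict pull" / "conflict do nothing" entries shown in their respective tables.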

View File

@ -0,0 +1,14 @@
# Introduction To Sync Algorithm V3
- [x] sync conflict: keep newer
- [x] sync conflict: keep larger
- [ ] sync conflict: keep both and rename
- [ ] sync conflict: show warning
- [x] deletion: true deletion status computation
- [x] meta data: no remote meta data any more
- [x] migration: old data auto transfer to new db (hopefully)
- [x] sync direction: incremental push only
- [x] sync direction: incremental pull only
- [x] sync protection: warning based on the threshold
- [ ] partial sync: better sync on save
- [x] encryption: new encryption method, see [this](../../encryption/)

View File

@ -1,6 +1,7 @@
import dotenv from "dotenv/config";
import esbuild from "esbuild";
import process from "process";
import inlineWorkerPlugin from "esbuild-plugin-inline-worker";
// import builtins from 'builtin-modules'
const banner = `/*
@ -18,7 +19,7 @@ const DEFAULT_ONEDRIVE_CLIENT_ID = process.env.ONEDRIVE_CLIENT_ID || "";
const DEFAULT_ONEDRIVE_AUTHORITY = process.env.ONEDRIVE_AUTHORITY || "";
esbuild
.build({
.context({
banner: {
js: banner,
},
@ -33,11 +34,13 @@ esbuild
"fs",
"tls",
"net",
"http",
"https",
// ...builtins
],
inject: ["./esbuild.injecthelper.mjs"],
format: "cjs",
watch: !prod,
// watch: !prod, // no longer valid in esbuild 0.17
target: "es2016",
logLevel: "info",
sourcemap: prod ? false : "inline",
@ -52,5 +55,17 @@ esbuild
"process.env.NODE_DEBUG": `undefined`, // ugly fix
"process.env.DEBUG": `undefined`, // ugly fix
},
plugins: [inlineWorkerPlugin()],
})
.then((context) => {
if (process.argv.includes("--watch")) {
// Enable watch mode
context.watch();
} else {
// Build once and exit if not in watch mode
context.rebuild().then((result) => {
context.dispose();
});
}
})
.catch(() => process.exit(1));

11
manifest-beta.json Normal file
View File

@ -0,0 +1,11 @@
{
"id": "remotely-save",
"name": "Remotely Save",
"version": "0.4.16",
"minAppVersion": "0.13.21",
"description": "Yet another unofficial plugin allowing users to synchronize notes between local device and the cloud service.",
"author": "fyears",
"authorUrl": "https://github.com/fyears",
"isDesktopOnly": false,
"fundingUrl": "https://github.com/remotely-save/donation"
}

View File

@ -1,10 +1,11 @@
{
"id": "remotely-save",
"name": "Remotely Save",
"version": "0.3.25",
"version": "0.4.16",
"minAppVersion": "0.13.21",
"description": "Yet another unofficial plugin allowing users to synchronize notes between local device and the cloud service.",
"author": "fyears",
"authorUrl": "https://github.com/fyears",
"isDesktopOnly": false
"isDesktopOnly": false,
"fundingUrl": "https://github.com/remotely-save/donation"
}

View File

@ -1,13 +1,13 @@
{
"name": "remotely-save",
"version": "0.3.25",
"version": "0.4.16",
"description": "This is yet another sync plugin for Obsidian app.",
"scripts": {
"dev2": "node esbuild.config.mjs",
"dev2": "node esbuild.config.mjs --watch",
"build2": "tsc -noEmit -skipLibCheck && node esbuild.config.mjs production",
"build": "webpack --mode production",
"dev": "webpack --mode development --watch",
"format": "npx prettier --write .",
"format": "npx prettier --trailing-comma es5 --write .",
"clean": "npx rimraf main.js",
"test": "cross-env TS_NODE_COMPILER_OPTIONS={\\\"module\\\":\\\"commonjs\\\"} mocha -r ts-node/register 'tests/**/*.ts'"
},
@ -23,73 +23,75 @@
"author": "",
"license": "Apache-2.0",
"devDependencies": {
"@microsoft/microsoft-graph-types": "^2.19.0",
"@types/chai": "^4.3.1",
"@types/chai-as-promised": "^7.1.5",
"@types/jsdom": "^16.2.14",
"@types/lodash": "^4.14.182",
"@types/mime-types": "^2.1.1",
"@types/mocha": "^9.1.1",
"@types/mustache": "^4.1.2",
"@types/node": "^17.0.30",
"@types/qrcode": "^1.4.2",
"builtin-modules": "^3.2.0",
"chai": "^4.3.6",
"@microsoft/microsoft-graph-types": "^2.40.0",
"@types/chai": "^4.3.14",
"@types/chai-as-promised": "^7.1.8",
"@types/jsdom": "^21.1.6",
"@types/lodash": "^4.14.202",
"@types/mime-types": "^2.1.4",
"@types/mocha": "^10.0.6",
"@types/mustache": "^4.2.5",
"@types/node": "^20.10.4",
"@types/qrcode": "^1.5.5",
"builtin-modules": "^3.3.0",
"chai": "^4.4.1",
"chai-as-promised": "^7.1.1",
"cross-env": "^7.0.3",
"dotenv": "^16.0.0",
"esbuild": "^0.14.38",
"jsdom": "^19.0.0",
"mocha": "^9.2.2",
"prettier": "^2.6.2",
"ts-loader": "^9.2.9",
"ts-node": "^10.7.0",
"tslib": "^2.4.0",
"typescript": "^4.6.4",
"dotenv": "^16.3.1",
"esbuild": "^0.19.9",
"esbuild-plugin-inline-worker": "^0.1.1",
"jsdom": "^23.0.1",
"mocha": "^10.4.0",
"npm-check-updates": "^16.14.12",
"obsidian": "^1.4.11",
"prettier": "^3.1.1",
"ts-loader": "^9.5.1",
"ts-node": "^10.9.2",
"tslib": "^2.6.2",
"typescript": "^5.3.3",
"webdav-server": "^2.6.2",
"webpack": "^5.72.0",
"webpack-cli": "^4.9.2"
"webpack": "^5.89.0",
"webpack-cli": "^5.1.4",
"worker-loader": "^3.0.8"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.81.0",
"@aws-sdk/fetch-http-handler": "^3.78.0",
"@aws-sdk/lib-storage": "^3.81.0",
"@aws-sdk/protocol-http": "^3.78.0",
"@aws-sdk/querystring-builder": "^3.78.0",
"@aws-sdk/signature-v4-crt": "^3.78.0",
"@aws-sdk/types": "^3.78.0",
"@azure/msal-node": "^1.8.0",
"@aws-sdk/client-s3": "^3.474.0",
"@aws-sdk/lib-storage": "^3.474.0",
"@aws-sdk/signature-v4-crt": "^3.474.0",
"@aws-sdk/types": "^3.468.0",
"@azure/msal-node": "^2.6.0",
"@fyears/rclone-crypt": "^0.0.7",
"@fyears/tsqueue": "^1.0.1",
"@microsoft/microsoft-graph-client": "^3.0.2",
"acorn": "^8.7.1",
"aggregate-error": "^4.0.0",
"assert": "^2.0.0",
"aws-crt": "^1.12.1",
"@microsoft/microsoft-graph-client": "^3.0.7",
"@smithy/fetch-http-handler": "^2.3.1",
"@smithy/protocol-http": "^3.0.11",
"@smithy/querystring-builder": "^2.0.15",
"acorn": "^8.11.2",
"aggregate-error": "^5.0.0",
"assert": "^2.1.0",
"aws-crt": "^1.20.0",
"buffer": "^6.0.3",
"crypto-browserify": "^3.12.0",
"delay": "^5.0.0",
"dropbox": "^10.28.0",
"emoji-regex": "^10.1.0",
"http-status-codes": "^2.2.0",
"dropbox": "^10.34.0",
"emoji-regex": "^10.3.0",
"http-status-codes": "^2.3.0",
"localforage": "^1.10.0",
"localforage-getitems": "^1.4.2",
"lodash": "^4.17.21",
"loglevel": "^1.8.0",
"lucide": "^0.35.0",
"lucide": "^0.298.0",
"mime-types": "^2.1.35",
"mustache": "^4.2.0",
"nanoid": "^3.3.3",
"obsidian": "^0.14.6",
"p-queue": "^7.2.0",
"nanoid": "^5.0.4",
"p-queue": "^8.0.1",
"path-browserify": "^1.0.1",
"process": "^0.11.10",
"qrcode": "^1.5.0",
"rfc4648": "^1.5.1",
"rimraf": "^3.0.2",
"qrcode": "^1.5.3",
"rfc4648": "^1.5.3",
"rimraf": "^5.0.5",
"stream-browserify": "^3.0.0",
"url": "^0.11.0",
"util": "^0.12.4",
"webdav": "^4.9.0",
"webdav-fs": "^4.0.1",
"xregexp": "^5.1.0"
"url": "^0.11.3",
"util": "^0.12.5",
"webdav": "^5.3.1",
"xregexp": "^5.1.1"
}
}

View File

@ -21,9 +21,18 @@ export interface S3Config {
s3AccessKeyID: string;
s3SecretAccessKey: string;
s3BucketName: string;
bypassCorsLocally?: boolean;
partsConcurrency?: number;
forcePathStyle?: boolean;
remotePrefix?: string;
useAccurateMTime?: boolean;
/**
* @deprecated
*/
bypassCorsLocally?: boolean;
reverseProxyUrl: string;
}
export interface DropboxConfig {
@ -40,9 +49,10 @@ export interface DropboxConfig {
export type WebdavAuthType = "digest" | "basic";
export type WebdavDepthType =
| "auto_unknown"
| "auto_1"
| "auto_infinity"
| "auto" // deprecated on 20240116
| "auto_unknown" // deprecated on 20240116
| "auto_1" // deprecated on 20240116
| "auto_infinity" // deprecated on 20240116
| "manual_1"
| "manual_infinity";
@ -51,9 +61,14 @@ export interface WebdavConfig {
username: string;
password: string;
authType: WebdavAuthType;
manualRecursive: boolean; // deprecated in 0.3.6, use depth
depth?: WebdavDepthType;
remoteBaseDir?: string;
/**
* @deprecated
*/
manualRecursive: boolean; // deprecated in 0.3.6, use depth
}
export interface OnedriveConfig {
@ -69,6 +84,15 @@ export interface OnedriveConfig {
remoteBaseDir?: string;
}
export type SyncDirectionType =
| "bidirectional"
| "incremental_pull_only"
| "incremental_push_only";
export type CipherMethodType = "rclone-base64" | "openssl-base64" | "unknown";
export type QRExportType = "all_but_oauth2" | "dropbox" | "onedrive";
export interface RemotelySavePluginSettings {
s3: S3Config;
webdav: WebdavConfig;
@ -79,26 +103,43 @@ export interface RemotelySavePluginSettings {
currLogLevel?: string;
autoRunEveryMilliseconds?: number;
initRunAfterMilliseconds?: number;
agreeToUploadExtraMetadata?: boolean;
syncOnSaveAfterMilliseconds?: number;
concurrency?: number;
syncConfigDir?: boolean;
syncUnderscoreItems?: boolean;
lang?: LangTypeAndAuto;
logToDB?: boolean;
agreeToUseSyncV3?: boolean;
skipSizeLargerThan?: number;
ignorePaths?: string[];
enableStatusBarInfo?: boolean;
deleteToWhere?: "system" | "obsidian";
conflictAction?: ConflictActionType;
howToCleanEmptyFolder?: EmptyFolderCleanType;
protectModifyPercentage?: number;
syncDirection?: SyncDirectionType;
obfuscateSettingFile?: boolean;
enableMobileStatusBar?: boolean;
encryptionMethod?: CipherMethodType;
/**
* @deprecated
*/
agreeToUploadExtraMetadata?: boolean;
/**
* @deprecated
*/
vaultRandomID?: string;
}
export interface RemoteItem {
key: string;
lastModified: number;
size: number;
remoteType: SUPPORTED_SERVICES_TYPE;
etag?: string;
/**
* @deprecated
*/
logToDB?: boolean;
}
export const COMMAND_URI = "remotely-save";
@ -116,32 +157,83 @@ export interface UriParams {
// 80 days
export const OAUTH2_FORCE_EXPIRE_MILLISECONDS = 1000 * 60 * 60 * 24 * 80;
type DecisionTypeForFile =
| "skipUploading" // special, mtimeLocal === mtimeRemote
| "uploadLocalDelHistToRemote" // "delLocalIfExists && delRemoteIfExists && cleanLocalDelHist && uploadLocalDelHistToRemote"
| "keepRemoteDelHist" // "delLocalIfExists && delRemoteIfExists && cleanLocalDelHist && keepRemoteDelHist"
| "uploadLocalToRemote" // "skipLocal && uploadLocalToRemote && cleanLocalDelHist && cleanRemoteDelHist"
| "downloadRemoteToLocal"; // "downloadRemoteToLocal && skipRemote && cleanLocalDelHist && cleanRemoteDelHist"
export type EmptyFolderCleanType = "skip" | "clean_both";
type DecisionTypeForFileSize =
| "skipUploadingTooLarge"
| "skipDownloadingTooLarge"
| "skipUsingLocalDelTooLarge"
| "skipUsingRemoteDelTooLarge"
| "errorLocalTooLargeConflictRemote"
| "errorRemoteTooLargeConflictLocal";
export type ConflictActionType = "keep_newer" | "keep_larger" | "rename_both";
type DecisionTypeForFolder =
| "createFolder"
| "uploadLocalDelHistToRemoteFolder"
| "keepRemoteDelHistFolder"
| "skipFolder";
export type DecisionTypeForMixedEntity =
| "only_history"
| "equal"
| "local_is_modified_then_push"
| "remote_is_modified_then_pull"
| "local_is_created_then_push"
| "remote_is_created_then_pull"
| "local_is_created_too_large_then_do_nothing"
| "remote_is_created_too_large_then_do_nothing"
| "local_is_deleted_thus_also_delete_remote"
| "remote_is_deleted_thus_also_delete_local"
| "conflict_created_then_keep_local"
| "conflict_created_then_keep_remote"
| "conflict_created_then_keep_both"
| "conflict_created_then_do_nothing"
| "conflict_modified_then_keep_local"
| "conflict_modified_then_keep_remote"
| "conflict_modified_then_keep_both"
| "folder_existed_both_then_do_nothing"
| "folder_existed_local_then_also_create_remote"
| "folder_existed_remote_then_also_create_local"
| "folder_to_be_created"
| "folder_to_skip"
| "folder_to_be_deleted_on_both"
| "folder_to_be_deleted_on_remote"
| "folder_to_be_deleted_on_local";
export type DecisionType =
| DecisionTypeForFile
| DecisionTypeForFileSize
| DecisionTypeForFolder;
/**
* uniform representation
* everything should be flat and primitive, so that we can copy.
*/
export interface Entity {
key?: string;
keyEnc?: string;
keyRaw: string;
mtimeCli?: number;
mtimeCliFmt?: string;
mtimeSvr?: number;
mtimeSvrFmt?: string;
prevSyncTime?: number;
prevSyncTimeFmt?: string;
size?: number; // might be unknown or to be filled
sizeEnc?: number;
sizeRaw: number;
hash?: string;
etag?: string;
synthesizedFolder?: boolean;
}
export interface UploadedType {
entity: Entity;
mtimeCli?: number;
}
/**
* A replacement of FileOrFolderMixedState
*/
export interface MixedEntity {
key: string;
local?: Entity;
prevSync?: Entity;
remote?: Entity;
decisionBranch?: number;
decision?: DecisionTypeForMixedEntity;
conflictAction?: ConflictActionType;
sideNotes?: any;
}
/**
* @deprecated
*/
export interface FileOrFolderMixedState {
key: string;
existLocal?: boolean;
@ -156,7 +248,7 @@ export interface FileOrFolderMixedState {
sizeRemoteEnc?: number;
changeRemoteMtimeUsingMapping?: boolean;
changeLocalMtimeUsingMapping?: boolean;
decision?: DecisionType;
decision?: string; // old DecisionType is deleted, fallback to string
decisionBranch?: number;
syncDone?: "done";
remoteEncryptedKey?: string;
@ -170,6 +262,7 @@ export interface FileOrFolderMixedState {
export const API_VER_STAT_FOLDER = "0.13.27";
export const API_VER_REQURL = "0.13.26"; // desktop ver 0.13.26, iOS ver 1.1.1
export const API_VER_REQURL_ANDROID = "0.14.6"; // Android ver 1.2.1
export const API_VER_ENSURE_REQURL_OK = "1.0.0"; // always bypass CORS here
export const VALID_REQURL =
(!Platform.isAndroidApp && requireApiVersion(API_VER_REQURL)) ||
@ -179,5 +272,15 @@ export const DEFAULT_DEBUG_FOLDER = "_debug_remotely_save/";
export const DEFAULT_SYNC_PLANS_HISTORY_FILE_PREFIX =
"sync_plans_hist_exported_on_";
export const DEFAULT_LOG_HISTORY_FILE_PREFIX = "log_hist_exported_on_";
export const DEFAULT_PROFILER_RESULT_FILE_PREFIX =
"profiler_results_exported_on_";
export type SyncTriggerSourceType = "manual" | "auto" | "dry" | "autoOnceInit";
export type SyncTriggerSourceType =
| "manual"
| "dry"
| "auto"
| "auto_once_init"
| "auto_sync_on_save";
export const REMOTELY_SAVE_VERSION_2022 = "0.3.25";
export const REMOTELY_SAVE_VERSION_2024PREPARE = "0.3.32";

View File

@ -3,8 +3,6 @@ import { reverseString } from "./misc";
import type { RemotelySavePluginSettings } from "./baseTypes";
import { log } from "./moreOnLog";
const DEFAULT_README: string =
"The file contains sensitive info, so DO NOT take screenshot of, copy, or share it to anyone! It's also generated automatically, so do not edit it manually.";
@ -19,10 +17,10 @@ interface MessyConfigType {
export const messyConfigToNormal = (
x: MessyConfigType | RemotelySavePluginSettings | null | undefined
): RemotelySavePluginSettings | null | undefined => {
// log.debug("loading, original config on disk:");
// log.debug(x);
// console.debug("loading, original config on disk:");
// console.debug(x);
if (x === null || x === undefined) {
log.debug("the messy config is null or undefined, skip");
console.debug("the messy config is null or undefined, skip");
return x as any;
}
if ("readme" in x && "d" in x) {
@ -35,12 +33,12 @@ export const messyConfigToNormal = (
}) as Buffer
).toString("utf-8")
);
// log.debug("loading, parsed config is:");
// log.debug(y);
// console.debug("loading, parsed config is:");
// console.debug(y);
return y;
} else {
// return as is
// log.debug("loading, parsed config is the same");
// console.debug("loading, parsed config is the same");
return x;
}
};
@ -52,7 +50,7 @@ export const normalConfigToMessy = (
x: RemotelySavePluginSettings | null | undefined
) => {
if (x === null || x === undefined) {
log.debug("the normal config is null or undefined, skip");
console.debug("the normal config is null or undefined, skip");
return x;
}
const y = {
@ -63,7 +61,7 @@ export const normalConfigToMessy = (
})
),
};
// log.debug("encoding, encoded config is:");
// log.debug(y);
// console.debug("encoding, encoded config is:");
// console.debug(y);
return y;
};

View File

@ -1,102 +1,41 @@
import { TAbstractFile, TFolder, TFile, Vault } from "obsidian";
import type { SyncPlanType } from "./sync";
import {
readAllProfilerResultsByVault,
readAllSyncPlanRecordTextsByVault,
readAllLogRecordTextsByVault,
} from "./localdb";
import type { InternalDBs } from "./localdb";
import { mkdirpInVault } from "./misc";
import { mkdirpInVault, unixTimeToStr } from "./misc";
import {
DEFAULT_DEBUG_FOLDER,
DEFAULT_LOG_HISTORY_FILE_PREFIX,
DEFAULT_PROFILER_RESULT_FILE_PREFIX,
DEFAULT_SYNC_PLANS_HISTORY_FILE_PREFIX,
FileOrFolderMixedState,
} from "./baseTypes";
import { log } from "./moreOnLog";
const turnSyncPlanToTable = (record: string) => {
const syncPlan: SyncPlanType = JSON.parse(record);
const { ts, tsFmt, remoteType, mixedStates } = syncPlan;
type allowedHeadersType = keyof FileOrFolderMixedState;
const headers: allowedHeadersType[] = [
"key",
"remoteEncryptedKey",
"existLocal",
"sizeLocal",
"sizeLocalEnc",
"mtimeLocal",
"deltimeLocal",
"changeLocalMtimeUsingMapping",
"existRemote",
"sizeRemote",
"sizeRemoteEnc",
"mtimeRemote",
"deltimeRemote",
"changeRemoteMtimeUsingMapping",
"decision",
"decisionBranch",
];
const lines = [
`ts: ${ts}${tsFmt !== undefined ? " / " + tsFmt : ""}`,
`remoteType: ${remoteType}`,
`| ${headers.join(" | ")} |`,
`| ${headers.map((x) => "---").join(" | ")} |`,
];
for (const [k1, v1] of Object.entries(syncPlan.mixedStates)) {
const k = k1 as string;
const v = v1 as FileOrFolderMixedState;
const singleLine = [];
for (const h of headers) {
const field = v[h];
if (field === undefined) {
singleLine.push("");
continue;
}
if (
h === "mtimeLocal" ||
h === "deltimeLocal" ||
h === "mtimeRemote" ||
h === "deltimeRemote"
) {
const fmt = v[(h + "Fmt") as allowedHeadersType] as string;
const s = `${field}${fmt !== undefined ? " / " + fmt : ""}`;
singleLine.push(s);
} else {
singleLine.push(field);
}
}
lines.push(`| ${singleLine.join(" | ")} |`);
}
return lines.join("\n");
};
export const exportVaultSyncPlansToFiles = async (
db: InternalDBs,
vault: Vault,
vaultRandomID: string,
toFormat: "table" | "json" = "json"
howMany: number
) => {
log.info("exporting");
console.info("exporting sync plans");
await mkdirpInVault(DEFAULT_DEBUG_FOLDER, vault);
const records = await readAllSyncPlanRecordTextsByVault(db, vaultRandomID);
let md = "";
if (records.length === 0) {
md = "No sync plans history found";
} else {
if (toFormat === "json") {
if (howMany <= 0) {
md =
"Sync plans found:\n\n" +
records.map((x) => "```json\n" + x + "\n```\n").join("\n");
} else if (toFormat === "table") {
md =
"Sync plans found:\n\n" + records.map(turnSyncPlanToTable).join("\n\n");
} else {
const _: never = toFormat;
md =
"Sync plans found:\n\n" +
records
.map((x) => "```json\n" + x + "\n```\n")
.slice(0, howMany)
.join("\n");
}
}
const ts = Date.now();
@ -104,29 +43,29 @@ export const exportVaultSyncPlansToFiles = async (
await vault.create(filePath, md, {
mtime: ts,
});
log.info("finish exporting");
console.info("finish exporting sync plans");
};
export const exportVaultLoggerOutputToFiles = async (
export const exportVaultProfilerResultsToFiles = async (
db: InternalDBs,
vault: Vault,
vaultRandomID: string
) => {
console.info("exporting profiler results");
await mkdirpInVault(DEFAULT_DEBUG_FOLDER, vault);
const records = await readAllLogRecordTextsByVault(db, vaultRandomID);
const records = await readAllProfilerResultsByVault(db, vaultRandomID);
let md = "";
if (records.length === 0) {
md = "No logger history found.";
md = "No profiler results found";
} else {
md =
"Logger history found:\n\n" +
"```text\n" +
records.join("\n") +
"\n```\n";
"Profiler results found:\n\n" +
records.map((x) => "```\n" + x + "\n```\n").join("\n");
}
const ts = Date.now();
const filePath = `${DEFAULT_DEBUG_FOLDER}${DEFAULT_LOG_HISTORY_FILE_PREFIX}${ts}.md`;
const filePath = `${DEFAULT_DEBUG_FOLDER}${DEFAULT_PROFILER_RESULT_FILE_PREFIX}${ts}.md`;
await vault.create(filePath, md, {
mtime: ts,
});
console.info("finish exporting profiler results");
};

View File

@ -1,8 +1,6 @@
import { base32, base64url } from "rfc4648";
import { bufferToArrayBuffer, hexStringToTypedArray } from "./misc";
import { log } from "./moreOnLog";
const DEFAULT_ITER = 20000;
// base32.stringify(Buffer.from('Salted__'))

src/encryptRClone.ts Normal file
View File

@ -0,0 +1,251 @@
import {
Cipher as CipherRCloneCryptPack,
encryptedSize,
} from "@fyears/rclone-crypt";
// @ts-ignore
import EncryptWorker from "./encryptRClone.worker";
interface RecvMsg {
status: "ok" | "error";
outputName?: string;
outputContent?: ArrayBuffer;
error?: any;
}
export const getSizeFromOrigToEnc = encryptedSize;
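`encryptedSize` is re-exported from the `@fyears/rclone-crypt` package. As a rough mental model (an assumption based on the rclone crypt format, not on this package's source), the encrypted size is a 32-byte file header, plus the payload itself, plus a 16-byte authentication tag per 64 KiB data block:

```typescript
// Sketch of rclone-crypt's size overhead under the assumed format:
// 32-byte file header, data split into 64 KiB blocks, each block
// carrying a 16-byte tag. Hypothetical helper, not the package's export.
const FILE_HEADER_SIZE = 32;
const BLOCK_DATA_SIZE = 64 * 1024;
const BLOCK_TAG_SIZE = 16;

function encryptedSizeSketch(orig: number): number {
  const blocks = Math.ceil(orig / BLOCK_DATA_SIZE);
  return FILE_HEADER_SIZE + orig + blocks * BLOCK_TAG_SIZE;
}
```

Under those assumptions an empty file still costs the 32-byte header, and every started block adds its tag.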
export class CipherRclone {
readonly password: string;
readonly cipher: CipherRCloneCryptPack;
readonly workers: Worker[];
init: boolean;
workerIdx: number;
constructor(password: string, workerNum: number) {
this.password = password;
this.init = false;
this.workerIdx = 0;
// console.debug("begin creating CipherRCloneCryptPack");
this.cipher = new CipherRCloneCryptPack("base64");
// console.debug("finish creating CipherRCloneCryptPack");
// console.debug("begin creating EncryptWorker");
this.workers = [];
for (let i = 0; i < workerNum; ++i) {
this.workers.push(new (EncryptWorker as any)() as Worker);
}
// console.debug("finish creating EncryptWorker");
}
closeResources() {
for (let i = 0; i < this.workers.length; ++i) {
this.workers[i].terminate();
}
}
async prepareByCallingWorker(): Promise<void> {
if (this.init) {
return;
}
// console.debug("begin prepareByCallingWorker");
await this.cipher.key(this.password, "");
// console.debug("finish getting key");
const res: Promise<void>[] = [];
for (let i = 0; i < this.workers.length; ++i) {
res.push(
new Promise((resolve, reject) => {
const channel = new MessageChannel();
channel.port2.onmessage = (event) => {
// console.debug("main: receiving msg in prepare");
const { status } = event.data as RecvMsg;
if (status === "ok") {
// console.debug("main: receiving init ok in prepare");
this.init = true;
resolve(); // nothing to pass back; the worker is now marked as initialized
} else {
reject("error after prepareByCallingWorker");
}
};
channel.port2.onmessageerror = (event) => {
// console.debug("main: receiving error in prepare");
reject(event);
};
// console.debug("main: before postMessage in prepare");
this.workers[i].postMessage(
{
action: "prepare",
dataKeyBuf: this.cipher.dataKey.buffer,
nameKeyBuf: this.cipher.nameKey.buffer,
nameTweakBuf: this.cipher.nameTweak.buffer,
},
[channel.port1 /* key buffers not transferred, because the worker needs its own copy */]
);
})
);
}
await Promise.all(res);
}
async encryptNameByCallingWorker(inputName: string): Promise<string> {
// console.debug("main: start encryptNameByCallingWorker");
await this.prepareByCallingWorker();
// console.debug(
// "main: really start generate promise in encryptNameByCallingWorker"
// );
++this.workerIdx;
const whichWorker = this.workerIdx % this.workers.length;
return await new Promise((resolve, reject) => {
const channel = new MessageChannel();
channel.port2.onmessage = (event) => {
// console.debug("main: receiving msg in encryptNameByCallingWorker");
const { outputName } = event.data as RecvMsg;
if (outputName === undefined) {
reject("unknown outputName after encryptNameByCallingWorker");
} else {
resolve(outputName);
}
};
channel.port2.onmessageerror = (event) => {
// console.debug("main: receiving error in encryptNameByCallingWorker");
reject(event);
};
// console.debug("main: before postMessage in encryptNameByCallingWorker");
this.workers[whichWorker].postMessage(
{
action: "encryptName",
inputName: inputName,
},
[channel.port1]
);
});
}
async decryptNameByCallingWorker(inputName: string): Promise<string> {
await this.prepareByCallingWorker();
++this.workerIdx;
const whichWorker = this.workerIdx % this.workers.length;
return await new Promise((resolve, reject) => {
const channel = new MessageChannel();
channel.port2.onmessage = (event) => {
// console.debug("main: receiving msg in decryptNameByCallingWorker");
const { outputName, status } = event.data as RecvMsg;
if (status === "error") {
reject("error");
} else {
if (outputName === undefined) {
reject("unknown outputName after decryptNameByCallingWorker");
} else {
resolve(outputName);
}
}
};
channel.port2.onmessageerror = (event) => {
// console.debug("main: receiving error in decryptNameByCallingWorker");
reject(event);
};
// console.debug("main: before postMessage in decryptNameByCallingWorker");
this.workers[whichWorker].postMessage(
{
action: "decryptName",
inputName: inputName,
},
[channel.port1]
);
});
}
async encryptContentByCallingWorker(
input: ArrayBuffer
): Promise<ArrayBuffer> {
await this.prepareByCallingWorker();
++this.workerIdx;
const whichWorker = this.workerIdx % this.workers.length;
return await new Promise((resolve, reject) => {
const channel = new MessageChannel();
channel.port2.onmessage = (event) => {
// console.debug("main: receiving msg in encryptContentByCallingWorker");
const { outputContent } = event.data as RecvMsg;
if (outputContent === undefined) {
reject("unknown outputContent after encryptContentByCallingWorker");
} else {
resolve(outputContent);
}
};
channel.port2.onmessageerror = (event) => {
// console.debug("main: receiving error in encryptContentByCallingWorker");
reject(event);
};
// console.debug(
// "main: before postMessage in encryptContentByCallingWorker"
// );
this.workers[whichWorker].postMessage(
{
action: "encryptContent",
inputContent: input,
},
[channel.port1, input]
);
});
}
async decryptContentByCallingWorker(
input: ArrayBuffer
): Promise<ArrayBuffer> {
await this.prepareByCallingWorker();
++this.workerIdx;
const whichWorker = this.workerIdx % this.workers.length;
return await new Promise((resolve, reject) => {
const channel = new MessageChannel();
channel.port2.onmessage = (event) => {
// console.debug("main: receiving msg in decryptContentByCallingWorker");
const { outputContent, status } = event.data as RecvMsg;
if (status === "error") {
reject("error");
} else {
if (outputContent === undefined) {
reject("unknown outputContent after decryptContentByCallingWorker");
} else {
resolve(outputContent);
}
}
};
channel.port2.onmessageerror = (event) => {
// console.debug(
// "main: receiving onmessageerror in decryptContentByCallingWorker"
// );
reject(event);
};
// console.debug(
// "main: before postMessage in decryptContentByCallingWorker"
// );
this.workers[whichWorker].postMessage(
{
action: "decryptContent",
inputContent: input,
},
[channel.port1, input]
);
});
}
}
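The class above spreads calls across its worker pool with a simple round-robin index: `++this.workerIdx`, then modulo the pool size. That selection logic in isolation (a standalone sketch with a generic payload, not the plugin's class):

```typescript
// Minimal round-robin dispatcher mirroring the `workerIdx % workers.length`
// pattern used by CipherRclone above.
class RoundRobinPool<T> {
  private idx = 0;
  constructor(private readonly members: T[]) {}

  next(): T {
    ++this.idx; // pre-increment first, so member 1 is picked before member 0
    return this.members[this.idx % this.members.length];
  }
}

const pool = new RoundRobinPool(["w0", "w1", "w2"]);
```

Because of the pre-increment, the first call lands on the second member and member 0 is only reached on wrap-around, exactly as in the class above.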

src/encryptRClone.worker.ts Normal file
View File

@ -0,0 +1,184 @@
import { nanoid } from "nanoid";
import { Cipher as CipherRCloneCryptPack } from "@fyears/rclone-crypt";
const ctx: WorkerGlobalScope = self as any;
const workerNanoID = nanoid();
const cipher = new CipherRCloneCryptPack("base64");
// console.debug(`worker [${workerNanoID}]: cipher created`);
async function encryptNameStr(input: string) {
const res = await cipher.encryptFileName(input);
return res;
}
async function decryptNameStr(input: string) {
return await cipher.decryptFileName(input);
}
async function encryptContentBuf(input: ArrayBuffer) {
return (await cipher.encryptData(new Uint8Array(input), undefined)).buffer;
}
async function decryptContentBuf(input: ArrayBuffer) {
return (await cipher.decryptData(new Uint8Array(input))).buffer;
}
ctx.addEventListener("message", async (event: any) => {
const port: MessagePort = event.ports[0];
const {
action,
dataKeyBuf,
nameKeyBuf,
nameTweakBuf,
inputName,
inputContent,
} = event.data as {
action:
| "prepare"
| "encryptContent"
| "decryptContent"
| "encryptName"
| "decryptName";
dataKeyBuf?: ArrayBuffer;
nameKeyBuf?: ArrayBuffer;
nameTweakBuf?: ArrayBuffer;
inputName?: string;
inputContent?: ArrayBuffer;
};
// console.debug(`worker [${workerNanoID}]: receiving action=${action}`);
if (action === "prepare") {
// console.debug(`worker [${workerNanoID}]: prepare: start`);
try {
if (
dataKeyBuf === undefined ||
nameKeyBuf === undefined ||
nameTweakBuf === undefined
) {
// console.debug(`worker [${workerNanoID}]: prepare: no buffer??`);
throw Error(
`worker [${workerNanoID}]: prepare: internal keys not transferred to worker properly`
);
}
// console.debug(`worker [${workerNanoID}]: prepare: so we update`);
cipher.updateInternalKey(
new Uint8Array(dataKeyBuf),
new Uint8Array(nameKeyBuf),
new Uint8Array(nameTweakBuf)
);
port.postMessage({
status: "ok",
});
} catch (error) {
console.error(error);
port.postMessage({
status: "error",
error: error,
});
}
} else if (action === "encryptName") {
try {
if (inputName === undefined) {
throw Error(
`worker [${workerNanoID}]: encryptName: internal inputName not transferred to worker properly`
);
}
const outputName = await encryptNameStr(inputName);
// console.debug(
// `worker [${workerNanoID}]: after encryptNameStr, before postMessage`
// );
port.postMessage({
status: "ok",
outputName: outputName,
});
} catch (error) {
console.error(`worker [${workerNanoID}]: encryptName=${inputName}`);
console.error(error);
port.postMessage({
status: "error",
error: error,
});
}
} else if (action === "decryptName") {
try {
if (inputName === undefined) {
throw Error(
`worker [${workerNanoID}]: decryptName: internal inputName not transferred to worker properly`
);
}
const outputName = await decryptNameStr(inputName);
// console.debug(
// `worker [${workerNanoID}]: after decryptNameStr, before postMessage`
// );
port.postMessage({
status: "ok",
outputName: outputName,
});
} catch (error) {
console.error(`worker [${workerNanoID}]: decryptName=${inputName}`);
console.error(error);
port.postMessage({
status: "error",
error: error,
});
}
} else if (action === "encryptContent") {
try {
if (inputContent === undefined) {
throw Error(
`worker [${workerNanoID}]: encryptContent: internal inputContent not transferred to worker properly`
);
}
const outputContent = await encryptContentBuf(inputContent);
// console.debug(
// `worker [${workerNanoID}]: after encryptContentBuf, before postMessage`
// );
port.postMessage(
{
status: "ok",
outputContent: outputContent,
},
[outputContent]
);
} catch (error) {
console.error(error);
port.postMessage({
status: "error",
error: error,
});
}
} else if (action === "decryptContent") {
try {
if (inputContent === undefined) {
throw Error(
`worker [${workerNanoID}]: decryptContent: internal inputContent not transferred to worker properly`
);
}
const outputContent = await decryptContentBuf(inputContent);
// console.debug(
// `worker [${workerNanoID}]: after decryptContentBuf, before postMessage`
// );
port.postMessage(
{
status: "ok",
outputContent: outputContent,
},
[outputContent]
);
} catch (error) {
console.error(error);
port.postMessage({
status: "error",
error: error,
});
}
} else {
port.postMessage({
status: "error",
error: `worker [${workerNanoID}]: unknown action=${action}`,
});
}
});
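The worker above is a plain action dispatcher: every message carries an `action` plus an optional payload, and is answered on a transferred `MessagePort` with a `{ status, ... }` reply. A synchronous standalone sketch of that request/response shape (the real worker is async and calls into `@fyears/rclone-crypt`; the handlers here are placeholders):

```typescript
// Standalone sketch of the worker's message protocol.
type Action = "prepare" | "encryptName" | "decryptName";

interface Reply {
  status: "ok" | "error";
  outputName?: string;
  error?: unknown;
}

function handleMessage(msg: { action: Action; inputName?: string }): Reply {
  switch (msg.action) {
    case "prepare":
      return { status: "ok" };
    case "encryptName":
    case "decryptName":
      if (msg.inputName === undefined) {
        return { status: "error", error: "inputName not transferred properly" };
      }
      // placeholder transform instead of real encryption/decryption
      return { status: "ok", outputName: msg.inputName };
    default:
      return { status: "error", error: "unknown action" };
  }
}
```

Every branch replies, including failures, which is what lets the main thread's `Promise` wrappers always settle.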

src/encryptUnified.ts Normal file
View File

@ -0,0 +1,215 @@
import { CipherMethodType } from "./baseTypes";
import * as openssl from "./encryptOpenSSL";
import * as rclone from "./encryptRClone";
import { isVaildText } from "./misc";
export class Cipher {
readonly password: string;
readonly method: CipherMethodType;
cipherRClone?: rclone.CipherRclone;
constructor(password: string, method: CipherMethodType) {
this.password = password ?? "";
this.method = method;
if (method === "rclone-base64") {
this.cipherRClone = new rclone.CipherRclone(password, 5);
}
}
closeResources() {
if (this.method === "rclone-base64" && this.cipherRClone !== undefined) {
this.cipherRClone.closeResources();
}
}
isPasswordEmpty() {
return this.password === "";
}
isFolderAware() {
if (this.method === "openssl-base64") {
return false;
}
if (this.method === "rclone-base64") {
return true;
}
throw Error(`no idea about isFolderAware for method=${this.method}`);
}
async encryptContent(content: ArrayBuffer) {
// console.debug("start encryptContent");
if (this.password === "") {
return content;
}
if (this.method === "openssl-base64") {
const res = await openssl.encryptArrayBuffer(content, this.password);
if (res === undefined) {
throw Error(`cannot encrypt content`);
}
return res;
} else if (this.method === "rclone-base64") {
const res =
await this.cipherRClone!.encryptContentByCallingWorker(content);
if (res === undefined) {
throw Error(`cannot encrypt content`);
}
return res;
} else {
throw Error(`not supported encrypt method=${this.method}`);
}
}
async decryptContent(content: ArrayBuffer) {
// console.debug("start decryptContent");
if (this.password === "") {
return content;
}
if (this.method === "openssl-base64") {
const res = await openssl.decryptArrayBuffer(content, this.password);
if (res === undefined) {
throw Error(`cannot decrypt content`);
}
return res;
} else if (this.method === "rclone-base64") {
const res =
await this.cipherRClone!.decryptContentByCallingWorker(content);
if (res === undefined) {
throw Error(`cannot decrypt content`);
}
return res;
} else {
throw Error(`not supported decrypt method=${this.method}`);
}
}
async encryptName(name: string) {
// console.debug("start encryptName");
if (this.password === "") {
return name;
}
if (this.method === "openssl-base64") {
const res = await openssl.encryptStringToBase64url(name, this.password);
if (res === undefined) {
throw Error(`cannot encrypt name=${name}`);
}
return res;
} else if (this.method === "rclone-base64") {
const res = await this.cipherRClone!.encryptNameByCallingWorker(name);
if (res === undefined) {
throw Error(`cannot encrypt name=${name}`);
}
return res;
} else {
throw Error(`not supported encrypt method=${this.method}`);
}
}
async decryptName(name: string): Promise<string> {
// console.debug("start decryptName");
if (this.password === "") {
return name;
}
if (this.method === "openssl-base64") {
if (name.startsWith(openssl.MAGIC_ENCRYPTED_PREFIX_BASE32)) {
// backward compatible with the openssl-base32 format
try {
const res = await openssl.decryptBase32ToString(name, this.password);
if (res !== undefined && isVaildText(res)) {
return res;
} else {
throw Error(`cannot decrypt name=${name}`);
}
} catch (error) {
throw Error(`cannot decrypt name=${name}`);
}
} else if (name.startsWith(openssl.MAGIC_ENCRYPTED_PREFIX_BASE64URL)) {
try {
const res = await openssl.decryptBase64urlToString(
name,
this.password
);
if (res !== undefined && isVaildText(res)) {
return res;
} else {
throw Error(`cannot decrypt name=${name}`);
}
} catch (error) {
throw Error(`cannot decrypt name=${name}`);
}
} else {
throw Error(
`method=${this.method} but the name=${name}, likely mismatch`
);
}
} else if (this.method === "rclone-base64") {
const res = await this.cipherRClone!.decryptNameByCallingWorker(name);
if (res === undefined) {
throw Error(`cannot decrypt name=${name}`);
}
return res;
} else {
throw Error(`not supported decrypt method=${this.method}`);
}
}
getSizeFromOrigToEnc(x: number) {
if (this.password === "") {
return x;
}
if (this.method === "openssl-base64") {
return openssl.getSizeFromOrigToEnc(x);
} else if (this.method === "rclone-base64") {
return rclone.getSizeFromOrigToEnc(x);
} else {
throw Error(`not supported encrypt method=${this.method}`);
}
}
/**
* quick guess, no actual decryption here
* @param name
* @returns
*/
static isLikelyOpenSSLEncryptedName(name: string): boolean {
if (
name.startsWith(openssl.MAGIC_ENCRYPTED_PREFIX_BASE32) ||
name.startsWith(openssl.MAGIC_ENCRYPTED_PREFIX_BASE64URL)
) {
return true;
}
return false;
}
/**
* quick guess, no actual decryption here
* @param name
* @returns
*/
static isLikelyEncryptedName(name: string): boolean {
return Cipher.isLikelyOpenSSLEncryptedName(name);
}
/**
* quick guess, no actual decryption here, only openssl can be guessed here
* @param name
* @returns
*/
static isLikelyEncryptedNameNotMatchMethod(
name: string,
method: CipherMethodType
): boolean {
if (
Cipher.isLikelyOpenSSLEncryptedName(name) &&
method !== "openssl-base64"
) {
return true;
}
if (
!Cipher.isLikelyOpenSSLEncryptedName(name) &&
method === "openssl-base64"
) {
return true;
}
return false;
}
}
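`isLikelyEncryptedNameNotMatchMethod` boils down to: an OpenSSL-looking prefix with a non-OpenSSL method, or vice versa. A standalone sketch with hypothetical prefix strings (the real magic prefixes live in `encryptOpenSSL`):

```typescript
// Hypothetical stand-ins for the real MAGIC_ENCRYPTED_PREFIX_* constants.
const PREFIX_BASE32 = "crypted32.";
const PREFIX_BASE64URL = "crypted64.";

type Method = "openssl-base64" | "rclone-base64";

function looksOpenSSL(name: string): boolean {
  return name.startsWith(PREFIX_BASE32) || name.startsWith(PREFIX_BASE64URL);
}

// true means: the name's apparent format disagrees with the configured method
function likelyMismatch(name: string, method: Method): boolean {
  return looksOpenSSL(name) !== (method === "openssl-base64");
}
```

The two `if` branches in the method above are just the XOR of those two booleans.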

View File

@ -5,7 +5,7 @@ import { LANGS } from "./langs";
export type LangType = keyof typeof LANGS;
export type LangTypeAndAuto = LangType | "auto";
export type TransItemType = keyof typeof LANGS["en"];
export type TransItemType = keyof (typeof LANGS)["en"];
export class I18n {
lang: LangTypeAndAuto;
@ -31,7 +31,7 @@ export class I18n {
}
const res: string =
(LANGS[realLang] as typeof LANGS["en"])[key] || LANGS["en"][key] || key;
(LANGS[realLang] as (typeof LANGS)["en"])[key] || LANGS["en"][key] || key;
return res;
}
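The parenthesized `keyof (typeof LANGS)["en"]` change is purely about making the operator precedence explicit; the lookup-with-English-fallback pattern itself can be sketched standalone (using a tiny hypothetical LANGS table, not the plugin's real one):

```typescript
// Standalone sketch of the keyed lookup with "en" fallback used above.
const LANGS = {
  en: { hello: "Hello", bye: "Bye" },
  zh_cn: { hello: "你好" },
} as const;

type LangType = keyof typeof LANGS;
type TransItemType = keyof (typeof LANGS)["en"];

function t(lang: LangType, key: TransItemType): string {
  return (
    (LANGS[lang] as Partial<(typeof LANGS)["en"]>)[key] ??
    LANGS["en"][key] ??
    key
  );
}
```

A language missing a key silently falls back to the English string, and ultimately to the key itself.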

View File

@ -5,24 +5,34 @@ import {
COMMAND_URI,
UriParams,
RemotelySavePluginSettings,
QRExportType,
} from "./baseTypes";
import { log } from "./moreOnLog";
import { getShrinkedSettings } from "./remoteForOnedrive";
export const exportQrCodeUri = async (
settings: RemotelySavePluginSettings,
currentVaultName: string,
pluginVersion: string
pluginVersion: string,
exportFields: QRExportType
) => {
const settings2 = cloneDeep(settings);
delete settings2.dropbox;
delete settings2.onedrive;
let settings2: Partial<RemotelySavePluginSettings> = {};
if (exportFields === "all_but_oauth2") {
settings2 = cloneDeep(settings);
delete settings2.dropbox;
delete settings2.onedrive;
} else if (exportFields === "dropbox") {
settings2 = { dropbox: cloneDeep(settings.dropbox) };
} else if (exportFields === "onedrive") {
settings2 = { onedrive: getShrinkedSettings(settings.onedrive) };
}
delete settings2.vaultRandomID;
const data = encodeURIComponent(JSON.stringify(settings2));
const vault = encodeURIComponent(currentVaultName);
const version = encodeURIComponent(pluginVersion);
const rawUri = `obsidian://${COMMAND_URI}?func=settings&version=${version}&vault=${vault}&data=${data}`;
// log.info(uri)
// console.info(uri)
const imgUri = await QRCode.toDataURL(rawUri);
return {
rawUri,
@ -36,6 +46,20 @@ export interface ProcessQrCodeResultType {
result?: RemotelySavePluginSettings;
}
/**
* we also support directly parse the uri, instead of relying on web browser
* @param input
*/
export const parseUriByHand = (input: string) => {
if (!input.startsWith("obsidian://remotely-save?func=settings&")) {
    throw Error(`not a valid remotely-save settings URI`);
}
const k = new URL(input);
const output = Object.fromEntries(k.searchParams);
return output;
};
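`parseUriByHand` leans on the WHATWG `URL` parser accepting the custom `obsidian://` scheme and exposing its query via `searchParams`. A quick standalone check of that behavior (the key names are illustrative, mirroring the real URI shape):

```typescript
// Standalone check: WHATWG URL handles a custom scheme's query string.
function parseObsidianUri(input: string): Record<string, string> {
  if (!input.startsWith("obsidian://remotely-save?func=settings&")) {
    throw Error("not a valid remotely-save settings URI");
  }
  return Object.fromEntries(new URL(input).searchParams);
}

const parsed = parseObsidianUri(
  "obsidian://remotely-save?func=settings&vault=MyVault&data=abc"
);
```

`Object.fromEntries` over `URLSearchParams` yields a plain key/value object, which is all the importer downstream needs.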
export const importQrCodeUri = (
inputParams: any,
currentVaultName: string

@ -1 +0,0 @@
Subproject commit 42eab5d544961f4c7830c63ba9559375437340c0

src/langs/LICENSE Normal file
View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
14
src/langs/README.md Normal file
@@ -0,0 +1,14 @@
# Translations for Remotely Save
## How To Add A Language?
1. Copy `en.json` to a new JSON file named `<lang>.json`, and translate all the items inside. The language code should match one available in the Obsidian app.
2. Modify the `index.ts` file to include the new language file.
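The second step can be sketched like this. This is a hypothetical, self-contained sketch, not the actual `index.ts` of this repo: the real file imports each locale from its own JSON file (e.g. `import fr from "./fr.json"`), and `fr` here is only a placeholder language.

```typescript
// Hypothetical sketch of registering a new language in index.ts.
// The dictionaries are inlined here only so the sketch is self-contained;
// "fr" and its strings are placeholders, not part of this repo.
type LangDict = Record<string, string>;

const en: LangDict = { confirm: "Confirm", goback: "Go Back" };
const fr: LangDict = { confirm: "Confirmer", goback: "Retour" }; // newly added

const LANGS: Record<string, LangDict> = { en, fr };

// Look up a key, falling back to English when the chosen language
// has no translation for it, and to the key itself as a last resort.
function t(lang: string, key: string): string {
  return LANGS[lang]?.[key] ?? LANGS["en"][key] ?? key;
}
```

With this shape, `t("fr", "confirm")` yields the French string, while an unknown language code falls back to the English strings.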
~~## Why Separated Repo?~~
~~For better pull request management.~~
## No more `lang` submodule since 20240106
Managing pull requests for a submodule was actually harder for me. The submodule was merged back into the main repo on 20240106. Any further translation improvements (and/or pull requests) should be dealt with in the main repo.
333
src/langs/en.json Normal file
@@ -0,0 +1,333 @@
{
"confirm": "Confirm",
"disable": "Disable",
"enable": "Enable",
"goback": "Go Back",
"submit": "Submit",
"sometext": "Here are some texts.",
"syncrun_alreadyrunning": "New command {{newTriggerSource}} stops because {{pluginName}} is already running in stage {{syncStatus}}!",
"syncrun_syncingribbon": "{{pluginName}}: syncing from {{triggerSource}}",
"syncrun_step0": "0/8 Remotely Save is running in dry mode, thus no actual file changes would happen.",
"syncrun_step1": "1/8 Remotely Save is preparing ({{serviceType}})",
"syncrun_step2": "2/8 Starting to fetch remote meta data.",
"syncrun_step3": "3/8 Checking whether the password is correct.",
"syncrun_passworderr": "Something went wrong while checking the password.",
"syncrun_step4": "4/8 Starting to fetch local meta data.",
"syncrun_step5": "5/8 Starting to fetch local prev sync data.",
"syncrun_step6": "6/8 Starting to generate sync plan.",
"syncrun_step7": "7/8 Remotely Save Sync data is exchanging!",
"syncrun_step7skip": "7/8 Remotely Save real sync is skipped in dry run mode.",
"syncrun_step8": "8/8 Remotely Save finished!",
"syncrun_shortstep0": "0/2 Remotely Save is running in dry mode, no actual file changes would happen.",
"syncrun_shortstep1": "1/2 Remotely Save starts running ({{serviceType}})",
"syncrun_shortstep2skip": "2/2 Remotely Save real sync is skipped in dry run mode.",
"syncrun_shortstep2": "2/2 Remotely Save finished!",
"syncrun_abort": "{{manifestID}}-{{theDate}}: abort sync, triggerSource={{triggerSource}}, error while {{syncStatus}}",
"syncrun_abort_protectmodifypercentage": "Abort! You have set that changing >= {{protectModifyPercentage}}% of files is not allowed, but {{realModifyDeleteCount}}/{{allFilesCount}}={{percent}}% of the files are going to be modified or deleted! If you are sure you want this sync, please adjust the allowed ratio in the settings.",
"protocol_saveqr": "New settings for {{manifestName}} are imported and saved. Reopen the plugin settings to make them effective.",
"protocol_callbacknotsupported": "Your uri calls a callback that's not supported yet: {{params}}",
"protocol_dropbox_connecting": "Connecting to Dropbox...\nPlease DO NOT close this modal.",
"protocol_dropbox_connect_succ": "Good! We've connected to Dropbox as user {{username}}!",
"protocol_dropbox_connect_succ_revoke": "You've connected as user {{username}}. If you want to disconnect, click this button.",
"protocol_dropbox_connect_fail": "Something went wrong with the response from Dropbox. Maybe the network connection is not good, or maybe you rejected the auth?",
"protocol_dropbox_connect_unknown": "Do not know how to deal with the callback: {{params}}",
"protocol_dropbox_no_modal": "You are not starting the Dropbox connection from the settings page. Abort.",
"protocol_onedrive_connecting": "Connecting to OneDrive...\nPlease DO NOT close this modal.",
"protocol_onedrive_connect_succ_revoke": "You've connected as user {{username}}. If you want to disconnect, click this button.",
"protocol_onedrive_connect_fail": "Something went wrong with the response from OneDrive. Maybe you rejected the auth?",
"protocol_onedrive_connect_unknown": "Do not know how to deal with the callback: {{params}}",
"command_startsync": "start sync",
"command_drynrun": "start sync (dry run only)",
"command_exportsyncplans_1": "export sync plans (latest 1)",
"command_exportsyncplans_5": "export sync plans (latest 5)",
"command_exportsyncplans_all": "export sync plans (all)",
"command_exportlogsindb": "export logs saved in db",
"statusbar_time_years": "Synced {{time}} years ago",
"statusbar_time_months": "Synced {{time}} months ago",
"statusbar_time_weeks": "Synced {{time}} weeks ago",
"statusbar_time_days": "Synced {{time}} days ago",
"statusbar_time_hours": "Synced {{time}} hours ago",
"statusbar_time_minutes": "Synced {{time}} minutes ago",
"statusbar_time_lessminute": "Synced less than a minute ago",
"statusbar_lastsync": "Synced {{time}} ago",
"statusbar_syncing": "Syncing...",
"statusbar_now": "Synced just now",
"statusbar_lastsync_label": "Last successful Sync on {{date}}",
"statusbar_lastsync_never": "Never Synced",
"statusbar_lastsync_never_label": "Never Synced before",
"modal_password_title": "Hold on and PLEASE READ ON...",
"modal_password_shortdesc": "If the field is not empty, files would be encrypted locally before being uploaded.\nIf the field is empty, then files would be uploaded without encryption.",
"modal_password_attn1": "Attention 1/5: The vault name is NOT encrypted. The plugin creates a folder with the vault name on some remote services.",
"modal_password_attn2": "Attention 2/5: The password itself is stored in PLAIN TEXT LOCALLY.",
"modal_password_attn3": "Attention 3/5: Some metadata are not encrypted or can be easily guessed. (File sizes are close to their unencrypted ones, and directory paths may be stored as 0-byte-size objects.)",
"modal_password_attn4": "Attention 4/5: You should make sure the remote store IS EMPTY, or REMOTE FILES WERE ENCRYPTED BY THAT NEW PASSWORD, to avoid conflicts.",
"modal_password_attn5": "Attention 5/5: The longer the password, the better.",
"modal_password_secondconfirm": "The Second Confirm to change password.",
"modal_password_notice": "New password saved!",
"modal_encryptionmethod_title": "Hold on and PLEASE READ ON...",
"modal_encryptionmethod_shortdesc": "You are changing the encryption method but you have set the password before.\nAfter switching the method, you need to <b>manually</b> and <b>fully</b> delete every encrypted vault file in the remote and re-sync (that is, re-upload) the newly encrypted files again.",
"modal_remotebasedir_title": "You are changing the remote base directory config",
"modal_remotebasedir_shortdesc": "1. The plugin would NOT automatically move the content from the old directory to the new one directly on the remote. Everything syncs from the beginning again.\n2. If you set the string to empty, the config would be reset to use the vault folder name (the default config).\n3. The remote directory name itself would not be encrypted even if you've set an E2E password.\n4. Some special characters like '?', '/', '\\' are not allowed. Leading or trailing spaces are also trimmed.",
"modal_remotebasedir_invaliddirhint": "Your input contains special characters like '?', '/', '\\' which are not allowed.",
"modal_remotebasedir_secondconfirm_vaultname": "Reset To The Default Vault Folder Name",
"modal_remotebasedir_secondconfirm_change": "Confirm To Change",
"modal_remotebasedir_notice": "New remote base directory config saved!",
"modal_remoteprefix_title": "You are changing the remote prefix config",
"modal_remoteprefix_shortdesc": "1. The plugin would NOT automatically move the content from the old directory to the new one directly on the remote. Everything syncs from the beginning again.\n2. If you set the string to empty, the prefix will be empty and the files will be saved at the root of the bucket.\n3. The remote directory name itself would not be encrypted even if you've set an E2E password.\n4. Some special characters like '?', '/', '\\' are not allowed. Leading or trailing spaces are also trimmed.",
"modal_remoteprefix_invaliddirhint": "Your input contains special characters like '?', '/', '\\' which are not allowed.",
"modal_remoteprefix_tosave": "The prefix to save is \"{{{prefix}}}\"",
"modal_remoteprefix_secondconfirm_empty": "The prefix is empty and the files will be saved at the root of the bucket.",
"modal_remoteprefix_secondconfirm_change": "Confirm To Change",
"modal_remoteprefix_notice": "New remote prefix config saved!",
"modal_dropboxauth_manualsteps": "Step 1: Visit the address in a browser, and follow the steps.\nStep 2: At the end of the web flow, you obtain a long code. Paste it here then click \"Submit\".",
"modal_dropboxauth_autosteps": "Visit the address in a browser, and follow the steps.\nFinally you should be redirected to Obsidian.",
"modal_dropboxauth_copybutton": "Click to copy the auth url",
"modal_dropboxauth_copynotice": "The auth url is copied to the clipboard!",
"modal_dropboxauth_maualinput": "Auth Code from web page",
"modal_dropboxauth_maualinput_desc": "You need to click \"Confirm\".",
"modal_dropboxauth_maualinput_notice": "Trying to connect to Dropbox",
"modal_dropboxauth_maualinput_conn_succ": "Good! We've connected to Dropbox as user {{username}}!",
"modal_dropboxauth_maualinput_conn_succ_revoke": "You've connected as user {{username}}. If you want to disconnect, click this button.",
"modal_dropboxauth_maualinput_conn_fail": "Something went wrong while connecting to Dropbox.",
"modal_onedriveauth_shortdesc": "Currently only OneDrive for personal is supported. OneDrive for Business is NOT supported (yet).\nVisit the address in a browser, and follow the steps.\nFinally you should be redirected to Obsidian.",
"modal_onedriveauth_shortdesc_linux": "It seems that you are using Obsidian on Linux, and you might not be able to jump back here properly. Please consider <a href=\"https://github.com/remotely-save/remotely-save/issues/415\">using</a> the Flatpak version of Obsidian, or creating an <a href=\"https://github.com/remotely-save/remotely-save/blob/master/docs/linux.md\"><code>obsidian.desktop</code> file</a>.",
"modal_onedriveauth_copybutton": "Click to copy the auth url",
"modal_onedriveauth_copynotice": "The auth url is copied to the clipboard!",
"modal_onedriverevokeauth_step1": "Step 1: Go to the following address, click the \"Edit\" button for the plugin, then click \"Remove these permissions\" button on the page.",
"modal_onedriverevokeauth_step2": "Step 2: Click the button below, to clean the locally-saved login credentials.",
"modal_onedriverevokeauth_clean": "Clean Locally-Saved Login Credentials",
"modal_onedriverevokeauth_clean_desc": "You need to click the button.",
"modal_onedriverevokeauth_clean_button": "Clean",
"modal_onedriverevokeauth_clean_notice": "Cleaned!",
"modal_onedriverevokeauth_clean_fail": "Something went wrong while revoking.",
"modal_syncconfig_attn": "Attention 1/2: This only syncs (copies) the whole Obsidian config dir, not other starting-with-dot folders or files. Apart from ignoring the folders .git and node_modules, it doesn't understand the meaning of sub-files and sub-folders inside the config dir.\nAttention 2/2: After the config dir is synced, plugin settings might be corrupted, and Obsidian might need to be restarted to load the new settings.\nIf you agree to take the risk, please click the following second confirm button.",
"modal_syncconfig_secondconfirm": "The Second Confirm To Enable.",
"modal_syncconfig_notice": "You've enabled syncing config folder!",
"modal_qr_shortdesc": "This exports (partial) settings.\nYou can use another device to scan this QR code.\nOr, you can click the button to copy the special uri and paste it into another device's web browser or Remotely Save Import Setting.",
"modal_qr_button": "Click to copy the special URI",
"modal_qr_button_notice": "The special uri is copied to the clipboard!",
"modal_sizesconflict_title": "Remotely Save: Some conflicts were found while skipping large files",
"modal_sizesconflict_desc": "You've set skipping files larger than {{thresholdMB}} MB ({{thresholdBytes}} bytes).\nBut the following files have sizes larger than the threshold on one side, and sizes smaller than the threshold on the other side.\nTo avoid unexpected overwriting or deleting, the plugin stops, and you have to manually deal with at least one side of the files.",
"modal_sizesconflict_copybutton": "Click to copy all the size conflict info below",
"modal_sizesconflict_copynotice": "All the size conflict info has been copied to the clipboard!",
"settings_basic": "Basic Settings",
"settings_password": "Encryption Password",
"settings_password_desc": "Password for E2E encryption. Empty for no password. You need to click \"Confirm\". Attention: The password and other info are saved locally. After changing the password, you need to manually delete every original file in the remote, and re-sync (that is, re-upload) the encrypted files again.",
"settings_encryptionmethod": "Encryption Method",
"settings_encryptionmethod_desc": "Encryption method for E2E encryption. RClone Crypt format is recommended but it doesn't encrypt path structure. OpenSSL enc is the legacy format of this plugin. <b>Both are not affiliated with the official RClone and OpenSSL products or communities.</b> Attention: After switching the method, you need to manually delete every original file in the remote and re-sync (that is, re-upload) the encrypted files again. More info in the <a href='https://github.com/remotely-save/remotely-save/tree/master/docs/encryption'>online doc</a>.",
"settings_encryptionmethod_rclone": "RClone Crypt (recommended)",
"settings_encryptionmethod_openssl": "OpenSSL enc (legacy)",
"settings_autorun": "Schedule For Auto Run",
"settings_autorun_desc": "The plugin tries to schedule a sync run after every interval. Battery may be impacted.",
"settings_autorun_notset": "(not set)",
"settings_autorun_1min": "every 1 minute",
"settings_autorun_5min": "every 5 minutes",
"settings_autorun_10min": "every 10 minutes",
"settings_autorun_30min": "every 30 minutes",
"settings_runoncestartup": "Run Once On Start Up Automatically",
"settings_runoncestartup_desc": "This setting allows running sync ONCE on start up automatically. It will take effect on the NEXT start up after changing. This setting is different from \"schedule for auto run\", which starts syncing after EVERY interval.",
"settings_runoncestartup_notset": "(not set)",
"settings_runoncestartup_1sec": "sync once after 1 second of start up",
"settings_runoncestartup_10sec": "sync once after 10 seconds of start up",
"settings_runoncestartup_30sec": "sync once after 30 seconds of start up",
"settings_saverun": "Sync On Save (experimental)",
"settings_saverun_desc": "A sync will be triggered if a file save action happened within the last few seconds. Please be aware that syncing is potentially a heavy action and battery may be impacted. (May need to reload the plugin or restart Obsidian after changing)",
"settings_saverun_notset": "(not set)",
"settings_saverun_1sec": "check every 1 second",
"settings_saverun_5sec": "check every 5 seconds",
"settings_saverun_10sec": "check every 10 seconds (recommended)",
"settings_saverun_1min": "check every 1 minute",
"settings_skiplargefiles": "Skip Large Files",
"settings_skiplargefiles_desc": "Skip files with sizes larger than the threshold. Here 1 MB = 10^6 bytes.",
"settings_skiplargefiles_notset": "(not set)",
"settings_ignorepaths": "Regex Of Paths To Ignore",
"settings_ignorepaths_desc": "Regex of paths of folders or files to ignore. One regex per line. The path is relative to the vault root without leading slash.",
"settings_enablestatusbar_info": "Show Last Successful Sync In Status Bar",
"settings_enablestatusbar_info_desc": "Show the time of the last successful sync in the status bar.",
"settings_enablestatusbar_reloadrequired_notice": "Reload the plugin for the changes to take effect.",
"settings_resetstatusbar_time": "Reset Last Successful Sync Time",
"settings_resetstatusbar_time_desc": "Reset last successful sync time.",
"settings_resetstatusbar_button": "Reset",
"settings_resetstatusbar_notice": "Reset done!",
"settings_checkonnectivity": "Check Connectivity",
"settings_checkonnectivity_desc": "Check connectivity.",
"settings_checkonnectivity_button": "Check",
"settings_checkonnectivity_checking": "Checking...",
"settings_remotebasedir": "Change The Remote Base Directory (experimental)",
"settings_remotebasedir_desc": "By default the content is synced to a remote directory with the same name as the vault name. You can change the remote folder name here, or keep the input field empty to reset to the default. You need to click \"Confirm\".",
"settings_remoteprefix": "Change The Remote Prefix (experimental)",
"settings_remoteprefix_desc": "By default in s3 the files are saved at the root of the bucket. You can change the remote prefix here, or keep the input field empty to reset to the default. You need to click \"Confirm\".",
"settings_s3": "Remote For S3 or compatible",
"settings_s3_disclaimer1": "Disclaimer: This plugin is NOT an official Amazon product.",
"settings_s3_disclaimer2": "Disclaimer: The information is stored locally. Other malicious/harmful/faulty plugins could read the info. If you see any unintentional access to your bucket, please immediately delete the access key on your AWS (or other S3-service provider) settings.",
"settings_s3_cors": "You need to configure CORS to allow requests from origin app://obsidian.md and capacitor://localhost and http://localhost, and add ETag into exposed headers.",
"settings_s3_prod": "Some Amazon S3 official docs for references:",
"settings_s3_prod1": "Endpoint and region info",
"settings_s3_prod2": "Access Key ID and Secret Access Key info",
"settings_s3_prod3": "Configuring CORS",
"settings_s3_endpoint": "Endpoint",
"settings_s3_region": "Region",
"settings_s3_region_desc": "If you are not sure what to enter, you could try the value: us-east-1 .",
"settings_s3_accesskeyid": "Access Key ID",
"settings_s3_accesskeyid_desc": "Access key ID. Attention: Access key ID and other info are saved locally.",
"settings_s3_secretaccesskey": "Secret Access Key",
"settings_s3_secretaccesskey_desc": "Secret access key. Attention: Secret access key and other info are saved locally.",
"settings_s3_bucketname": "Bucket Name",
"settings_s3_bypasscorslocally": "Bypass CORS Issue Locally",
"settings_s3_bypasscorslocally_desc": "The plugin allows skipping server CORS config in new versions of Obsidian (desktop>=0.13.25 or iOS>=1.1.1 or Android>=1.2.1). If you encounter any issues, please disable this setting and configure CORS on servers (allowing requests from app://obsidian.md and capacitor://localhost and http://localhost, and adding ETag into exposed headers).",
"settings_s3_parts": "Parts Concurrency",
"settings_s3_parts_desc": "Large files are split into small parts when uploading to S3. How many parts do you want to upload in parallel at most?",
"settings_s3_accuratemtime": "Use Accurate MTime",
"settings_s3_accuratemtime_desc": "Read the uploaded accurate last modified time for a better sync algorithm. But it causes extra API requests / time / money to the S3 endpoint.",
"settings_s3_urlstyle": "S3 URL style",
"settings_s3_urlstyle_desc": "Whether to force path-style URLs for S3 objects (e.g., https://s3.amazonaws.com/*/ instead of https://*.s3.amazonaws.com/).",
"settings_s3_reverse_proxy_url": "S3 Reverse Proxy URL",
"settings_s3_reverse_proxy_url_desc": "S3 reverse proxy URL. (Leave blank if you don't have a reverse proxy.)",
"settings_s3_connect_succ": "Great! The bucket can be accessed.",
"settings_s3_connect_fail": "The S3 bucket cannot be reached.",
"settings_dropbox": "Remote For Dropbox",
"settings_dropbox_disclaimer1": "Disclaimer: This app is NOT an official Dropbox product.",
"settings_dropbox_disclaimer2": "Disclaimer: The information is stored locally. Other malicious/harmful/faulty plugins could read the info. If you see any unintentional access to your Dropbox, please immediately disconnect this app on https://www.dropbox.com/account/connected_apps .",
"settings_dropbox_folder": "We will create and sync inside the folder /Apps/{{pluginID}}/{{remoteBaseDir}} on your Dropbox.",
"settings_dropbox_revoke": "Revoke Auth",
"settings_dropbox_revoke_desc": "You've connected as user {{username}}. If you want to disconnect, click this button.",
"settings_dropbox_revoke_button": "Revoke Auth",
"settings_dropbox_revoke_notice": "Revoked!",
"settings_dropbox_revoke_noticeerr": "Something went wrong while revoking.",
"settings_dropbox_clearlocal": "Clear Locally Saved Credentials",
"settings_dropbox_clearlocal_desc": "You can forcefully clear the locally saved Dropbox login credentials without sending a revoke auth request to the server.",
"settings_dropbox_clearlocal_button": "Clear",
"settings_dropbox_clearlocal_notice": "Cleared!",
"settings_dropbox_auth": "Auth",
"settings_dropbox_auth_desc": "Auth.",
"settings_dropbox_auth_button": "Auth",
"settings_dropbox_connect_succ": "Great! We can connect to Dropbox!",
"settings_dropbox_connect_fail": "We cannot connect to Dropbox.",
"settings_onedrive": "Remote For Onedrive (for personal)",
"settings_onedrive_disclaimer1": "Disclaimer: This app is NOT an official Microsoft / OneDrive product.",
"settings_onedrive_disclaimer2": "Disclaimer: The information is stored locally. Other malicious/harmful/faulty plugins could read the info. If you see any unintentional access to your Onedrive, please immediately disconnect this app on https://microsoft.com/consent .",
"settings_onedrive_folder": "We will create and sync inside the folder /Apps/{{pluginID}}/{{remoteBaseDir}} on your OneDrive.",
"settings_onedrive_nobiz": "Currently only OneDrive for personal is supported. OneDrive for Business is NOT supported (yet).",
"settings_onedrive_revoke": "Revoke Auth",
"settings_onedrive_revoke_desc": "You've connected as user {{username}}. If you want to disconnect, click this button.",
"settings_onedrive_revoke_button": "Revoke Auth",
"settings_onedrive_auth": "Auth",
"settings_onedrive_auth_desc": "Auth.",
"settings_onedrive_auth_button": "Auth",
"settings_onedrive_connect_succ": "Great! We can connect to Onedrive!",
"settings_onedrive_connect_fail": "We cannot connect to Onedrive.",
"settings_webdav": "Remote For Webdav",
"settings_webdav_disclaimer1": "Disclaimer: The information is stored locally. Other malicious/harmful/faulty plugins may read the info. If you see any unintentional access to your webdav server, please immediately change the username and password.",
"settings_webdav_cors_os": "Obsidian desktop>=0.13.25 or iOS>=1.1.1 or Android>=1.2.1 supports bypassing CORS locally. But you are using an old version; upgrading Obsidian is suggested.",
"settings_webdav_cors": "You need to configure CORS to allow requests from origin app://obsidian.md and capacitor://localhost and http://localhost",
"settings_webdav_folder": "We will create and sync inside the folder /{{remoteBaseDir}} on your server.",
"settings_webdav_addr": "Server Address",
"settings_webdav_addr_desc": "Server address.",
"settings_webdav_user": "Username",
"settings_webdav_user_desc": "Username. Attention: the username and other info are saved locally.",
"settings_webdav_password": "Password",
"settings_webdav_password_desc": "Password. Attention: the password and other info are saved locally.",
"settings_webdav_auth": "Auth Type",
"settings_webdav_auth_desc": "If no password is set, this option is ignored.",
"settings_webdav_depth": "Depth Header Sent To Servers",
"settings_webdav_depth_desc": "Webdav servers should be configured to allow requests with the header Depth being '1' or 'Infinity'. If you are not sure what this is, choose \"depth='1'\". If you are sure your server supports depth='infinity', please choose that to get much better performance.",
"settings_webdav_depth_1": "only supports depth='1'",
"settings_webdav_depth_inf": "supports depth='infinity'",
"settings_webdav_connect_succ": "Great! The webdav server can be accessed.",
"settings_webdav_connect_fail": "The webdav server cannot be reached (possible to be any of address/username/password/authtype errors).",
"settings_webdav_connect_fail_withcors": "The webdav server cannot be reached (possible to be any of address/username/password/authtype/CORS errors).",
"settings_chooseservice": "Choose A Remote Service",
"settings_chooseservice_desc": "Start here. What service are you connecting to? S3, Dropbox, Webdav, or OneDrive for personal?",
"settings_chooseservice_s3": "S3 or compatible",
"settings_chooseservice_dropbox": "Dropbox",
"settings_chooseservice_webdav": "Webdav",
"settings_chooseservice_onedrive": "OneDrive for personal",
"settings_adv": "Advanced Settings",
"settings_concurrency": "Concurrency",
"settings_concurrency_desc": "How many files do you want to download or upload in parallel at most? By default it's set to 5. If you run into problems such as rate limits, you can reduce the concurrency to a lower value.",
"settings_syncunderscore": "Sync _ Files Or Folders",
"settings_syncunderscore_desc": "Sync files or folders starting with _ (\"underscore\") or not",
"settings_configdir": "Sync Config Dir (experimental)",
"settings_configdir_desc": "Sync config dir {{configDir}} or not (inner folders .git and node_modules would be ignored). Please be aware that this may impact all your plugins' or Obsidian's settings, and may require you to restart Obsidian after sync. Enable this at your own risk.",
"settings_deletetowhere": "Deletion Destination",
"settings_deletetowhere_desc": "Which trash should the plugin put the files into while deleting?",
"settings_deletetowhere_system_trash": "system trash (default)",
"settings_deletetowhere_obsidian_trash": "Obsidian .trash folder",
"settings_conflictaction": "Action For Conflict",
"settings_conflictaction_desc": "If a file is created or modified on both sides since the last update, it's a conflict event. How to deal with it? This only works for bidirectional sync.",
"settings_conflictaction_keep_newer": "newer version survives (default)",
"settings_conflictaction_keep_larger": "larger size version survives",
"settings_cleanemptyfolder": "Action For Empty Folders",
"settings_cleanemptyfolder_desc": "The sync algorithm mainly deals with files, so you need to specify how to deal with empty folders.",
"settings_cleanemptyfolder_skip": "leave them as is (default)",
"settings_cleanemptyfolder_clean_both": "delete local and remote",
"settings_protectmodifypercentage": "Abort Sync If Modification Above Percentage",
"settings_protectmodifypercentage_desc": "Abort the sync if more than n% of the files are going to be deleted / modified. Useful to protect users' files from unexpected modifications. You can set it to 100 to disable the protection, or to 0 to always block the sync.",
"settings_protectmodifypercentage_000_desc": "0 (always block)",
"settings_protectmodifypercentage_050_desc": "50 (default)",
"settings_protectmodifypercentage_100_desc": "100 (disable the protection)",
"setting_syncdirection": "Sync Direction",
"setting_syncdirection_desc": "Which direction should the plugin sync in? Please be aware that only CHANGED files (based on time and size) are synced regardless of the option.",
"setting_syncdirection_bidirectional_desc": "Bidirectional (default)",
"setting_syncdirection_incremental_push_only_desc": "Incremental Push Only (aka backup mode)",
"setting_syncdirection_incremental_pull_only_desc": "Incremental Pull Only",
"settings_enablemobilestatusbar": "Mobile Status Bar (experimental)",
"settings_enablemobilestatusbar_desc": "By default Obsidian mobile hides the status bar, but some users want to show it. So here is a hack.",
"settings_importexport": "Import and Export Partial Settings",
"settings_export": "Export",
"settings_export_desc": "Export settings by generating a QR code or URI.",
"settings_export_all_but_oauth2_button": "Export Non-Oauth2 Part",
"settings_export_dropbox_button": "Export Dropbox Part",
"settings_export_onedrive_button": "Export OneDrive Part",
"settings_import": "Import",
"settings_import_desc": "Paste the exported URI into here and click \"Import\". Or, you can open a camera or scan-qrcode app to scan the QR code.",
"settings_import_button": "Import",
"settings_import_error_notice": "Your URI string is empty or not correct!",
"settings_debug": "Debug",
"settings_debuglevel": "Alter Notice Level",
"settings_debuglevel_desc": "By default the notice level is \"info\". You can change to \"debug\" to get verbose information while syncing.",
"settings_outputsettingsconsole": "Output Current Settings From Disk To Console",
"settings_outputsettingsconsole_desc": "The settings are saved on disk in an encoded form. Click this to see the decoded settings in the console.",
"settings_outputsettingsconsole_button": "Output",
"settings_outputsettingsconsole_notice": "Finished outputting in the console.",
"settings_obfuscatesettingfile": "Obfuscate The Setting File Or Not",
"settings_obfuscatesettingfile_desc": "The setting file (data.json) has some sensitive information. It's strongly recommended to obfuscate it to avoid unexpected reads and modifications. If you are sure you want to view and edit it manually, you can disable the obfuscation.",
"settings_viewconsolelog": "View Console Log",
"settings_viewconsolelog_desc": "On desktop, please press \"ctrl+shift+i\" or \"cmd+shift+i\" to view the log. On mobile, please install the third-party plugin <a href='https://obsidian.md/plugins?search=Logstravaganza'>Logstravaganza</a> to export the console log to a note.",
"settings_syncplans": "Export Sync Plans",
"settings_syncplans_desc": "Sync plans are created every time after you trigger sync and before the actual sync. Useful to know what would actually happen in those syncs. Click the button to export sync plans.",
"settings_syncplans_button_1": "Export latest 1",
"settings_syncplans_button_5": "Export latest 5",
"settings_syncplans_button_all": "Export All",
"settings_syncplans_notice": "Sync plans history exported.",
"settings_delsyncplans": "Delete Sync Plans History In DB",
"settings_delsyncplans_desc": "Delete sync plans history in DB.",
"settings_delsyncplans_button": "Delete Sync Plans History",
"settings_delsyncplans_notice": "Sync plans history (in DB) deleted.",
"settings_delprevsync": "Delete Prev Sync Details In DB",
"settings_delprevsync_desc": "The sync algorithm keeps the previous successful sync information in the DB to determine file changes. If you want to ignore it so that all files are treated as newly created, you can delete the prev sync info here.",
"settings_delprevsync_button": "Delete Prev Sync Details",
"settings_delprevsync_notice": "Previous sync history (in local DB) deleted",
"settings_profiler_results": "Export Profiler Results",
"settings_profiler_results_desc": "The plugin records the time cost of each step. You can export the results here to find out which step is slow.",
"settings_profiler_results_notice": "Profiler results exported.",
"settings_profiler_results_button_all": "Export All",
"settings_outputbasepathvaultid": "Output Vault Base Path And Randomly Assigned ID",
"settings_outputbasepathvaultid_desc": "For debugging purposes.",
"settings_outputbasepathvaultid_button": "Output",
"settings_resetcache": "Reset Local Internal Cache/Databases",
"settings_resetcache_desc": "Reset local internal caches/databases (for debugging purposes). You should reload the plugin after resetting. This option will not empty the {s3, password...} settings.",
"settings_resetcache_button": "Reset",
"settings_resetcache_notice": "Local internal cache/databases deleted. Please manually reload the plugin.",
"syncalgov3_title": "Remotely Save has HUGE updates on the sync algorithm",
"syncalgov3_texts": "Welcome to Remotely Save!\nStarting with this version, a new algorithm is used:\n<ul><li>More robust deletion sync,</li><li>minimal conflict handling,</li><li>no metadata uploaded any more,</li><li>deletion / modification protection,</li><li>backup mode</li><li>new encryption method</li><li>...</li></ul>\nStay tuned for more! A full introduction is on the <a href='https://github.com/remotely-save/remotely-save/tree/master/docs/sync_algorithm/v3/intro.md'>doc website</a>.\nIf you agree to use this, please read and check the two checkboxes, then click the \"Agree\" button, and enjoy the plugin!\nIf you do not agree, please click the \"Do Not Agree\" button, and the plugin will unload itself.\nAlso, please consider <a href='https://github.com/remotely-save/remotely-save'>visiting the GitHub repo and starring ⭐ it</a>! Or even <a href='https://github.com/remotely-save/donation'>buy me a coffee</a>. Your support is very important to me! Thanks!",
"syncalgov3_checkbox_manual_backup": "I will back up my vault manually first.",
"syncalgov3_checkbox_requiremultidevupdate": "I understand I need to update the plugin ACROSS ALL DEVICES to make them work properly.",
"syncalgov3_button_agree": "Agree",
"syncalgov3_button_disagree": "Do Not Agree"
}

9
src/langs/index.ts Normal file
View File

@@ -0,0 +1,9 @@
import en from "./en.json";
import zh_cn from "./zh_cn.json";
import zh_tw from "./zh_tw.json";
export const LANGS = {
en: en,
zh_cn: zh_cn,
zh_tw: zh_tw,
};
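The `LANGS` map above aggregates the per-language JSON files into one lookup table. A minimal sketch of how such a map could be consumed, assuming flat key-to-string locale objects like the `en.json` shown above (the inline sample objects and the `t()` helper with English fallback are illustrative assumptions, not the plugin's actual API):

```typescript
// Inline sample objects stand in for the imported en.json / zh_cn.json files;
// the keys mirror entries in the real locale file.
const LANGS: Record<string, Record<string, string>> = {
  en: { settings_import_button: "Import" },
  zh_cn: { settings_import_button: "导入" },
};

// Look up `key` in `lang`, falling back to English, then to the key itself
// when no translation exists (a common i18n pattern; the plugin may differ).
function t(lang: string, key: string): string {
  return LANGS[lang]?.[key] ?? LANGS["en"][key] ?? key;
}

console.log(t("zh_cn", "settings_import_button")); // 导入
console.log(t("zh_tw", "settings_import_button")); // falls back to "Import"
```

The fallback chain keeps the UI usable when a translation file lags behind `en.json`: untranslated keys render in English instead of showing raw identifiers.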

Some files were not shown because too many files have changed in this diff.