Merge branch 'syncv3'

This commit is contained in:
fyears 2024-03-18 00:04:28 +08:00
commit e40a6adf63
41 changed files with 2230 additions and 2550 deletions

View File

@ -28,12 +28,11 @@ This is yet another unofficial sync plugin for Obsidian. If you like it or find
- **[Minimal Intrusive](./docs/minimal_intrusive_design.md).**
- **Skip Large files** and **skip paths** by custom regex conditions!
- **Fully open source under [Apache-2.0 License](./LICENSE).**
- **[Sync Algorithm open](./docs/sync_algorithm_v2.md) for discussion.**
- **[Sync Algorithm open](./docs/sync_algorithm/v3/intro.md) for discussion.**
- **[Basic Conflict Detection And Handling](./docs/sync_algorithm/v3/intro.md)** now, more to come!
## Limitations
- **To support deletions sync, extra metadata will also be uploaded.** See [Minimal Intrusive](./docs/minimal_intrusive_design.md).
- **No Conflict resolution. No content-diff-and-patch algorithm.** All files and folders are compared using their local and remote "last modified time", and the one with the later "last modified time" wins.
- **Cloud services cost you money.** Always be aware of the costs and pricing. Specifically, every operation, including but not limited to downloading, uploading, listing all files, and calling any API, as well as the storage size, may or may not cost you money.
- **Some limitations from the browser environment.** More technical details are [in the doc](./docs/browser_env.md).
- **You should protect your `data.json` file.** The file contains sensitive information.

View File

@ -12,8 +12,8 @@ See [here](./export_sync_plans.md).
See [here](./check_console_output.md).
## Advanced: Save Console Output Then Read Them Later
## Advanced: Use `Logstravaganza` To Export Logs
This method works for desktop and mobile devices (iOS, Android).
This method works for desktop and mobile devices (iOS, Android), and is especially useful for iOS.
See [here](./save_console_output_and_export.md).
See [here](./use_logstravaganza.md).

View File

@ -1,25 +0,0 @@
# Save Console Output And Read Them Later
## Disable Auto Sync Firstly
You should disable auto sync to avoid any unexpected runs.
## Set The Output Level To Debug
Go to the plugin settings, scroll down to the section "Debug" -> "alter console log level", and change it from "info" to "debug".
## Enable Saving The Output To DB
Go to the plugin settings, scroll down to the section "Debug" -> "Save Console Logs Into DB", and change it from "disable" to "enable". **This setting has some performance cost, so do NOT always turn this on when not necessary!**
## Run The Sync
Trigger the sync manually (by clicking the icon on the ribbon sidebar). Something (hopefully) helpful should show up in the console. The console logs are also saved into the DB now.
## Export The Output And Read The Logs
Go to the plugin settings, scroll down to the section "Debug" -> "Export Console Logs From DB", and click the button. A new file `log_hist_exported_on_....md` should be created inside the special folder `_debug_remotely_save/`. You could read it and hopefully find something useful.
## Disable Saving The Output To DB
After debugging, go to the plugin settings, scroll down to the section "Debug" -> "Save Console Logs Into DB", and change it from "enable" to "disable".

View File

@ -0,0 +1,14 @@
# Use `Logstravaganza`
On iOS, it's quite hard to directly check the console logs.
Luckily, there is a third-party plugin: [`Logstravaganza`](https://obsidian.md/plugins?search=Logstravaganza#), by Carlo Zottmann, that can redirect the output to a note.
You can just:
1. Install it.
2. Enable it.
3. Do something, to trigger some console logs.
4. Check out `LOGGING-NOTE (device name).md` in the root of your vault.
See more on its site: <https://github.com/czottmann/obsidian-logstravaganza>.

View File

@ -1,8 +1,10 @@
# Minimal Intrusive Design
Before version 0.3.0, the plugin did not upload additional meta data to the remote.
~~Before version 0.3.0, the plugin did not upload additional meta data to the remote.~~
From and after version 0.3.0, the plugin just uploads minimal extra necessary meta data to the remote.
~~From version 0.3.0 ~ 0.3.40, the plugin just uploads minimal extra necessary meta data to the remote.~~
From version 0.4.1 and above, the plugin doesn't need to upload meta data, thanks to the sync algorithm upgrade.
## Benefits
@ -12,10 +14,14 @@ For example, it's possible for a user to manually upload a file to s3, and next
And it's also possible to combine another "sync-to-s3" solution (like, another software) on desktops, and this plugin on mobile devices, together.
## Necessity Of Uploading Extra Metadata
## ~~Necessity Of Uploading Extra Metadata from 0.3.0 ~ 0.3.40~~
The main issue comes from deletions (and renamings, which are actually interpreted as "deletion-then-creation").
~~The main issue comes from deletions (and renamings, which are actually interpreted as "deletion-then-creation").~~
If we don't upload any extra info to the remote, there's usually no way for the second device to know what files / folders have been deleted on the first device.
~~If we don't upload any extra info to the remote, there's usually no way for the second device to know what files / folders have been deleted on the first device.~~
To overcome this issue, from and after version 0.3.0, the plugin uploads extra metadata files `_remotely-save-metadata-on-remote.{json,bin}` to users' configured cloud services. Those files contain some info about what has been deleted on the first device, so that the second device can read the list to apply the deletions to itself. Some other necessary meta info would also be written into the extra files.
~~To overcome this issue, from and after version 0.3.0, the plugin uploads extra metadata files `_remotely-save-metadata-on-remote.{json,bin}` to users' configured cloud services. Those files contain some info about what has been deleted on the first device, so that the second device can read the list to apply the deletions to itself. Some other necessary meta info would also be written into the extra files.~~
## No uploading extra metadata from 0.4.1
Some information, including the previous successful sync status of each file, is kept locally instead.
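This works because deletions can be inferred by comparing the current local listing against the locally kept record of the last successful sync. A minimal sketch of the idea (hypothetical helper, not the plugin's actual code):

```ts
interface Entity {
  keyRaw: string; // the file or folder path
}

// If a key succeeded in the previous sync but is absent from the current
// local listing, it must have been deleted locally since that sync.
function detectLocalDeletions(localNow: Entity[], prevSync: Entity[]): string[] {
  const nowKeys = new Set(localNow.map((e) => e.keyRaw));
  return prevSync.map((e) => e.keyRaw).filter((k) => !nowKeys.has(k));
}
```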

View File

@ -0,0 +1,7 @@
# Sync Algorithm
- [v1](./v1/README.md)
- [v2](./v2/README.md)
- v3
- [intro doc for end users](./v3/intro.md)
- [design doc](./v3/design.md)

View File

@ -0,0 +1,4 @@
# Sync Algorithm V3
- [intro doc for end users](./intro.md)
- [design doc](./design.md)

View File

@ -0,0 +1,71 @@
# Sync Algorithm V3
Drafted on 20240117.
A much better sync algorithm: better at tracking deletions and better for sub-branching.
## Huge Thanks
Basically a combination of algorithm v2 + [syncrclone](https://github.com/Jwink3101/syncrclone/blob/master/docs/algorithm.md) + [rsinc](https://github.com/ConorWilliams/rsinc) + (some of rclone [bisync](https://rclone.org/bisync/)). All of the latter three are released under the MIT License, so there are no worries about the licenses.
## Features
Must have
1. true deletion detection
2. deletion protection (blocking) with a setting
3. transaction from the old algorithm
4. a user warning shows up: **the new algorithm needs all clients to be updated!** (deliberately corrupt the metadata file??)
5. filters
6. conflict warning
7. partial sync
Nice to have
1. true time and hash
2. conflict rename
## Description
We have _five_ input sources:
1. local all files
2. remote all files
3. _local previous succeeded sync history_
4. local deletions
5. remote deletions.
Init run, consuming remote deletions: change history data into the _local previous succeeded sync history_.
Later runs use the first, second, and third sources **only**.
The bidirectional table is adapted from syncrclone and rsinc. The incremental push-only / pull-only tables are further derived from the bidirectional table. The number inside each table cell is the decision branch in the code. (A sketch of how a cell is reached follows the tables.)
Bidirectional:
| local\remote | remote unchanged | remote modified | remote deleted | remote created |
| --------------- | ------------------ | ------------------------- | ------------------ | ------------------------- |
| local unchanged | (02/21) do nothing | (09) pull | (07) delete local | (??) conflict |
| local modified | (10) push | (16/17/18/19/20) conflict | (08) push | (??) conflict |
| local deleted | (04) delete remote | (05) pull | (01) clean history | (03) pull |
| local created | (??) conflict | (??) conflict | (06) push | (11/12/13/14/15) conflict |
Incremental push only:
| local\remote | remote unchanged | remote modified | remote deleted | remote created |
| --------------- | ---------------------------- | ---------------------------- | ---------------------- | ---------------------------- |
| local unchanged | (02/21) do nothing | **(26) conflict push** | **(32) conflict push** | (??) conflict |
| local modified | (10) push | **(25) conflict push** | (08) push | (??) conflict |
| local deleted | **(29) conflict do nothing** | **(30) conflict do nothing** | (01) clean history | **(28) conflict do nothing** |
| local created | (??) conflict | (??) conflict | (06) push | **(23) conflict push** |
Incremental pull only:
| local\remote | remote unchanged | remote modified | remote deleted | remote created |
| --------------- | ---------------------- | ---------------------- | ---------------------------- | ---------------------- |
| local unchanged | (02/21) do nothing | (09) pull | **(33) conflict do nothing** | (??) conflict |
| local modified | **(27) conflict pull** | **(24) conflict pull** | **(34) conflict do nothing** | (??) conflict |
| local deleted | **(35) conflict pull** | (05) pull | (01) clean history | (03) pull |
| local created | (??) conflict | (??) conflict | **(31) conflict do nothing** | **(22) conflict pull** |
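To make the tables concrete, here is a minimal sketch of how a cell of the bidirectional table is reached: classify each side against the previous successful sync, then look the pair up. All names are illustrative, not the plugin's actual API.

```ts
type SideStatus = "unchanged" | "modified" | "deleted" | "created";

interface Snapshot {
  mtime: number;
  size: number;
}

// Classify one side (local or remote) relative to the prev-sync snapshot.
function classify(now?: Snapshot, prev?: Snapshot): SideStatus {
  if (prev === undefined) {
    if (now === undefined) throw Error("key should not be under consideration");
    return "created"; // never successfully synced before
  }
  if (now === undefined) return "deleted";
  return now.mtime === prev.mtime && now.size === prev.size
    ? "unchanged"
    : "modified";
}

// A few representative cells of the bidirectional table above.
function decideBidirectional(local: SideStatus, remote: SideStatus): string {
  if (local === "unchanged" && remote === "modified") return "pull"; // (09)
  if (local === "modified" && remote === "unchanged") return "push"; // (10)
  if (local === "deleted" && remote === "deleted") return "clean history"; // (01)
  if (local === "modified" && remote === "modified") return "conflict"; // (16/17/18/19/20)
  // ...the remaining cells follow the tables, row by row
  return "conflict";
}
```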

View File

@ -0,0 +1,13 @@
# Introduction To Sync Algorithm V3
- [x] sync conflict: keep newer (see the sketch after this list)
- [x] sync conflict: keep larger
- [ ] sync conflict: keep both and rename
- [ ] sync conflict: show warning
- [x] deletion: true deletion status computation
- [x] meta data: no remote meta data any more
- [x] migration: old data auto transfer to new db (hopefully)
- [x] sync direction: incremental push only
- [x] sync direction: incremental pull only
- [x] sync protection: warning based on the threshold
- [ ] partial sync: better sync on save
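For the two implemented conflict actions, the surviving side can be picked with a comparison as simple as this sketch (illustrative names, not the plugin's actual API):

```ts
interface Candidate {
  mtime: number; // last modified time, milliseconds
  size: number; // bytes
}

type ConflictAction = "keep_newer" | "keep_larger";

// Decide which side of a conflicted file survives.
function resolveConflict(
  action: ConflictAction,
  local: Candidate,
  remote: Candidate
): "local" | "remote" {
  if (action === "keep_newer") {
    return local.mtime >= remote.mtime ? "local" : "remote";
  }
  return local.size >= remote.size ? "local" : "remote"; // keep_larger
}
```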

View File

@ -1,38 +0,0 @@
# Sync Algorithm V3
Drafted on 20240117.
A much better sync algorithm: better at tracking deletions and better for sub-branching.
## Huge Thanks
Basically a combination of algorithm v2 + [syncrclone](https://github.com/Jwink3101/syncrclone) + [rsinc](https://github.com/ConorWilliams/rsinc) + (some of rclone [bisync](https://rclone.org/bisync/)). All of the latter three are released under the MIT License, so there are no worries about the licenses.
## Features
Must have
1. true deletion detection
2. deletion protection (blocking) with a setting
3. transaction from the old algorithm
4. a user warning shows up: **the new algorithm needs all clients to be updated!** (deliberately corrupt the metadata file??)
5. filters
6. conflict warning
7. partial sync
Nice to have
1. true time and hash
2. conflict rename
## Description
We have _five_ input sources: local all files, remote all files, _local previous succeeded sync history_, local deletions, remote deletions.
Init run, consuming local deletions and remote deletions:
TBD
Later runs use the first, second, and third sources **only**.
TBD

View File

@ -1,7 +1,7 @@
{
"id": "remotely-save",
"name": "Remotely Save",
"version": "0.3.40",
"version": "0.4.1",
"minAppVersion": "0.13.21",
"description": "Yet another unofficial plugin allowing users to synchronize notes between local device and the cloud service.",
"author": "fyears",

View File

@ -1,7 +1,7 @@
{
"id": "remotely-save",
"name": "Remotely Save",
"version": "0.3.40",
"version": "0.4.1",
"minAppVersion": "0.13.21",
"description": "Yet another unofficial plugin allowing users to synchronize notes between local device and the cloud service.",
"author": "fyears",

View File

@ -1,6 +1,6 @@
{
"name": "remotely-save",
"version": "0.3.40",
"version": "0.4.1",
"description": "This is yet another sync plugin for Obsidian app.",
"scripts": {
"dev2": "node esbuild.config.mjs --watch",

View File

@ -83,6 +83,11 @@ export interface OnedriveConfig {
remoteBaseDir?: string;
}
export type SyncDirectionType =
| "bidirectional"
| "incremental_pull_only"
| "incremental_push_only";
export interface RemotelySavePluginSettings {
s3: S3Config;
webdav: WebdavConfig;
@ -94,16 +99,26 @@ export interface RemotelySavePluginSettings {
autoRunEveryMilliseconds?: number;
initRunAfterMilliseconds?: number;
syncOnSaveAfterMilliseconds?: number;
agreeToUploadExtraMetadata?: boolean;
concurrency?: number;
syncConfigDir?: boolean;
syncUnderscoreItems?: boolean;
lang?: LangTypeAndAuto;
agreeToUseSyncV3?: boolean;
skipSizeLargerThan?: number;
ignorePaths?: string[];
enableStatusBarInfo?: boolean;
deleteToWhere?: "system" | "obsidian";
conflictAction?: ConflictActionType;
howToCleanEmptyFolder?: EmptyFolderCleanType;
protectModifyPercentage?: number;
syncDirection?: SyncDirectionType;
/**
* @deprecated
*/
agreeToUploadExtraMetadata?: boolean;
/**
* @deprecated
@ -116,14 +131,6 @@ export interface RemotelySavePluginSettings {
logToDB?: boolean;
}
export interface RemoteItem {
key: string;
lastModified?: number;
size: number;
remoteType: SUPPORTED_SERVICES_TYPE;
etag?: string;
}
export const COMMAND_URI = "remotely-save";
export const COMMAND_CALLBACK = "remotely-save-cb";
export const COMMAND_CALLBACK_ONEDRIVE = "remotely-save-cb-onedrive";
@ -139,32 +146,76 @@ export interface UriParams {
// 80 days
export const OAUTH2_FORCE_EXPIRE_MILLISECONDS = 1000 * 60 * 60 * 24 * 80;
type DecisionTypeForFile =
| "skipUploading" // special, mtimeLocal === mtimeRemote
| "uploadLocalDelHistToRemote" // "delLocalIfExists && delRemoteIfExists && cleanLocalDelHist && uploadLocalDelHistToRemote"
| "keepRemoteDelHist" // "delLocalIfExists && delRemoteIfExists && cleanLocalDelHist && keepRemoteDelHist"
| "uploadLocalToRemote" // "skipLocal && uploadLocalToRemote && cleanLocalDelHist && cleanRemoteDelHist"
| "downloadRemoteToLocal"; // "downloadRemoteToLocal && skipRemote && cleanLocalDelHist && cleanRemoteDelHist"
export type EmptyFolderCleanType = "skip" | "clean_both";
type DecisionTypeForFileSize =
| "skipUploadingTooLarge"
| "skipDownloadingTooLarge"
| "skipUsingLocalDelTooLarge"
| "skipUsingRemoteDelTooLarge"
| "errorLocalTooLargeConflictRemote"
| "errorRemoteTooLargeConflictLocal";
export type ConflictActionType = "keep_newer" | "keep_larger" | "rename_both";
type DecisionTypeForFolder =
| "createFolder"
| "uploadLocalDelHistToRemoteFolder"
| "keepRemoteDelHistFolder"
| "skipFolder";
export type DecisionTypeForMixedEntity =
| "only_history"
| "equal"
| "local_is_modified_then_push"
| "remote_is_modified_then_pull"
| "local_is_created_then_push"
| "remote_is_created_then_pull"
| "local_is_deleted_thus_also_delete_remote"
| "remote_is_deleted_thus_also_delete_local"
| "conflict_created_then_keep_local"
| "conflict_created_then_keep_remote"
| "conflict_created_then_keep_both"
| "conflict_created_then_do_nothing"
| "conflict_modified_then_keep_local"
| "conflict_modified_then_keep_remote"
| "conflict_modified_then_keep_both"
| "folder_existed_both_then_do_nothing"
| "folder_existed_local_then_also_create_remote"
| "folder_existed_remote_then_also_create_local"
| "folder_to_be_created"
| "folder_to_skip"
| "folder_to_be_deleted";
export type DecisionType =
| DecisionTypeForFile
| DecisionTypeForFileSize
| DecisionTypeForFolder;
/**
* uniform representation
* everything should be flat and primitive, so that we can copy it around.
*/
export interface Entity {
key?: string;
keyEnc?: string;
keyRaw: string;
mtimeCli?: number;
mtimeCliFmt?: string;
mtimeSvr?: number;
mtimeSvrFmt?: string;
prevSyncTime?: number;
prevSyncTimeFmt?: string;
size?: number; // might be unknown or to be filled
sizeEnc?: number;
sizeRaw: number;
hash?: string;
etag?: string;
}
export interface UploadedType {
entity: Entity;
mtimeCli?: number;
}
/**
* A replacement for FileOrFolderMixedState
*/
export interface MixedEntity {
key: string;
local?: Entity;
prevSync?: Entity;
remote?: Entity;
decisionBranch?: number;
decision?: DecisionTypeForMixedEntity;
conflictAction?: ConflictActionType;
}
/**
* @deprecated
*/
export interface FileOrFolderMixedState {
key: string;
existLocal?: boolean;
@ -179,7 +230,7 @@ export interface FileOrFolderMixedState {
sizeRemoteEnc?: number;
changeRemoteMtimeUsingMapping?: boolean;
changeLocalMtimeUsingMapping?: boolean;
decision?: DecisionType;
decision?: string; // old DecisionType is deleted, fallback to string
decisionBranch?: number;
syncDone?: "done";
remoteEncryptedKey?: string;
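As an illustration (made-up values, not from the source) of how the three per-key snapshots combine into a `MixedEntity` before a decision is assigned:

```ts
// local changed after the previous successful sync; remote did not,
// so the expected decision is "local_is_modified_then_push".
const merged: MixedEntity = {
  key: "notes/hello.md",
  local: { keyRaw: "notes/hello.md", mtimeCli: 1708400000000, sizeRaw: 120 },
  prevSync: { keyRaw: "notes/hello.md", mtimeCli: 1708300000000, sizeRaw: 100 },
  remote: { keyRaw: "notes/hello.md", mtimeSvr: 1708300000000, sizeRaw: 100 },
  decision: "local_is_modified_then_push",
};
```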

View File

@ -3,8 +3,6 @@ import { reverseString } from "./misc";
import type { RemotelySavePluginSettings } from "./baseTypes";
import { log } from "./moreOnLog";
const DEFAULT_README: string =
"The file contains sensitive info, so DO NOT take screenshot of, copy, or share it to anyone! It's also generated automatically, so do not edit it manually.";
@ -19,10 +17,10 @@ interface MessyConfigType {
export const messyConfigToNormal = (
x: MessyConfigType | RemotelySavePluginSettings | null | undefined
): RemotelySavePluginSettings | null | undefined => {
// log.debug("loading, original config on disk:");
// log.debug(x);
// console.debug("loading, original config on disk:");
// console.debug(x);
if (x === null || x === undefined) {
log.debug("the messy config is null or undefined, skip");
console.debug("the messy config is null or undefined, skip");
return x as any;
}
if ("readme" in x && "d" in x) {
@ -35,12 +33,12 @@ export const messyConfigToNormal = (
}) as Buffer
).toString("utf-8")
);
// log.debug("loading, parsed config is:");
// log.debug(y);
// console.debug("loading, parsed config is:");
// console.debug(y);
return y;
} else {
// return as is
// log.debug("loading, parsed config is the same");
// console.debug("loading, parsed config is the same");
return x;
}
};
@ -52,7 +50,7 @@ export const normalConfigToMessy = (
x: RemotelySavePluginSettings | null | undefined
) => {
if (x === null || x === undefined) {
log.debug("the normal config is null or undefined, skip");
console.debug("the normal config is null or undefined, skip");
return x;
}
const y = {
@ -63,7 +61,7 @@ export const normalConfigToMessy = (
})
),
};
// log.debug("encoding, encoded config is:");
// log.debug(y);
// console.debug("encoding, encoded config is:");
// console.debug(y);
return y;
};
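A hypothetical round trip with the two helpers above (assuming `settings` is a `RemotelySavePluginSettings` value in scope):

```ts
// Settings are stored on disk in an obfuscated ("messy") form...
const onDisk = normalConfigToMessy(settings); // { readme: "...", d: <encoded> }
// ...and decoded back into a normal settings object on load.
const restored = messyConfigToNormal(onDisk);
```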

View File

@ -11,8 +11,6 @@ import {
FileOrFolderMixedState,
} from "./baseTypes";
import { log } from "./moreOnLog";
const turnSyncPlanToTable = (record: string) => {
const syncPlan: SyncPlanType = JSON.parse(record);
const { ts, tsFmt, remoteType, mixedStates } = syncPlan;
@ -77,7 +75,7 @@ export const exportVaultSyncPlansToFiles = async (
vault: Vault,
vaultRandomID: string
) => {
log.info("exporting");
console.info("exporting");
await mkdirpInVault(DEFAULT_DEBUG_FOLDER, vault);
const records = await readAllSyncPlanRecordTextsByVault(db, vaultRandomID);
let md = "";
@ -93,5 +91,5 @@ export const exportVaultSyncPlansToFiles = async (
await vault.create(filePath, md, {
mtime: ts,
});
log.info("finish exporting");
console.info("finish exporting");
};

View File

@ -1,8 +1,6 @@
import { base32, base64url } from "rfc4648";
import { bufferToArrayBuffer, hexStringToTypedArray } from "./misc";
import { log } from "./moreOnLog";
const DEFAULT_ITER = 20000;
// base32.stringify(Buffer.from('Salted__'))

View File

@ -7,8 +7,6 @@ import {
RemotelySavePluginSettings,
} from "./baseTypes";
import { log } from "./moreOnLog";
export const exportQrCodeUri = async (
settings: RemotelySavePluginSettings,
currentVaultName: string,
@ -22,7 +20,7 @@ export const exportQrCodeUri = async (
const vault = encodeURIComponent(currentVaultName);
const version = encodeURIComponent(pluginVersion);
const rawUri = `obsidian://${COMMAND_URI}?func=settings&version=${version}&vault=${vault}&data=${data}`;
// log.info(uri)
// console.info(uri)
const imgUri = await QRCode.toDataURL(rawUri);
return {
rawUri,
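A hypothetical call site in the settings UI (argument names assumed from the signature above; `containerEl` is an Obsidian element in scope):

```ts
const { rawUri, imgUri } = await exportQrCodeUri(
  settings, // RemotelySavePluginSettings
  app.vault.getName(), // current vault name
  manifest.version // plugin version string
);
const img = containerEl.createEl("img");
img.src = imgUri; // data: URL produced by QRCode.toDataURL above
```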

View File

@ -12,8 +12,8 @@
"syncrun_step2": "2/8 Starting to fetch remote meta data.",
"syncrun_step3": "3/8 Checking password correct or not.",
"syncrun_passworderr": "Something goes wrong while checking password.",
"syncrun_step4": "4/8 Trying to fetch extra meta data from remote.",
"syncrun_step5": "5/8 Starting to fetch local meta data.",
"syncrun_step4": "4/8 Starting to fetch local meta data.",
"syncrun_step5": "5/8 Starting to fetch local prev sync data.",
"syncrun_step6": "6/8 Starting to generate sync plan.",
"syncrun_step7": "7/8 Remotely Save Sync data exchanging!",
"syncrun_step7skip": "7/8 Remotely Save real sync is skipped in dry run mode.",
@ -23,6 +23,7 @@
"syncrun_shortstep2skip": "2/2 Remotely Save real sync is skipped in dry run mode.",
"syncrun_shortstep2": "2/2 Remotely Save finish!",
"syncrun_abort": "{{manifestID}}-{{theDate}}: abort sync, triggerSource={{triggerSource}}, error while {{syncStatus}}",
"syncrun_abort_protectmodifypercentage": "Abort! you set changing files >= {{protectModifyPercentage}}% is not allowed but {{realModifyDeleteCount}}/{{allFilesCount}}={{percent}}% is going to be modified or deleted! If you are sure you want this sync, please adjust the allowed ratio in the settings.",
"protocol_saveqr": "New not-oauth2 settings for {{manifestName}} saved. Reopen the plugin Settings to the effect.",
"protocol_callbacknotsupported": "Your uri call a callback that's not supported yet: {{params}}",
"protocol_dropbox_connecting": "Connecting to Dropbox...\nPlease DO NOT close this modal.",
@ -106,10 +107,6 @@
"modal_sizesconflict_desc": "You've set skipping files larger than {{thresholdMB}} MB ({{thresholdBytes}} bytes).\nBut the following files have sizes larger than the threshold on one side, and sizes smaller than the threshold on the other side.\nTo avoid unexpected overwriting or deleting, the plugin stops, and you have to manually deal with at least one side of the files.",
"modal_sizesconflict_copybutton": "Click to copy all the below sizes conflicts info",
"modal_sizesconflict_copynotice": "All the sizes conflicts info have been copied to the clipboard!",
"modal_logtohttpserver_title": "Log To HTTP(S) Server Is DANGEROUS!",
"modal_logtohttpserver_desc": "All your sensitive logging information will be posted to the HTTP(S) server without any authentications!!!!!\nPlease make sure you trust the HTTP(S) server, and it's better to setup a HTTPS one instead of HTTP one.\nIt's for debugging purposes only, especially on mobile.",
"modal_logtohttpserver_secondconfirm": "I know it's dangerous, and insist, and am willing to bear all possible losses.",
"modal_logtohttpserver_notice": "OK.",
"settings_basic": "Basic Settings",
"settings_password": "Encryption Password",
"settings_password_desc": "Password for E2E encryption. Empty for no password. You need to click \"Confirm\". Attention: the password and other info are saved locally.",
@ -249,6 +246,24 @@
"settings_deletetowhere_desc": "Which trash should the plugin put the files into while deleting?",
"settings_deletetowhere_system_trash": "system trash (default)",
"settings_deletetowhere_obsidian_trash": "Obsidian .trash folder",
"settings_conflictaction": "Action For Conflict",
"settings_conflictaction_desc": "If a file is created or modified on both side since last update, it's a conflict event. How to deal with it? This only works for bidirectional sync.",
"settings_conflictaction_keep_newer": "newer version survives (default)",
"settings_conflictaction_keep_larger": "larger size version survives",
"settings_cleanemptyfolder": "Action For Empty Folders",
"settings_cleanemptyfolder_desc": "The sync algorithm majorly deals with files, so you need to specify how to deal with empty folders.",
"settings_cleanemptyfolder_skip": "leave them as is (default)",
"settings_cleanemptyfolder_clean_both": "delete local and remote",
"settings_protectmodifypercentage": "Abort Sync If Modification Above Percentage",
"settings_protectmodifypercentage_desc": "Abort the sync if more than n% of the files are going to be deleted / modified. Useful to protect users' files from unexpected modifications. You can set to 100 to disable the protection, or set to 0 to always block the sync.",
"settings_protectmodifypercentage_000_desc": "0 (always block)",
"settings_protectmodifypercentage_050_desc": "50 (default)",
"settings_protectmodifypercentage_100_desc": "100 (disable the protection)",
"setting_syncdirection": "Sync Direction",
"setting_syncdirection_desc": "Which direction should the plugin sync to? Please be aware that only CHANGED files (based on time and size) are synced regardless any option.",
"setting_syncdirection_bidirectional_desc": "Bidirectional (default)",
"setting_syncdirection_incremental_push_only_desc": "Incremental Push Only (aka backup mode)",
"setting_syncdirection_incremental_pull_only_desc": "Incremental Pull Only",
"settings_importexport": "Import and Export Partial Settings",
"settings_export": "Export",
"settings_export_desc": "Export not-oauth2 settings by generating a qrcode.",
@ -256,12 +271,14 @@
"settings_import": "Import",
"settings_import_desc": "You should open a camera or scan-qrcode app, to manually scan the QR code.",
"settings_debug": "Debug",
"settings_debuglevel": "Alter Console Log Level",
"settings_debuglevel_desc": "By default the log level is \"info\". You can change to \"debug\" to get verbose information in console.",
"settings_debuglevel": "Alter Notice Level",
"settings_debuglevel_desc": "By default the notice level is \"info\". You can change to \"debug\" to get verbose information while syncing.",
"settings_outputsettingsconsole": "Output Current Settings From Disk To Console",
"settings_outputsettingsconsole_desc": "The settings save on disk in encoded. Click this to see the decoded settings in console.",
"settings_outputsettingsconsole_button": "Output",
"settings_outputsettingsconsole_notice": "Finished outputing in console.",
"settings_viewconsolelog": "View Console Log",
"settings_viewconsolelog_desc": "On desktop, please press \"ctrl+shift+i\" or \"cmd+shift+i\" to view the log. On mobile, please install the third-party plugin <a href='https://obsidian.md/plugins?search=Logstravaganza'>Logstravaganza</a> to export the console log to a note.",
"settings_syncplans": "Export Sync Plans",
"settings_syncplans_desc": "Sync plans are created every time after you trigger sync and before the actual sync. Useful to know what would actually happen in those sync. Click the button to export sync plans.",
"settings_syncplans_button_json": "Export",
@ -270,13 +287,10 @@
"settings_delsyncplans_desc": "Delete sync plans history in DB.",
"settings_delsyncplans_button": "Delete Sync Plans History",
"settings_delsyncplans_notice": "Sync plans history (in DB) deleted.",
"settings_logtohttpserver": "Log To HTTP(S) Server Temporarily",
"settings_logtohttpserver_desc": "It's very dangerous and please use the function with greate cautions!!!!! It will temporarily allow sending console loggings to HTTP(S) server.",
"settings_logtohttpserver_reset_notice": "Your input doesn't starts with \"http(s)\". Already removed the setting of logging to HTTP(S) server.",
"settings_delsyncmap": "Delete Sync Mappings History In DB",
"settings_delsyncmap_desc": "Sync mappings history stores the actual LOCAL last modified time of the REMOTE objects. Clearing it may cause unnecessary data exchanges in next-time sync. Click the button to delete sync mappings history in DB.",
"settings_delsyncmap_button": "Delete Sync Mappings",
"settings_delsyncmap_notice": "Sync mappings history (in local DB) deleted",
"settings_delprevsync": "Delete Prev Sync Details In DB",
"settings_delprevsync_desc": "The sync algorithm keeps the previous successful sync information in DB to determine the file changes. If you want to ignore them so that all files are treated newly created, you can delete the prev sync info here.",
"settings_delprevsync_button": "Delete Prev Sync Details",
"settings_delprevsync_notice": "Previous sync history (in local DB) deleted",
"settings_outputbasepathvaultid": "Output Vault Base Path And Randomly Assigned ID",
"settings_outputbasepathvaultid_desc": "For debugging purposes.",
"settings_outputbasepathvaultid_button": "Output",
@ -284,10 +298,10 @@
"settings_resetcache_desc": "Reset local internal caches/databases (for debugging purposes). You would want to reload the plugin after resetting this. This option will not empty the {s3, password...} settings.",
"settings_resetcache_button": "Reset",
"settings_resetcache_notice": "Local internal cache/databases deleted. Please manually reload the plugin.",
"syncalgov2_title": "Remotely Save has a better sync algorithm",
"syncalgov2_texts": "Welcome to use Remotely Save!\nFrom version 0.3.0, a new algorithm has been developed, but it needs uploading extra meta data files _remotely-save-metadata-on-remote.{json,bin} to YOUR configured cloud destinations, besides your notes.\nSo that, for example, the second device can know that what files/folders have been deleted on the first device by reading those files.\nIf you agree, plase click the button \"Agree\", and enjoy the plugin! AND PLEASE REMEMBER TO BACKUP YOUR VAULT FIRSTLY!\nIf you do not agree, you should stop using the current and later versions of Remotely Save. You could consider manually install the old version 0.2.14 which uses old algorithm and does not upload any extra meta data files. By clicking the \"Do Not Agree\" button, the plugin will unload itself, and you need to manually disable it in Obsidian settings.",
"syncalgov2_button_agree": "Agree",
"syncalgov2_button_disagree": "Do Not Agree",
"official_notice_2024_first_party": "Plugin Remotely-Save is back to the party and get a HUGE update!🎉🎉🎉 Try it yourself or see the release note on https://github.com/remotely-save/remotely-save/releases."
"syncalgov3_title": "Remotely Save has HUGE updates on the sync algorithm",
"syncalgov3_texts": "Welcome to use Remotely Save!\nFrom this version, a new algorithm has been developed:\n<ul><li>More robust deletion sync,</li><li>minimal conflict handling,</li><li>no meta data uploaded any more,</li><li>deletion / modification protection,</li><li>backup mode</li><li>...</li></ul>\nStay tune for more! A full introduction is in the <a href='https://github.com/remotely-save/remotely-save/tree/master/docs/sync_algorithm/v3/intro.md'>doc website</a>.\nIf you agree to use this, please read and check two checkboxes then click the \"Agree\" button, and enjoy the plugin!\nIf you do not agree, please click the \"Do Not Agree\" button, the plugin will unload itself.\nAlso, please consider <a href='https://github.com/remotely-save/remotely-save'>visit the GitHub repo and star ⭐ it</a>! Or even <a href='https://github.com/remotely-save/donation'>buy me a coffee</a>. Your support is very important to me! Thanks!",
"syncalgov3_checkbox_manual_backup": "I will backup my vault manually firstly.",
"syncalgov3_checkbox_requiremultidevupdate": "I understand I need to update the plugin ACROSS ALL DEVICES to make them work properly.",
"syncalgov3_button_agree": "Agree",
"syncalgov3_button_disagree": "Do Not Agree"
}

View File

@ -12,8 +12,8 @@
"syncrun_step2": "2/8 正在获取远端的元数据。",
"syncrun_step3": "3/8 正在检查密码正确与否。",
"syncrun_passworderr": "检查密码时候出错。",
"syncrun_step4": "4/8 正在获取远端的额外的元数据。",
"syncrun_step5": "5/8 正在获取本地的元数据。",
"syncrun_step4": "4/8 正在获取本地的元数据。",
"syncrun_step5": "5/8 正在获取本地上一次同步的元数据。",
"syncrun_step6": "6/8 正在生成同步计划。",
"syncrun_step7": "7/8 Remotely Save 开始发生数据交换!",
"syncrun_step7skip": "7/8 Remotely Save 在空跑模式,跳过实际数据交换步骤。",
@ -23,6 +23,7 @@
"syncrun_shortstep2skip": "2/2 Remotely Save 在空跑模式,跳过实际数据交换步骤。",
"syncrun_shortstep2": "2/2 Remotely Save 已完成同步!",
"syncrun_abort": "{{manifestID}}-{{theDate}}:中断同步,同步来源={{triggerSource}},出错阶段={{syncStatus}}",
"syncrun_abort_protectmodifypercentage": "中断同步!您设置了不允许 >= {{protectModifyPercentage}}% 的变更,但是现在 {{realModifyDeleteCount}}/{{allFilesCount}}={{percent}}% 的文件会被修改或删除!如果您确认这次同步是您想要的,那么请在设置里修改允许比例。",
"protocol_saveqr": " {{manifestName}} 新的非 oauth2 设置保存完成。请重启插件设置页使之生效。",
"protocol_callbacknotsupported": "您的 uri callback 暂不支持: {{params}}",
"protocol_dropbox_connecting": "正在连接 Dropbox……\n请不要关闭此弹窗。",
@ -106,10 +107,6 @@
"modal_sizesconflict_desc": "您设置了跳过同步大于 {{thresholdMB}} MB{{thresholdBytes}} bytes的文件。\n但是以下文件的大小在一端大于阈值在另一端则小于阈值。\n为了避免意外的覆盖或删除插件停止了运作您需要手动处理至少一端的文件。",
"modal_sizesconflict_copybutton": "点击以复制以下所有文件大小冲突信息",
"modal_sizesconflict_copynotice": "所有的文件大小冲突信息,已被复制到剪贴板!",
"modal_logtohttpserver_title": "转发终端日志到 HTTP 服务器,此操作很危险!",
"modal_logtohttpserver_desc": "所有您的带敏感信息的终端日志,都会被转发到 HTTP(S) 服务器,没有任何鉴权!!!!!\n请确保您信任对应的服务器最好设置为 HTTPS 而不是 HTTP。\n仅仅用于 debug 用途,例如手机上的 debug。",
"modal_logtohttpserver_secondconfirm": "我知道很危险,坚持要设置,愿意承担所有可能损失。",
"modal_logtohttpserver_notice": "已设置。",
"settings_basic": "基本设置",
"settings_password": "密码",
"settings_password_desc": "端到端加密的密码。不填写则代表没密码。您需要点击“确认”来修改。注意:密码和其它信息都会在本地保存。",
@ -249,6 +246,24 @@
"settings_deletetowhere_desc": "插件触发删除操作时候,删除到哪里?",
"settings_deletetowhere_system_trash": "系统回收站(默认)",
"settings_deletetowhere_obsidian_trash": "Obsidian .trash 文件夹",
"settings_conflictaction": "处理冲突",
"settings_conflictaction_desc": "如果一个文件,在本地和服务器都被创建或者修改了,那么这就是一个“冲突”情况。如何处理?这个设置只在双向同步时候生效。",
"settings_conflictaction_keep_newer": "保留最后修改的版本(默认)",
"settings_conflictaction_keep_larger": "保留文件体积较大的版本",
"settings_cleanemptyfolder": "处理空文件夹",
"settings_cleanemptyfolder_desc": "同步算法主要是针对文件处理的,您要要手动指定空文件夹如何处理。",
"settings_cleanemptyfolder_skip": "跳过处理空文件夹(默认)",
"settings_cleanemptyfolder_clean_both": "删除本地和服务器的空文件夹",
"settings_protectmodifypercentage": "如果修改超过百分比则中止同步",
"settings_protectmodifypercentage_desc": "如果算法检测到超过 n% 的文件会被修改或删除,则中止同步。从而可以保护用户的文件免受预料之外的修改。您可以设置为 100 而去除此保护,也可以设置为 0 总是强制中止所有同步。",
"settings_protectmodifypercentage_000_desc": "0总是强制中止",
"settings_protectmodifypercentage_050_desc": "50默认值",
"settings_protectmodifypercentage_100_desc": "100去除此保护",
"setting_syncdirection": "同步方向",
"setting_syncdirection_desc": "插件应该向哪里同步?注意每个选项都是只有修改了的文件(基于修改时间和大小判断)才会触发同步动作。",
"setting_syncdirection_bidirectional_desc": "双向同步(默认)",
"setting_syncdirection_incremental_push_only_desc": "只增量推送(也即:备份模式)",
"setting_syncdirection_incremental_pull_only_desc": "只增量拉取",
"settings_importexport": "导入导出部分设置",
"settings_export": "导出",
"settings_export_desc": "用 QR 码导出非 oauth2 的设置信息。",
@ -256,12 +271,14 @@
"settings_import": "导入",
"settings_import_desc": "您需要使用系统拍摄 app 或者扫描 QR 码的app来扫描对应的 QR 码。",
"settings_debug": "调试",
"settings_debuglevel": "修改终端输出的 level",
"settings_debuglevel_desc": "默认值为 \"info\"。您可以改为 \"debug\" 从而在终端里获取更多信息。",
"settings_debuglevel": "修改同步提示信息",
"settings_debuglevel_desc": "默认值为 \"info\"。您可以改为 \"debug\" 从而在同步时候里获取更多信息。",
"settings_outputsettingsconsole": "读取硬盘上的设置文件输出到终端",
"settings_outputsettingsconsole_desc": "硬盘上的设置文件是编码过的,点击这里从而解码并输出到终端。",
"settings_outputsettingsconsole_button": "输出",
"settings_outputsettingsconsole_notice": "已输出到终端",
"settings_viewconsolelog": "查看终端输出",
"settings_viewconsolelog_desc": "电脑上输入“ctrl+shift+i”或“cmd+shift+i”来查看终端输出。手机上安装第三方插件 <a href='https://obsidian.md/plugins?search=Logstravaganza'>Logstravaganza</a> 来导出终端输出到一篇笔记上。",
"settings_syncplans": "导出同步计划",
"settings_syncplans_desc": "每次您启动同步,并在实际上传下载前,插件会生成同步计划。它可以使您知道每次同步发生了什么。点击按钮可以导出同步计划。",
"settings_syncplans_button_json": "导出",
@ -270,13 +287,10 @@
"settings_delsyncplans_desc": "删除数据库里的同步计划历史。",
"settings_delsyncplans_button": "删除同步计划历史",
"settings_delsyncplans_notice": "(数据库里的)同步计划已被删除。",
"settings_logtohttpserver": "临时设定终端日志实时转发到 HTTP(S) 服务器。",
"settings_logtohttpserver_desc": "非常危险,谨慎行动!!!!!临时设定终端日志实时转发到 HTTP(S) 服务器。",
"settings_logtohttpserver_reset_notice": "您的输入不是“http(s)”开头的。已移除了终端日志转发到 HTTP(S) 服务器的设定。",
"settings_delsyncmap": "删除数据库里的同步映射历史",
"settings_delsyncmap_desc": "同步映射历史存储了本地真正的最后修改时间和远程文件时间的映射。删除之可能会导致下一次同步时发生不必要的数据交换。点击按钮删除数据库里的同步映射历史。",
"settings_delsyncmap_button": "删除同步映射历史",
"settings_delsyncmap_notice": "(本地数据库里的)同步映射历史已被删除。",
"settings_delprevsync": "删除数据库里的上次同步明细",
"settings_delprevsync_desc": "同步算法需要上次成功同步的信息来决定文件变更,这个信息保存在本地的数据库里。如果您想忽略这些信息从而所有文件都被视为新创建的话,可以在此删除之前的信息。",
"settings_delprevsync_button": "删除上次同步明细",
"settings_delprevsync_notice": "(本地数据库里的)上次同步明细已被删除。",
"settings_outputbasepathvaultid": "输出资料库对应的位置和随机分配的 ID",
"settings_outputbasepathvaultid_desc": "用于调试。",
"settings_outputbasepathvaultid_button": "输出",
@ -284,10 +298,10 @@
"settings_resetcache_desc": "(出于调试原因)重设本地缓存和数据库。您需要在重设之后重新载入此插件。本重设不会删除 s3密码……等设定。",
"settings_resetcache_button": "重设",
"settings_resetcache_notice": "本地同步缓存和数据库已被删除。请手动重新载入此插件。",
"syncalgov2_title": "Remotely Save 的同步算法得到优化",
"syncalgov2_texts": "欢迎使用 Remotely Save!\n从版本 0.3.0 开始,它带来了新的同步算法,但是,除了您的笔记之外,它还需要上传额外的带有元信息的文件 _remotely-save-metadata-on-remote.{json,bin} 到您的云服务目的地上。\n从而比如说通过读取这些信息另一台设备可以知道什么文件或文件夹在第一台设备上被删除了。\n如果您同意此策略请点击按钮 \"同意\"然后开始享用此插件且特别要注意使用插件之前请首先备份好您的库Vault\n如果您不同意此策略您应该停止使用此版本和之后版本的 Remotely Save。您可以考虑手动安装旧版 0.2.14,它使用旧的同步算法,并不上传额外元信息文件。点击 \"不同意\" 之后插件会自动停止运行unload然后您需要 Obsidian 设置里手动停用disable此插件。",
"syncalgov2_button_agree": "同意",
"syncalgov2_button_disagree": "不同意",
"official_notice_2024_first_party": "插件 Remotely-Save 回来了,更新了一大堆功能!🎉🎉🎉请自行使用,或参阅更新文档: https://github.com/remotely-save/remotely-save/releases 。"
"syncalgov3_title": "Remotely Save 的同步算法有重大更新",
"syncalgov3_texts": "欢迎使用 Remotely Save\n从这个版本开始插件更新了同步算法\n<ul><li>更稳健的删除同步</li><li>引入冲突处理</li><li>避免上传元数据</li><li>修改删除保护</li><li>备份模式</li><li>……</li></ul>\n敬请期待更多更新详细介绍请参阅<a href='https://github.com/remotely-save/remotely-save/tree/master/docs/sync_algorithm/v3/intro.md'>文档网站</a>。\n如果您同意使用新版本请阅读和勾选两个勾选框然后点击“同意”按钮开始使用插件吧\n如果您不同意请点击“不同意”按钮插件将自动停止运行unload。\n此外请考虑<a href='https://github.com/remotely-save/remotely-save'>访问 GitHub 页面然后点赞 ⭐</a>!您的支持对我十分重要!谢谢!",
"syncalgov3_checkbox_manual_backup": "我将会首先手动备份我的库Vault",
"syncalgov3_checkbox_requiremultidevupdate": "我理解,我需要在所有设备上都更新此插件使之正常运行。",
"syncalgov3_button_agree": "同意",
"syncalgov3_button_disagree": "不同意"
}

View File

@ -12,8 +12,8 @@
"syncrun_step2": "2/8 正在獲取遠端的元資料。",
"syncrun_step3": "3/8 正在檢查密碼正確與否。",
"syncrun_passworderr": "檢查密碼時候出錯。",
"syncrun_step4": "4/8 正在獲取遠端的額外的元資料。",
"syncrun_step5": "5/8 正在獲取本地的元資料。",
"syncrun_step4": "4/8 正在獲取本地的元資料。",
"syncrun_step5": "5/8 正在獲取本地上一次同步的元資料。",
"syncrun_step6": "6/8 正在生成同步計劃。",
"syncrun_step7": "7/8 Remotely Save 開始發生資料交換!",
"syncrun_step7skip": "7/8 Remotely Save 在空跑模式,跳過實際資料交換步驟。",
@ -23,6 +23,7 @@
"syncrun_shortstep2skip": "2/2 Remotely Save 在空跑模式,跳過實際資料交換步驟。",
"syncrun_shortstep2": "2/2 Remotely Save 已完成同步!",
"syncrun_abort": "{{manifestID}}-{{theDate}}:中斷同步,同步來源={{triggerSource}},出錯階段={{syncStatus}}",
"syncrun_abort_protectmodifypercentage": "中斷同步!您設定了不允許 >= {{protectModifyPercentage}}% 的變更,但是現在 {{realModifyDeleteCount}}/{{allFilesCount}}={{percent}}% 的檔案會被修改或刪除!如果您確認這次同步是您想要的,那麼請在設定裡修改允許比例。",
"protocol_saveqr": " {{manifestName}} 新的非 oauth2 設定儲存完成。請重啟外掛設定頁使之生效。",
"protocol_callbacknotsupported": "您的 uri callback 暫不支援: {{params}}",
"protocol_dropbox_connecting": "正在連線 Dropbox……\n請不要關閉此彈窗。",
@ -106,10 +107,6 @@
"modal_sizesconflict_desc": "您設定了跳過同步大於 {{thresholdMB}} MB{{thresholdBytes}} bytes的檔案。\n但是以下檔案的大小在一端大於閾值在另一端則小於閾值。\n為了避免意外的覆蓋或刪除外掛停止了運作您需要手動處理至少一端的檔案。",
"modal_sizesconflict_copybutton": "點選以複製以下所有檔案大小衝突資訊",
"modal_sizesconflict_copynotice": "所有的檔案大小衝突資訊,已被複制到剪貼簿!",
"modal_logtohttpserver_title": "轉發終端日誌到 HTTP 伺服器,此操作很危險!",
"modal_logtohttpserver_desc": "所有您的帶敏感資訊的終端日誌,都會被轉發到 HTTP(S) 伺服器,沒有任何鑑權!!!!!\n請確保您信任對應的伺服器最好設定為 HTTPS 而不是 HTTP。\n僅僅用於 debug 用途,例如手機上的 debug。",
"modal_logtohttpserver_secondconfirm": "我知道很危險,堅持要設定,願意承擔所有可能損失。",
"modal_logtohttpserver_notice": "已設定。",
"settings_basic": "基本設定",
"settings_password": "密碼",
"settings_password_desc": "端到端加密的密碼。不填寫則代表沒密碼。您需要點選“確認”來修改。注意:密碼和其它資訊都會在本地儲存。",
@ -249,6 +246,24 @@
"settings_deletetowhere_desc": "外掛觸發刪除操作時候,刪除到哪裡?",
"settings_deletetowhere_system_trash": "系統回收站(預設)",
"settings_deletetowhere_obsidian_trash": "Obsidian .trash 資料夾",
"settings_conflictaction": "處理衝突",
"settings_conflictaction_desc": "如果一個檔案,在本地和伺服器都被建立或者修改了,那麼這就是一個“衝突”情況。如何處理?這個設定只在雙向同步時候生效。",
"settings_conflictaction_keep_newer": "保留最後修改的版本(預設)",
"settings_conflictaction_keep_larger": "保留檔案體積較大的版本",
"settings_cleanemptyfolder": "處理空資料夾",
"settings_cleanemptyfolder_desc": "同步演算法主要是針對檔案處理的,您需要手動指定空資料夾如何處理。",
"settings_cleanemptyfolder_skip": "跳過處理空資料夾(預設)",
"settings_cleanemptyfolder_clean_both": "刪除本地和伺服器的空資料夾",
"settings_protectmodifypercentage": "如果修改超過百分比則中止同步",
"settings_protectmodifypercentage_desc": "如果演算法檢測到超過 n% 的檔案會被修改或刪除,則中止同步。從而可以保護使用者的檔案免受預料之外的修改。您可以設定為 100 而去除此保護,也可以設定為 0 總是強制中止所有同步。",
"settings_protectmodifypercentage_000_desc": "0總是強制中止",
"settings_protectmodifypercentage_050_desc": "50預設值",
"settings_protectmodifypercentage_100_desc": "100去除此保護",
"setting_syncdirection": "同步方向",
"setting_syncdirection_desc": "外掛應該向哪裡同步?注意每個選項都是隻有修改了的檔案(基於修改時間和大小判斷)才會觸發同步動作。",
"setting_syncdirection_bidirectional_desc": "雙向同步(預設)",
"setting_syncdirection_incremental_push_only_desc": "只增量推送(也即:備份模式)",
"setting_syncdirection_incremental_pull_only_desc": "只增量拉取",
"settings_importexport": "匯入匯出部分設定",
"settings_export": "匯出",
"settings_export_desc": "用 QR 碼匯出非 oauth2 的設定資訊。",
@ -256,12 +271,14 @@
"settings_import": "匯入",
"settings_import_desc": "您需要使用系統拍攝 app 或者掃描 QR 碼的app來掃描對應的 QR 碼。",
"settings_debug": "除錯",
"settings_debuglevel": "修改終端輸出的 level",
"settings_debuglevel_desc": "預設值為 \"info\"。您可以改為 \"debug\" 從而在終端裡獲取更多資訊。",
"settings_debuglevel": "修改同步提示資訊",
"settings_debuglevel_desc": "預設值為 \"info\"。您可以改為 \"debug\" 從而在同步時候裡獲取更多資訊。",
"settings_outputsettingsconsole": "讀取硬碟上的設定檔案輸出到終端",
"settings_outputsettingsconsole_desc": "硬碟上的設定檔案是編碼過的,點選這裡從而解碼並輸出到終端。",
"settings_outputsettingsconsole_button": "輸出",
"settings_outputsettingsconsole_notice": "已輸出到終端",
"settings_viewconsolelog": "檢視終端輸出",
"settings_viewconsolelog_desc": "電腦上輸入“ctrl+shift+i”或“cmd+shift+i”來檢視終端輸出。手機上安裝第三方外掛 <a href='https://obsidian.md/plugins?search=Logstravaganza'>Logstravaganza</a> 來匯出終端輸出到一篇筆記上。",
"settings_syncplans": "匯出同步計劃",
"settings_syncplans_desc": "每次您啟動同步,並在實際上傳下載前,外掛會生成同步計劃。它可以使您知道每次同步發生了什麼。點選按鈕可以匯出同步計劃。",
"settings_syncplans_button_json": "匯出",
@ -270,13 +287,10 @@
"settings_delsyncplans_desc": "刪除資料庫裡的同步計劃歷史。",
"settings_delsyncplans_button": "刪除同步計劃歷史",
"settings_delsyncplans_notice": "(資料庫裡的)同步計劃已被刪除。",
"settings_logtohttpserver": "臨時設定終端日誌實時轉發到 HTTP(S) 伺服器。",
"settings_logtohttpserver_desc": "非常危險,謹慎行動!!!!!臨時設定終端日誌實時轉發到 HTTP(S) 伺服器。",
"settings_logtohttpserver_reset_notice": "您的輸入不是“http(s)”開頭的。已移除了終端日誌轉發到 HTTP(S) 伺服器的設定。",
"settings_delsyncmap": "刪除資料庫裡的同步對映歷史",
"settings_delsyncmap_desc": "同步對映歷史儲存了本地真正的最後修改時間和遠端檔案時間的對映。刪除之可能會導致下一次同步時發生不必要的資料交換。點選按鈕刪除資料庫裡的同步對映歷史。",
"settings_delsyncmap_button": "刪除同步對映歷史",
"settings_delsyncmap_notice": "(本地資料庫裡的)同步對映歷史已被刪除。",
"settings_delprevsync": "刪除資料庫裡的上次同步明細",
"settings_delprevsync_desc": "同步演算法需要上次成功同步的資訊來決定檔案變更,這個資訊儲存在本地的資料庫裡。如果您想忽略這些資訊從而所有檔案都被視為新建立的話,可以在此刪除之前的資訊。",
"settings_delprevsync_button": "刪除上次同步明細",
"settings_delprevsync_notice": "(本地資料庫裡的)上次同步明細已被刪除。",
"settings_outputbasepathvaultid": "輸出資料庫對應的位置和隨機分配的 ID",
"settings_outputbasepathvaultid_desc": "用於除錯。",
"settings_outputbasepathvaultid_button": "輸出",
@ -284,9 +298,10 @@
"settings_resetcache_desc": "(出於除錯原因)重設本地快取和資料庫。您需要在重設之後重新載入此外掛。本重設不會刪除 s3密碼……等設定。",
"settings_resetcache_button": "重設",
"settings_resetcache_notice": "本地同步快取和資料庫已被刪除。請手動重新載入此外掛。",
"syncalgov2_title": "Remotely Save 的同步演算法得到最佳化",
"syncalgov2_texts": "歡迎使用 Remotely Save!\n從版本 0.3.0 開始,它帶來了新的同步演算法,但是,除了您的筆記之外,它還需要上傳額外的帶有元資訊的檔案 _remotely-save-metadata-on-remote.{json,bin} 到您的雲服務目的地上。\n從而比如說透過讀取這些資訊另一臺裝置可以知道什麼檔案或資料夾在第一臺裝置上被刪除了。\n如果您同意此策略請點選按鈕 \"同意\"然後開始享用此外掛且特別要注意使用外掛之前請首先備份好您的儲存庫Vault\n如果您不同意此策略您應該停止使用此版本和之後版本的 Remotely Save。您可以考慮手動安裝舊版 0.2.14,它使用舊的同步演算法,並不上傳額外元資訊檔案。點選 \"不同意\" 之後外掛會自動停止執行unload然後您需要 Obsidian 設定裡手動停用disable此外掛。",
"syncalgov2_button_agree": "同意",
"syncalgov2_button_disagree": "不同意",
"official_notice_2024_first_party": "外掛 Remotely-Save 回來了,更新了一大堆功能!🎉🎉🎉請自行使用,或參閱更新文件: https://github.com/remotely-save/remotely-save/releases 。"
"syncalgov3_title": "Remotely Save 的同步演算法有重大更新",
"syncalgov3_texts": "歡迎使用 Remotely Save\n從這個版本開始外掛更新了同步演算法\n<ul><li>更穩健的刪除同步</li><li>引入衝突處理</li><li>避免上傳元資料</li><li>修改刪除保護</li><li>備份模式</li><li>……</li></ul>\n敬請期待更多更新詳細介紹請參閱<a href='https://github.com/remotely-save/remotely-save/tree/master/docs/sync_algorithm/v3/intro.md'>文件網站</a>。\n如果您同意使用新版本請閱讀和勾選兩個勾選框然後點選“同意”按鈕開始使用外掛吧\n如果您不同意請點選“不同意”按鈕外掛將自動停止執行unload。\n此外請考慮<a href='https://github.com/remotely-save/remotely-save'>訪問 GitHub 頁面然後點贊 ⭐</a>!您的支援對我十分重要!謝謝!",
"syncalgov3_checkbox_manual_backup": "我將會首先手動備份我的庫Vault。",
"syncalgov3_checkbox_requiremultidevupdate": "我理解,我需要在所有裝置上都更新此外掛使之正常執行。",
"syncalgov3_button_agree": "同意",
"syncalgov3_button_disagree": "不同意"
}

src/local.ts Normal file
View File

@ -0,0 +1,65 @@
import { TFile, TFolder, type Vault } from "obsidian";
import type { Entity, MixedEntity } from "./baseTypes";
import { listFilesInObsFolder } from "./obsFolderLister";
export const getLocalEntityList = async (
vault: Vault,
syncConfigDir: boolean,
configDir: string,
pluginID: string
) => {
const local: Entity[] = [];
const localTAbstractFiles = vault.getAllLoadedFiles();
for (const entry of localTAbstractFiles) {
let r = {} as Entity;
let key = entry.path;
if (entry.path === "/") {
// ignore
continue;
} else if (entry instanceof TFile) {
let mtimeLocal: number | undefined = entry.stat.mtime;
if (mtimeLocal <= 0) {
mtimeLocal = entry.stat.ctime;
}
if (mtimeLocal === 0) {
mtimeLocal = undefined;
}
if (mtimeLocal === undefined) {
throw Error(
`Your file has last modified time 0: ${key}, don't know how to deal with it`
);
}
r = {
key: entry.path, // local always unencrypted
keyRaw: entry.path,
mtimeCli: mtimeLocal,
mtimeSvr: mtimeLocal,
size: entry.stat.size, // local always unencrypted
sizeRaw: entry.stat.size,
};
} else if (entry instanceof TFolder) {
key = `${entry.path}/`;
r = {
key: key,
keyRaw: key,
size: 0,
sizeRaw: 0,
};
} else {
throw Error(`unexpected ${entry}`);
}
local.push(r);
}
if (syncConfigDir) {
const syncFiles = await listFilesInObsFolder(configDir, vault, pluginID);
for (const f of syncFiles) {
local.push(f);
}
}
return local;
};
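A hypothetical call site (names assumed, e.g. from inside the plugin class at the start of a sync run):

```ts
const localEntities = await getLocalEntityList(
  this.app.vault,
  this.settings.syncConfigDir ?? false, // also walk the config dir?
  this.app.vault.configDir, // usually ".obsidian"
  this.manifest.id // e.g. "remotely-save"
);
console.debug(`collected ${localEntities.length} local files/folders`);
```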

View File

@ -3,35 +3,34 @@ export type LocalForage = typeof localforage;
import { nanoid } from "nanoid";
import { requireApiVersion, TAbstractFile, TFile, TFolder } from "obsidian";
import { API_VER_STAT_FOLDER, SUPPORTED_SERVICES_TYPE } from "./baseTypes";
import { API_VER_STAT_FOLDER } from "./baseTypes";
import type { Entity, MixedEntity, SUPPORTED_SERVICES_TYPE } from "./baseTypes";
import type { SyncPlanType } from "./sync";
import { statFix, toText, unixTimeToStr } from "./misc";
import { log } from "./moreOnLog";
const DB_VERSION_NUMBER_IN_HISTORY = [20211114, 20220108, 20220326];
export const DEFAULT_DB_VERSION_NUMBER: number = 20220326;
const DB_VERSION_NUMBER_IN_HISTORY = [20211114, 20220108, 20220326, 20240220];
export const DEFAULT_DB_VERSION_NUMBER: number = 20240220;
export const DEFAULT_DB_NAME = "remotelysavedb";
export const DEFAULT_TBL_VERSION = "schemaversion";
export const DEFAULT_TBL_FILE_HISTORY = "filefolderoperationhistory";
export const DEFAULT_TBL_SYNC_MAPPING = "syncmetadatahistory";
export const DEFAULT_SYNC_PLANS_HISTORY = "syncplanshistory";
export const DEFAULT_TBL_VAULT_RANDOM_ID_MAPPING = "vaultrandomidmapping";
export const DEFAULT_TBL_LOGGER_OUTPUT = "loggeroutput";
export const DEFAULT_TBL_SIMPLE_KV_FOR_MISC = "simplekvformisc";
export const DEFAULT_TBL_PREV_SYNC_RECORDS = "prevsyncrecords";
export interface FileFolderHistoryRecord {
key: string;
ctime: number;
mtime: number;
size: number;
actionWhen: number;
actionType: "delete" | "rename" | "renameDestination";
keyType: "folder" | "file";
renameTo: string;
vaultRandomID: string;
}
/**
* @deprecated
*/
export const DEFAULT_TBL_FILE_HISTORY = "filefolderoperationhistory";
/**
* @deprecated
*/
export const DEFAULT_TBL_SYNC_MAPPING = "syncmetadatahistory";
/**
* @deprecated
* But we cannot remove it. Because we want to migrate the old data.
*/
interface SyncMetaMappingRecord {
localKey: string;
remoteKey: string;
@ -54,132 +53,118 @@ interface SyncPlanRecord {
export interface InternalDBs {
versionTbl: LocalForage;
fileHistoryTbl: LocalForage;
syncMappingTbl: LocalForage;
syncPlansTbl: LocalForage;
vaultRandomIDMappingTbl: LocalForage;
loggerOutputTbl: LocalForage;
simpleKVForMiscTbl: LocalForage;
prevSyncRecordsTbl: LocalForage;
/**
* @deprecated
* But we cannot remove it. Because we want to migrate the old data.
*/
fileHistoryTbl: LocalForage;
/**
* @deprecated
* But we cannot remove it. Because we want to migrate the old data.
*/
syncMappingTbl: LocalForage;
}
/**
* This migration mainly aims to assign vault name or vault id into all tables.
* @param db
* @param vaultRandomID
* TODO
* @param syncMappings
* @returns
*/
const migrateDBsFrom20211114To20220108 = async (
db: InternalDBs,
vaultRandomID: string
) => {
const oldVer = 20211114;
const newVer = 20220108;
log.debug(`start upgrading internal db from ${oldVer} to ${newVer}`);
const fromSyncMappingsToPrevSyncRecords = (
oldSyncMappings: SyncMetaMappingRecord[]
): Entity[] => {
const res: Entity[] = [];
for (const oldMapping of oldSyncMappings) {
const newEntity: Entity = {
key: oldMapping.localKey,
keyEnc: oldMapping.remoteKey,
keyRaw:
oldMapping.remoteKey !== undefined && oldMapping.remoteKey !== ""
? oldMapping.remoteKey
: oldMapping.localKey,
mtimeCli: oldMapping.localMtime,
mtimeSvr: oldMapping.remoteMtime,
size: oldMapping.localSize,
sizeEnc: oldMapping.remoteSize,
sizeRaw:
oldMapping.remoteKey !== undefined && oldMapping.remoteKey !== ""
? oldMapping.remoteSize
: oldMapping.localSize,
etag: oldMapping.remoteExtraKey,
};
const allPromisesToWait: Promise<any>[] = [];
log.debug("assign vault id to any delete history");
const keysInDeleteHistoryTbl = await db.fileHistoryTbl.keys();
for (const key of keysInDeleteHistoryTbl) {
if (key.startsWith(vaultRandomID)) {
continue;
}
const value = (await db.fileHistoryTbl.getItem(
key
)) as FileFolderHistoryRecord;
if (value === null || value === undefined) {
continue;
}
if (value.vaultRandomID === undefined || value.vaultRandomID === "") {
value.vaultRandomID = vaultRandomID;
}
const newKey = `${vaultRandomID}\t${key}`;
allPromisesToWait.push(db.fileHistoryTbl.setItem(newKey, value));
allPromisesToWait.push(db.fileHistoryTbl.removeItem(key));
res.push(newEntity);
}
log.debug("assign vault id to any sync mapping");
const keysInSyncMappingTbl = await db.syncMappingTbl.keys();
for (const key of keysInSyncMappingTbl) {
if (key.startsWith(vaultRandomID)) {
continue;
}
const value = (await db.syncMappingTbl.getItem(
key
)) as SyncMetaMappingRecord;
if (value === null || value === undefined) {
continue;
}
if (value.vaultRandomID === undefined || value.vaultRandomID === "") {
value.vaultRandomID = vaultRandomID;
}
const newKey = `${vaultRandomID}\t${key}`;
allPromisesToWait.push(db.syncMappingTbl.setItem(newKey, value));
allPromisesToWait.push(db.syncMappingTbl.removeItem(key));
}
log.debug("assign vault id to any sync plan records");
const keysInSyncPlansTbl = await db.syncPlansTbl.keys();
for (const key of keysInSyncPlansTbl) {
if (key.startsWith(vaultRandomID)) {
continue;
}
const value = (await db.syncPlansTbl.getItem(key)) as SyncPlanRecord;
if (value === null || value === undefined) {
continue;
}
if (value.vaultRandomID === undefined || value.vaultRandomID === "") {
value.vaultRandomID = vaultRandomID;
}
const newKey = `${vaultRandomID}\t${key}`;
allPromisesToWait.push(db.syncPlansTbl.setItem(newKey, value));
allPromisesToWait.push(db.syncPlansTbl.removeItem(key));
}
log.debug("finally update version if everything is ok");
await Promise.all(allPromisesToWait);
await db.versionTbl.setItem("version", newVer);
log.debug(`finish upgrading internal db from ${oldVer} to ${newVer}`);
return res;
};
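Field by field, the conversion above maps the old sync-mapping record onto the new prev-sync `Entity` (a made-up key for illustration):

```ts
// old SyncMetaMappingRecord               --->  new prev-sync Entity
// localKey:  "notes/a.md"                 --->  key:     "notes/a.md"
// remoteKey: "enc_xxxx" (if any)          --->  keyEnc:  "enc_xxxx", keyRaw: "enc_xxxx"
// localMtime / remoteMtime                --->  mtimeCli / mtimeSvr
// localSize / remoteSize                  --->  size / sizeEnc
// remoteKey set ? remoteSize : localSize  --->  sizeRaw
// remoteExtraKey                          --->  etag
```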
/**
* no need to do anything except changing version
* we just add more file operations in db, and no schema is changed.
*
* @param db
* @param vaultRandomID
* Migrate the sync mapping record to sync Entity.
*/
const migrateDBsFrom20220108To20220326 = async (
const migrateDBsFrom20220326To20240220 = async (
db: InternalDBs,
vaultRandomID: string
vaultRandomID: string,
profileID: string
) => {
const oldVer = 20220108;
const newVer = 20220326;
log.debug(`start upgrading internal db from ${oldVer} to ${newVer}`);
await db.versionTbl.setItem("version", newVer);
log.debug(`finish upgrading internal db from ${oldVer} to ${newVer}`);
const oldVer = 20220326;
const newVer = 20240220;
console.debug(`start upgrading internal db from ${oldVer} to ${newVer}`);
// from sync mapping to prev sync
const syncMappings = await getAllSyncMetaMappingByVault(db, vaultRandomID);
const prevSyncRecords = fromSyncMappingsToPrevSyncRecords(syncMappings);
for (const prevSyncRecord of prevSyncRecords) {
await upsertPrevSyncRecordByVaultAndProfile(
db,
vaultRandomID,
profileID,
prevSyncRecord
);
}
// // clear unused data
// // as of 20240220, we don't call them,
// // to give users the opportunity to downgrade
// await clearFileHistoryOfEverythingByVault(db, vaultRandomID);
// await clearAllSyncMetaMappingByVault(db, vaultRandomID);
await db.versionTbl.setItem(`${vaultRandomID}\tversion`, newVer);
console.debug(`finish upgrading internal db from ${oldVer} to ${newVer}`);
};
const migrateDBs = async (
db: InternalDBs,
oldVer: number,
newVer: number,
vaultRandomID: string
vaultRandomID: string,
profileID: string
) => {
if (oldVer === newVer) {
return;
}
if (oldVer === 20211114 && newVer === 20220108) {
return await migrateDBsFrom20211114To20220108(db, vaultRandomID);
// as of 20240220, we assume everyone is using 20220326 already
// drop any old code to reduce verbosity
if (oldVer < 20220326) {
throw Error(
"You are using a very old version of Remotely Save. No way to auto update internal DB. Please install and enable 0.3.40 firstly, then install a later version."
);
}
if (oldVer === 20220108 && newVer === 20220326) {
return await migrateDBsFrom20220108To20220326(db, vaultRandomID);
}
if (oldVer === 20211114 && newVer === 20220326) {
// TODO: more steps with more versions in the future
await migrateDBsFrom20211114To20220108(db, vaultRandomID);
await migrateDBsFrom20220108To20220326(db, vaultRandomID);
return;
if (oldVer === 20220326 && newVer === 20240220) {
return await migrateDBsFrom20220326To20240220(db, vaultRandomID, profileID);
}
if (newVer < oldVer) {
throw Error(
"You've installed a new version, but then downgrade to an old version. Stop working!"
@ -191,21 +176,14 @@ const migrateDBs = async (
export const prepareDBs = async (
vaultBasePath: string,
vaultRandomIDFromOldConfigFile: string
vaultRandomIDFromOldConfigFile: string,
profileID: string
) => {
const db = {
versionTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_VERSION,
}),
fileHistoryTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_FILE_HISTORY,
}),
syncMappingTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_SYNC_MAPPING,
}),
syncPlansTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_SYNC_PLANS_HISTORY,
@ -222,6 +200,19 @@ export const prepareDBs = async (
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_SIMPLE_KV_FOR_MISC,
}),
prevSyncRecordsTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_PREV_SYNC_RECORDS,
}),
fileHistoryTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_FILE_HISTORY,
}),
syncMappingTbl: localforage.createInstance({
name: DEFAULT_DB_NAME,
storeName: DEFAULT_TBL_SYNC_MAPPING,
}),
} as InternalDBs;
// try to get vaultRandomID firstly
@ -253,27 +244,35 @@ export const prepareDBs = async (
throw Error("no vaultRandomID found or generated");
}
const originalVersion: number | null = await db.versionTbl.getItem("version");
// as of 20240220, we set the version per vault, instead of global "version"
const originalVersion: number | null =
(await db.versionTbl.getItem(`${vaultRandomID}\tversion`)) ??
(await db.versionTbl.getItem("version"));
if (originalVersion === null) {
log.debug(
console.debug(
`no internal db version, setting it to ${DEFAULT_DB_VERSION_NUMBER}`
);
await db.versionTbl.setItem("version", DEFAULT_DB_VERSION_NUMBER);
// as of 20240220, we set the version per vault, instead of global "version"
await db.versionTbl.setItem(
`${vaultRandomID}\tversion`,
DEFAULT_DB_VERSION_NUMBER
);
} else if (originalVersion === DEFAULT_DB_VERSION_NUMBER) {
// do nothing
} else {
log.debug(
console.debug(
`trying to upgrade db version from ${originalVersion} to ${DEFAULT_DB_VERSION_NUMBER}`
);
await migrateDBs(
db,
originalVersion,
DEFAULT_DB_VERSION_NUMBER,
vaultRandomID
vaultRandomID,
profileID
);
}
log.info("db connected");
console.info("db connected");
return {
db: db,
vaultRandomID: vaultRandomID,
@ -284,306 +283,79 @@ export const destroyDBs = async () => {
// await localforage.dropInstance({
// name: DEFAULT_DB_NAME,
// });
// log.info("db deleted");
// console.info("db deleted");
const req = indexedDB.deleteDatabase(DEFAULT_DB_NAME);
req.onsuccess = (event) => {
log.info("db deleted");
console.info("db deleted");
};
req.onblocked = (event) => {
log.warn("trying to delete db but it was blocked");
console.warn("trying to delete db but it was blocked");
};
req.onerror = (event) => {
log.error("tried to delete db but something goes wrong!");
log.error(event);
console.error("tried to delete db but something goes wrong!");
console.error(event);
};
};
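// As of 20240220 the internal DB version is stored per vault under the key
// `${vaultRandomID}\tversion`, falling back to the legacy global "version" key.
// A minimal sketch of that read path (hypothetical helper, illustration only):
const readInternalDBVersion = async (
  versionTbl: LocalForage,
  vaultRandomID: string
): Promise<number | null> => {
  return (
    (await versionTbl.getItem<number>(`${vaultRandomID}\tversion`)) ??
    (await versionTbl.getItem<number>("version"))
  );
};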
export const loadFileHistoryTableByVault = async (
export const clearFileHistoryOfEverythingByVault = async (
db: InternalDBs,
vaultRandomID: string
) => {
const records = [] as FileFolderHistoryRecord[];
await db.fileHistoryTbl.iterate((value, key, iterationNumber) => {
const keys = await db.fileHistoryTbl.keys();
for (const key of keys) {
if (key.startsWith(`${vaultRandomID}\t`)) {
records.push(value as FileFolderHistoryRecord);
await db.fileHistoryTbl.removeItem(key);
}
});
records.sort((a, b) => a.actionWhen - b.actionWhen); // ascending
return records;
};
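// All the per-vault clears in this file follow the same linear prefix scan.
// A generic sketch of the pattern (hypothetical helper, not part of the plugin):
const removeItemsByPrefix = async (tbl: LocalForage, prefix: string) => {
  const keys = await tbl.keys();
  for (const key of keys) {
    if (key.startsWith(prefix)) {
      await tbl.removeItem(key);
    }
  }
};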
export const clearDeleteRenameHistoryOfKeyAndVault = async (
db: InternalDBs,
key: string,
vaultRandomID: string
) => {
const fullKey = `${vaultRandomID}\t${key}`;
const item: FileFolderHistoryRecord | null =
await db.fileHistoryTbl.getItem(fullKey);
if (
item !== null &&
(item.actionType === "delete" || item.actionType === "rename")
) {
await db.fileHistoryTbl.removeItem(fullKey);
}
};
export const insertDeleteRecordByVault = async (
db: InternalDBs,
fileOrFolder: TAbstractFile | string,
vaultRandomID: string
) => {
// log.info(fileOrFolder);
let k: FileFolderHistoryRecord;
if (fileOrFolder instanceof TFile) {
k = {
key: fileOrFolder.path,
ctime: fileOrFolder.stat.ctime,
mtime: fileOrFolder.stat.mtime,
size: fileOrFolder.stat.size,
actionWhen: Date.now(),
actionType: "delete",
keyType: "file",
renameTo: "",
vaultRandomID: vaultRandomID,
};
await db.fileHistoryTbl.setItem(`${vaultRandomID}\t${k.key}`, k);
} else if (fileOrFolder instanceof TFolder) {
// key should end with "/"
const key = fileOrFolder.path.endsWith("/")
? fileOrFolder.path
: `${fileOrFolder.path}/`;
const ctime = 0; // they are deleted, so no way to get ctime, mtime
const mtime = 0; // they are deleted, so no way to get ctime, mtime
k = {
key: key,
ctime: ctime,
mtime: mtime,
size: 0,
actionWhen: Date.now(),
actionType: "delete",
keyType: "folder",
renameTo: "",
vaultRandomID: vaultRandomID,
};
await db.fileHistoryTbl.setItem(`${vaultRandomID}\t${k.key}`, k);
} else if (typeof fileOrFolder === "string") {
// always the deletions in the .obsidian folder;
// annoyingly, the path no longer exists,
// so we have to guess whether it was a folder or a file
k = {
key: fileOrFolder,
ctime: 0,
mtime: 0,
size: 0,
actionWhen: Date.now(),
actionType: "delete",
keyType: "file",
renameTo: "",
vaultRandomID: vaultRandomID,
};
await db.fileHistoryTbl.setItem(`${vaultRandomID}\t${k.key}`, k);
for (const ext of [
"json",
"js",
"mjs",
"ts",
"md",
"txt",
"css",
"png",
"gif",
"jpg",
"jpeg",
"gitignore",
"gitkeep",
]) {
if (fileOrFolder.endsWith(`.${ext}`)) {
// stop here, no need to insert the folder record below
return;
}
}
// also add a folder deletion record if the name doesn't end with a known extension
k = {
key: `${fileOrFolder}/`,
ctime: 0,
mtime: 0,
size: 0,
actionWhen: Date.now(),
actionType: "delete",
keyType: "folder",
renameTo: "",
vaultRandomID: vaultRandomID,
};
await db.fileHistoryTbl.setItem(`${vaultRandomID}\t${k.key}`, k);
}
};
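// Hypothetical example: a "raw" deletion under the config dir arrives as a bare
// string; when the name carries no known extension, the code above records both
// interpretations, e.g. insertDeleteRecordByVault(db, ".obsidian/foo", id)
// writes records under "<id>\t.obsidian/foo" (file) and "<id>\t.obsidian/foo/" (folder).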
/**
* A file/folder is renamed from A to B
* We insert two records:
* A with actionType="rename"
* B with actionType="renameDestination"
 * @deprecated But we cannot remove it, because we still need it to migrate old data.
* @param db
* @param fileOrFolder
* @param oldPath
* @param vaultRandomID
* @returns
*/
export const insertRenameRecordByVault = async (
export const getAllSyncMetaMappingByVault = async (
db: InternalDBs,
fileOrFolder: TAbstractFile,
oldPath: string,
vaultRandomID: string
) => {
// log.info(fileOrFolder);
let k1: FileFolderHistoryRecord | undefined;
let k2: FileFolderHistoryRecord | undefined;
const actionWhen = Date.now();
if (fileOrFolder instanceof TFile) {
k1 = {
key: oldPath,
ctime: fileOrFolder.stat.ctime,
mtime: fileOrFolder.stat.mtime,
size: fileOrFolder.stat.size,
actionWhen: actionWhen,
actionType: "rename",
keyType: "file",
renameTo: fileOrFolder.path,
vaultRandomID: vaultRandomID,
};
k2 = {
key: fileOrFolder.path,
ctime: fileOrFolder.stat.ctime,
mtime: fileOrFolder.stat.mtime,
size: fileOrFolder.stat.size,
actionWhen: actionWhen,
actionType: "renameDestination",
keyType: "file",
renameTo: "", // itself is the destination, so no need to set this field
vaultRandomID: vaultRandomID,
};
} else if (fileOrFolder instanceof TFolder) {
const key = oldPath.endsWith("/") ? oldPath : `${oldPath}/`;
const renameTo = fileOrFolder.path.endsWith("/")
? fileOrFolder.path
: `${fileOrFolder.path}/`;
let ctime = 0;
let mtime = 0;
if (requireApiVersion(API_VER_STAT_FOLDER)) {
// TAbstractFile does not contain these info
// but from API_VER_STAT_FOLDER we can manually stat them by path.
const s = await statFix(fileOrFolder.vault, fileOrFolder.path);
if (s !== undefined && s !== null) {
ctime = s.ctime;
mtime = s.mtime;
}
}
k1 = {
key: key,
ctime: ctime,
mtime: mtime,
size: 0,
actionWhen: actionWhen,
actionType: "rename",
keyType: "folder",
renameTo: renameTo,
vaultRandomID: vaultRandomID,
};
k2 = {
key: renameTo,
ctime: ctime,
mtime: mtime,
size: 0,
actionWhen: actionWhen,
actionType: "renameDestination",
keyType: "folder",
renameTo: "", // itself is the destination, so no need to set this field
vaultRandomID: vaultRandomID,
};
}
await Promise.all([
db.fileHistoryTbl.setItem(`${vaultRandomID}\t${k1!.key}`, k1),
db.fileHistoryTbl.setItem(`${vaultRandomID}\t${k2!.key}`, k2),
]);
};
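// Hypothetical example: renaming "a.md" to "b.md" stores two records sharing
// the same actionWhen timestamp:
//   "a.md" with actionType "rename"            (renameTo: "b.md")
//   "b.md" with actionType "renameDestination" (renameTo: "")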
export const upsertSyncMetaMappingDataByVault = async (
serviceType: SUPPORTED_SERVICES_TYPE,
db: InternalDBs,
localKey: string,
localMTime: number,
localSize: number,
remoteKey: string,
remoteMTime: number,
remoteSize: number,
remoteExtraKey: string,
vaultRandomID: string
) => {
const aggregatedInfo: SyncMetaMappingRecord = {
localKey: localKey,
localMtime: localMTime,
localSize: localSize,
remoteKey: remoteKey,
remoteMtime: remoteMTime,
remoteSize: remoteSize,
remoteExtraKey: remoteExtraKey,
remoteType: serviceType,
keyType: localKey.endsWith("/") ? "folder" : "file",
vaultRandomID: vaultRandomID,
};
await db.syncMappingTbl.setItem(
`${vaultRandomID}\t${remoteKey}`,
aggregatedInfo
return await Promise.all(
((await db.syncMappingTbl.keys()) ?? [])
.filter((key) => key.startsWith(`${vaultRandomID}\t`))
.map(
async (key) =>
(await db.syncMappingTbl.getItem(key)) as SyncMetaMappingRecord
)
);
};
export const getSyncMetaMappingByRemoteKeyAndVault = async (
serviceType: SUPPORTED_SERVICES_TYPE,
export const clearAllSyncMetaMappingByVault = async (
db: InternalDBs,
remoteKey: string,
remoteMTime: number,
remoteExtraKey: string,
vaultRandomID: string
) => {
const potentialItem = (await db.syncMappingTbl.getItem(
`${vaultRandomID}\t${remoteKey}`
)) as SyncMetaMappingRecord;
if (potentialItem === null) {
// no result was found
return undefined;
const keys = await db.syncMappingTbl.keys();
for (const key of keys) {
if (key.startsWith(`${vaultRandomID}\t`)) {
await db.syncMappingTbl.removeItem(key);
}
}
if (
potentialItem.remoteKey === remoteKey &&
potentialItem.remoteMtime === remoteMTime &&
potentialItem.remoteExtraKey === remoteExtraKey &&
potentialItem.remoteType === serviceType
) {
// the result was found
return potentialItem;
} else {
return undefined;
}
};
export const clearAllSyncMetaMapping = async (db: InternalDBs) => {
await db.syncMappingTbl.clear();
};
export const insertSyncPlanRecordByVault = async (
db: InternalDBs,
syncPlan: SyncPlanType,
vaultRandomID: string
vaultRandomID: string,
remoteType: SUPPORTED_SERVICES_TYPE
) => {
const now = Date.now();
const record = {
ts: syncPlan.ts,
tsFmt: syncPlan.tsFmt,
ts: now,
tsFmt: unixTimeToStr(now),
vaultRandomID: vaultRandomID,
remoteType: syncPlan.remoteType,
remoteType: remoteType,
syncPlan: JSON.stringify(syncPlan /* directly stringify */, null, 2),
} as SyncPlanRecord;
await db.syncPlansTbl.setItem(`${vaultRandomID}\t${syncPlan.ts}`, record);
await db.syncPlansTbl.setItem(`${vaultRandomID}\t${now}`, record);
};
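// Hypothetical example: a plan generated at now = 1700000000000 is stored under
// the key "<vaultRandomID>\t1700000000000", keeping plans time-ordered per
// vault so that expired ones can be pruned by timestamp later.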
export const clearAllSyncPlanRecords = async (db: InternalDBs) => {
@ -651,12 +423,67 @@ export const clearExpiredSyncPlanRecords = async (db: InternalDBs) => {
await Promise.all(ps);
};
export const getAllPrevSyncRecordsByVaultAndProfile = async (
db: InternalDBs,
vaultRandomID: string,
profileID: string
) => {
// console.debug('inside getAllPrevSyncRecordsByVaultAndProfile')
const keys = await db.prevSyncRecordsTbl.keys();
// console.debug(`inside getAllPrevSyncRecordsByVaultAndProfile, keys=${keys}`)
const res: Entity[] = [];
for (const key of keys) {
if (key.startsWith(`${vaultRandomID}\t${profileID}\t`)) {
const val: Entity | null = await db.prevSyncRecordsTbl.getItem(key);
if (val !== null) {
res.push(val);
}
}
}
return res;
};
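// The prev-sync keys are laid out as `${vaultRandomID}\t${profileID}\t${key}`,
// e.g. (hypothetical) "abc123\ts3-default-1\tnotes/daily.md", so one vault can
// keep an independent previous-sync state per remote profile.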
export const upsertLastSuccessSyncByVault = async (
export const upsertPrevSyncRecordByVaultAndProfile = async (
db: InternalDBs,
vaultRandomID: string,
profileID: string,
prevSync: Entity
) => {
await db.prevSyncRecordsTbl.setItem(
`${vaultRandomID}\t${profileID}\t${prevSync.key}`,
prevSync
);
};
export const clearPrevSyncRecordByVaultAndProfile = async (
db: InternalDBs,
vaultRandomID: string,
profileID: string,
key: string
) => {
await db.prevSyncRecordsTbl.removeItem(
`${vaultRandomID}\t${profileID}\t${key}`
);
};
export const clearAllPrevSyncRecordByVault = async (
db: InternalDBs,
vaultRandomID: string
) => {
const keys = await db.prevSyncRecordsTbl.keys();
for (const key of keys) {
if (key.startsWith(`${vaultRandomID}\t`)) {
await db.prevSyncRecordsTbl.removeItem(key);
}
}
};
export const clearAllLoggerOutputRecords = async (db: InternalDBs) => {
await db.loggerOutputTbl.clear();
console.debug(`successfully clearAllLoggerOutputRecords`);
};
export const upsertLastSuccessSyncTimeByVault = async (
db: InternalDBs,
vaultRandomID: string,
millis: number
@ -667,7 +494,7 @@ export const upsertLastSuccessSyncByVault = async (
);
};
export const getLastSuccessSyncByVault = async (
export const getLastSuccessSyncTimeByVault = async (
db: InternalDBs,
vaultRandomID: string
) => {


@ -7,15 +7,12 @@ import {
setIcon,
FileSystemAdapter,
Platform,
TFile,
TFolder,
requestUrl,
requireApiVersion,
} from "obsidian";
import cloneDeep from "lodash/cloneDeep";
import { createElement, RotateCcw, RefreshCcw, FileText } from "lucide";
import type {
FileOrFolderMixedState,
RemotelySavePluginSettings,
SyncTriggerSourceType,
} from "./baseTypes";
@ -24,22 +21,19 @@ import {
COMMAND_CALLBACK_ONEDRIVE,
COMMAND_CALLBACK_DROPBOX,
COMMAND_URI,
REMOTELY_SAVE_VERSION_2024PREPARE,
API_VER_ENSURE_REQURL_OK,
} from "./baseTypes";
import { importQrCodeUri } from "./importExport";
import {
insertDeleteRecordByVault,
insertRenameRecordByVault,
insertSyncPlanRecordByVault,
loadFileHistoryTableByVault,
prepareDBs,
InternalDBs,
clearExpiredSyncPlanRecords,
upsertLastSuccessSyncByVault,
getLastSuccessSyncByVault,
upsertPluginVersionByVault,
clearAllLoggerOutputRecords,
upsertLastSuccessSyncTimeByVault,
getLastSuccessSyncTimeByVault,
getAllPrevSyncRecordsByVaultAndProfile,
} from "./localdb";
import { RemoteClient } from "./remote";
import {
@ -57,20 +51,21 @@ import {
import { DEFAULT_S3_CONFIG } from "./remoteForS3";
import { DEFAULT_WEBDAV_CONFIG } from "./remoteForWebdav";
import { RemotelySaveSettingTab } from "./settings";
import { fetchMetadataFile, parseRemoteItems, SyncStatusType } from "./sync";
import { doActualSync, getSyncPlan, isPasswordOk } from "./sync";
import {
doActualSync,
ensembleMixedEnties,
getSyncPlanInplace,
isPasswordOk,
SyncStatusType,
} from "./sync";
import { messyConfigToNormal, normalConfigToMessy } from "./configPersist";
import { ObsConfigDirFileType, listFilesInObsFolder } from "./obsFolderLister";
import { getLocalEntityList } from "./local";
import { I18n } from "./i18n";
import type { LangType, LangTypeAndAuto, TransItemType } from "./i18n";
import { SyncAlgoV3Modal } from "./syncAlgoV3Notice";
import { DeletionOnRemote, MetadataOnRemote } from "./metadataOnRemote";
import { SyncAlgoV2Modal } from "./syncAlgoV2Notice";
import { applyLogWriterInplace, log } from "./moreOnLog";
import AggregateError from "aggregate-error";
import { exportVaultSyncPlansToFiles } from "./debugMode";
import { SizesConflictModal } from "./syncSizesConflictNotice";
import { compareVersion } from "./misc";
const DEFAULT_SETTINGS: RemotelySavePluginSettings = {
@ -95,6 +90,11 @@ const DEFAULT_SETTINGS: RemotelySavePluginSettings = {
ignorePaths: [],
enableStatusBarInfo: true,
deleteToWhere: "system",
agreeToUseSyncV3: false,
conflictAction: "keep_newer",
howToCleanEmptyFolder: "skip",
protectModifyPercentage: 50,
syncDirection: "bidirectional",
};
interface OAuth2Info {
@ -151,6 +151,8 @@ export default class RemotelySavePlugin extends Plugin {
return this.i18n.t(x, vars);
};
const profileID = this.getCurrProfileID();
const getNotice = (x: string, timeout?: number) => {
// only show notices in manual mode
// no notice in auto mode
@ -178,7 +180,7 @@ export default class RemotelySavePlugin extends Plugin {
}
try {
log.info(
console.info(
`${
this.manifest.id
}-${Date.now()}: start sync, triggerSource=${triggerSource}`
@ -207,7 +209,7 @@ export default class RemotelySavePlugin extends Plugin {
if (this.statusBarElement !== undefined) {
this.updateLastSuccessSyncMsg(-1);
}
//log.info(`huh ${this.settings.password}`)
//console.info(`huh ${this.settings.password}`)
if (this.settings.currLogLevel === "info") {
getNotice(
t("syncrun_shortstep1", {
@ -240,8 +242,9 @@ export default class RemotelySavePlugin extends Plugin {
this.app.vault.getName(),
() => self.saveSettings()
);
const remoteRsp = await client.listAllFromRemote();
// log.debug(remoteRsp);
const remoteEntityList = await client.listAllFromRemote();
console.debug("remoteEntityList:");
console.debug(remoteEntityList);
if (this.settings.currLogLevel === "info") {
// pass
@ -250,7 +253,7 @@ export default class RemotelySavePlugin extends Plugin {
}
this.syncStatus = "checking_password";
const passwordCheckResult = await isPasswordOk(
remoteRsp.Contents,
remoteEntityList,
this.settings.password
);
if (!passwordCheckResult.ok) {
@ -263,43 +266,29 @@ export default class RemotelySavePlugin extends Plugin {
} else {
getNotice(t("syncrun_step4"));
}
this.syncStatus = "getting_remote_extra_meta";
const { remoteStates, metadataFile } = await parseRemoteItems(
remoteRsp.Contents,
this.db,
this.vaultRandomID,
client.serviceType,
this.settings.password
);
const origMetadataOnRemote = await fetchMetadataFile(
metadataFile,
client,
this.syncStatus = "getting_local_meta";
const localEntityList = await getLocalEntityList(
this.app.vault,
this.settings.password
this.settings.syncConfigDir ?? false,
this.app.vault.configDir,
this.manifest.id
);
console.debug("localEntityList:");
console.debug(localEntityList);
if (this.settings.currLogLevel === "info") {
// pass
} else {
getNotice(t("syncrun_step5"));
}
this.syncStatus = "getting_local_meta";
const local = this.app.vault.getAllLoadedFiles();
const localHistory = await loadFileHistoryTableByVault(
this.syncStatus = "getting_local_prev_sync";
const prevSyncEntityList = await getAllPrevSyncRecordsByVaultAndProfile(
this.db,
this.vaultRandomID
this.vaultRandomID,
profileID
);
let localConfigDirContents: ObsConfigDirFileType[] | undefined =
undefined;
if (this.settings.syncConfigDir) {
localConfigDirContents = await listFilesInObsFolder(
this.app.vault.configDir,
this.app.vault,
this.manifest.id
);
}
// log.info(local);
// log.info(localHistory);
console.debug("prevSyncEntityList:");
console.debug(prevSyncEntityList);
if (this.settings.currLogLevel === "info") {
// pass
@ -307,24 +296,31 @@ export default class RemotelySavePlugin extends Plugin {
getNotice(t("syncrun_step6"));
}
this.syncStatus = "generating_plan";
const { plan, sortedKeys, deletions, sizesGoWrong } = await getSyncPlan(
remoteStates,
local,
localConfigDirContents,
origMetadataOnRemote.deletions,
localHistory,
client.serviceType,
triggerSource,
this.app.vault,
let mixedEntityMappings = await ensembleMixedEnties(
localEntityList,
prevSyncEntityList,
remoteEntityList,
this.settings.syncConfigDir ?? false,
this.app.vault.configDir,
this.settings.syncUnderscoreItems ?? false,
this.settings.skipSizeLargerThan ?? -1,
this.settings.ignorePaths ?? [],
this.settings.password
);
log.info(plan.mixedStates); // for debugging
await insertSyncPlanRecordByVault(this.db, plan, this.vaultRandomID);
mixedEntityMappings = await getSyncPlanInplace(
mixedEntityMappings,
this.settings.howToCleanEmptyFolder ?? "skip",
this.settings.skipSizeLargerThan ?? -1,
this.settings.conflictAction ?? "keep_newer",
this.settings.syncDirection ?? "bidirectional"
);
console.info(`mixedEntityMappings:`);
console.info(mixedEntityMappings); // for debugging
await insertSyncPlanRecordByVault(
this.db,
mixedEntityMappings,
this.vaultRandomID,
client.serviceType
);
// The operations above are almost read only and kind of safe.
// The operations below begins to write or delete (!!!) something.
@ -336,32 +332,46 @@ export default class RemotelySavePlugin extends Plugin {
getNotice(t("syncrun_step7"));
}
this.syncStatus = "syncing";
await doActualSync(
mixedEntityMappings,
client,
this.db,
this.vaultRandomID,
profileID,
this.app.vault,
plan,
sortedKeys,
metadataFile,
origMetadataOnRemote,
sizesGoWrong,
deletions,
(key: string) => self.trash(key),
this.settings.password,
this.settings.concurrency,
(ss: FileOrFolderMixedState[]) => {
new SizesConflictModal(
self.app,
self,
this.settings.skipSizeLargerThan ?? -1,
ss,
this.settings.password !== ""
).open();
this.settings.concurrency ?? 5,
(key: string) => self.trash(key),
this.settings.protectModifyPercentage ?? 50,
(
protectModifyPercentage: number,
realModifyDeleteCount: number,
allFilesCount: number
) => {
const percent = (
(100 * realModifyDeleteCount) /
allFilesCount
).toFixed(1);
const res = t("syncrun_abort_protectmodifypercentage", {
protectModifyPercentage,
realModifyDeleteCount,
allFilesCount,
percent,
});
return res;
},
(i: number, totalCount: number, pathName: string, decision: string) =>
self.setCurrSyncMsg(i, totalCount, pathName, decision)
(
realCounter: number,
realTotalCount: number,
pathName: string,
decision: string
) =>
self.setCurrSyncMsg(
realCounter,
realTotalCount,
pathName,
decision
),
this.db
);
} else {
this.syncStatus = "syncing";
@ -382,7 +392,7 @@ export default class RemotelySavePlugin extends Plugin {
this.syncStatus = "idle";
const lastSuccessSyncMillis = Date.now();
await upsertLastSuccessSyncByVault(
await upsertLastSuccessSyncTimeByVault(
this.db,
this.vaultRandomID,
lastSuccessSyncMillis
@ -397,7 +407,7 @@ export default class RemotelySavePlugin extends Plugin {
this.updateLastSuccessSyncMsg(lastSuccessSyncMillis);
}
log.info(
console.info(
`${
this.manifest.id
}-${Date.now()}: finish sync, triggerSource=${triggerSource}`
@ -409,8 +419,8 @@ export default class RemotelySavePlugin extends Plugin {
triggerSource: triggerSource,
syncStatus: this.syncStatus,
});
log.error(msg);
log.error(error);
console.error(msg);
console.error(error);
getNotice(msg, 10 * 1000);
if (error instanceof AggregateError) {
for (const e of error.errors) {
@ -428,7 +438,7 @@ export default class RemotelySavePlugin extends Plugin {
}
async onload() {
log.info(`loading plugin ${this.manifest.id}`);
console.info(`loading plugin ${this.manifest.id}`);
const { iconSvgSyncWait, iconSvgSyncRunning, iconSvgLogs } = getIconSvg();
@ -448,6 +458,9 @@ export default class RemotelySavePlugin extends Plugin {
await this.loadSettings();
// MUST after loadSettings and before prepareDB
const profileID: string = this.getCurrProfileID();
// lang should be load early, but after settings
this.i18n = new I18n(this.settings.lang!, async (lang: LangTypeAndAuto) => {
this.settings.lang = lang;
@ -457,10 +470,6 @@ export default class RemotelySavePlugin extends Plugin {
return this.i18n.t(x, vars);
};
if (this.settings.currLogLevel !== undefined) {
log.setLevel(this.settings.currLogLevel as any);
}
await this.checkIfOauthExpires();
// MUST before prepareDB()
@ -477,7 +486,8 @@ export default class RemotelySavePlugin extends Plugin {
try {
await this.prepareDBAndVaultRandomID(
vaultBasePath,
vaultRandomIDFromOldConfigFile
vaultRandomIDFromOldConfigFile,
profileID
);
} catch (err: any) {
new Notice(
@ -488,7 +498,6 @@ export default class RemotelySavePlugin extends Plugin {
}
// must AFTER preparing DB
this.redirectLoggingOuputBasedOnSetting();
this.enableAutoClearOutputToDBHistIfSet();
// must AFTER preparing DB
@ -496,52 +505,6 @@ export default class RemotelySavePlugin extends Plugin {
this.syncStatus = "idle";
this.registerEvent(
this.app.vault.on("delete", async (fileOrFolder) => {
await insertDeleteRecordByVault(
this.db,
fileOrFolder,
this.vaultRandomID
);
})
);
this.registerEvent(
this.app.vault.on("rename", async (fileOrFolder, oldPath) => {
await insertRenameRecordByVault(
this.db,
fileOrFolder,
oldPath,
this.vaultRandomID
);
})
);
function getMethods(obj: any) {
var result = [];
for (var id in obj) {
try {
if (typeof obj[id] == "function") {
result.push(id + ": " + obj[id].toString());
}
} catch (err) {
result.push(id + ": inaccessible");
}
}
return result.join("\n");
}
this.registerEvent(
this.app.vault.on("raw" as any, async (fileOrFolder) => {
// special track on .obsidian folder
const name = `${fileOrFolder}`;
if (name.startsWith(this.app.vault.configDir)) {
if (!(await this.app.vault.adapter.exists(name))) {
await insertDeleteRecordByVault(this.db, name, this.vaultRandomID);
}
}
})
);
this.registerObsidianProtocolHandler(COMMAND_URI, async (inputParams) => {
const parsed = importQrCodeUri(inputParams, this.app.vault.getName());
if (parsed.status === "error") {
@ -764,13 +727,13 @@ export default class RemotelySavePlugin extends Plugin {
this.statusBarElement.setAttribute("data-tooltip-position", "top");
this.updateLastSuccessSyncMsg(
await getLastSuccessSyncByVault(this.db, this.vaultRandomID)
await getLastSuccessSyncTimeByVault(this.db, this.vaultRandomID)
);
// update statusbar text every 30 seconds
this.registerInterval(
window.setInterval(async () => {
this.updateLastSuccessSyncMsg(
await getLastSuccessSyncByVault(this.db, this.vaultRandomID)
await getLastSuccessSyncTimeByVault(this.db, this.vaultRandomID)
);
}, 1000 * 30)
);
@ -811,12 +774,12 @@ export default class RemotelySavePlugin extends Plugin {
this.addSettingTab(new RemotelySaveSettingTab(this.app, this));
// this.registerDomEvent(document, "click", (evt: MouseEvent) => {
// log.info("click", evt);
// console.info("click", evt);
// });
if (!this.settings.agreeToUploadExtraMetadata) {
const syncAlgoV2Modal = new SyncAlgoV2Modal(this.app, this);
syncAlgoV2Modal.open();
if (!this.settings.agreeToUseSyncV3) {
const syncAlgoV3Modal = new SyncAlgoV3Modal(this.app, this);
syncAlgoV3Modal.open();
} else {
this.enableAutoSyncIfSet();
this.enableInitSyncIfSet();
@ -829,13 +792,10 @@ export default class RemotelySavePlugin extends Plugin {
this.vaultRandomID,
this.manifest.version
);
if (compareVersion(REMOTELY_SAVE_VERSION_2024PREPARE, oldVersion) >= 0) {
new Notice(t("official_notice_2024_first_party"), 10 * 1000);
}
}
async onunload() {
log.info(`unloading plugin ${this.manifest.id}`);
console.info(`unloading plugin ${this.manifest.id}`);
this.syncRibbon = undefined;
if (this.oauth2Info !== undefined) {
this.oauth2Info.helperModal = undefined;
@ -918,6 +878,22 @@ export default class RemotelySavePlugin extends Plugin {
this.settings.s3.bypassCorsLocally = true; // deprecated as of 20240113
}
if (this.settings.agreeToUseSyncV3 === undefined) {
this.settings.agreeToUseSyncV3 = false;
}
if (this.settings.conflictAction === undefined) {
this.settings.conflictAction = "keep_newer";
}
if (this.settings.howToCleanEmptyFolder === undefined) {
this.settings.howToCleanEmptyFolder = "skip";
}
if (this.settings.protectModifyPercentage === undefined) {
this.settings.protectModifyPercentage = 50;
}
if (this.settings.syncDirection === undefined) {
this.settings.syncDirection = "bidirectional";
}
await this.saveSettings();
}
@ -925,6 +901,17 @@ export default class RemotelySavePlugin extends Plugin {
await this.saveData(normalConfigToMessy(this.settings));
}
/**
 * After 2024-03 the data should be profile-based.
*/
getCurrProfileID() {
if (this.settings.serviceType !== undefined) {
return `${this.settings.serviceType}-default-1`;
} else {
throw Error("unknown serviceType in the setting!");
}
}
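// e.g. with serviceType "s3" this yields the profile ID "s3-default-1"; the
// "-default-1" suffix presumably leaves room for multiple profiles per service
// type later.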
async checkIfOauthExpires() {
let needSave: boolean = false;
const current = Date.now();
@ -1004,7 +991,7 @@ export default class RemotelySavePlugin extends Plugin {
// a real string was assigned before
vaultRandomID = this.settings.vaultRandomID;
}
log.debug("vaultRandomID is no longer saved in data.json");
console.debug("vaultRandomID is no longer saved in data.json");
delete this.settings.vaultRandomID;
await this.saveSettings();
}
@ -1034,11 +1021,13 @@ export default class RemotelySavePlugin extends Plugin {
async prepareDBAndVaultRandomID(
vaultBasePath: string,
vaultRandomIDFromOldConfigFile: string
vaultRandomIDFromOldConfigFile: string,
profileID: string
) {
const { db, vaultRandomID } = await prepareDBs(
vaultBasePath,
vaultRandomIDFromOldConfigFile
vaultRandomIDFromOldConfigFile,
profileID
);
this.db = db;
this.vaultRandomID = vaultRandomID;
@ -1084,7 +1073,7 @@ export default class RemotelySavePlugin extends Plugin {
let needToRunAgain = false;
const scheduleSyncOnSave = (scheduleTimeFromNow: number) => {
log.info(
console.info(
`schedule a run ${scheduleTimeFromNow} milliseconds from now`
);
runScheduled = true;
@ -1137,7 +1126,7 @@ export default class RemotelySavePlugin extends Plugin {
}
async saveAgreeToUseNewSyncAlgorithm() {
this.settings.agreeToUploadExtraMetadata = true;
this.settings.agreeToUseSyncV3 = true;
await this.saveSettings();
}
@ -1248,31 +1237,6 @@ export default class RemotelySavePlugin extends Plugin {
}
}
redirectLoggingOuputBasedOnSetting() {
applyLogWriterInplace((...msg: any[]) => {
if (
this.debugServerTemp !== undefined &&
this.debugServerTemp.trim().startsWith("http")
) {
try {
requestUrl({
url: this.debugServerTemp,
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
send_time: Date.now(),
log_text: msg,
}),
});
} catch (e) {
// pass
}
}
});
}
enableAutoClearOutputToDBHistIfSet() {
const initClearOutputToDBHistAfterMilliseconds = 1000 * 30;


@ -1,7 +1,6 @@
import isEqual from "lodash/isEqual";
import { base64url } from "rfc4648";
import { reverseString } from "./misc";
import { log } from "./moreOnLog";
const DEFAULT_README_FOR_METADATAONREMOTE =
"Do NOT edit or delete the file manually. This file is for the plugin remotely-save to store some necessary meta data on the remote services. Its content is slightly obfuscated.";


@ -5,8 +5,6 @@ import { base32, base64url } from "rfc4648";
import XRegExp from "xregexp";
import emojiRegex from "emoji-regex";
import { log } from "./moreOnLog";
declare global {
interface Window {
moment: (...data: any) => any;
@ -30,7 +28,7 @@ export const isHiddenPath = (
}
const k = path.posix.normalize(item); // TODO: only unix path now
const k2 = k.split("/"); // TODO: only unix path now
// log.info(k2)
// console.info(k2)
for (const singlePart of k2) {
if (singlePart === "." || singlePart === ".." || singlePart === "") {
continue;
@ -75,14 +73,14 @@ export const getFolderLevels = (x: string, addEndingSlash: boolean = false) => {
};
export const mkdirpInVault = async (thePath: string, vault: Vault) => {
// log.info(thePath);
// console.info(thePath);
const foldersToBuild = getFolderLevels(thePath);
// log.info(foldersToBuild);
// console.info(foldersToBuild);
for (const folder of foldersToBuild) {
const r = await vault.adapter.exists(folder);
// log.info(r);
// console.info(r);
if (!r) {
log.info(`mkdir ${folder}`);
console.info(`mkdir ${folder}`);
await vault.adapter.mkdir(folder);
}
}
@ -435,7 +433,10 @@ export const statFix = async (vault: Vault, path: string) => {
return s;
};
export const isFolderToSkip = (x: string, more: string[] | undefined) => {
export const isSpecialFolderNameToSkip = (
x: string,
more: string[] | undefined
) => {
let specialFolders = [
".git",
".github",
@ -490,3 +491,15 @@ export const compareVersion = (x: string | null, y: string | null) => {
}
return -1;
};
/**
* https://stackoverflow.com/questions/19929641/how-to-append-an-html-string-to-a-documentfragment
 * Converts an HTML string into a DocumentFragment, to allow some advanced HTML fragments in the UI.
* @param string
* @returns
*/
export const stringToFragment = (string: string) => {
const wrapper = document.createElement("template");
wrapper.innerHTML = string;
return wrapper.content;
};
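// Usage sketch (hypothetical container element): the fragment can be appended
// directly, letting settings text carry inline markup:
//   containerEl.appendChild(stringToFragment("see <em>the docs</em> for details"));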


@ -1,40 +0,0 @@
// It's very dangerous for this file to depend on other files in the same project.
// We should avoid this situation as much as possible.
import { TAbstractFile, TFolder, TFile, Vault } from "obsidian";
import * as origLog from "loglevel";
import type {
LogLevelNumbers,
Logger,
LogLevel,
LogLevelDesc,
LogLevelNames,
} from "loglevel";
const log2 = origLog.getLogger("rs-default");
const originalFactory = log2.methodFactory;
export const applyLogWriterInplace = function (writer: (...msg: any[]) => any) {
log2.methodFactory = function (
methodName: LogLevelNames,
logLevel: LogLevelNumbers,
loggerName: string | symbol
) {
const rawMethod = originalFactory(methodName, logLevel, loggerName);
return function (...msg: any[]) {
rawMethod.apply(undefined, msg);
writer(...msg);
};
};
log2.setLevel(log2.getLevel());
};
export const restoreLogWritterInplace = () => {
log2.methodFactory = originalFactory;
log2.setLevel(log2.getLevel());
};
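// Usage sketch: applyLogWriterInplace tees every loglevel call to an extra
// writer while still logging normally, e.g. (hypothetical writer):
//   applyLogWriterInplace((...msg) => sendToDebugServer(msg));
//   restoreLogWritterInplace(); // undo the redirection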
export const log = log2;

View File

@ -1,16 +1,10 @@
import { Vault, Stat, ListedFiles } from "obsidian";
import type { Vault, Stat, ListedFiles } from "obsidian";
import type { Entity, MixedEntity } from "./baseTypes";
import { Queue } from "@fyears/tsqueue";
import chunk from "lodash/chunk";
import flatten from "lodash/flatten";
import { statFix, isFolderToSkip } from "./misc";
export interface ObsConfigDirFileType {
key: string;
ctime: number;
mtime: number;
size: number;
type: "folder" | "file";
}
import { statFix, isSpecialFolderNameToSkip } from "./misc";
const isPluginDirItself = (x: string, pluginId: string) => {
return (
@ -48,10 +42,10 @@ export const listFilesInObsFolder = async (
configDir: string,
vault: Vault,
pluginId: string
) => {
): Promise<Entity[]> => {
const q = new Queue([configDir]);
const CHUNK_SIZE = 10;
const contents: ObsConfigDirFileType[] = [];
const contents: Entity[] = [];
while (q.length > 0) {
const itemsToFetch: string[] = [];
while (q.length > 0) {
@ -72,11 +66,26 @@ export const listFilesInObsFolder = async (
children = await vault.adapter.list(x);
}
if (
!isFolder &&
(statRes.mtime === undefined ||
statRes.mtime === null ||
statRes.mtime === 0)
) {
throw Error(
`File in Obsidian ${configDir} has last modified time 0: ${x}; we don't know how to deal with it.`
);
}
return {
itself: {
key: isFolder ? `${x}/` : x,
...statRes,
} as ObsConfigDirFileType,
key: isFolder ? `${x}/` : x, // local always unencrypted
keyRaw: isFolder ? `${x}/` : x,
mtimeCli: statRes.mtime,
mtimeSvr: statRes.mtime,
size: statRes.size, // local always unencrypted
sizeRaw: statRes.size,
},
children: children,
};
});
@ -87,7 +96,9 @@ export const listFilesInObsFolder = async (
const isInsideSelfPlugin = isPluginDirItself(iter.itself.key, pluginId);
if (iter.children !== undefined) {
for (const iter2 of iter.children.folders) {
if (isFolderToSkip(iter2, ["workspace", "workspace.json"])) {
if (
isSpecialFolderNameToSkip(iter2, ["workspace", "workspace.json"])
) {
continue;
}
if (isInsideSelfPlugin && !isLikelyPluginSubFiles(iter2)) {
@ -97,7 +108,9 @@ export const listFilesInObsFolder = async (
q.push(iter2);
}
for (const iter2 of iter.children.files) {
if (isFolderToSkip(iter2, ["workspace", "workspace.json"])) {
if (
isSpecialFolderNameToSkip(iter2, ["workspace", "workspace.json"])
) {
continue;
}
if (isInsideSelfPlugin && !isLikelyPluginSubFiles(iter2)) {


@ -1,18 +1,18 @@
import { Vault } from "obsidian";
import type {
Entity,
DropboxConfig,
OnedriveConfig,
S3Config,
SUPPORTED_SERVICES_TYPE,
WebdavConfig,
UploadedType,
} from "./baseTypes";
import * as dropbox from "./remoteForDropbox";
import * as onedrive from "./remoteForOnedrive";
import * as s3 from "./remoteForS3";
import * as webdav from "./remoteForWebdav";
import { log } from "./moreOnLog";
export class RemoteClient {
readonly serviceType: SUPPORTED_SERVICES_TYPE;
readonly s3Config?: S3Config;
@ -111,7 +111,7 @@ export class RemoteClient {
foldersCreatedBefore: Set<string> | undefined = undefined,
uploadRaw: boolean = false,
rawContent: string | ArrayBuffer = ""
) => {
): Promise<UploadedType> => {
if (this.serviceType === "s3") {
return await s3.uploadToRemote(
s3.getS3Client(this.s3Config!),
@ -164,7 +164,7 @@ export class RemoteClient {
}
};
listAllFromRemote = async () => {
listAllFromRemote = async (): Promise<Entity[]> => {
if (this.serviceType === "s3") {
return await s3.listAllFromRemote(
s3.getS3Client(this.s3Config!),


@ -5,9 +5,10 @@ import { Vault } from "obsidian";
import * as path from "path";
import {
DropboxConfig,
RemoteItem,
Entity,
COMMAND_CALLBACK_DROPBOX,
OAUTH2_FORCE_EXPIRE_MILLISECONDS,
UploadedType,
} from "./baseTypes";
import { decryptArrayBuffer, encryptArrayBuffer } from "./encrypt";
import {
@ -20,8 +21,6 @@ import {
export { Dropbox } from "dropbox";
import { log } from "./moreOnLog";
export const DEFAULT_DROPBOX_CONFIG: DropboxConfig = {
accessToken: "",
clientID: process.env.DEFAULT_DROPBOX_APP_KEY ?? "",
@ -42,7 +41,7 @@ export const getDropboxPath = (
// special
key = `/${remoteBaseDir}`;
} else if (fileOrFolderPath.startsWith("/")) {
log.warn(
console.warn(
`why does the path ${fileOrFolderPath} start with '/'? we just go on anyway.`
);
key = `/${remoteBaseDir}${fileOrFolderPath}`;
@ -69,13 +68,13 @@ const getNormPath = (fileOrFolderPath: string, remoteBaseDir: string) => {
return fileOrFolderPath.slice(`/${remoteBaseDir}/`.length);
};
const fromDropboxItemToRemoteItem = (
const fromDropboxItemToEntity = (
x:
| files.FileMetadataReference
| files.FolderMetadataReference
| files.DeletedMetadataReference,
remoteBaseDir: string
): RemoteItem => {
): Entity => {
let key = getNormPath(x.path_display!, remoteBaseDir);
if (x[".tag"] === "folder" && !key.endsWith("/")) {
key = `${key}/`;
@ -83,94 +82,27 @@ const fromDropboxItemToRemoteItem = (
if (x[".tag"] === "folder") {
return {
key: key,
lastModified: undefined,
size: 0,
remoteType: "dropbox",
keyRaw: key,
sizeRaw: 0,
etag: `${x.id}\t`,
} as RemoteItem;
} as Entity;
} else if (x[".tag"] === "file") {
let mtime = Date.parse(x.client_modified).valueOf();
if (mtime === 0) {
mtime = Date.parse(x.server_modified).valueOf();
}
const mtimeCli = Date.parse(x.client_modified).valueOf();
const mtimeSvr = Date.parse(x.server_modified).valueOf();
return {
key: key,
lastModified: mtime,
size: x.size,
remoteType: "dropbox",
keyRaw: key,
mtimeCli: mtimeCli,
mtimeSvr: mtimeSvr,
sizeRaw: x.size,
hash: x.content_hash,
etag: `${x.id}\t${x.content_hash}`,
} as RemoteItem;
} as Entity;
} else {
// x[".tag"] === "deleted"
throw Error("do not support deleted tag");
}
};
/**
* Dropbox api doesn't return mtime for folders.
 * This is an attempt to assign folder mtimes using the files inside them.
* @param allFilesFolders
* @returns
*/
const fixLastModifiedTimeInplace = (allFilesFolders: RemoteItem[]) => {
if (allFilesFolders.length === 0) {
return;
}
// sort from longer to shorter, so children are visited before their parents
allFilesFolders.sort((a, b) => b.key.length - a.key.length);
// a "map" from dir to mtime
let potentialMTime = {} as Record<string, number>;
// first pass, bottom-up
for (const item of allFilesFolders) {
if (item.key.endsWith("/")) {
// itself is a folder, and initially doesn't have mtime
if (item.lastModified === undefined && item.key in potentialMTime) {
// previously we gathered all sub info of this folder
item.lastModified = potentialMTime[item.key];
}
}
const parent = `${path.posix.dirname(item.key)}/`;
if (item.lastModified !== undefined) {
if (parent in potentialMTime) {
potentialMTime[parent] = Math.max(
potentialMTime[parent],
item.lastModified
);
} else {
potentialMTime[parent] = item.lastModified;
}
}
}
// second pass, top-down:
// fill mtime from the parent folder, or Date.now() if still unavailable.
// this only happens if a folder has no files anywhere beneath it.
// we do not sort the array again; we just iterate over it in reverse
// using a good old for loop.
for (let i = allFilesFolders.length - 1; i >= 0; --i) {
const item = allFilesFolders[i];
if (!item.key.endsWith("/")) {
continue; // skip files
}
if (item.lastModified !== undefined) {
continue; // don't need to deal with it
}
const parent = `${path.posix.dirname(item.key)}/`;
if (parent in potentialMTime) {
item.lastModified = potentialMTime[parent];
} else {
item.lastModified = Date.now().valueOf();
potentialMTime[item.key] = item.lastModified;
}
}
return allFilesFolders;
};
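// Worked example (hypothetical listing): given "a/b.md" (mtime 200), "a/"
// (no mtime) and "c/" (no mtime), the bottom-up pass sets "a/" to 200 from its
// child, and the top-down pass falls back to Date.now() for the empty "c/".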
////////////////////////////////////////////////////////////////////////////////
// Dropbox authorization using PKCE
// see https://dropbox.tech/developers/pkce--what-and-why-
@ -235,7 +167,7 @@ export const sendAuthReq = async (
const resp2 = (await resp1.json()) as DropboxSuccessAuthRes;
return resp2;
} catch (e) {
log.error(e);
console.error(e);
if (errorCallBack !== undefined) {
await errorCallBack(e);
}
@ -247,7 +179,7 @@ export const sendRefreshTokenReq = async (
refreshToken: string
) => {
try {
log.info("start auto getting refreshed Dropbox access token.");
console.info("start auto getting refreshed Dropbox access token.");
const resp1 = await fetch("https://api.dropboxapi.com/oauth2/token", {
method: "POST",
body: new URLSearchParams({
@ -257,10 +189,10 @@ export const sendRefreshTokenReq = async (
}),
});
const resp2 = (await resp1.json()) as DropboxSuccessAuthRes;
log.info("finish auto getting refreshed Dropbox access token.");
console.info("finish auto getting refreshed Dropbox access token.");
return resp2;
} catch (e) {
log.error(e);
console.error(e);
throw e;
}
};
@ -270,7 +202,7 @@ export const setConfigBySuccessfullAuthInplace = async (
authRes: DropboxSuccessAuthRes,
saveUpdatedConfigFunc: () => Promise<any> | undefined
) => {
log.info("start updating local info of Dropbox token");
console.info("start updating local info of Dropbox token");
config.accessToken = authRes.access_token;
config.accessTokenExpiresInSeconds = parseInt(authRes.expires_in);
@ -290,7 +222,7 @@ export const setConfigBySuccessfullAuthInplace = async (
await saveUpdatedConfigFunc();
}
log.info("finish updating local info of Dropbox token");
console.info("finish updating local info of Dropbox token");
};
////////////////////////////////////////////////////////////////////////////////
@ -311,7 +243,7 @@ async function retryReq<T>(
for (let idx = 0; idx < waitSeconds.length; ++idx) {
try {
if (idx !== 0) {
log.warn(
console.warn(
`${extraHint === "" ? "" : extraHint + ": "}The ${
idx + 1
}-th try starts at time ${Date.now()}`
@ -348,7 +280,7 @@ async function retryReq<T>(
const fallbackSec = waitSeconds[idx];
const secMin = Math.max(svrSec, fallbackSec);
const secMax = Math.max(secMin * 1.8, 2);
log.warn(
console.warn(
`${
extraHint === "" ? "" : extraHint + ": "
}We have "429 too many requests" error of ${
@ -421,9 +353,9 @@ export class WrappedDropboxClient {
}
// check vault folder
// log.info(`checking remote has folder /${this.remoteBaseDir}`);
// console.info(`checking remote has folder /${this.remoteBaseDir}`);
if (this.vaultFolderExists) {
// log.info(`already checked, /${this.remoteBaseDir} exist before`)
// console.info(`already checked, /${this.remoteBaseDir} exist before`)
} else {
const res = await this.dropbox.filesListFolder({
path: "",
@ -436,7 +368,7 @@ export class WrappedDropboxClient {
}
}
if (!this.vaultFolderExists) {
log.info(`remote does not have folder /${this.remoteBaseDir}`);
console.info(`remote does not have folder /${this.remoteBaseDir}`);
if (hasEmojiInText(`/${this.remoteBaseDir}`)) {
throw new Error(
@ -447,10 +379,10 @@ export class WrappedDropboxClient {
await this.dropbox.filesCreateFolderV2({
path: `/${this.remoteBaseDir}`,
});
log.info(`remote folder /${this.remoteBaseDir} created`);
console.info(`remote folder /${this.remoteBaseDir} created`);
this.vaultFolderExists = true;
} else {
// log.info(`remote folder /${this.remoteBaseDir} exists`);
// console.info(`remote folder /${this.remoteBaseDir} exists`);
}
}
@ -498,7 +430,7 @@ export const getRemoteMeta = async (
// size: 0,
// remoteType: "dropbox",
// etag: undefined,
// } as RemoteItem;
// } as Entity;
// }
const rsp = await retryReq(() =>
@ -512,7 +444,7 @@ export const getRemoteMeta = async (
if (rsp.status !== 200) {
throw Error(JSON.stringify(rsp));
}
return fromDropboxItemToRemoteItem(rsp.result, client.remoteBaseDir);
return fromDropboxItemToEntity(rsp.result, client.remoteBaseDir);
};
export const uploadToRemote = async (
@ -527,11 +459,16 @@ export const uploadToRemote = async (
rawContent: string | ArrayBuffer = "",
rawContentMTime: number = 0,
rawContentCTime: number = 0
) => {
): Promise<UploadedType> => {
await client.init();
let uploadFile = fileOrFolderPath;
if (password !== "") {
if (remoteEncryptedKey === undefined || remoteEncryptedKey === "") {
throw Error(
`uploadToRemote(dropbox): you have a password set but remoteEncryptedKey is empty!`
);
}
uploadFile = remoteEncryptedKey;
}
uploadFile = getDropboxPath(uploadFile, client.remoteBaseDir);
@ -588,7 +525,10 @@ export const uploadToRemote = async (
}
}
const res = await getRemoteMeta(client, uploadFile);
return res;
return {
entity: res,
mtimeCli: mtime,
};
} else {
// if encrypted, upload a fake file with the encrypted file name
await retryReq(
@ -600,7 +540,10 @@ export const uploadToRemote = async (
}),
fileOrFolderPath
);
return await getRemoteMeta(client, uploadFile);
return {
entity: await getRemoteMeta(client, uploadFile),
mtimeCli: mtime,
};
}
} else {
// file
@ -649,7 +592,10 @@ export const uploadToRemote = async (
foldersCreatedBefore?.add(dir);
}
}
return await getRemoteMeta(client, uploadFile);
return {
entity: await getRemoteMeta(client, uploadFile),
mtimeCli: mtime,
};
}
};
@ -664,13 +610,13 @@ export const listAllFromRemote = async (client: WrappedDropboxClient) => {
if (res.status !== 200) {
throw Error(JSON.stringify(res));
}
// log.info(res);
// console.info(res);
const contents = res.result.entries;
const unifiedContents = contents
.filter((x) => x[".tag"] !== "deleted")
.filter((x) => x.path_display !== `/${client.remoteBaseDir}`)
.map((x) => fromDropboxItemToRemoteItem(x, client.remoteBaseDir));
.map((x) => fromDropboxItemToEntity(x, client.remoteBaseDir));
while (res.result.has_more) {
res = await client.dropbox.filesListFolderContinue({
@ -684,15 +630,11 @@ export const listAllFromRemote = async (client: WrappedDropboxClient) => {
const unifiedContents2 = contents2
.filter((x) => x[".tag"] !== "deleted")
.filter((x) => x.path_display !== `/${client.remoteBaseDir}`)
.map((x) => fromDropboxItemToRemoteItem(x, client.remoteBaseDir));
.map((x) => fromDropboxItemToEntity(x, client.remoteBaseDir));
unifiedContents.push(...unifiedContents2);
}
fixLastModifiedTimeInplace(unifiedContents);
return {
Contents: unifiedContents,
};
return unifiedContents;
};
const downloadFromRemoteRaw = async (
@ -792,8 +734,8 @@ export const deleteFromRemote = async (
fileOrFolderPath
);
} catch (err) {
log.error("some error while deleting");
log.error(err);
console.error("some error while deleting");
console.error(err);
}
};
@ -809,7 +751,7 @@ export const checkConnectivity = async (
}
return true;
} catch (err) {
log.debug(err);
console.debug(err);
if (callbackFunc !== undefined) {
callbackFunc(err);
}


@ -14,7 +14,8 @@ import {
DEFAULT_CONTENT_TYPE,
OAUTH2_FORCE_EXPIRE_MILLISECONDS,
OnedriveConfig,
RemoteItem,
Entity,
UploadedType,
} from "./baseTypes";
import { decryptArrayBuffer, encryptArrayBuffer } from "./encrypt";
import {
@ -24,8 +25,6 @@ import {
mkdirpInVault,
} from "./misc";
import { log } from "./moreOnLog";
const SCOPES = ["User.Read", "Files.ReadWrite.AppFolder", "offline_access"];
const REDIRECT_URI = `obsidian://${COMMAND_CALLBACK_ONEDRIVE}`;
@ -116,8 +115,8 @@ export const sendAuthReq = async (
// code: authCode,
// codeVerifier: verifier, // PKCE Code Verifier
// });
// log.info('authResponse')
// log.info(authResponse)
// console.info('authResponse')
// console.info(authResponse)
// return authResponse;
// Because of the CORS problem,
@ -142,7 +141,7 @@ export const sendAuthReq = async (
});
const rsp2 = JSON.parse(rsp1);
// log.info(rsp2);
// console.info(rsp2);
if (rsp2.error !== undefined) {
return rsp2 as AccessCodeResponseFailedType;
@ -150,7 +149,7 @@ export const sendAuthReq = async (
return rsp2 as AccessCodeResponseSuccessfulType;
}
} catch (e) {
log.error(e);
console.error(e);
await errorCallBack(e);
}
};
@ -176,7 +175,7 @@ export const sendRefreshTokenReq = async (
});
const rsp2 = JSON.parse(rsp1);
// log.info(rsp2);
// console.info(rsp2);
if (rsp2.error !== undefined) {
return rsp2 as AccessCodeResponseFailedType;
@ -184,7 +183,7 @@ export const sendRefreshTokenReq = async (
return rsp2 as AccessCodeResponseSuccessfulType;
}
} catch (e) {
log.error(e);
console.error(e);
throw e;
}
};
@ -194,7 +193,7 @@ export const setConfigBySuccessfullAuthInplace = async (
authRes: AccessCodeResponseSuccessfulType,
saveUpdatedConfigFunc: () => Promise<any> | undefined
) => {
log.info("start updating local info of OneDrive token");
console.info("start updating local info of OneDrive token");
config.accessToken = authRes.access_token;
config.accessTokenExpiresAtTime =
Date.now() + authRes.expires_in - 5 * 60 * 1000;
@ -209,7 +208,7 @@ export const setConfigBySuccessfullAuthInplace = async (
await saveUpdatedConfigFunc();
}
log.info("finish updating local info of Onedrive token");
console.info("finish updating local info of Onedrive token");
};
////////////////////////////////////////////////////////////////////////////////
@ -230,7 +229,7 @@ const getOnedrivePath = (fileOrFolderPath: string, remoteBaseDir: string) => {
}
if (key.startsWith("/")) {
log.warn(`why the path ${key} starts with '/'? but we just go on.`);
console.warn(`why the path ${key} starts with '/'? but we just go on.`);
key = `${prefix}${key}`;
} else {
key = `${prefix}/${key}`;
@ -255,16 +254,13 @@ const getNormPath = (fileOrFolderPath: string, remoteBaseDir: string) => {
return fileOrFolderPath.slice(`${prefix}/`.length);
};
const constructFromDriveItemToRemoteItemError = (x: DriveItem) => {
const constructFromDriveItemToEntityError = (x: DriveItem) => {
return `parentPath="${
x.parentReference?.path ?? "(no parentReference or path)"
}", selfName="${x.name}"`;
};
const fromDriveItemToRemoteItem = (
x: DriveItem,
remoteBaseDir: string
): RemoteItem => {
const fromDriveItemToEntity = (x: DriveItem, remoteBaseDir: string): Entity => {
let key = "";
// possible prefix:
@ -333,14 +329,14 @@ const fromDriveItemToRemoteItem = (
key = x.name;
} else {
throw Error(
`we encountered a file/folder and do not know how to deal with it:\n${constructFromDriveItemToRemoteItemError(
`we encountered a file/folder and do not know how to deal with it:\n${constructFromDriveItemToEntityError(
x
)}`
);
}
} else {
throw Error(
`we encountered a file/folder and do not know how to deal with it:\n${constructFromDriveItemToRemoteItemError(
`we encountered a file/folder and do not know how to deal with it:\n${constructFromDriveItemToEntityError(
x
)}`
);
@ -350,11 +346,15 @@ const fromDriveItemToRemoteItem = (
if (isFolder) {
key = `${key}/`;
}
const mtimeSvr = Date.parse(x?.fileSystemInfo!.lastModifiedDateTime!);
const mtimeCli = Date.parse(x?.fileSystemInfo!.lastModifiedDateTime!);
return {
key: key,
lastModified: Date.parse(x!.fileSystemInfo!.lastModifiedDateTime!),
size: isFolder ? 0 : x.size!,
remoteType: "onedrive",
keyRaw: key,
mtimeSvr: mtimeSvr,
mtimeCli: mtimeCli,
sizeRaw: isFolder ? 0 : x.size!,
// hash: ?? // TODO
etag: x.cTag || "", // do NOT use x.eTag because it changes if meta changes
};
};
@ -401,7 +401,7 @@ class MyAuthProvider implements AuthenticationProvider {
this.onedriveConfig.accessTokenExpiresAtTime =
currentTs + r2.expires_in * 1000 - 60 * 2 * 1000;
await this.saveUpdatedConfigFunc();
log.info("Onedrive accessToken updated");
console.info("Onedrive accessToken updated");
return this.onedriveConfig.accessToken;
}
};
@ -435,26 +435,26 @@ export class WrappedOnedriveClient {
}
// check vault folder
// log.info(`checking remote has folder /${this.remoteBaseDir}`);
// console.info(`checking remote has folder /${this.remoteBaseDir}`);
if (this.vaultFolderExists) {
// log.info(`already checked, /${this.remoteBaseDir} exist before`)
// console.info(`already checked, /${this.remoteBaseDir} exist before`)
} else {
const k = await this.getJson("/drive/special/approot/children");
// log.debug(k);
// console.debug(k);
this.vaultFolderExists =
(k.value as DriveItem[]).filter((x) => x.name === this.remoteBaseDir)
.length > 0;
if (!this.vaultFolderExists) {
log.info(`remote does not have folder /${this.remoteBaseDir}`);
console.info(`remote does not have folder /${this.remoteBaseDir}`);
await this.postJson("/drive/special/approot/children", {
name: `${this.remoteBaseDir}`,
folder: {},
"@microsoft.graph.conflictBehavior": "replace",
});
log.info(`remote folder /${this.remoteBaseDir} created`);
console.info(`remote folder /${this.remoteBaseDir} created`);
this.vaultFolderExists = true;
} else {
// log.info(`remote folder /${this.remoteBaseDir} exists`);
// console.info(`remote folder /${this.remoteBaseDir} exists`);
}
}
};
@ -476,7 +476,7 @@ export class WrappedOnedriveClient {
getJson = async (pathFragOrig: string) => {
const theUrl = this.buildUrl(pathFragOrig);
log.debug(`getJson, theUrl=${theUrl}`);
console.debug(`getJson, theUrl=${theUrl}`);
return JSON.parse(
await request({
url: theUrl,
@ -492,7 +492,7 @@ export class WrappedOnedriveClient {
postJson = async (pathFragOrig: string, payload: any) => {
const theUrl = this.buildUrl(pathFragOrig);
log.debug(`postJson, theUrl=${theUrl}`);
console.debug(`postJson, theUrl=${theUrl}`);
return JSON.parse(
await request({
url: theUrl,
@ -508,7 +508,7 @@ export class WrappedOnedriveClient {
patchJson = async (pathFragOrig: string, payload: any) => {
const theUrl = this.buildUrl(pathFragOrig);
log.debug(`patchJson, theUrl=${theUrl}`);
console.debug(`patchJson, theUrl=${theUrl}`);
return JSON.parse(
await request({
url: theUrl,
@ -524,7 +524,7 @@ export class WrappedOnedriveClient {
deleteJson = async (pathFragOrig: string) => {
const theUrl = this.buildUrl(pathFragOrig);
log.debug(`deleteJson, theUrl=${theUrl}`);
console.debug(`deleteJson, theUrl=${theUrl}`);
if (VALID_REQURL) {
await requestUrl({
url: theUrl,
@ -545,7 +545,7 @@ export class WrappedOnedriveClient {
putArrayBuffer = async (pathFragOrig: string, payload: ArrayBuffer) => {
const theUrl = this.buildUrl(pathFragOrig);
log.debug(`putArrayBuffer, theUrl=${theUrl}`);
console.debug(`putArrayBuffer, theUrl=${theUrl}`);
// TODO:
// 20220401: On Android, requestUrl has issue that text becomes base64.
// Use fetch everywhere instead!
@ -588,7 +588,7 @@ export class WrappedOnedriveClient {
size: number
) => {
const theUrl = this.buildUrl(pathFragOrig);
log.debug(
console.debug(
`putUint8ArrayByRange, theUrl=${theUrl}, range=${rangeStart}-${
rangeEnd - 1
}, len=${rangeEnd - rangeStart}, size=${size}`
@ -653,7 +653,7 @@ export const listAllFromRemote = async (client: WrappedOnedriveClient) => {
`/drive/special/approot:/${client.remoteBaseDir}:/delta`
);
let driveItems = res.value as DriveItem[];
// log.debug(driveItems);
// console.debug(driveItems);
while (NEXT_LINK_KEY in res) {
res = await client.getJson(res[NEXT_LINK_KEY]);
@ -666,14 +666,12 @@ export const listAllFromRemote = async (client: WrappedOnedriveClient) => {
await client.saveUpdatedConfigFunc();
}
// unify everything to RemoteItem
// unify everything to Entity
const unifiedContents = driveItems
.map((x) => fromDriveItemToRemoteItem(x, client.remoteBaseDir))
.filter((x) => x.key !== "/");
.map((x) => fromDriveItemToEntity(x, client.remoteBaseDir))
.filter((x) => x.keyRaw !== "/");
return {
Contents: unifiedContents,
};
return unifiedContents;
};
export const getRemoteMeta = async (
@ -681,14 +679,14 @@ export const getRemoteMeta = async (
remotePath: string
) => {
await client.init();
// log.info(`remotePath=${remotePath}`);
// console.info(`remotePath=${remotePath}`);
const rsp = await client.getJson(
`${remotePath}?$select=cTag,eTag,fileSystemInfo,folder,file,name,parentReference,size`
);
// log.info(rsp);
// console.info(rsp);
const driveItem = rsp as DriveItem;
const res = fromDriveItemToRemoteItem(driveItem, client.remoteBaseDir);
// log.info(res);
const res = fromDriveItemToEntity(driveItem, client.remoteBaseDir);
// console.info(res);
return res;
};
@ -702,15 +700,20 @@ export const uploadToRemote = async (
foldersCreatedBefore: Set<string> | undefined = undefined,
uploadRaw: boolean = false,
rawContent: string | ArrayBuffer = ""
) => {
): Promise<UploadedType> => {
await client.init();
let uploadFile = fileOrFolderPath;
if (password !== "") {
if (remoteEncryptedKey === undefined || remoteEncryptedKey === "") {
throw Error(
`uploadToRemote(onedrive): you have a password set but remoteEncryptedKey is empty!`
);
}
uploadFile = remoteEncryptedKey;
}
uploadFile = getOnedrivePath(uploadFile, client.remoteBaseDir);
log.debug(`uploadFile=${uploadFile}`);
console.debug(`uploadFile=${uploadFile}`);
let mtime = 0;
let ctime = 0;
@ -755,7 +758,10 @@ export const uploadToRemote = async (
await client.patchJson(uploadFile, k);
}
const res = await getRemoteMeta(client, uploadFile);
return res;
return {
entity: res,
mtimeCli: mtime,
};
} else {
// if encrypted,
// upload a fake, random-size file
@ -784,9 +790,12 @@ export const uploadToRemote = async (
} as FileSystemInfo,
});
}
// log.info(uploadResult)
// console.info(uploadResult)
const res = await getRemoteMeta(client, uploadFile);
return res;
return {
entity: res,
mtimeCli: mtime,
};
}
} else {
// file
@ -863,8 +872,8 @@ export const uploadToRemote = async (
k
);
const uploadUrl = s.uploadUrl!;
log.debug("uploadSession = ");
log.debug(s);
console.debug("uploadSession = ");
console.debug(s);
// 2. upload by ranges
// convert to uint8
@ -885,7 +894,10 @@ export const uploadToRemote = async (
}
const res = await getRemoteMeta(client, uploadFile);
return res;
return {
entity: res,
mtimeCli: mtime,
};
}
};
@ -981,7 +993,7 @@ export const checkConnectivity = async (
const k = await getUserDisplayName(client);
return k !== "<unknown display name>";
} catch (err) {
log.debug(err);
console.debug(err);
if (callbackFunc !== undefined) {
callbackFunc(err);
}


@ -28,8 +28,9 @@ import * as path from "path";
import AggregateError from "aggregate-error";
import {
DEFAULT_CONTENT_TYPE,
RemoteItem,
Entity,
S3Config,
UploadedType,
VALID_REQURL,
} from "./baseTypes";
import { decryptArrayBuffer, encryptArrayBuffer } from "./encrypt";
@ -41,7 +42,6 @@ import {
export { S3Client } from "@aws-sdk/client-s3";
import { log } from "./moreOnLog";
import PQueue from "p-queue";
////////////////////////////////////////////////////////////////////////////////
@ -220,51 +220,67 @@ const getLocalNoPrefixPath = (
return fileOrFolderPathWithRemotePrefix.slice(`${remotePrefix}`.length);
};
const fromS3ObjectToRemoteItem = (
const fromS3ObjectToEntity = (
x: S3ObjectType,
remotePrefix: string,
mtimeRecords: Record<string, number>,
ctimeRecords: Record<string, number>
) => {
let mtime = x.LastModified!.valueOf();
// console.debug(`fromS3ObjectToEntity: ${x.Key!}, ${JSON.stringify(x,null,2)}`);
// S3 officially only supports whole-second precision!
const mtimeSvr = Math.floor(x.LastModified!.valueOf() / 1000.0) * 1000;
let mtimeCli = mtimeSvr;
if (x.Key! in mtimeRecords) {
const m2 = mtimeRecords[x.Key!];
if (m2 !== 0) {
mtime = m2;
mtimeCli = m2;
}
}
const r: RemoteItem = {
key: getLocalNoPrefixPath(x.Key!, remotePrefix),
lastModified: mtime,
size: x.Size!,
remoteType: "s3",
const key = getLocalNoPrefixPath(x.Key!, remotePrefix);
const r: Entity = {
keyRaw: key,
mtimeSvr: mtimeSvr,
mtimeCli: mtimeCli,
sizeRaw: x.Size!,
etag: x.ETag,
};
return r;
};
const fromS3HeadObjectToRemoteItem = (
const fromS3HeadObjectToEntity = (
fileOrFolderPathWithRemotePrefix: string,
x: HeadObjectCommandOutput,
remotePrefix: string,
useAccurateMTime: boolean
remotePrefix: string
) => {
let mtime = x.LastModified!.valueOf();
if (useAccurateMTime && x.Metadata !== undefined) {
// console.debug(`fromS3HeadObjectToEntity: ${fileOrFolderPathWithRemotePrefix}: ${JSON.stringify(x,null,2)}`);
// S3 officially only supports seconds precision!!!!!
const mtimeSvr = Math.floor(x.LastModified!.valueOf() / 1000.0) * 1000;
let mtimeCli = mtimeSvr;
if (x.Metadata !== undefined) {
const m2 = Math.round(
parseFloat(x.Metadata.mtime || x.Metadata.MTime || "0")
);
if (m2 !== 0) {
mtime = m2;
mtimeCli = m2;
}
}
// console.debug(
// `fromS3HeadObjectToEntity, fileOrFolderPathWithRemotePrefix=${fileOrFolderPathWithRemotePrefix}, remotePrefix=${remotePrefix}, x=${JSON.stringify(
// x
// )} `
// );
const key = getLocalNoPrefixPath(
fileOrFolderPathWithRemotePrefix,
remotePrefix
);
// console.debug(`fromS3HeadObjectToEntity, key=${key} after removing prefix`);
return {
key: getLocalNoPrefixPath(fileOrFolderPathWithRemotePrefix, remotePrefix),
lastModified: mtime,
size: x.ContentLength,
remoteType: "s3",
keyRaw: key,
mtimeSvr: mtimeSvr,
mtimeCli: mtimeCli,
sizeRaw: x.ContentLength,
etag: x.ETag,
} as RemoteItem;
} as Entity;
};
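
As the comments stress, S3 reports `LastModified` with whole-second resolution only, so the millisecond client mtime travels in user metadata and is preferred on read-back. A round-trip sketch, assuming a hypothetical bucket and key:

```ts
// Sketch only: how millisecond mtimes survive S3's seconds-only LastModified.
// Bucket and key are hypothetical.
import {
  S3Client,
  PutObjectCommand,
  HeadObjectCommand,
} from "@aws-sdk/client-s3";

const roundTripMTime = async (s3: S3Client) => {
  const mtime = Date.now(); // millisecond precision
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-bucket",
      Key: "note.md",
      Body: "hello",
      Metadata: { mtime: `${mtime}` }, // sent as the x-amz-meta-mtime header
    })
  );
  const head = await s3.send(
    new HeadObjectCommand({ Bucket: "my-bucket", Key: "note.md" })
  );
  // LastModified is truncated to whole seconds by S3...
  const mtimeSvr = Math.floor(head.LastModified!.valueOf() / 1000.0) * 1000;
  // ...but the user metadata round-trips the exact client value.
  const mtimeCli =
    Math.round(parseFloat(head.Metadata?.mtime ?? "0")) || mtimeSvr;
  return { mtimeSvr, mtimeCli };
};
```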
export const getS3Client = (s3Config: S3Config) => {
@ -330,11 +346,10 @@ export const getRemoteMeta = async (
})
);
return fromS3HeadObjectToRemoteItem(
return fromS3HeadObjectToEntity(
fileOrFolderPathWithRemotePrefix,
res,
s3Config.remotePrefix ?? "",
s3Config.useAccurateMTime ?? false
s3Config.remotePrefix ?? ""
);
};
@ -350,12 +365,19 @@ export const uploadToRemote = async (
rawContent: string | ArrayBuffer = "",
rawContentMTime: number = 0,
rawContentCTime: number = 0
) => {
): Promise<UploadedType> => {
console.debug(`uploading ${fileOrFolderPath}`);
let uploadFile = fileOrFolderPath;
if (password !== "") {
if (remoteEncryptedKey === undefined || remoteEncryptedKey === "") {
throw Error(
`uploadToRemote(s3) you have a password but remoteEncryptedKey is empty!`
);
}
uploadFile = remoteEncryptedKey;
}
uploadFile = getRemoteWithPrefixPath(uploadFile, s3Config.remotePrefix ?? "");
// console.debug(`actual uploadFile=${uploadFile}`);
const isFolder = fileOrFolderPath.endsWith("/");
if (isFolder && isRecursively) {
@ -385,7 +407,11 @@ export const uploadToRemote = async (
},
})
);
return await getRemoteMeta(s3Client, s3Config, uploadFile);
const res = await getRemoteMeta(s3Client, s3Config, uploadFile);
return {
entity: res,
mtimeCli: mtime,
};
} else {
// file
// we ignore isRecursively parameter here
@ -445,11 +471,18 @@ export const uploadToRemote = async (
},
});
upload.on("httpUploadProgress", (progress) => {
// log.info(progress);
// console.info(progress);
});
await upload.done();
return await getRemoteMeta(s3Client, s3Config, uploadFile);
const res = await getRemoteMeta(s3Client, s3Config, uploadFile);
// console.debug(
// `uploaded ${uploadFile} with res=${JSON.stringify(res, null, 2)}`
// );
return {
entity: res,
mtimeCli: mtime,
};
}
};
@ -538,16 +571,14 @@ const listFromRemoteRaw = async (
// assemble fake rsp
// in the end, we need to transform the response list
// back to the local contents-like list
return {
Contents: contents.map((x) =>
fromS3ObjectToRemoteItem(
x,
s3Config.remotePrefix ?? "",
mtimeRecords,
ctimeRecords
)
),
};
return contents.map((x) =>
fromS3ObjectToEntity(
x,
s3Config.remotePrefix ?? "",
mtimeRecords,
ctimeRecords
)
);
};
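
`listFromRemoteRaw` now resolves to a flat `Entity[]` instead of the old S3-style `{ Contents: [...] }` wrapper, so callers iterate the array directly (see the `deleteFromRemote` hunk below). A hypothetical caller:

```ts
// Hypothetical usage of the new flat return shape.
const entities = await listFromRemoteRaw(s3Client, s3Config, "someFolder/");
for (const entity of entities) {
  console.debug(`${entity.keyRaw}: ${entity.sizeRaw ?? 0} bytes`);
}
```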
export const listAllFromRemote = async (
@ -692,7 +723,7 @@ export const deleteFromRemote = async (
if (fileOrFolderPath.endsWith("/") && password === "") {
const x = await listFromRemoteRaw(s3Client, s3Config, remoteFileName);
x.Contents.forEach(async (element) => {
x.forEach(async (element) => {
await s3Client.send(
new DeleteObjectCommand({
Bucket: s3Config.s3BucketName,
@ -740,7 +771,7 @@ export const checkConnectivity = async (
results.$metadata.httpStatusCode === undefined
) {
const err = "results or $metadata or httpStatusCode is undefined";
log.debug(err);
console.debug(err);
if (callbackFunc !== undefined) {
callbackFunc(err);
}
@ -748,7 +779,7 @@ export const checkConnectivity = async (
}
return results.$metadata.httpStatusCode === 200;
} catch (err: any) {
log.debug(err);
console.debug(err);
if (callbackFunc !== undefined) {
if (s3Config.s3Endpoint.contains(s3Config.s3BucketName)) {
const err2 = new AggregateError([

View File

@ -5,12 +5,10 @@ import { Queue } from "@fyears/tsqueue";
import chunk from "lodash/chunk";
import flatten from "lodash/flatten";
import { getReasonPhrase } from "http-status-codes";
import { RemoteItem, VALID_REQURL, WebdavConfig } from "./baseTypes";
import { Entity, UploadedType, VALID_REQURL, WebdavConfig } from "./baseTypes";
import { decryptArrayBuffer, encryptArrayBuffer } from "./encrypt";
import { bufferToArrayBuffer, getPathFolder, mkdirpInVault } from "./misc";
import { log } from "./moreOnLog";
import type {
FileStat,
WebDAVClient,
@ -85,9 +83,9 @@ if (VALID_REQURL) {
}
}
}
// log.info(`requesting url=${options.url}`);
// log.info(`contentType=${contentType}`);
// log.info(`rspHeaders=${JSON.stringify(rspHeaders)}`)
// console.info(`requesting url=${options.url}`);
// console.info(`contentType=${contentType}`);
// console.info(`rspHeaders=${JSON.stringify(rspHeaders)}`)
// let r2: Response = undefined;
// if (contentType.includes("xml")) {
@ -100,9 +98,9 @@ if (VALID_REQURL) {
// contentType.includes("json") ||
// contentType.includes("javascript")
// ) {
// log.info('inside json branch');
// console.info('inside json branch');
// // const j = r.json;
// // log.info(j);
// // console.info(j);
// r2 = new Response(
// r.text, // yea, here is the text because Response constructor expects a text
// {
@ -178,7 +176,7 @@ const getWebdavPath = (fileOrFolderPath: string, remoteBaseDir: string) => {
// special
key = `/${remoteBaseDir}/`;
} else if (fileOrFolderPath.startsWith("/")) {
log.warn(
console.warn(
`why does the path ${fileOrFolderPath} start with '/'? we just go on.`
);
key = `/${remoteBaseDir}${fileOrFolderPath}`;
@ -205,18 +203,19 @@ const getNormPath = (fileOrFolderPath: string, remoteBaseDir: string) => {
return fileOrFolderPath.slice(`/${remoteBaseDir}/`.length);
};
const fromWebdavItemToRemoteItem = (x: FileStat, remoteBaseDir: string) => {
const fromWebdavItemToEntity = (x: FileStat, remoteBaseDir: string) => {
let key = getNormPath(x.filename, remoteBaseDir);
if (x.type === "directory" && !key.endsWith("/")) {
key = `${key}/`;
}
const mtimeSvr = Date.parse(x.lastmod).valueOf();
return {
key: key,
lastModified: Date.parse(x.lastmod).valueOf(),
size: x.size,
remoteType: "webdav",
etag: x.etag || undefined,
} as RemoteItem;
keyRaw: key,
mtimeSvr: mtimeSvr,
mtimeCli: mtimeSvr, // no universal way to set mtime in webdav
sizeRaw: x.size,
etag: x.etag,
} as Entity;
};
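
Since WebDAV has no portable way to preserve a client-side mtime, the converter above reuses the parsed server `lastmod` for both fields. A small illustration of the parse (the timestamp is hypothetical):

```ts
// Illustrative only; the lastmod value is made up.
const lastmod = "Mon, 18 Mar 2024 00:04:28 GMT"; // typical WebDAV getlastmodified value
const mtimeSvr = Date.parse(lastmod).valueOf();  // milliseconds since the epoch
const mtimeCli = mtimeSvr; // no universal way to set mtime over WebDAV
```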
export class WrappedWebdavClient {
@ -258,7 +257,7 @@ export class WrappedWebdavClient {
: AuthType.Password,
});
} else {
log.info("no password");
console.info("no password");
this.client = createClient(this.webdavConfig.address, {
headers: headers,
});
@ -270,12 +269,12 @@ export class WrappedWebdavClient {
} else {
const res = await this.client.exists(`/${this.remoteBaseDir}/`);
if (res) {
// log.info("remote vault folder exists!");
// console.info("remote vault folder exists!");
this.vaultFolderExists = true;
} else {
log.info("remote vault folder not exists, creating");
console.info("remote vault folder not exists, creating");
await this.client.createDirectory(`/${this.remoteBaseDir}/`);
log.info("remote vault folder created!");
console.info("remote vault folder created!");
this.vaultFolderExists = true;
}
}
@ -291,7 +290,7 @@ export class WrappedWebdavClient {
this.webdavConfig.manualRecursive = true;
if (this.saveUpdatedConfigFunc !== undefined) {
await this.saveUpdatedConfigFunc();
log.info(
console.info(
`webdav depth="auto_???" is changed to ${this.webdavConfig.depth}`
);
}
@ -322,12 +321,12 @@ export const getRemoteMeta = async (
remotePath: string
) => {
await client.init();
log.debug(`getRemoteMeta remotePath = ${remotePath}`);
console.debug(`getRemoteMeta remotePath = ${remotePath}`);
const res = (await client.client.stat(remotePath, {
details: false,
})) as FileStat;
log.debug(`getRemoteMeta res=${JSON.stringify(res)}`);
return fromWebdavItemToRemoteItem(res, client.remoteBaseDir);
console.debug(`getRemoteMeta res=${JSON.stringify(res)}`);
return fromWebdavItemToEntity(res, client.remoteBaseDir);
};
export const uploadToRemote = async (
@ -339,10 +338,15 @@ export const uploadToRemote = async (
remoteEncryptedKey: string = "",
uploadRaw: boolean = false,
rawContent: string | ArrayBuffer = ""
) => {
): Promise<UploadedType> => {
await client.init();
let uploadFile = fileOrFolderPath;
if (password !== "") {
if (remoteEncryptedKey === undefined || remoteEncryptedKey === "") {
throw Error(
`uploadToRemote(webdav) you have a password but remoteEncryptedKey is empty!`
);
}
uploadFile = remoteEncryptedKey;
}
uploadFile = getWebdavPath(uploadFile, client.remoteBaseDir);
@ -359,25 +363,30 @@ export const uploadToRemote = async (
if (password === "") {
// if not encrypted, mkdir a remote folder
await client.client.createDirectory(uploadFile, {
recursive: false, // the sync algo should guarantee there is no need for recursion
recursive: true,
});
const res = await getRemoteMeta(client, uploadFile);
return res;
return {
entity: res,
};
} else {
// if encrypted, upload a fake file with the encrypted file name
await client.client.putFileContents(uploadFile, "", {
overwrite: true,
onUploadProgress: (progress: any) => {
// log.info(`Uploaded ${progress.loaded} bytes of ${progress.total}`);
// console.info(`Uploaded ${progress.loaded} bytes of ${progress.total}`);
},
});
return await getRemoteMeta(client, uploadFile);
return {
entity: await getRemoteMeta(client, uploadFile),
};
}
} else {
// file
// we ignore isRecursively parameter here
let localContent = undefined;
let localContent: ArrayBuffer | undefined = undefined;
let mtimeCli: number | undefined = undefined;
if (uploadRaw) {
if (typeof rawContent === "string") {
localContent = new TextEncoder().encode(rawContent).buffer;
@ -391,6 +400,7 @@ export const uploadToRemote = async (
);
}
localContent = await vault.adapter.readBinary(fileOrFolderPath);
mtimeCli = (await vault.adapter.stat(fileOrFolderPath))?.mtime;
}
let remoteContent = localContent;
if (password !== "") {
@ -400,16 +410,19 @@ export const uploadToRemote = async (
// // we need to create folders before uploading
// const dir = getPathFolder(uploadFile);
// if (dir !== "/" && dir !== "") {
// await client.client.createDirectory(dir, { recursive: false });
// await client.client.createDirectory(dir, { recursive: true });
// }
await client.client.putFileContents(uploadFile, remoteContent, {
overwrite: true,
onUploadProgress: (progress: any) => {
log.info(`Uploaded ${progress.loaded} bytes of ${progress.total}`);
console.info(`Uploaded ${progress.loaded} bytes of ${progress.total}`);
},
});
return await getRemoteMeta(client, uploadFile);
return {
entity: await getRemoteMeta(client, uploadFile),
mtimeCli: mtimeCli,
};
}
};
@ -434,7 +447,7 @@ export const listAllFromRemote = async (client: WrappedWebdavClient) => {
itemsToFetch.push(q.pop()!);
}
const itemsToFetchChunks = chunk(itemsToFetch, CHUNK_SIZE);
// log.debug(itemsToFetchChunks);
// console.debug(itemsToFetchChunks);
const subContents = [] as FileStat[];
for (const singleChunk of itemsToFetchChunks) {
const r = singleChunk.map((x) => {
@ -472,11 +485,7 @@ export const listAllFromRemote = async (client: WrappedWebdavClient) => {
}
)) as FileStat[];
}
return {
Contents: contents.map((x) =>
fromWebdavItemToRemoteItem(x, client.remoteBaseDir)
),
};
return contents.map((x) => fromWebdavItemToEntity(x, client.remoteBaseDir));
};
const downloadFromRemoteRaw = async (
@ -484,7 +493,7 @@ const downloadFromRemoteRaw = async (
remotePath: string
) => {
await client.init();
// log.info(`getWebdavPath=${remotePath}`);
// console.info(`getWebdavPath=${remotePath}`);
const buff = (await client.client.getFileContents(remotePath)) as BufferLike;
if (buff instanceof ArrayBuffer) {
return buff;
@ -524,7 +533,7 @@ export const downloadFromRemote = async (
downloadFile = remoteEncryptedKey;
}
downloadFile = getWebdavPath(downloadFile, client.remoteBaseDir);
// log.info(`downloadFile=${downloadFile}`);
// console.info(`downloadFile=${downloadFile}`);
const remoteContent = await downloadFromRemoteRaw(client, downloadFile);
let localContent = remoteContent;
if (password !== "") {
@ -557,10 +566,10 @@ export const deleteFromRemote = async (
await client.init();
try {
await client.client.deleteFile(remoteFileName);
// log.info(`delete ${remoteFileName} succeeded`);
// console.info(`delete ${remoteFileName} succeeded`);
} catch (err) {
log.error("some error while deleting");
log.error(err);
console.error("some error while deleting");
console.error(err);
}
};
@ -575,7 +584,7 @@ export const checkConnectivity = async (
)
) {
const err = "Error: the url should start with http(s):// but it does not!";
log.error(err);
console.error(err);
if (callbackFunc !== undefined) {
callbackFunc(err);
}
@ -586,7 +595,7 @@ export const checkConnectivity = async (
const results = await getRemoteMeta(client, `/${client.remoteBaseDir}/`);
if (results === undefined) {
const err = "results is undefined";
log.error(err);
console.error(err);
if (callbackFunc !== undefined) {
callbackFunc(err);
}
@ -594,7 +603,7 @@ export const checkConnectivity = async (
}
return true;
} catch (err) {
log.error(err);
console.error(err);
if (callbackFunc !== undefined) {
callbackFunc(err);
}

View File

@ -13,9 +13,12 @@ import { createElement, Eye, EyeOff } from "lucide";
import {
API_VER_ENSURE_REQURL_OK,
API_VER_REQURL,
ConflictActionType,
DEFAULT_DEBUG_FOLDER,
EmptyFolderCleanType,
SUPPORTED_SERVICES_TYPE,
SUPPORTED_SERVICES_TYPE_WITH_REMOTE_BASE_DIR,
SyncDirectionType,
VALID_REQURL,
WebdavAuthType,
WebdavDepthType,
@ -23,10 +26,10 @@ import {
import { exportVaultSyncPlansToFiles } from "./debugMode";
import { exportQrCodeUri } from "./importExport";
import {
clearAllSyncMetaMapping,
clearAllPrevSyncRecordByVault,
clearAllSyncPlanRecords,
destroyDBs,
upsertLastSuccessSyncByVault,
upsertLastSuccessSyncTimeByVault,
} from "./localdb";
import type RemotelySavePlugin from "./main"; // unavoidable
import { RemoteClient } from "./remote";
@ -42,13 +45,7 @@ import {
} from "./remoteForOnedrive";
import { messyConfigToNormal } from "./configPersist";
import type { TransItemType } from "./i18n";
import { checkHasSpecialCharForDir } from "./misc";
import {
applyLogWriterInplace,
log,
restoreLogWritterInplace,
} from "./moreOnLog";
import { checkHasSpecialCharForDir, stringToFragment } from "./misc";
import { simpleTransRemotePrefix } from "./remoteForS3";
class PasswordModal extends Modal {
@ -450,7 +447,7 @@ class DropboxAuthModal extends Modal {
);
this.close();
} catch (err) {
log.error(err);
console.error(err);
new Notice(t("modal_dropboxauth_maualinput_conn_fail"));
}
});
@ -586,7 +583,7 @@ export class OnedriveRevokeAuthModal extends Modal {
new Notice(t("modal_onedriverevokeauth_clean_notice"));
this.close();
} catch (err) {
log.error(err);
console.error(err);
new Notice(t("modal_onedriverevokeauth_clean_fail"));
}
});
@ -713,65 +710,6 @@ class ExportSettingsQrCodeModal extends Modal {
}
}
class SetLogToHttpServerModal extends Modal {
plugin: RemotelySavePlugin;
serverAddr: string;
callBack: any;
constructor(
app: App,
plugin: RemotelySavePlugin,
serverAddr: string,
callBack: any
) {
super(app);
this.plugin = plugin;
this.serverAddr = serverAddr;
this.callBack = callBack;
}
onOpen() {
let { contentEl } = this;
const t = (x: TransItemType, vars?: any) => {
return this.plugin.i18n.t(x, vars);
};
contentEl.createEl("h2", { text: t("modal_logtohttpserver_title") });
const div1 = contentEl.createDiv();
div1.addClass("logtohttpserver-warning");
t("modal_logtohttpserver_desc")
.split("\n")
.forEach((val) => {
div1.createEl("p", {
text: val,
});
});
new Setting(contentEl)
.addButton((button) => {
button.setButtonText(t("modal_logtohttpserver_secondconfirm"));
button.setClass("logtohttpserver-warning");
button.onClick(async () => {
this.callBack();
new Notice(t("modal_logtohttpserver_notice"));
this.close();
});
})
.addButton((button) => {
button.setButtonText(t("goback"));
button.onClick(() => {
this.close();
});
});
}
onClose() {
let { contentEl } = this;
contentEl.empty();
}
}
const getEyesElements = () => {
const eyeEl = createElement(Eye);
const eyeOffEl = createElement(EyeOff);
@ -1151,7 +1089,7 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
);
new Notice(t("settings_dropbox_revoke_notice"));
} catch (err) {
log.error(err);
console.error(err);
new Notice(t("settings_dropbox_revoke_noticeerr"));
}
});
@ -1721,7 +1659,7 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
realVal > 0
) {
const intervalID = window.setInterval(() => {
log.info("auto run from settings.ts");
console.info("auto run from settings.ts");
this.plugin.syncRun("auto");
}, realVal);
this.plugin.autoRunIntervalID = intervalID;
@ -1793,7 +1731,7 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
// then schedule a run for syncOnSaveAfterMilliseconds after it was modified
const lastModified = currentFile.stat.mtime;
const currentTime = Date.now();
// log.debug(
// console.debug(
// `Checking if file was modified within last ${
// this.plugin.settings.syncOnSaveAfterMilliseconds / 1000
// } seconds, last modified: ${
@ -1808,7 +1746,7 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
const scheduleTimeFromNow =
this.plugin.settings.syncOnSaveAfterMilliseconds! -
(currentTime - lastModified);
log.info(
console.info(
`schedule a run for ${scheduleTimeFromNow} milliseconds later`
);
runScheduled = true;
@ -1866,7 +1804,7 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
button.setButtonText(t("settings_resetstatusbar_button"));
button.onClick(async () => {
// reset last sync time
await upsertLastSuccessSyncByVault(
await upsertLastSuccessSyncTimeByVault(
this.plugin.db,
this.plugin.vaultRandomID,
-1
@ -1993,6 +1931,92 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
});
});
new Setting(advDiv)
.setName(t("settings_conflictaction"))
.setDesc(t("settings_conflictaction_desc"))
.addDropdown((dropdown) => {
dropdown.addOption(
"keep_newer",
t("settings_conflictaction_keep_newer")
);
dropdown.addOption(
"keep_larger",
t("settings_conflictaction_keep_larger")
);
dropdown
.setValue(this.plugin.settings.conflictAction ?? "keep_newer")
.onChange(async (val) => {
this.plugin.settings.conflictAction = val as ConflictActionType;
await this.plugin.saveSettings();
});
});
new Setting(advDiv)
.setName(t("settings_cleanemptyfolder"))
.setDesc(t("settings_cleanemptyfolder_desc"))
.addDropdown((dropdown) => {
dropdown.addOption("skip", t("settings_cleanemptyfolder_skip"));
dropdown.addOption(
"clean_both",
t("settings_cleanemptyfolder_clean_both")
);
dropdown
.setValue(this.plugin.settings.howToCleanEmptyFolder ?? "skip")
.onChange(async (val) => {
this.plugin.settings.howToCleanEmptyFolder =
val as EmptyFolderCleanType;
await this.plugin.saveSettings();
});
});
new Setting(advDiv)
.setName(t("settings_protectmodifypercentage"))
.setDesc(t("settings_protectmodifypercentage_desc"))
.addDropdown((dropdown) => {
for (const i of Array.from({ length: 11 }, (x, i) => i * 10)) {
let desc = `${i}`;
if (i === 0) {
desc = t("settings_protectmodifypercentage_000_desc");
} else if (i === 50) {
desc = t("settings_protectmodifypercentage_050_desc");
} else if (i === 100) {
desc = t("settings_protectmodifypercentage_100_desc");
}
dropdown.addOption(`${i}`, desc);
}
dropdown
.setValue(`${this.plugin.settings.protectModifyPercentage ?? 50}`)
.onChange(async (val) => {
this.plugin.settings.protectModifyPercentage = parseInt(val);
await this.plugin.saveSettings();
});
});
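
The percentage dropdown above generates its options from a stepped range; spelled out:

```ts
// Equivalent of the generator used above.
const options = Array.from({ length: 11 }, (_, i) => i * 10);
// -> [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
// 0, 50 and 100 get special descriptions; the rest display as bare numbers.
```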
new Setting(advDiv)
.setName(t("setting_syncdirection"))
.setDesc(t("setting_syncdirection_desc"))
.addDropdown((dropdown) => {
dropdown.addOption(
"bidirectional",
t("setting_syncdirection_bidirectional_desc")
);
dropdown.addOption(
"incremental_push_only",
t("setting_syncdirection_incremental_push_only_desc")
);
dropdown.addOption(
"incremental_pull_only",
t("setting_syncdirection_incremental_pull_only_desc")
);
dropdown
.setValue(this.plugin.settings.syncDirection ?? "bidirectional")
.onChange(async (val) => {
this.plugin.settings.syncDirection = val as SyncDirectionType;
await this.plugin.saveSettings();
});
});
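
For reference, the unions these dropdown values map onto presumably contain at least the keys used above; the real definitions are imported from `baseTypes.ts` and may have more members:

```ts
// Sketch only -- inferred from the dropdown option keys above.
type ConflictActionType = "keep_newer" | "keep_larger";
type EmptyFolderCleanType = "skip" | "clean_both";
type SyncDirectionType =
  | "bidirectional"
  | "incremental_push_only"
  | "incremental_pull_only";
```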
//////////////////////////////////////////////////
// below for import and export functions
//////////////////////////////////////////////////
@ -2034,9 +2058,8 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
.setValue(this.plugin.settings.currLogLevel ?? "info")
.onChange(async (val: string) => {
this.plugin.settings.currLogLevel = val;
log.setLevel(val as any);
await this.plugin.saveSettings();
log.info(`the log level is changed to ${val}`);
console.info(`the log level is changed to ${val}`);
});
});
@ -2047,11 +2070,15 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
button.setButtonText(t("settings_outputsettingsconsole_button"));
button.onClick(async () => {
const c = messyConfigToNormal(await this.plugin.loadData());
log.info(c);
console.info(c);
new Notice(t("settings_outputsettingsconsole_notice"));
});
});
new Setting(debugDiv)
.setName(t("settings_viewconsolelog"))
.setDesc(stringToFragment(t("settings_viewconsolelog_desc")));
new Setting(debugDiv)
.setName(t("settings_syncplans"))
.setDesc(t("settings_syncplans_desc"))
@ -2078,61 +2105,17 @@ export class RemotelySaveSettingTab extends PluginSettingTab {
});
});
let logToHttpServer = this.plugin.debugServerTemp || "";
new Setting(debugDiv)
.setName(t("settings_logtohttpserver"))
.setDesc(t("settings_logtohttpserver_desc"))
.addText(async (text) => {
text.setValue(logToHttpServer).onChange(async (value) => {
logToHttpServer = value.trim();
});
})
.setName(t("settings_delprevsync"))
.setDesc(t("settings_delprevsync_desc"))
.addButton(async (button) => {
button.setButtonText(t("confirm"));
button.setButtonText(t("settings_delprevsync_button"));
button.onClick(async () => {
if (logToHttpServer === "" || !logToHttpServer.startsWith("http")) {
this.plugin.debugServerTemp = "";
logToHttpServer = "";
// restoreLogWritterInplace();
new Notice(t("settings_logtohttpserver_reset_notice"));
} else {
new SetLogToHttpServerModal(
this.app,
this.plugin,
logToHttpServer,
() => {
this.plugin.debugServerTemp = logToHttpServer;
// applyLogWriterInplace((...msg: any[]) => {
// try {
// requestUrl({
// url: logToHttpServer,
// method: "POST",
// headers: {
// "Content-Type": "application/json",
// },
// body: JSON.stringify({
// send_time: Date.now(),
// log_text: msg,
// }),
// });
// } catch (e) {
// // pass
// }
// });
}
).open();
}
});
});
new Setting(debugDiv)
.setName(t("settings_delsyncmap"))
.setDesc(t("settings_delsyncmap_desc"))
.addButton(async (button) => {
button.setButtonText(t("settings_delsyncmap_button"));
button.onClick(async () => {
await clearAllSyncMetaMapping(this.plugin.db);
new Notice(t("settings_delsyncmap_notice"));
await clearAllPrevSyncRecordByVault(
this.plugin.db,
this.plugin.vaultRandomID
);
new Notice(t("settings_delprevsync_notice"));
});
});

File diff suppressed because it is too large

View File

@ -1,64 +0,0 @@
import { App, Modal, Notice, PluginSettingTab, Setting } from "obsidian";
import type RemotelySavePlugin from "./main"; // unavoidable
import type { TransItemType } from "./i18n";
import { log } from "./moreOnLog";
export class SyncAlgoV2Modal extends Modal {
agree: boolean;
readonly plugin: RemotelySavePlugin;
constructor(app: App, plugin: RemotelySavePlugin) {
super(app);
this.plugin = plugin;
this.agree = false;
}
onOpen() {
let { contentEl } = this;
const t = (x: TransItemType, vars?: any) => {
return this.plugin.i18n.t(x, vars);
};
contentEl.createEl("h2", {
text: t("syncalgov2_title"),
});
const ul = contentEl.createEl("ul");
t("syncalgov2_texts")
.split("\n")
.forEach((val) => {
ul.createEl("li", {
text: val,
});
});
new Setting(contentEl)
.addButton((button) => {
button.setButtonText(t("syncalgov2_button_agree"));
button.onClick(async () => {
this.agree = true;
this.close();
});
})
.addButton((button) => {
button.setButtonText(t("syncalgov2_button_disagree"));
button.onClick(() => {
this.close();
});
});
}
onClose() {
let { contentEl } = this;
contentEl.empty();
if (this.agree) {
log.info("agree to use the new algorithm");
this.plugin.saveAgreeToUseNewSyncAlgorithm();
this.plugin.enableAutoSyncIfSet();
this.plugin.enableInitSyncIfSet();
this.plugin.enableSyncOnSaveIfSet();
} else {
log.info("do not agree to use the new algorithm");
this.plugin.unload();
}
}
}

src/syncAlgoV3Notice.ts Normal file
View File

@ -0,0 +1,128 @@
import { App, Modal, Notice, PluginSettingTab, Setting } from "obsidian";
import type RemotelySavePlugin from "./main"; // unavoidable
import type { TransItemType } from "./i18n";
import { stringToFragment } from "./misc";
export class SyncAlgoV3Modal extends Modal {
agree: boolean;
manualBackup: boolean;
requireUpdateAllDev: boolean;
readonly plugin: RemotelySavePlugin;
constructor(app: App, plugin: RemotelySavePlugin) {
super(app);
this.plugin = plugin;
this.agree = false;
this.manualBackup = false;
this.requireUpdateAllDev = false;
}
onOpen() {
let { contentEl } = this;
const t = (x: TransItemType, vars?: any) => {
return this.plugin.i18n.t(x, vars);
};
contentEl.createEl("h2", {
text: t("syncalgov3_title"),
});
const ul = contentEl.createEl("ul");
t("syncalgov3_texts")
.split("\n")
.forEach((val) => {
ul.createEl("li", {
text: stringToFragment(val),
});
});
// code modified partially from BART released under MIT License
contentEl.createDiv("modal-button-container", (buttonContainerEl) => {
let agreeBtn: HTMLButtonElement | undefined = undefined;
buttonContainerEl.createEl(
"label",
{
cls: "mod-checkbox",
},
(labelEl) => {
const checkboxEl = labelEl.createEl("input", {
attr: { tabindex: -1 },
type: "checkbox",
});
checkboxEl.checked = this.manualBackup;
checkboxEl.addEventListener("click", () => {
this.manualBackup = checkboxEl.checked;
if (agreeBtn !== undefined) {
if (this.manualBackup && this.requireUpdateAllDev) {
agreeBtn.removeAttribute("disabled");
} else {
agreeBtn.setAttr("disabled", true);
}
}
});
labelEl.appendText(t("syncalgov3_checkbox_manual_backup"));
}
);
buttonContainerEl.createEl(
"label",
{
cls: "mod-checkbox",
},
(labelEl) => {
const checkboxEl = labelEl.createEl("input", {
attr: { tabindex: -1 },
type: "checkbox",
});
checkboxEl.checked = this.requireUpdateAllDev;
checkboxEl.addEventListener("click", () => {
this.requireUpdateAllDev = checkboxEl.checked;
if (agreeBtn !== undefined) {
if (this.manualBackup && this.requireUpdateAllDev) {
agreeBtn.removeAttribute("disabled");
} else {
agreeBtn.setAttr("disabled", true);
}
}
});
labelEl.appendText(t("syncalgov3_checkbox_requiremultidevupdate"));
}
);
agreeBtn = buttonContainerEl.createEl("button", {
attr: { type: "button" },
cls: "mod-cta",
text: t("syncalgov3_button_agree"),
});
agreeBtn.setAttr("disabled", true);
agreeBtn.addEventListener("click", () => {
this.agree = true;
this.close();
});
buttonContainerEl
.createEl("button", {
attr: { type: "submit" },
text: t("syncalgov3_button_disagree"),
})
.addEventListener("click", () => {
this.close();
});
});
}
onClose() {
let { contentEl } = this;
contentEl.empty();
if (this.agree) {
console.info("agree to use the new algorithm");
this.plugin.saveAgreeToUseNewSyncAlgorithm();
this.plugin.enableAutoSyncIfSet();
this.plugin.enableInitSyncIfSet();
this.plugin.enableSyncOnSaveIfSet();
} else {
console.info("do not agree to use the new algorithm");
this.plugin.unload();
}
}
}

View File

@ -1,90 +0,0 @@
import { App, Modal, Notice, PluginSettingTab, Setting } from "obsidian";
import type RemotelySavePlugin from "./main"; // unavoidable
import type { TransItemType } from "./i18n";
import type { FileOrFolderMixedState } from "./baseTypes";
import { log } from "./moreOnLog";
export class SizesConflictModal extends Modal {
readonly plugin: RemotelySavePlugin;
readonly skipSizeLargerThan: number;
readonly sizesGoWrong: FileOrFolderMixedState[];
readonly hasPassword: boolean;
constructor(
app: App,
plugin: RemotelySavePlugin,
skipSizeLargerThan: number,
sizesGoWrong: FileOrFolderMixedState[],
hasPassword: boolean
) {
super(app);
this.plugin = plugin;
this.skipSizeLargerThan = skipSizeLargerThan;
this.sizesGoWrong = sizesGoWrong;
this.hasPassword = hasPassword;
}
onOpen() {
let { contentEl } = this;
const t = (x: TransItemType, vars?: any) => {
return this.plugin.i18n.t(x, vars);
};
contentEl.createEl("h2", {
text: t("modal_sizesconflict_title"),
});
t("modal_sizesconflict_desc", {
thresholdMB: `${this.skipSizeLargerThan / 1000 / 1000}`,
thresholdBytes: `${this.skipSizeLargerThan}`,
})
.split("\n")
.forEach((val) => {
contentEl.createEl("p", { text: val });
});
const info = this.serialize();
contentEl.createDiv().createEl(
"button",
{
text: t("modal_sizesconflict_copybutton"),
},
(el) => {
el.onclick = async () => {
await navigator.clipboard.writeText(info);
new Notice(t("modal_sizesconflict_copynotice"));
};
}
);
contentEl.createEl("pre", {
text: info,
});
}
serialize() {
return this.sizesGoWrong
.map((x) => {
return [
x.key,
this.hasPassword
? `encrypted name: ${x.remoteEncryptedKey}`
: undefined,
`local ${this.hasPassword ? "encrypted " : ""}bytes: ${
this.hasPassword ? x.sizeLocalEnc : x.sizeLocal
}`,
`remote ${this.hasPassword ? "encrypted " : ""}bytes: ${
this.hasPassword ? x.sizeRemoteEnc : x.sizeRemote
}`,
]
.filter((tmp) => tmp !== undefined)
.join("\n");
})
.join("\n\n");
}
onClose() {
let { contentEl } = this;
contentEl.empty();
}
}