If rclone is run with the --rc flag then it starts an HTTP server which can be used to remote control rclone using its API.
You can either use the rc command to access the API or use HTTP directly.
If you just want to run a remote control then see the rcd command.
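For example, a standalone remote control server could be started like this (the user name, password and port shown are only placeholders):
rclone rcd --rc-user=admin --rc-pass=secret --rc-addr=:5572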
Supported parameters:
--rc: Flag to start the HTTP server listening for remote requests.
--rc-addr: IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert: SSL PEM key (concatenation of certificate and CA certificate).
--rc-client-ca: Client certificate authority to verify clients with.
--rc-htpasswd: htpasswd file - if not provided no authentication is done.
--rc-key: SSL PEM private key.
--rc-max-header-bytes: Maximum size of request header. (default 4096)
--rc-min-tls-version: The minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3". (default "tls1.0")
--rc-user: User name for authentication.
--rc-pass: Password for authentication.
--rc-realm: Realm for authentication. (default "rclone")
--rc-server-read-timeout: Timeout for server reading data. (default 1h0m0s)
--rc-server-write-timeout: Timeout for server writing data. (default 1h0m0s)
--rc-serve: Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object. Default Off.
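With this enabled you could, for example, fetch a listing or an object over plain HTTP (the remote name and path here are hypothetical):
curl http://127.0.0.1:5572/
curl "http://127.0.0.1:5572/[mydrive:]/path/to/file.txt"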
--rc-serve-no-modtime: Set this flag to skip reading the modification time (can speed things up). Default Off.
--rc-files: Path to local files to serve on the HTTP server. If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions. If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style. Default Off.
--rc-enable-metrics: Enable OpenMetrics/Prometheus compatible endpoint at /metrics. If more control over the metrics is desired (for example running it on a different port or with different auth) then the endpoint can be enabled with the --metrics-* flags instead. Default Off.
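Once enabled, the endpoint can be scraped directly, assuming the default rc address and no authentication:
curl http://localhost:5572/metrics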
--rc-web-gui: Set this flag to serve the default web gui on the same port as rclone. Default Off.
--rc-allow-origin: Set the allowed Access-Control-Allow-Origin for rc requests. Can be used with --rc-web-gui if rclone is running on a different IP than the web-gui. Default is the IP address on which rc is running.
--rc-web-fetch-url: Set the URL to fetch the rclone-web-gui files from. Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.
--rc-web-gui-update: Set this flag to check and update rclone-webui-react from the rc-web-fetch-url. Default Off.
--rc-web-gui-force-update: Set this flag to force update rclone-webui-react from the rc-web-fetch-url. Default Off.
--rc-web-gui-no-open-browser: Set this flag to disable opening the browser automatically when using web-gui. Default Off.
--rc-job-expire-duration: Expire finished async jobs older than DURATION. (default 60s)
--rc-job-expire-interval: Interval duration to check for expired async jobs. (default 10s)
--rc-no-auth: By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list is denied as it involves creating a remote, as is sync/copy. If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user and --rc-pass and use these credentials in the request. Default Off.
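For example, with --rc-user and --rc-pass set, credentials could be supplied with HTTP basic authentication (the credentials shown are placeholders):
curl -u admin:secret -X POST 'http://localhost:5572/rc/noop?potato=1'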
--rc-baseurl: Prefix for URLs. Default is root.
--rc-template: User-specified template.
Rclone itself implements the remote control protocol in its rclone rc command.
You can use it like this:
$ rclone rc rc/noop param1=one param2=two
{
"param1": "one",
"param2": "two"
}
Run rclone rc on its own to see the help for the installed remote control commands.
rclone rc also supports a --json flag which can be used to send more complicated input parameters.
$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
{
"p1": [
1,
"2",
null,
4
],
"p2": {
"a": 1,
"b": 2
}
}
If the parameter being passed is an object then it can be passed as a JSON string rather than using the --json flag, which simplifies the command line.
rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'
Rather than
rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
The rc interface supports some special parameters which apply to all commands. These start with _ to show they are different.
Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created, i.e. synchronously.
If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information about the background job. The job can be queried for up to 1 minute after it has finished.
It is recommended that potentially long running jobs, e.g. sync/sync, sync/copy, sync/move, operations/purge, are run with the _async flag to avoid any potential problems with the HTTP request and response timing out.
Starting a job with the _async flag:
$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
{
"jobid": 2
}
Query the status to see if the job has finished. For more information on the meaning of these return parameters see the job/status call.
$ rclone rc --json '{ "jobid":2 }' job/status
{
"duration": 0.000124163,
"endTime": "2018-10-27T11:38:07.911245881+01:00",
"error": "",
"finished": true,
"id": 2,
"output": {
"_async": true,
"p1": [
1,
"2",
null,
4
],
"p2": {
"a": 1,
"b": 2
}
},
"startTime": "2018-10-27T11:38:07.911121728+01:00",
"success": true
}
job/list can be used to show the running or recently completed jobs:
$ rclone rc job/list
{
"jobids": [
2
]
}
If you wish to set config (the equivalent of the global flags) for the duration of an rc call only then pass in the _config parameter.
This should be in the same format as the config key returned by options/get.
For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter in your JSON blob.
"_config":{"CheckSum": true}
If using rclone rc this could be passed as:
rclone rc sync/sync ... _config='{"CheckSum": true}'
Any config parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.
Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format:
"_config":{"BufferSize": "42M"}
"_config":{"BufferSize": 44040192}
If you wish to check the _config assignment has worked properly then calling options/local will show what the value got set to.
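Putting this together, a sketch of a copy run with _config applied only to that call, followed by a check with options/local, might look like this (the paths are placeholders):
rclone rc sync/copy srcFs=/tmp/src dstFs=remote:dst _config='{"CheckSum": true}' _async=true
rclone rc options/local _config='{"CheckSum": true}'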
If you wish to set filters for the duration of an rc call only then pass in the _filter parameter.
This should be in the same format as the filter key returned by options/get.
For example, if you wished to run a sync with these flags
--max-size 1M --max-age 42s --include "a" --include "b"
you would pass this parameter in your JSON blob.
"_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}
If using rclone rc this could be passed as:
rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'
Any filter parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.
Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --min-size in string or integer format:
"_filter":{"MinSize": "42M"}
"_filter":{"MinSize": 44040192}
If you wish to check the _filter assignment has worked properly then calling options/local will show what the value got set to.
Each rc call has its own stats group for tracking its metrics. By default grouping is done by the composite group name from prefix job/ and the id of the job, like so: job/1.
If _group has a value then stats for that request will be grouped under that value. This allows the caller to group stats under their own name.
Stats for a specific group can be accessed by passing group to core/stats:
$ rclone rc --json '{ "group": "job/1" }' core/stats
{
"speed": 12345
...
}
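For example, an rc call could be tagged with its own group and the stats read back like this (the group name and paths are placeholders):
rclone rc sync/copy srcFs=/tmp/src dstFs=remote:dst _group=MyApp _async=true
rclone rc --json '{ "group": "MyApp" }' core/stats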
When the API returns types, these will mostly be straightforward integer, string or boolean types. However, some of the types returned by the options/get call, taken by the options/set call, and used in the vfsOpt, mountOpt and _config parameters need a little more explanation:
Duration - these are returned as an integer duration in nanoseconds. They may be set as an integer, or they may be set with a time string, eg "5s". See the options section for more info.
Size - these are returned as an integer number of bytes. They may be set as an integer or they may be set with a size suffix string, eg "10M". See the options section for more info.
CutoffMode, DumpFlags, LogLevel, VfsCacheMode - these will be returned as an integer and may be set as an integer, but more conveniently they can be set as a string, eg "HARD" for CutoffMode or "DEBUG" for LogLevel.
BandwidthSpec - this will be set and returned as a string, eg "1M".
The calls options/info (for the main config) and config/providers (for the backend config) may be used to get information on the rclone configuration options. This can be used to build user interfaces for displaying and setting any rclone option.
These consist of arrays of Option
blocks. These have the following
format. Each block describes a single option.
Field | Type | Optional | Description |
---|---|---|---|
Name | string | N | name of the option in snake_case |
FieldName | string | N | name of the field used in the rc - if blank use Name |
Help | string | N | help, started with a single sentence on a single line |
Groups | string | Y | groups this option belongs to - comma separated string for options classification |
Provider | string | Y | set to filter on provider |
Default | any | N | default value, if set (and not to nil or "") then Required does nothing |
Value | any | N | value to be set by flags |
Examples | Examples | Y | predefined values that can be selected from list (multiple-choice option) |
ShortOpt | string | Y | the short command line option for this |
Hide | Visibility | N | if non zero, this option is hidden from the configurator or the command line |
Required | bool | N | this option is required, meaning value cannot be empty unless there is a default |
IsPassword | bool | N | set if the option is a password |
NoPrefix | bool | N | set if the option for this should not use the backend prefix |
Advanced | bool | N | set if this is an advanced config option |
Exclusive | bool | N | set if the answer can only be one of the examples (empty string allowed unless Required or Default is set) |
Sensitive | bool | N | set if this option should be redacted when using rclone config redacted |
An example of this might be the --log-level flag. Note that the Name of the option becomes the command line flag with _ replaced with -.
{
"Advanced": false,
"Default": 5,
"DefaultStr": "NOTICE",
"Examples": [
{
"Help": "",
"Value": "EMERGENCY"
},
{
"Help": "",
"Value": "ALERT"
},
...
],
"Exclusive": true,
"FieldName": "LogLevel",
"Groups": "Logging",
"Help": "Log level DEBUG|INFO|NOTICE|ERROR",
"Hide": 0,
"IsPassword": false,
"Name": "log_level",
"NoPrefix": true,
"Required": true,
"Sensitive": false,
"Type": "LogLevel",
"Value": null,
"ValueStr": "NOTICE"
},
Note that the Help may be multiple lines separated by \n. The first line will always be a short sentence and this is the sentence shown when running rclone help flags.
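As a quick way to inspect a single option block you could fetch options/info and filter it, for example with jq (jq is only used here for readability and is not required):
rclone rc options/info | jq '.main[] | select(.Name == "log_level")'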
Remotes are specified with the fs=, srcFs=, dstFs= parameters depending on the command being used.
The parameters can be a string as per the rest of rclone, eg s3:bucket/path or :sftp:/my/dir. They can also be specified as JSON blobs.
If specifying a JSON blob it should be an object mapping strings to strings. These values will be used to configure the remote. There are 3 special values which may be set:
type - set to type to specify a remote called :type:
_name - set to name to specify a remote called name:
_root - sets the root of the remote - may be empty
One of _name or type should normally be set. If the local backend is desired then type should be set to local. If _root isn't specified then it defaults to the root of the remote.
For example this JSON is equivalent to remote:/tmp
{
"_name": "remote",
"_root": "/tmp"
}
And this is equivalent to :sftp,host='example.com':/tmp
{
"type": "sftp",
"host": "example.com",
"_root": "/tmp"
}
And this is equivalent to /tmp/dir
{
"type": "local",
"_root": "/tmp/dir"
}
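The JSON form can also be used directly in an rc call in place of a remote string, for example (the path is a placeholder):
rclone rc operations/list fs='{"type": "local", "_root": "/tmp"}' remote=""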
This takes the following parameters:
Returns:
Example:
rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2
Returns
{
"result": {
"arg": [
"path1",
"path2"
],
"name": "noop",
"opt": {
"blue": "",
"echo": "yes"
}
}
}
Note that this is the direct equivalent of using this "backend" command:
rclone backend noop . -o echo=yes -o blue path1 path2
Note that arguments must be preceded by the "-a" flag
See the backend command for more information.
Authentication is required for this call.
Purge a remote from the cache backend. Supports either a directory or a file. Params:
Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
Ensure the specified file chunks are cached on disk.
The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.
Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to specify files to fetch, e.g.
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
File names will automatically be encrypted when a crypt remote is used on top of the cache.
Show statistics for the cache remote.
This takes the following parameters:
See the config create command for more information on the above.
Authentication is required for this call.
Parameters:
See the config delete command for more information on the above.
Authentication is required for this call.
Returns a JSON object:
Where keys are remote names and values are the config parameters.
See the config dump command for more information on the above.
Authentication is required for this call.
Parameters:
See the config dump command for more information on the above.
Authentication is required for this call.
Returns
See the listremotes command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the config password command for more information on the above.
Authentication is required for this call.
Returns a JSON object with the following keys:
Eg
{
"cache": "/home/USER/.cache/rclone",
"config": "/home/USER/.rclone.conf",
"temp": "/tmp"
}
See the config paths command for more information on the above.
Authentication is required for this call.
Returns a JSON object:
See the config providers command for more information on the above.
Note that the Options blocks are in the same format as returned by "options/info". They are described in the option blocks section.
Authentication is required for this call.
Parameters:
Authentication is required for this call.
This takes the following parameters:
See the config update command for more information on the above.
Authentication is required for this call.
This sets the bandwidth limit to the string passed in. This should be a single bandwidth limit entry or a pair of upload:download bandwidth.
Eg
rclone rc core/bwlimit rate=off
{
"bytesPerSecond": -1,
"bytesPerSecondTx": -1,
"bytesPerSecondRx": -1,
"rate": "off"
}
rclone rc core/bwlimit rate=1M
{
"bytesPerSecond": 1048576,
"bytesPerSecondTx": 1048576,
"bytesPerSecondRx": 1048576,
"rate": "1M"
}
rclone rc core/bwlimit rate=1M:100k
{
"bytesPerSecond": 1048576,
"bytesPerSecondTx": 1048576,
"bytesPerSecondRx": 131072,
"rate": "1M"
}
If the rate parameter is not supplied then the bandwidth is queried
rclone rc core/bwlimit
{
"bytesPerSecond": 1048576,
"bytesPerSecondTx": 1048576,
"bytesPerSecondRx": 1048576,
"rate": "1M"
}
The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
In either case "rate" is returned as a human-readable string, and "bytesPerSecond" is returned as a number.
This takes the following parameters:
Returns:
Example:
rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1
Returns:
{
"error": false,
"result": "<Raw command line output>"
}
OR
{
"error": true,
"result": "<Raw command line output>"
}
Authentication is required for this call.
This returns the disk usage for the local directory passed in as dir.
If the directory is not passed in, it defaults to the directory pointed to by --cache-dir.
Returns:
{
"dir": "/",
"info": {
"Available": 361769115648,
"Free": 361785892864,
"Total": 982141468672
}
}
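The call might be made like this (assuming this is the core/du call described above; the directory is a placeholder, and omitting dir falls back to --cache-dir):
rclone rc core/du dir=/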
This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.
This returns a list of stats groups currently in memory.
Returns the following values:
{
"groups": an array of group names:
[
"group1",
"group2",
...
]
}
This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats
The most interesting values for most people are:
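For example (assuming this is the core/memstats call described above):
rclone rc core/memstats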
Pass a clear string and rclone will obscure it for the config file:
Returns:
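For example (assuming this is the core/obscure call; the clear string is just a placeholder):
rclone rc core/obscure clear=potato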
This returns the PID of the current process. Useful for stopping the rclone process.
(Optional) Pass an exit code to be used for terminating the app:
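For example, to stop rclone with a specific exit code (assuming this is the core/quit call and its exitCode parameter):
rclone rc core/quit exitCode=1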
This returns all available stats:
rclone rc core/stats
If group is not provided then summed up stats for all groups will be returned.
Parameters
Returns the following values:
{
"bytes": total transferred bytes since the start of the group,
"checks": number of files checked,
"deletes" : number of files deleted,
"elapsedTime": time in floating point seconds since rclone was started,
"errors": number of errors,
"eta": estimated time in seconds until the group completes,
"fatalError": boolean whether there has been at least one fatal error,
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"serverSideCopies": number of server side copies done,
"serverSideCopyBytes": number bytes server side copied,
"serverSideMoves": number of server side moves done,
"serverSideMoveBytes": number bytes server side moved,
"speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
"transferTime" : total time spent on running jobs,
"transfers": number of transferred files,
"transferring": an array of currently active file transfers:
[
{
"bytes": total transferred bytes for this file,
"eta": estimated time in seconds until file transfer completion
"name": name of the file,
"percentage": progress of the file transfer in percent,
"speed": average speed over the whole transfer in bytes per second,
"speedAvg": current speed in bytes per second as an exponentially weighted moving average,
"size": size of the file in bytes
}
],
"checking": an array of names of currently active file checks
[]
}
Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.
This deletes the entire stats group.
Parameters
This clears counters, errors and finished transfers for all stats or specific stats group if group is provided.
Parameters
This returns stats about completed transfers:
rclone rc core/transferred
If group is not provided then completed transfers for all groups will be returned.
Note only the last 100 completed transfers are returned.
Parameters
Returns the following values:
{
"transferred": an array of completed transfers (including failed ones):
[
{
"name": name of the file,
"size": size of the file in bytes,
"bytes": total transferred bytes for this file,
"checked": if the transfer is only checked (skipped, deleted),
"timestamp": integer representing millisecond unix epoch,
"error": string description of the error (empty if successful),
"jobid": id of the job that this transfer belongs to
}
]
}
This shows the current version of go and the go runtime:
SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked.
To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0.
After calling this you can use this to see the blocking profile:
go tool pprof http://localhost:5572/debug/pprof/block
Parameters:
SetGCPercent sets the garbage collection target percentage: a collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. SetGCPercent returns the previous setting. The initial setting is the value of the GOGC environment variable at startup, or 100 if the variable is not set.
This setting may be effectively reduced in order to maintain a memory limit. A negative percentage effectively disables garbage collection, unless the memory limit is reached.
See https://pkg.go.dev/runtime/debug#SetMemoryLimit for more details.
Parameters:
SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned.
To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.)
Once this is set you can use this to profile the mutex contention:
go tool pprof http://localhost:5572/debug/pprof/mutex
Parameters:
Results:
SetMemoryLimit provides the runtime with a soft memory limit.
The runtime undertakes several processes to try to respect this memory limit, including adjustments to the frequency of garbage collections and returning memory to the underlying system more aggressively. This limit will be respected even if GOGC=off (or, if SetGCPercent(-1) is executed).
The input limit is provided as bytes, and includes all memory mapped, managed, and not released by the Go runtime. Notably, it does not account for space used by the Go binary and memory external to Go, such as memory managed by the underlying system on behalf of the process, or memory managed by non-Go code inside the same process. Examples of excluded memory sources include: OS kernel memory held on behalf of the process, memory allocated by C code, and memory mapped by syscall.Mmap (because it is not managed by the Go runtime).
A zero limit or a limit that's lower than the amount of memory used by the Go runtime may cause the garbage collector to run nearly continuously. However, the application may still make progress.
The memory limit is always respected by the Go runtime, so to effectively disable this behavior, set the limit very high. math.MaxInt64 is the canonical value for disabling the limit, but values much greater than the available memory on the underlying system work just as well.
See https://go.dev/doc/gc-guide for a detailed guide explaining the soft memory limit in more detail, as well as a variety of common use-cases and scenarios.
SetMemoryLimit returns the previously set memory limit. A negative input does not adjust the limit, and allows for retrieval of the currently set memory limit.
Parameters:
This clears the fs cache. This is where remotes created from backends are cached for a short while to make repeated rc calls more efficient.
If you change the parameters of a backend then you may want to call this to clear an existing remote out of the cache before re-creating it.
Authentication is required for this call.
This returns the number of entries in the fs cache.
Returns
Authentication is required for this call.
Parameters: None.
Results:
Parameters:
Results:
Parameters:
Parameters:
This shows currently mounted points, which can be used for performing an unmount.
This takes no parameters and returns
Eg
rclone rc mount/listmounts
Authentication is required for this call.
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2
This takes the following parameters:
Example:
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
The vfsOpt are as described in options/get and can be seen in the "vfs" section when running the command below, and the mountOpt can be seen in the "mount" section:
rclone rc options/get
Authentication is required for this call.
This shows all possible mount types and returns them as a list.
This takes no parameters and returns
The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter.
Eg
rclone rc mount/types
Authentication is required for this call.
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This takes the following parameters:
Example:
rclone rc mount/unmount mountPoint=/home/<user>/mountPoint
Authentication is required for this call.
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This takes no parameters and returns error if unmount does not succeed.
Eg
rclone rc mount/unmountall
Authentication is required for this call.
This takes the following parameters:
The result is as returned from rclone about --json
See the about command for more information on the above.
Authentication is required for this call.
Checks the files in the source and destination match. It compares sizes and hashes and logs a report of files that don't match. It doesn't alter the source or destination.
This takes the following parameters:
If you supply the download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
If you supply the size-only global flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
If you supply the checkFileHash option with a valid hash name, the checkFileFs:checkFileRemote must point to a text file in the SUM format. This treats the checksum file as the source and dstFs as the destination. Note that srcFs is not used and should not be supplied in this case.
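For example, a check between two hypothetical remotes might be run like this:
rclone rc operations/check srcFs=source:path dstFs=dest:path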
Returns:
Authentication is required for this call.
This takes the following parameters:
See the cleanup command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
Authentication is required for this call.
This takes the following parameters:
See the copyurl command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the delete command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the deletefile command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
This returns info about the remote passed in:
{
// optional features and whether they are available or not
"Features": {
"About": true,
"BucketBased": false,
"BucketBasedRootOK": false,
"CanHaveEmptyDirectories": true,
"CaseInsensitive": false,
"ChangeNotify": false,
"CleanUp": false,
"Command": true,
"Copy": false,
"DirCacheFlush": false,
"DirMove": true,
"Disconnect": false,
"DuplicateFiles": false,
"GetTier": false,
"IsLocal": true,
"ListR": false,
"MergeDirs": false,
"MetadataInfo": true,
"Move": true,
"OpenWriterAt": true,
"PublicLink": false,
"Purge": true,
"PutStream": true,
"PutUnchecked": false,
"ReadMetadata": true,
"ReadMimeType": false,
"ServerSideAcrossConfigs": false,
"SetTier": false,
"SetWrapper": false,
"Shutdown": false,
"SlowHash": true,
"SlowModTime": false,
"UnWrap": false,
"UserInfo": false,
"UserMetadata": true,
"WrapFs": false,
"WriteMetadata": true,
"WriteMimeType": false
},
// Names of hashes available
"Hashes": [
"md5",
"sha1",
"whirlpool",
"crc32",
"sha256",
"dropbox",
"mailru",
"quickxor"
],
"Name": "local", // Name as created
"Precision": 1, // Precision of timestamps in ns
"Root": "/", // Path as created
"String": "Local file system at /", // how the remote will appear in logs
// Information about the system metadata for this backend
"MetadataInfo": {
"System": {
"atime": {
"Help": "Time of last access",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"btime": {
"Help": "Time of file birth (creation)",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"gid": {
"Help": "Group ID of owner",
"Type": "decimal number",
"Example": "500"
},
"mode": {
"Help": "File type and mode",
"Type": "octal, unix style",
"Example": "0100664"
},
"mtime": {
"Help": "Time of last modification",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"rdev": {
"Help": "Device ID (if special file)",
"Type": "hexadecimal",
"Example": "1abc"
},
"uid": {
"Help": "User ID of owner",
"Type": "decimal number",
"Example": "500"
}
},
"Help": "Textual help string\n"
}
}
This command does not have a command line equivalent so use this instead:
rclone rc --loopback operations/fsinfo fs=remote:
Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
This takes the following parameters:
If you supply the download flag, it will download the data from the remote and create the hash on the fly. This can be useful for remotes that don't support the given hash or if you really want to check all the data.
Note that if you wish to supply a checkfile to check hashes against the current files then you should use operations/check instead of operations/hashsum.
Returns:
Example:
$ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true
{
"hashType": "md5",
"hashsum": [
"WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh",
"v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh",
"VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh",
]
}
See the hashsum command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
Returns:
See the lsjson command for more information on the above and examples.
Authentication is required for this call.
This takes the following parameters:
See the mkdir command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
Authentication is required for this call.
This takes the following parameters:
Returns:
See the link command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the purge command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the rmdir command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the rmdirs command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the settier command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the settierfile command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
Returns:
See the size command for more information on the above.
Authentication is required for this call.
This takes the following parameters
The result is
Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options.
See the lsjson command for more information on the above and examples.
Authentication is required for this call.
This takes the following parameters:
See the uploadfile command for more information on the above.
Authentication is required for this call.
Returns:
Returns an object where keys are option block names and values are an object with the current option values in.
Parameters:
Note that these are the global options which are unaffected by use of the _config and _filter parameters. If you wish to read the parameters set in _config then use options/config and for _filter use options/filter.
This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.
Returns an object where keys are option block names and values are an array of objects with info about each option.
Parameters:
These objects are in the same format as returned by "config/providers". They are described in the option blocks section.
Returns an object with the keys "config" and "filter". The "config" key contains the local config and the "filter" key contains the local filters.
Note that these are the local options specific to this rc call. If _config was not supplied then they will be the global options. Likewise with "_filter".
This call is mostly useful for seeing if _config and _filter passing is working.
This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.
Parameters:
Repeated as often as required.
Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this.
For example:
This sets DEBUG level logs (-vv) (these can be set by number or string)
rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'
rclone rc options/set --json '{"main": {"LogLevel": 8}}'
And this sets INFO level logs (-v)
rclone rc options/set --json '{"main": {"LogLevel": "INFO"}}'
And this sets NOTICE level logs (normal without -v)
rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'
Used for adding a plugin to the webgui.
This takes the following parameters:
Example:
rclone rc pluginsctl/addPlugin
Authentication is required for this call.
This shows all possible plugins by a mime type.
This takes the following parameters:
Returns:
Example:
rclone rc pluginsctl/getPluginsForType type=video/mp4
Authentication is required for this call.
This allows you to get the currently enabled plugins and their details.
This takes no parameters and returns:
E.g.
rclone rc pluginsctl/listPlugins
Authentication is required for this call.
Allows listing of test plugins with the rclone.test set to true in package.json of the plugin.
This takes no parameters and returns:
E.g.
rclone rc pluginsctl/listTestPlugins
Authentication is required for this call.
This allows you to remove a plugin using its name.
This takes parameters:
name - name of the plugin in the format author/plugin_name
E.g.
rclone rc pluginsctl/removePlugin name=rclone/video-plugin
Authentication is required for this call.
This allows you to remove a test plugin using its name.
This takes the following parameters:
name - name of the plugin in the format author/plugin_name
Example:
rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react
Authentication is required for this call.
This returns an error with the input as part of its error string. Useful for testing error handling.
This lists all the registered remote control commands as a JSON map in the commands response.
This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
Authentication is required for this call.
This takes the following parameters
path1 - a remote directory string e.g. drive:path1
path2 - a remote directory string e.g. drive:path2
checkSync - true by default, false disables comparison of final listings, only will skip sync, only compare listings from the last run
workdir - server directory for history files (default ~/.cache/rclone/bisync)
See bisync command help and full bisync description for more information.
Authentication is required for this call.
This takes the following parameters:
See the copy command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the move command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the sync command for more information on the above.
Authentication is required for this call.
This forgets the paths in the directory cache causing them to be re-read from the remote when needed.
If no paths are passed in then it will forget all the paths in the directory cache.
rclone rc vfs/forget
Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, e.g.
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
This lists the active VFSes.
It returns a list under the key "vfses" where the values are the VFS names that could be passed to the other VFS commands in the "fs" parameter.
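For example (assuming this is the vfs/list call):
rclone rc vfs/list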
Without any parameter given this returns the current status of the poll-interval setting.
When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval.
rclone rc vfs/poll-interval interval=5m
The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less or equal 0, which is the default, wait indefinitely.
The new poll-interval value will only be active when the timeout is not reached.
If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote.
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
This returns info about the upload queue for the selected VFS.
This is only useful if --vfs-cache-mode
> off. If you call it when
the --vfs-cache-mode
is off, it will return an empty result.
{
"queued": // an array of files queued for upload
[
{
"name": "file", // string: name (full path) of the file,
"id": 123, // integer: id of this item in the queue,
"size": 79, // integer: size of the file in bytes
"expiry": 1.5 // float: time until file is eligible for transfer, lowest goes first
"tries": 1, // integer: number of times we have tried to upload
"delay": 5.0, // float: seconds between upload attempts
"uploading": false, // boolean: true if item is being uploaded
},
],
}
The expiry time is the time until the file is eligible for being uploaded in floating point seconds. This may go negative. As rclone only transfers --transfers files at once, only the lowest --transfers expiry times will have uploading as true. So there may be files with negative expiry times for which uploading is false.
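For example, the queue for the only running VFS could be inspected with:
rclone rc vfs/queue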
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
Use this to adjust the expiry time for an item in the upload queue. You will need to read the id of the item using vfs/queue before using this call.
You can then set expiry to a floating point number of seconds from now when the item is eligible for upload. If you want the item to be uploaded as soon as possible then set it to a large negative number (eg -1000000000). If you want the upload of the item to be delayed for a long time then set it to a large positive number.
Setting the expiry of an item which has already started uploading will have no effect - the item will carry on being uploaded.
This will return an error if called with --vfs-cache-mode off or if the id passed is not found.
This takes the following parameters
fs - select the VFS in use (optional)
id - a numeric ID as returned from vfs/queue
expiry - a new expiry time as floating point seconds
This returns an empty result on success, or an error.
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
This reads the directories for the specified paths and freshens the directory cache.
If no paths are passed in then it will refresh the root directory.
rclone rc vfs/refresh
Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, e.g.
rclone rc vfs/refresh dir=home/junk dir2=data/misc
If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
This returns stats for the selected VFS.
{
// Status of the disk cache - only present if --vfs-cache-mode > off
"diskCache": {
"bytesUsed": 0,
"erroredFiles": 0,
"files": 0,
"hashType": 1,
"outOfSpace": false,
"path": "/home/user/.cache/rclone/vfs/local/mnt/a",
"pathMeta": "/home/user/.cache/rclone/vfsMeta/local/mnt/a",
"uploadsInProgress": 0,
"uploadsQueued": 0
},
"fs": "/mnt/a",
"inUse": 1,
// Status of the in memory metadata cache
"metadataCache": {
"dirs": 1,
"files": 0
},
// Options as returned by options/get
"opt": {
"CacheMaxAge": 3600000000000,
// ...
"WriteWait": 1000000000
}
}
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
Rclone implements a simple HTTP based protocol.
Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
All calls must be made using POST.
The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl.
The response will be a JSON blob in the body of the response. This is formatted to be reasonably human-readable.
If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, e.g.
{
"error": "Expecting string value for key \"remote\" (was float64)",
"input": {
"fs": "/tmp",
"remote": 3
},
"path": "operations/rmdir",
"status": 400
}
The keys in the error response are
The server implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
Response
{
"potato": "1",
"sausage": "2"
}
Here is what an error response looks like:
curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
{
"error": "arbitrary error on input map[potato:1 sausage:2]",
"input": {
"potato": "1",
"sausage": "2"
}
}
Note that curl doesn't return errors to the shell unless you use the -f option.
$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
22
curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
Response
{
"potato": "1",
"sausage": "2"
}
Note that you can combine these with URL parameters too with the POST parameters taking precedence.
curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
Response
{
"potato": "1",
"rutabaga": "3",
"sausage": "4"
}
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
response
{
"potato": 2,
"sausage": 1
}
This can be combined with URL parameters too if required. The JSON blob takes precedence.
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
{
"potato": 2,
"rutabaga": "3",
"sausage": 1
}
If you use the --rc flag this will also enable the use of the go profiling tools on the same port.
To use these, first install go.
To profile rclone's memory use you can run:
go tool pprof -web http://localhost:5572/debug/pprof/heap
This should open a page in your browser showing what is using what memory.
You can also use the -text flag to produce a textual summary:
$ go tool pprof -text http://localhost:5572/debug/pprof/heap
Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
flat flat% sum% cum cum%
1024.03kB 66.62% 66.62% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
513kB 33.38% 100% 513kB 33.38% net/http.newBufioWriterSize
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/cmd/all.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/cmd/serve.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/cmd/serve/restic.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
0 0% 100% 1024.03kB 66.62% main.init
0 0% 100% 513kB 33.38% net/http.(*conn).readRequest
0 0% 100% 513kB 33.38% net/http.(*conn).serve
0 0% 100% 1024.03kB 66.62% runtime.main
Memory leaks are most often caused by go routine leaks keeping memory alive which should have been garbage collected.
See all active go routines using
curl http://localhost:5572/debug/pprof/goroutine?debug=1
Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.
You can see a summary of profiles available at http://localhost:5572/debug/pprof/
Here is how to use some of them:
Memory: go tool pprof http://localhost:5572/debug/pprof/heap
Go routines: curl http://localhost:5572/debug/pprof/goroutine?debug=1
30-second CPU profile: go tool pprof http://localhost:5572/debug/pprof/profile
5-second execution trace: wget http://localhost:5572/debug/pprof/trace?seconds=5
Goroutine blocking profile: enable with rclone rc debug/set-block-profile-rate rate=1 (docs), then go tool pprof http://localhost:5572/debug/pprof/block
Mutex contention: enable with rclone rc debug/set-mutex-profile-fraction rate=1 (docs), then go tool pprof http://localhost:5572/debug/pprof/mutex
See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs.
The profiling hook is zero overhead unless it is used.