Paths are specified as `remote:container` (or `remote:` for the `lsd` command). You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
Here is an example of making a Microsoft Azure Blob Storage configuration, for a remote called `remote`. First run:

    rclone config
This will guide you through an interactive setup process:
    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Microsoft Azure Blob Storage
       \ "azureblob"
    [snip]
    Storage> azureblob
    Storage Account Name
    account> account_name
    Storage Account Key
    key> base64encodedkey==
    Endpoint for the service - leave blank normally.
    endpoint>
    Remote config
    --------------------
    [remote]
    account = account_name
    key = base64encodedkey==
    endpoint =
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
See all containers:

    rclone lsd remote:
Make a new container:

    rclone mkdir remote:container
List the contents of a container:

    rclone ls remote:container
Sync `/home/local/directory` to the remote container, deleting any excess files in the container:

    rclone sync -i /home/local/directory remote:container
This remote supports `--fast-list`, which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
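For example, a recursive listing that batches directory reads into fewer API calls (the flag usage below is a sketch; the trade-off is higher memory use):

    rclone ls --fast-list remote:container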
The modified time is stored as metadata on the object with the `mtime` key. It is stored using RFC3339 format time with nanosecond precision. The metadata is supplied during directory listings, so there is no overhead to using it.
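To see the stored modification times, a plain long listing will do; `rclone lsl` prints the size, modification time, and path of each object:

    rclone lsl remote:container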
When uploading large files, increasing the value of `--azureblob-upload-concurrency` will increase performance at the cost of using more memory. The default of 16 is set quite conservatively to use less memory. It may be necessary to raise it to 64 or higher to fully utilize a 1 GBit/s link with a single file transfer.
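For example, a single large transfer over a fast link might look like this (the value 64 is illustrative, taken from the guidance above, not a universal recommendation):

    rclone copy --azureblob-upload-concurrency 64 /path/to/bigfile remote:container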
In addition to the default restricted characters set, the following characters are also replaced:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| /         | 0x2F  | ／          |
| \         | 0x5C  | ＼          |

File names can also not end with the following characters. These only get replaced if they are the last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| .         | 0x2E  | ．          |

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
MD5 hashes are stored with blobs. However, blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, e.g. the local disk.
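You can inspect or verify the stored hashes; `rclone md5sum` lists the MD5 of each remote object, and `rclone check` compares hashes between a source and a destination:

    rclone md5sum remote:container
    rclone check /home/local/directory remote:container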
Rclone has 3 ways of authenticating with Azure Blob Storage:
Account and key. This is the most straightforward and least flexible way. Just fill in the `account` and `key` lines and leave the rest blank.
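The result is the same configuration the interactive example above produces; a minimal `rclone.conf` entry looks like this (account name and key are placeholders):

    [remote]
    type = azureblob
    account = account_name
    key = base64encodedkey==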
SAS URL. This can be an account level SAS URL or container level SAS URL. To use it, leave `account` and `key` blank and fill in `sas_url`.
An account level SAS URL or container level SAS URL can be obtained from the Azure portal or the Azure Storage Explorer. To get a container level SAS URL, right click on a container in the Azure Blob explorer in the Azure portal.
If you use a container level SAS URL, rclone operations are permitted only on a particular container, e.g.

    rclone ls azureblob:container
You can also list the single container from the root. This will only show the container specified by the SAS URL.

    $ rclone lsd azureblob:
    container/
Note that you can't see or access any other containers - this will fail:

    rclone ls azureblob:othercontainer
Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
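A sketch of an `rclone.conf` entry using a container level SAS URL (the URL below is a placeholder; `sas_url` is the config key referred to above):

    [azureblob]
    type = azureblob
    sas_url = https://account_name.blob.core.windows.net/container?sv=...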
Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
`--azureblob-account`

Storage Account Name.

Leave blank to use SAS URL or Emulator.
`--azureblob-service-principal-file`

Path to a file containing credentials for use with a service principal.

Leave blank normally. Needed only if you want to use a service principal instead of interactive login. Such a credentials file can be created with the Azure CLI:
    $ az ad sp create-for-rbac --name "<name>" \
      --role "Storage Blob Data Owner" \
      --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
      > azure-principal.json
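Then, as a sketch, point the remote at the generated file via the `service_principal_file` config key (the file path is a placeholder):

    [remote]
    type = azureblob
    account = account_name
    service_principal_file = /path/to/azure-principal.json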
`--azureblob-key`

Storage Account Key.

Leave blank to use SAS URL or Emulator.
`--azureblob-sas-url`

SAS URL for container level access only.

Leave blank if using account/key or Emulator.
`--azureblob-use-msi`

Use a managed service identity to authenticate (only works in Azure).

When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.

If the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default. If the resource has multiple user-assigned identities, the identity to use must be explicitly specified using exactly one of the `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` parameters.
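A minimal sketch of a remote that authenticates with the VM's managed identity (this assumes rclone is running on an Azure VM with a system-assigned identity):

    [remote]
    type = azureblob
    account = account_name
    use_msi = true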
`--azureblob-use-emulator`

Uses local storage emulator if provided as 'true'.

Leave blank if using a real Azure storage endpoint.
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
`--azureblob-msi-object-id`

Object ID of the user-assigned MSI to use, if any.

Leave blank if `msi_client_id` or `msi_mi_res_id` is specified.
`--azureblob-msi-client-id`

Client ID of the user-assigned MSI to use, if any.

Leave blank if `msi_object_id` or `msi_mi_res_id` is specified.
`--azureblob-msi-mi-res-id`

Azure resource ID of the user-assigned MSI to use, if any.

Leave blank if `msi_client_id` or `msi_object_id` is specified.
`--azureblob-endpoint`

Endpoint for the service.

Leave blank normally.
`--azureblob-upload-cutoff`

Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
`--azureblob-chunk-size`

Upload chunk size.

Note that chunks are stored in memory and there may be up to `--transfers` * `--azureblob-upload-concurrency` chunks stored at once in memory.
`--azureblob-upload-concurrency`

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.

If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

In tests, upload speed increases almost linearly with upload concurrency. For example, to fill a gigabit pipe it may be necessary to raise this to 64. Note that this will use more memory.

Note that chunks are stored in memory and there may be up to `--transfers` * `--azureblob-upload-concurrency` chunks stored at once in memory.
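As a rough worked example of that memory bound: with `--transfers 4`, `--azureblob-upload-concurrency 16`, and a 4 MiB chunk size, up to 4 * 16 = 64 chunks, i.e. about 256 MiB, may be buffered at once (the flag values here are purely illustrative):

    rclone copy --transfers 4 --azureblob-chunk-size 4M --azureblob-upload-concurrency 16 /data remote:container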
`--azureblob-list-chunk`

Size of blob list.

This sets the number of blobs requested in each listing chunk. The default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source). This can be used to limit the number of blob items returned, to avoid the time out.
`--azureblob-access-tier`

Access tier of blob: hot, cool or archive.

Archived blobs can be restored by setting the access tier to hot or cool. Leave blank if you intend to use the default access tier, which is set at the account level.

If no "access tier" is specified, rclone doesn't apply any tier. rclone performs a "Set Tier" operation on blobs while uploading; if objects are not modified, specifying a new "access tier" will have no effect. If blobs are in the "archive tier" at the remote, data transfer operations from the remote will not be allowed. The user should first restore the blobs by tiering them to "Hot" or "Cool".
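For instance, a sketch of uploading straight into the cool tier (tier name as in the list above):

    rclone copy --azureblob-access-tier cool /home/local/directory remote:container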
`--azureblob-archive-tier-delete`

Delete archive tier blobs before overwriting.

Archive tier blobs cannot be updated. So without this flag, if you attempt to update an archive tier blob, rclone will produce the error:

    can't update archive tier blob without --azureblob-archive-tier-delete

With this flag set, before rclone attempts to overwrite an archive tier blob it will delete the existing blob and then upload its replacement. This has the potential for data loss if the upload fails (unlike updating a normal blob), and it may also cost more, since deleting archive tier blobs early may be chargeable.
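So a sync that may replace archived blobs would, as a sketch, look like:

    rclone sync -i --azureblob-archive-tier-delete /home/local/directory remote:container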
`--azureblob-disable-checksum`

Don't store MD5 checksum with object metadata.

Normally rclone will calculate the MD5 checksum of the input before uploading it, so it can add it to metadata on the object. This is great for data integrity checking, but can cause long delays before large files start uploading.
`--azureblob-memory-pool-flush-time`

How often internal memory buffer pools will be flushed.

Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.
`--azureblob-memory-pool-use-mmap`

Whether to use mmap buffers in the internal memory pool.
`--azureblob-encoding`

The encoding for the backend.

See the encoding section in the overview for more info.
`--azureblob-public-access`

Public access level of a container: blob or container.
`--azureblob-no-head-object`

If set, do not do a HEAD before GET when getting objects.
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to Azure copy.
`rclone about` is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount, or use policy `mfs` (most free space) as a member of an rclone union remote.
You can test rclone with the storage emulator locally. To do this, make sure the Azure storage emulator is installed locally, then set up a new remote with `rclone config`, following the instructions described above, and set the `use_emulator` config option to `true`. You do not need to provide a default account name or key if using the emulator.
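A minimal sketch of such a remote (the name `azurelocal` is an arbitrary example):

    [azurelocal]
    type = azureblob
    use_emulator = true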