Backing up files to Cloudflare R2 with rclone

Install rclone

sudo -v ; curl https://rclone.org/install.sh | sudo bash

Be sure to install it this way; the latest version available through apt-get still does not support R2.
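After installing, it is worth confirming the version, since Cloudflare R2 support only landed in rclone v1.59:

# the reported version should be v1.59 or newer
rclone version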

Configure rclone

Configuration file

cd ~/.config/rclone

Normally there will already be an rclone.conf here; if not, create it (and the directory, if needed), then edit it:

[wordpress_backup]
type = s3
provider = Cloudflare
access_key_id = your_access_key_id
secret_access_key = your_secret_access_key
region = auto
endpoint = https://your_account_id.r2.cloudflarestorage.com
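To confirm the remote works, you can list your buckets with lsd (this assumes at least one bucket already exists in your Cloudflare dashboard):

# lists top-level "directories", i.e. your R2 buckets
rclone lsd wordpress_backup: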

Command line

rclone config
n

Enter a name. Since my plan here is to back up WordPress, I enter wordpress_backup.
A long list of choices then pops up; the list below describes each service (generated by GPT and then edited):

1. 1Fichier: a file hosting service for storing and sharing files.
2. Alias for an existing remote: creates an alias for an existing remote, for easier management.
3. Amazon Drive: Amazon's consumer cloud storage; no longer accepting uploads.
4. Amazon S3 Compliant Storage Provider: S3-compatible providers, including AWS, Alibaba Cloud, Ceph, and Cloudflare's R2.
5. Backblaze B2: an inexpensive cloud storage solution offering object storage.
6. Box: cloud storage with online file storage and collaboration features.
7. Cache a remote: caches a remote to speed up access.
8. Citrix Sharefile: enterprise file storage and sharing, aimed at business use.
9. Dropbox: a popular cloud storage service for file syncing and sharing.
10. Encrypt/Decrypt a remote: encrypts or decrypts files on a remote for extra security.
11. FTP Connection: connects to a remote server over FTP for file transfer.
12. Google Cloud Storage: Google's enterprise cloud storage (not Google Drive).
13. Google Drive: Google's personal and team file storage and sharing service.
14. Google Photos: Google's photo storage and management service.
15. Hubic: a cloud storage service from France's Orange.
16. In memory object storage system: an in-memory object store for high-speed data access.
17. Jottacloud: a Norwegian cloud storage service with a privacy focus.
18. Koofr: aggregates multiple cloud storage accounts and also offers its own storage.
19. Local Disk: connects to a local disk or local storage device.
20. Mail.ru Cloud: cloud storage from Russia's Mail.ru.
21. Microsoft Azure Blob Storage: object storage on Microsoft Azure for large-scale data.
22. Microsoft OneDrive: Microsoft's personal cloud storage, integrated with Windows and Office.
23. OpenDrive: a cloud storage service offering unlimited space.
24. OpenStack Swift: object storage on the OpenStack platform for large-scale cloud storage.
25. Pcloud: secure, easy-to-use cloud storage with encryption and file sharing.
26. Put.io: online file storage and downloading; can fetch content directly from torrents.
27. SSH/SFTP Connection: connects to a remote server over SSH/SFTP for secure file transfer.
28. Sugarsync: cloud storage with file sync and backup features.
29. Transparently chunk/split large files: transparently chunks or splits large files for storage or transfer.
30. Union merges the contents of several upstream fs: merges several upstream filesystems into one view.
31. Webdav: the HTTP-based WebDAV protocol for remote file management and transfer.
32. Yandex Disk: cloud storage from Russia's Yandex.
33. http Connection: connects over HTTP, for accessing and transferring files on web pages.
34. premiumize.me: an all-in-one download and storage platform supporting many sources.
35. seafile: open-source cloud storage focused on sync and collaboration.

To use Cloudflare's R2, choose 4. In the provider list that follows you can see Cloudflare; enter 6 (its number in the list), though you can still just type Cloudflare, and move on to the next step.


The next step asks whether to pull credentials from environment variables or to enter them directly; we choose to enter them, i.e. false, or just press Enter (false is the default).

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
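If you would rather answer true, env_auth tells rclone to read the standard AWS variables instead of storing keys in rclone.conf; a minimal sketch (values are placeholders):

# with env_auth = true in the remote, rclone picks these up at runtime
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key

We stick with false here and enter the keys directly.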

Then enter your access key ID and secret access key.


The next step asks for the region; just fill in auto.
The step after that is the endpoint; use the one shown on the R2 API token page.


The next step selects the ACL; just press Enter.
The one after that is the advanced config; press Enter again, confirm the summary to save, then type q to quit. A complete configuration session follows:

root@ecspNjn:~/bash# rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> wordpress_backup
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
 6 / Box
   \ "box"
 7 / Cache a remote
   \ "cache"
 8 / Citrix Sharefile
   \ "sharefile"
 9 / Dropbox
   \ "dropbox"
10 / Encrypt/Decrypt a remote
   \ "crypt"
11 / FTP Connection
   \ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
13 / Google Drive
   \ "drive"
14 / Google Photos
   \ "google photos"
15 / Hubic
   \ "hubic"
16 / In memory object storage system.
   \ "memory"
17 / Jottacloud
   \ "jottacloud"
18 / Koofr
   \ "koofr"
19 / Local Disk
   \ "local"
20 / Mail.ru Cloud
   \ "mailru"
21 / Microsoft Azure Blob Storage
   \ "azureblob"
22 / Microsoft OneDrive
   \ "onedrive"
23 / OpenDrive
   \ "opendrive"
24 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
25 / Pcloud
   \ "pcloud"
26 / Put.io
   \ "putio"
27 / SSH/SFTP Connection
   \ "sftp"
28 / Sugarsync
   \ "sugarsync"
29 / Transparently chunk/split large files
   \ "chunker"
30 / Union merges the contents of several upstream fs
   \ "union"
31 / Webdav
   \ "webdav"
32 / Yandex Disk
   \ "yandex"
33 / http Connection
   \ "http"
34 / premiumize.me
   \ "premiumizeme"
35 / seafile
   \ "seafile"
Storage> 4
** See help for s3 backend at: https://rclone.org/s3/ **

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
 4 / Digital Ocean Spaces
   \ "DigitalOcean"
 5 / Dreamhost DreamObjects
   \ "Dreamhost"
 6 / IBM COS S3
   \ "IBMCOS"
 7 / Minio Object Storage
   \ "Minio"
 8 / Netease Object Storage (NOS)
   \ "Netease"
 9 / Scaleway Object Storage
   \ "Scaleway"
10 / StackPath Object Storage
   \ "StackPath"
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
12 / Wasabi Object Storage
   \ "Wasabi"
13 / Any other S3 compatible provider
   \ "Other"
provider> cloudflare
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> access_key_id
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secret_access_key
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Use this if unsure. Will use v4 signatures and an empty region.
   \ ""
 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   \ "other-v2-signature"
region> auto
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> https://your_account_id.r2.cloudflarestorage.com
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint> 
Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> private
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[wordpress_backup]
provider = cloudflare
access_key_id = access_key_id
secret_access_key = secret_access_key
region = auto
endpoint = https://your_account_id.r2.cloudflarestorage.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
wordpress_backup     s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
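With the remote saved, you can exercise it end to end. A quick sketch (the bucket name wordpress-backup matches the script below; if your API token cannot create buckets, make the bucket in the Cloudflare dashboard instead):

# create the bucket over the S3 API (needs a token with edit permission)
rclone mkdir wordpress_backup:wordpress-backup

# upload a small test file and list it back
rclone copy /etc/hostname wordpress_backup:wordpress-backup/test/
rclone ls wordpress_backup:wordpress-backup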

Write the scheduled backup script

Upload script

This script compresses the source automatically, uploads the archive to the remote, and deletes the oldest remote files once the backup count exceeds the limit.
A few things in the script below need editing: SOURCE_DIR is the folder you want to back up, and RCLONE_REMOTE is your R2 destination, in the form {remote name}:bucket/directory.

#!/bin/bash

# Folder to back up
SOURCE_DIR="/data/compose/2/"

# Resolve the directory this script lives in
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")

# Temporary directory for the archives
TEMP_DIR="$SCRIPT_DIR/temp"

# rclone remote name and target path
RCLONE_REMOTE="wordpress_backup:wordpress-backup/blog"

# Log file path
LOG_FILE="$SCRIPT_DIR/backup.log"

# Maximum number of backups to keep
MAX_BACKUPS=2

# Current timestamp, used to name the archive
DATE=$(date +"%Y-%m-%d_%H-%M-%S")

# Create the temporary directory if it does not exist
mkdir -p "$TEMP_DIR"

# Compress the source folder and log progress
ARCHIVE_NAME="${TEMP_DIR}/$(basename "$SOURCE_DIR")-${DATE}.tar.gz"
echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Compression started" >> "$LOG_FILE"
tar -czf "$ARCHIVE_NAME" "$SOURCE_DIR" > /dev/null 2>&1
echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Compression completed" >> "$LOG_FILE"

# Delete the oldest remote backups once the count exceeds the limit
EXISTING_BACKUPS=$(rclone lsf "$RCLONE_REMOTE" | grep "$(basename "$SOURCE_DIR")-.*\.tar\.gz" | sort)
NUM_BACKUPS=$(echo "$EXISTING_BACKUPS" | wc -l)

if [ "$NUM_BACKUPS" -gt "$MAX_BACKUPS" ]; then
    NUM_TO_DELETE=$((NUM_BACKUPS - MAX_BACKUPS))
    BACKUPS_TO_DELETE=$(echo "$EXISTING_BACKUPS" | head -n "$NUM_TO_DELETE")

    for BACKUP in $BACKUPS_TO_DELETE; do
        echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Deleting old backup: $BACKUP" >> "$LOG_FILE"
        rclone delete "$RCLONE_REMOTE/$BACKUP" >> "$LOG_FILE" 2>&1
    done
fi

# Upload the new backup
echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Starting upload of $ARCHIVE_NAME to $RCLONE_REMOTE" >> "$LOG_FILE"
echo "本次运行的命令为:rclone copy \"$ARCHIVE_NAME\" \"$RCLONE_REMOTE/\" --log-file=\"$LOG_FILE\" --log-level INFO –s3-no-check-bucket" >> "$LOG_FILE"
rclone copy "$ARCHIVE_NAME" "$RCLONE_REMOTE/" --s3-no-check-bucket --log-file="$LOG_FILE" --log-level INFO
UPLOAD_STATUS=$?

# Check whether the upload succeeded
if [ $UPLOAD_STATUS -eq 0 ]; then
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Upload of $ARCHIVE_NAME completed successfully" >> "$LOG_FILE"
    rm "$ARCHIVE_NAME"  # 上传成功后删除本地文件
else
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Failed to upload $ARCHIVE_NAME" >> "$LOG_FILE"
fi

# Record the final status of the backup run
if [ $UPLOAD_STATUS -eq 0 ]; then
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Backup process completed successfully" >> "$LOG_FILE"
else
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Backup process failed" >> "$LOG_FILE"
fi
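Restoring is the same operation in reverse. A hypothetical sketch (the archive name follows the script's <folder>-<timestamp>.tar.gz scheme; substitute a real name from rclone ls):

# download one archive from R2
rclone copy wordpress_backup:wordpress-backup/blog/2-2024-01-01_02-00-00.tar.gz /tmp/restore/

# unpack it; tar stored the absolute source path without the leading slash
tar -xzf /tmp/restore/2-2024-01-01_02-00-00.tar.gz -C /tmp/restore/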

I keep it at /root/bash/backup.sh.
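Before handing the script to cron, run it once by hand and check the log (paths as above):

chmod +x /root/bash/backup.sh
/root/bash/backup.sh
tail -n 20 /root/bash/backup.log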

Set up the schedule

Open the cron configuration:

crontab -e

You may be prompted to pick an editor; choose whichever you like.


Add the following line (it runs the script at 02:00 every day):

0 2 * * * /root/bash/backup.sh

Save, and you are done.
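To double-check that the entry was saved, list the crontab. Cron runs with a minimal environment, but the install script above places rclone under /usr/bin, which is on cron's default PATH, so the script should run as-is:

crontab -l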
