Proxmox Backups To Amazon S3/Backblaze B2

I moved all of my personal servers to 3 Proxmox hosts so I can take advantage of LXC. Proxmox has really nailed it on the LXC front: most things are available in the GUI, yet it's still stock LXC underneath, so I can go in and edit things by hand for advanced needs, such as cgroups.

Overall, I'm very pleased with Proxmox. Its backup jobs can be scheduled and have all the typical features you'd expect. Natively, though, it will only send backups and snapshots to storage that Proxmox itself has configured, so you're limited to whatever you can mount on your server. That isn't a huge deal in most environments: you'd add an NFS mount to your Proxmox server and just shoot the backups there. No worries.

Personally, though, I don't have a large NAS, and Amazon S3, Backblaze B2, and so many others are just so inexpensive that I want to shove daily Proxmox backups and snapshots there and never think about it again. I push about 30G every night with this method, and have been doing so for about 3 months now. My Backblaze bill is less than $5/mo. $5/mo to restore any VM to any date over the past 3 months? Sold.

Since there is no native way to have Proxmox send backups to Amazon S3 or Backblaze B2, we have to use a third-party utility and write a quick script for Proxmox to execute after the backup job.


I chose rclone because I don't need to mount and browse the object store; I just need something to take files I have locally and push them to the remote.

Installing rclone is simple, as it is maintained in the official package repositories.

sudo apt-get install rclone

Once rclone is installed, configuring a remote destination is pretty straightforward. The config command walks you through a wizard of sorts; have your bucket name and API keys ready.

sudo rclone config
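For reference, the wizard just writes an entry to rclone's config file. A B2 remote named b2 ends up looking roughly like this; the account ID and application key values are placeholders for your own credentials:

```ini
[b2]
type = b2
account = <your-b2-account-id>
key = <your-b2-application-key>
```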

Once you have the remote configured, we can set up Proxmox to use it.


I am assuming you already have a standard Proxmox backup job configured. Here's what my daily backup job looks like.

It's important to note that in my example, I'm just storing these snapshots on the local disk. You are free to send them to a different destination. For me, the backups end up in /var/lib/vz/dump/.

Knowing that, we now need a quick script that uses rclone and the remote we set up to move whatever is in /var/lib/vz/dump/ to that remote, in a dated folder. We say move here because we want to delete the local copy on a successful transfer; at least, that's what I want in this case. You might instead set up 3 remotes, copy everything to each of them, and then delete the local copy. Up to you; this is a simple example, though.

I save this in /root/scripts/
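A minimal sketch of such a hook script, matching the three steps described below; the remote name b2 and the bucket name proxmox-backups are assumptions here, placeholders for whatever you created:

```shell
#!/bin/bash
# vzdump hook script: move finished backups to the rclone remote.
# Assumptions: rclone remote is named "b2", bucket is "proxmox-backups".

TODAY=$(date +%F)   # dated folder name, e.g. 2024-01-31

# vzdump invokes this script at every phase of the backup job and
# passes the phase name as the first argument ($1). We only act
# once the whole job has finished.
if [ "$1" == "job-end" ]; then
    # move = upload, then delete the local copy on success
    rclone move /var/lib/vz/dump/ "b2:proxmox-backups/$TODAY"
fi
```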

The above script does three things.

  • Creates a date variable for today.
  • Proxmox will run this script (you'll see how in the next step) at every phase of the backup job, passing it some arguments. $1 is set to where in the process the job is. Since we only want to run rclone after the job is done, an if statement checks whether the job-end event has been reached.
  • If the job-end event has been reached, we move everything in our backup directory to the remote you set up. In this case, the remote was named b2; yours will be whatever you created.

Let’s make sure the script is executable.

chmod +x /root/scripts/

All that's left now is to tell the Proxmox backup utility to actually execute this script while it runs the backups and snapshots. For this, we open /etc/vzdump.conf, where you should already have a commented-out template of the different options available. We want to uncomment the script field and point it at the script we saved above. In the end, it looks something like this.
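Assuming the script was saved as /root/scripts/backup-hook.sh (a placeholder; use whatever filename you actually chose), the relevant line in /etc/vzdump.conf would look like:

```
# /etc/vzdump.conf
script: /root/scripts/backup-hook.sh
```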

That's it! Now when the backup job runs, the script will execute on every event, and on the job-end event we'll move all the backup archives on the local disk to a dated folder on the remote!
