🛡️ Proxmox Backup Strategy: PBS + Synology + Cloud Sync
Introduction
This will be one of my final posts in the Proxmox series, but I couldn't finish without covering a crucial topic—backups. 🛡️
Let's be honest: backups are often an afterthought. Despite my background as a storage admin, I successfully ignored proper home lab backups for years—fortunately, without major incidents. 😅 But as my setup grew, I realized it was time to build a professional-grade strategy using Synology NAS and Proxmox Backup Server (PBS).
Proxmox Backup Server
While Proxmox VE has built-in vzdump backups, PBS is a game-changer. I run my PBS instance in a Debian-based VM, which lets me tailor it to my environment.
My PBS Datastore Stats
Currently, I manage four distinct datastores to keep things organized:
```mermaid
graph TD
    NAS[Synology Volume] --> Root["/volume1/nas-archive/"]
    Root --> D1[Ahmed2 - 20 LXCs]
    Root --> D2[Backup-local]
    Root --> D3[k3-cluster]
    Root --> D4[nas-archive]
    Root --- Hidden([".chunks hidden folder"])
    D1 -.-> Hidden
    D2 -.-> Hidden
    D3 -.-> Hidden
    D4 -.-> Hidden
    note["All datastores deduplicate against the central .chunks folder"]
    Hidden --- note
```
| Datastore | Size/Usage | Purpose |
|---|---|---|
| Ahmed2 | 1.8TB (2% Used) | Primary storage for 20 LXC containers. |
| Backup-local | Active | General storage for 20 containers and 3 VMs. |
| k3-cluster | Dedicated | Isolated backups for my Kubernetes cluster. |
| nas-archive | 3.5TB (5% Used) | Synology NFS share for long-term archiving. |
Why PBS?
- Deduplication: Only unique data blocks are stored, saving massive amounts of space.
- Incremental Backups: Only stores changes, making daily backups finish in seconds.
- Encryption: Data is secure both at rest and during transit.
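The encryption point deserves a concrete example: the PBS client encrypts on the client side, before anything leaves the host. A hedged sketch of how that looks from a shell — the repository string (user, the 192.168.1.50 host, and the Ahmed2 datastore) reflects my lab and is an assumption for yours:

```shell
# Create a client-side encryption key (you'll be prompted for a passphrase).
# Back this key file up separately -- without it the backups are unreadable.
proxmox-backup-client key create ./pbs-encryption.key --kdf scrypt

# Run an encrypted backup of /etc; data is encrypted before it leaves the host.
# Repository format: user@realm@host:datastore (values here are examples).
proxmox-backup-client backup etc.pxar:/etc \
  --repository root@pam@192.168.1.50:Ahmed2 \
  --keyfile ./pbs-encryption.key
```

Backups made by Proxmox VE itself can use the same mechanism: paste the key into the storage's encryption settings and every job through that storage is encrypted at rest and in transit.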
Implementation: Synology Integration
Integrating a Synology NAS as the storage backend for PBS is the most reliable way to scale.
```mermaid
graph LR
    subgraph Proxmox_Cluster [Proxmox VE Cluster]
        N1[Node 1]
        N2[Node 2]
        N3[Node 3]
    end
    subgraph PBS_VM [PBS on Debian]
        PBS_Logic[PBS Service]
    end
    subgraph Storage [Synology NAS]
        NFS_Share[("NFS Share: /volume1/nas-archive")]
    end
    Proxmox_Cluster -- "Daily Incremental" --> PBS_Logic
    PBS_Logic -- "Persistent Mount" --> NFS_Share
```
Step 1: Prepare the Synology NAS
- Enable NFS: In DSM, go to Control Panel > File Services > NFS and enable the service.
- NFS Permissions: In Shared Folder, edit your backup folder's NFS permissions:
  - Hostname or IP: your PBS IP.
  - Privilege: Read/Write.
  - Squash: Map all users to admin (prevents "permission denied" errors).
  - Security: sys.
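With the export configured, it's worth confirming it is actually visible from the PBS host before moving on. A quick check (the NAS IP is from my network, and the `nfs-common` package name assumes a Debian base):

```shell
# Install the NFS client utilities (Debian/PBS)
apt install -y nfs-common

# List the exports the Synology offers; /volume1/nas-archive should appear
showmount -e 192.168.1.218
```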
Step 2: Mount on PBS (Debian)
In your PBS shell, create a persistent mount point so the NAS storage is always available:
```shell
# Create the local mount point and hand it to the PBS service user
mkdir -p /mnt/nas-archive
chown backup:backup /mnt/nas-archive

# Mount the Synology export (path matches the share from Step 1)
mount -t nfs 192.168.1.218:/volume1/nas-archive /mnt/nas-archive

# Make it persistent in /etc/fstab (the legacy "intr" option is a no-op
# on modern kernels, so it is omitted here)
echo "192.168.1.218:/volume1/nas-archive /mnt/nas-archive nfs vers=3,nofail,_netdev 0 0" >> /etc/fstab

# Confirm the fstab entry actually mounts
mount -a
findmnt /mnt/nas-archive
```
Step 3: Add to PBS & Proxmox
- In PBS Web UI: go to Datastores > Add Datastore and point it to /mnt/nas-archive.
- In Proxmox VE: go to Datacenter > Storage > Add > Proxmox Backup Server.
- Create Schedule: set your backup window.
Pro Tip: Do not set any retention in the Proxmox job itself! Pruning is handled much more efficiently inside the PBS Datastore settings.
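Retention then lives per datastore in PBS (Datastore > Prune & GC), and you can preview a policy from any PBS client before committing to it. A hedged sketch — the `ct/101` backup group and the repository string are placeholders from my setup:

```shell
# Preview what a 7-daily / 4-weekly / 6-monthly policy would remove
# from one container's backup group, without deleting anything.
proxmox-backup-client prune ct/101 \
  --repository root@pam@192.168.1.50:Ahmed2 \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
  --dry-run
```

Once the dry run looks right, set the same keep-* values in the datastore's prune settings and let PBS handle it on schedule.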
Backup Plan: The 3-2-1 Strategy
To ensure total data safety, I follow the 3-2-1 rule: 3 copies of data, on 2 different media, with 1 copy offsite.
- Local Copies: Live data on Proxmox nodes.
- Second Media: Daily deduplicated backups on the Synology NAS via PBS.
- Offsite: Encrypted copies synced to Google Drive.
Lessons Learned: The ".chunks" Problem
What looked good on paper failed in practice. PBS stores data in a hidden .chunks folder. My environment currently has over 250,000 files in that folder.
When I tried to use Synology Cloud Sync to push these to Google Drive, the system choked. Calculating changes for 250k tiny files took hours every single day. Furthermore, for a true Disaster Recovery (DR) scenario, I don't need the entire incremental history—I just need the latest working version of my VMs.
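On a real datastore you can see the scale directly with `find /mnt/nas-archive/.chunks -type f | wc -l`. The self-contained toy below just mimics the layout that trips up file-watching sync tools — the directory and file names are made up, not real PBS chunk hashes:

```shell
# Fake a miniature datastore: PBS fans chunks out into thousands of
# subdirectories under a hidden .chunks folder, so a sync tool must
# enumerate every single file on every pass.
demo=$(mktemp -d)
mkdir -p "$demo/.chunks/00aa" "$demo/.chunks/00ab"
touch "$demo/.chunks/00aa/c1" "$demo/.chunks/00aa/c2" "$demo/.chunks/00ab/c3"

# This enumeration is what Cloud Sync effectively repeats every day
count=$(find "$demo/.chunks" -type f | wc -l | tr -d ' ')
echo "chunk files: $count"   # prints: chunk files: 3

rm -rf "$demo"
```

Scale those three files up to 250,000+ and the daily change-calculation pass turns into hours.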
The Solution: Tiered Backups
I revised my strategy to be more efficient:
```mermaid
flowchart TD
    Start(("Proxmox VMs/LXCs")) --> Tier1["Tier 1: Local DR"]
    Start --> Tier2["Tier 2: Offsite DR"]
    subgraph Local_Path [Fast Granular Recovery]
        Tier1 --> PBS[Proxmox Backup Server]
        PBS --> Synology_PBS["Synology: /pbs_datastore"]
        Synology_PBS -.-> Chunks{{".chunks folder: 250k+ files"}}
    end
    subgraph Offsite_Path [Disaster Recovery]
        Tier2 --> VZDump["Weekly VZDump .vma.zst"]
        VZDump --> Synology_Sync["Synology: /cloud_dump"]
        Synology_Sync -- "Synology Cloud Sync" --> GDrive[Google Drive]
    end
    style Chunks fill:#f96,stroke:#333,stroke-dasharray: 5 5
    style GDrive fill:#4285F4,color:#fff
```
- PBS (Local DR): Handles daily, fast, incremental backups. This is for when I accidentally delete a file or a VM update fails.
- Direct Dump (Cloud DR): I created a second backup job in Proxmox that bypasses PBS. It dumps a standard .vma.zst file directly to the Synology.
- Cloud Sync: Synology Cloud Sync only watches the "Direct Dump" folder. Since it only contains a few large files, it syncs to Google Drive in minutes.
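The "Direct Dump" job is just a normal Proxmox backup job pointed at a plain Synology-backed storage instead of PBS. As a one-off command it looks roughly like this — VMID 101 and the `cloud_dump` storage name are placeholders, and the real job is scheduled under Datacenter > Backup:

```shell
# Snapshot-mode dump, zstd-compressed, written to a directory storage
# backed by the Synology share; produces one large .vma.zst per VM
# (LXC containers come out as .tar.zst instead).
vzdump 101 --mode snapshot --compress zstd --storage cloud_dump
```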
Summary
By using PBS for speed and Synology Cloud Sync for offsite redundancy, I've built a system that is both fast and bulletproof. I no longer worry about the "hidden folder" sync issues, and my Google Drive storage remains clean with only the latest two versions of my critical infrastructure.
What Comes Next? Testing Recovery!
A backup plan is just a "hope" until you test it.
- Verify: Set up a "Verification Job" in PBS to run weekly to check for data corruption.
- Test Restore: Once a month, restore a VM from Google Drive to a separate "Test Node" to ensure the offsite path is valid.
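Both checks can also be driven from the shell. A hedged sketch — the datastore name, snapshot path, and repository are illustrative values from my lab, and the recurring weekly job itself is easiest to create in the PBS UI under the datastore's Verify Jobs tab:

```shell
# One-off integrity verification of an entire datastore (run on the PBS host)
proxmox-backup-manager verify nas-archive

# Pull one container's root archive out of a snapshot into scratch space
# to prove the restore path actually works (run from any PBS client).
# Snapshot path format: <type>/<id>/<timestamp>.
proxmox-backup-client restore \
  "ct/101/2024-05-01T02:00:00Z" root.pxar /tmp/restore-test \
  --repository root@pam@192.168.1.50:nas-archive
```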
References
This video walk-through is an excellent resource for visualizing the exact steps of connecting Proxmox Backup Server to an external NFS share on a NAS.
Found this helpful? Share it with your homelab community! 🚀