// mirror of git://git.proxmox.com/git/pve-docs.git (synced 2025-01-08, commit 3802f512b9)
// Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine tune via the `mode` option between consistency
of the backups and downtime of the guest system.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the storage documentation on how to add a storage. A backup storage
must be a file level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can save those backups later to a tape drive, for off-site
archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI, which generates a cron entry in `/etc/cron.d/vzdump`.

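A generated entry in `/etc/cron.d/vzdump` follows the usual system
crontab format (minute, hour, day of month, month, day of week, user,
command). The schedule, guest IDs, and job options below are a
hypothetical example, not output copied from a live system:

----
PATH="/usr/sbin:/usr/bin:/sbin:/bin"

# every Saturday at 02:00, as root
0 2 * * 6           root vzdump 101 102 --quiet 1 --mode snapshot --storage my_backup_storage
----
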
Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM goes to full
operation mode if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a Proxmox VE live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

A technical overview of the Proxmox VE live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: Proxmox VE live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted, only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a many-fold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode if the backup
storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up as their content is managed outside the {pve} storage library.

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same
directory. The parameter `maxfiles` can be used to specify the
maximum number of backups to keep.

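The naming scheme above can be taken apart programmatically, which is handy
when post-processing backup directories in scripts. The pattern below is
inferred only from the example filename shown above (guest type, VMID, date,
time, extension) and is an illustrative sketch, not code shipped with vzdump:

```python
import re
from datetime import datetime

# Pattern assumed from the sample name: vzdump-<type>-<vmid>-<date>-<time>.<ext>
NAME_RE = re.compile(
    r"vzdump-(?P<type>lxc|qemu)-(?P<vmid>\d+)-"
    r"(?P<stamp>\d{4}_\d{2}_\d{2}-\d{2}_\d{2}_\d{2})\.(?P<ext>.+)"
)

def parse_backup_name(name):
    """Return guest type, VMID, and timestamp for a vzdump file name,
    or None if the name does not match the scheme."""
    m = NAME_RE.fullmatch(name)
    if not m:
        return None
    return {
        "type": m.group("type"),
        "vmid": int(m.group("vmid")),
        "time": datetime.strptime(m.group("stamp"), "%Y_%m_%d-%H_%M_%S"),
        "ext": m.group("ext"),
    }

info = parse_backup_name("vzdump-lxc-105-2009_10_09-11_04_43.tar")
print(info["type"], info["vmid"], info["time"].isoformat())
# -> lxc 105 2009-10-09T11:04:43
```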
[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this you can set bandwidth limits for a backup job. {pve}
implements two kinds of limits for restoring an archive:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will overrule a bigger per-storage
limit. A bigger per-job limit will only overrule the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. KiB/s is used as the unit
for the limit; this means passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, and thus the backup
does not impact their operations.

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on storage)

Most of the time your storage's generally available bandwidth stays the
same, so we implemented the possibility to set a default bandwidth limit
per configured storage; this can be done with:

----
# pvesm set STORAGEID --bwlimit KIBs
----

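The unit arithmetic and the overrule rules above are easy to get wrong, so
here is a small sketch. It is plain arithmetic modelling the semantics as
stated in this section (the special `0` = unlimited case is left out), not
{pve} code:

```python
def bwlimit_kib_to_mib(kib_per_s):
    """Convert a --bwlimit value (KiB/s) to MiB/s."""
    return kib_per_s / 1024

def effective_limit(per_job, per_storage, has_data_allocate):
    """Effective non-zero limit: a smaller per-job limit always overrules
    a bigger per-storage limit; a bigger per-job limit only applies with
    Data.Allocate permissions on the affected storage."""
    if per_job <= per_storage:
        return per_job
    return per_job if has_data_allocate else per_storage

print(bwlimit_kib_to_mib(10240))             # -> 10.0 (MiB/s)
print(effective_limit(20480, 10240, False))  # -> 10240 (storage limit wins)
print(effective_limit(20480, 10240, True))   # -> 20480 (privileged user)
```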
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

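The colon-separated format described above is simple enough to read from a
script, for example when auditing backup settings across nodes. This sketch
only implements the rules stated here (blank lines and `#` comments are
skipped); it is an illustration, not the parser vzdump itself uses:

```python
def parse_vzdump_conf(text):
    """Parse a vzdump.conf-style 'OPTION: value' file into a dict."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are ignored
        key, _, value = line.partition(":")
        options[key.strip()] = value.strip()
    return options

conf = parse_vzdump_conf("""\
# example from above
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
""")
print(conf["mode"])     # -> snapshot
print(conf["bwlimit"])  # -> 10000
```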
Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).

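As a rough illustration of the mechanism, a hook receives the current phase
as its first command line argument. The sketch below assumes only that;
the authoritative list of phases, arguments, and environment variables is
in the shipped `vzdump-hook-script.pl` example:

```python
#!/usr/bin/env python3
"""Hypothetical vzdump hook sketch (see vzdump-hook-script.pl for the
real interface); reacts to two phase names for illustration."""
import sys

def handle(phase):
    """Dispatch on the backup phase; returning non-zero signals failure."""
    if phase == "backup-end":
        # e.g. trigger off-site replication of the finished archive here
        print("backup finished")
    elif phase == "job-abort":
        print("job aborted", file=sys.stderr)
    return 0

# In a real hook the phase arrives on the command line:
#   handle(sys.argv[1])
print(handle("backup-end"))  # -> prints "backup finished", then 0
```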
File Exclusions
---------------

NOTE: this option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`)

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

(only excludes tmp directories)

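The default exclude patterns above are shell-style globs; Python's `fnmatch`
can illustrate their effect. Note that `?*` requires at least one character,
so the directories themselves are kept while their contents are excluded.
This is an illustration of the patterns only, not the matching code vzdump
uses internally:

```python
from fnmatch import fnmatch

# The default exclude patterns from above
patterns = ["/tmp/?*", "/var/tmp/?*", "/var/run/?*pid"]

def is_excluded(path):
    """True if the path matches any default exclude pattern."""
    return any(fnmatch(path, p) for p in patterns)

print(is_excluded("/tmp/scratch.dat"))   # -> True
print(is_excluded("/var/run/sshd.pid"))  # -> True
print(is_excluded("/tmp/"))              # -> False ('?*' needs content)
print(is_excluded("/etc/hostname"))      # -> False
```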
Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Backup all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Backup more than one guest (selectively)

 # vzdump 101 102 103 --mailto root

Backup all guests excluding 101 and 102

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]