[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command-line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

The replication is done automatically at configurable intervals.
The minimum replication interval is one minute, and the maximum is
once a week. The format used to specify those intervals is a subset of
`systemd` calendar events, see the
xref:pvesr_schedule_time_format[Schedule Format] section.

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication job's bandwidth can be limited to avoid overloading a
storage or server.
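
For example, assuming a replication job with the ID `100-0` already exists
(see the command-line examples below for creating jobs), its bandwidth could
be capped at 20 MB/s like this:

----
# pvesr update 100-0 --rate 20
----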

Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it already is
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.
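
Continuing the example, migrating VM 100 from `nodeA` to its replication
target `nodeB` could be done like this (for a running VM, options such as
`--online` may be needed); only the deltas since the last replication are
transferred:

----
# qm migrate 100 nodeB
----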

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|=============================================
|Description    |Plugin type     |Snapshots|Stable
|ZFS (local)    |zfspool         |yes      |yes
|=============================================

[[pvesr_schedule_time_format]]
Schedule Format
---------------

Replication uses xref:chapter_calendar_events[calendar events] for
configuring the schedule.
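
A few illustrative schedule values in this format (the full syntax is
described in the calendar events chapter):

----
*/15                 every 15 minutes (the default)
*/5                  every 5 minutes
sun 01:00            every Sunday at 01:00
mon,wed,fri 22:30    Monday, Wednesday and Friday at 22:30
----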

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried automatically every
30 minutes.
Once this succeeds, the original schedule gets activated again.
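
You can inspect the state of all replication jobs on a node, including any
that are in the error state, on the command line, for example with:

----
# pvesr status
----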

Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are listed below. Depending on your setup,
there may be other causes.

* Network is not working.

* No free space left on the replication target storage.

* Storage with the same storage ID is not available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.
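
For example, to quickly rule out the first two causes, you could check that
the target node is reachable and, on the target node, that its storages still
report free space (`nodeB` is a placeholder for your replication target):

----
# ping nodeB
# pvesm status
----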

Migrating a guest in case of error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot come back online. Now you have to migrate the guests
to node B manually.

- Connect to node B over SSH or open its shell via the web UI.

- Check that the cluster is quorate:
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
node operable again. Only if this is not possible at the moment should you
use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: At all costs, avoid changes which affect the cluster while
`expected votes` is set (for example, adding/removing nodes, storages or
virtual guests). Only use it to get vital guests up and running again, or
to resolve the quorum issue itself.

- Move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command-line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs are shown: all jobs,
node-specific jobs, or guest-specific jobs.

When adding a new job, you need to specify the guest, if not already selected,
as well as the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of `all
15 minutes` is not desired. You may impose a rate limit on a replication
job. The rate limit can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID must only be specified manually if the CLI tool is used.
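
For example, `100-0` and `100-1` would be the first and second replication
job of guest 100. To look up the IDs of the configured jobs, you can list
them on the command line:

----
# pvesr list
----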
2017-06-26 16:08:53 +02:00
2023-07-03 14:04:34 +02:00
Command-line Interface Examples
2017-06-28 17:56:48 +02:00
-------------------------------
2017-06-26 16:08:53 +02:00
2020-03-17 13:58:37 +01:00
Create a replication job which runs every 5 minutes with a limited bandwidth
of 10 Mbps (megabytes per second) for the guest with ID 100.
2017-06-26 16:08:53 +02:00
2017-06-28 17:56:48 +02:00
----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----
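
Two further examples using `pvesr` subcommands (see `pvesr help` for the full
list of available subcommands and options).

Trigger an immediate run of the job with ID `100-0`, outside of its
configured schedule.

----
# pvesr schedule-now 100-0
----

Delete the job with ID `100-0` again.

----
# pvesr delete 100-0
----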
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]