according to the schema, else some combinations of migration / guest /
storage settings will fail validation:
2024-05-15 11:48:51 ERROR: migration_snapshot: type check ('boolean') failed - got ''
Since this is the client / source side, remote migrations to a remote
node with validation enabled can fail without this change.
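A minimal sketch of the kind of normalization meant here (variable
names are assumptions, not the actual code):

    # pass boolean schema properties as explicit 0/1 instead of '' or undef,
    # so a 'boolean' type check on the receiving side succeeds
    $storage_opts->{migration_snapshot} = $with_snapshot ? 1 : 0;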
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
to avoid breakage with schema validation turned on.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Since commit a6f5b35 ("replication: prepare: include volumes without
snapshots in the result"), attempts would be made to remove previous
replication snapshots from volumes on which they didn't exist. This
was noticed by Thomas since the output of a replication test in
pve-manager changed.
The issue is not completely new, i.e. there was no check that the
(previous) replication snapshot actually exists before attempting
removal during the cleanup phase. Fix the issue by adding such a
check.
The $replicate_snapshots hash is only used for this, so the change
there is fine.
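A rough sketch of the added guard (hash layout and surrounding names
are assumptions):

    for my $volid (@volids) {
        # only attempt removal if a previous replication snapshot
        # actually exists on this volume
        next if !$replicate_snapshots->{$volid};
        # ... remove the previous replication snapshot ...
    }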
Fixes: a6f5b35 ("replication: prepare: include volumes without snapshots in the result")
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
If the user can already stop all tasks, there is no point in spending
work on every task to check whether the user could also stop it
without those powerful permissions.
To avoid too much indentation, rework the filter to an early-next
style.
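A rough sketch of the early-next style (helper names hypothetical):

    my $can_stop_all = $rpcenv->check($authuser, "/nodes/$node", ['Sys.Modify'], 1);

    for my $task (@$tasks) {
        if (!$can_stop_all) {
            # skip early instead of nesting the whole body in a condition
            next if !user_may_stop_task($authuser, $task); # hypothetical helper
        }
        abort_task($task); # hypothetical helper
    }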
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Given a `(type, user, vmid)` tuple, the helper aborts all tasks of the
given `type` for guest `vmid` that `user` is allowed to abort:
- If `user` has `Sys.Modify` on the node, they can abort any task
- If `user` is an API token, it can abort any task it started itself
- If `user` is a user, they can abort any task started by themselves
or one of their API tokens.
The helper is used to overrule any active qmshutdown/vzshutdown tasks
when attempting to stop a VM/CT (if requested).
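A condensed sketch of that permission logic (helper name, task fields
and the ACL path are assumptions):

    sub user_may_abort_task {
        my ($rpcenv, $node, $task, $user) = @_;

        # Sys.Modify on the node allows aborting any task
        return 1 if $rpcenv->check($user, "/nodes/$node", ['Sys.Modify'], 1);

        # an API token (ID contains '!') may only abort tasks it started itself
        return $task->{user} eq $user if $user =~ /!/;

        # a full user may abort their own tasks and those of their API tokens
        return $task->{user} eq $user || $task->{user} =~ /^\Q$user\E!/;
    }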
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
so it doesn't need to be set when explicitly disabling fleecing. Needs
a custom verifier to enforce that it is set when enabled.
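Roughly, such a verifier could look like this (names are illustrative,
not the committed code):

    sub verify_fleecing {
        my ($fleecing, $noerr) = @_;
        if ($fleecing->{enabled} && !defined($fleecing->{storage})) {
            return undef if $noerr;
            die "fleecing enabled, but no storage configured\n";
        }
        return $fleecing;
    }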
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
It's a property string, because that avoids having an implicit
"enabled" as part of a 'fleecing-storage' property. And there likely
will be more options in the future, e.g. threshold/limit for the
fleecing image size.
Storage is non-optional, so the storage choice needs to be a conscious
decision. Can allow for a default later, when a good choice can be
made further down the stack. The original idea with "same storage as
VM disk" is not great, because e.g. for LVM, it would require the same
size as the disk up front.
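For illustration, the general shape of such a property string format
(keys and descriptions assumed, not the committed schema):

    my $fleecing_fmt = {
        enabled => {
            type => 'boolean',
            description => "Whether to use fleecing images for the backup.",
        },
        storage => {
            type => 'string',
            format => 'pve-storage-id',
            description => "Storage used for the fleecing images.",
        },
    };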
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: style fix for whitespace placement in multi-line strings ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Do not pass the cleanup flag to get_replicatable_volumes(), which
would lead to replicatable volumes that have the replicate setting
turned off being part of the result.
Instead, pass the noerr flag, because things like a missing
storage-level replicate feature should not lead to an error here.
Reported in the community forum:
https://forum.proxmox.com/threads/120910/post-605574
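A rough sketch of the intended call (argument order assumed from the
abstract config helper, not verified here):

    # cleanup flag unset, noerr set
    my $volumes = PVE::QemuConfig->get_replicatable_volumes($storecfg, $vmid, $conf, undef, 1);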
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Suggest an alternative solution by removing the problematic volumes
from the replication target rather than the whole job.
This is helpful if there are multiple replicated volumes, avoiding the
need to fully re-sync all volumes in many cases.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Note that PVE::Storage::volume_snapshot_info() will fail when a volume
does not exist, so no non-existing volume will end up in the result
(prepare() is only called with volumes that should exist).
This makes it possible to detect a volume without snapshots in the
result of prepare(), and as a consequence, replication will now also
fail early in a situation where source and remote volume both exist,
but (at least) one of them doesn't have any snapshots.
Such a situation can happen, for example, by deleting and re-creating
a volume with the same name on the source side without running
replication after deletion.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
'legacy-sendmail': Use mailto/mailnotification parameters and send
emails directly.
'notification-system': Always notify via notification system.
'auto': Notify via mail if mailto is set, otherwise use notification
system.
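Sketched decision logic for the three values (variable and helper
names hypothetical):

    my $mode = $opts->{'notification-mode'} // 'auto';

    if ($mode eq 'legacy-sendmail' || ($mode eq 'auto' && $opts->{mailto})) {
        # use the mailto/mailnotification parameters and send mail directly
        send_notification_mail($opts);
    } else {
        # hand the notification over to the notification system
        notify_via_notification_system($opts);
    }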
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
The first two will be migrated to the notification system, while the
latter two were part of the first attempt at the new notification
system.
The first attempt only ever hit pvetest, so we simply tell the user
not to use the two params.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
Configuring pbs-entries-max can avoid failing backups due to a large
number of files in folders where a folder exclusion is not possible.
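For illustration, the limit could be raised via the performance
property string, e.g. in the node-wide defaults (the value is just an
example):

    # /etc/vzdump.conf
    performance: pbs-entries-max=2097152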
Signed-off-by: Alexander Zeidler <a.zeidler@proxmox.com>
After removing a storage, replication states can still contain
references to it, even if no volume references it anymore.
If a storage does not exist in the storage configuration, the
replication target runs into an error when preparing the job locally.
This error prevents both running and removing the replication job. Fix
it by not passing the invalid storage ID in the first place.
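A small sketch of the kind of filtering involved, using the storage
config lookup with noerr (surrounding variable names assumed):

    my $storecfg = PVE::Storage::config();
    # drop storage IDs that no longer exist in the storage configuration
    my @existing = grep { PVE::Storage::storage_config($storecfg, $_, 1) } @storeids;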
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
- Add new option 'notification-target'
Allows selecting to which endpoint/group notifications shall be sent
- Add new option 'notification-policy'
Replacement for the now-deprecated 'mailnotification' option. Mostly
just a rename for consistency, but it also adds a 'never' option.
- Mark 'mailnotification' as deprecated in favor of 'notification-policy'
- Clarify that 'mailto' is ignored if 'notification-target' is set
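A hypothetical job snippet using the new options (target name and
policy value invented for illustration):

    notification-target: my-gotify-endpoint
    notification-policy: failure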
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
This ensures that the alert counter is incremented when a message
with such a level is logged, and that the task is prominently marked
in the web UI task log.
The log_warn produces the exact same message format for the warn
level, so we can just swap printing to STDERR for the warning level
without any change to the resulting text in the log. Keep printing to
the backup log-fd (saved on storage) as-is.
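For reference, the swap roughly amounts to the following sketch
(assuming log_warn from PVE::RESTEnvironment):

    use PVE::RESTEnvironment qw(log_warn);

    # before: print STDERR "WARN: $msg\n";
    log_warn($msg); # same text, but also bumps the task's warning counter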
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
HA manager currently needs to know internal details about the configs
and how the properties are calculated. With this method, those details
are abstracted away, allowing the configuration structure to be
changed. In particular, QemuConfig's 'memory' can be turned into a
property string without the HA manager needing to know about it (once
the HA manager has switched to using this method).
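A hypothetical illustration of what a caller like the HA manager could
do instead of reading the raw config value (method and property names
assumed):

    # instead of: my $memory = $conf->{memory};
    my $max_memory = PVE::QemuConfig->get_derived_property($conf, 'max-memory');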
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
The CFQ scheduler was removed with Linux 5.0 and ionice is now used
by the newer BFQ scheduler. Mention what the special value 8 does.
Also mention that for snapshot and suspend mode backups of VMs, the
setting only affects the compressor, because the kvm process is not a
child process of vzdump then and does not inherit the ionice priority.
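For example, the value can be set in the node-wide vzdump defaults
(the value below is purely illustrative):

    # /etc/vzdump.conf
    ionice: 8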
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Adds a config file for each type of resource (usb/pci), using a 'map'
array property string for each node mapping.
In each mapping we save the path(s) and some other information to
detect hardware changes (if possible), like the vendor/device ID.
Both configs have a custom header parser/formatter to omit the type
(since we only want one type per config here).
Each config also has some helpers like find_on_current_node.
The resulting config (e.g. for pci) would look like this:
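Purely as a rough illustration of the shape (IDs, nodes and properties
here are invented, not the actual committed example):

    some-pci-mapping
        map node=node1,path=0000:01:00.0,id=10de:1234
        map node=node2,path=0000:02:00.0,id=10de:1234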
If a tag is defined, test whether the user has specific access to the
VLAN (or propagate from the full bridge ACL or zone).
If trunks are defined, check permissions for each VLAN of the trunks.
If no tag is set, test whether the user has access to the full bridge.
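A condensed sketch of the check order (helper names are assumptions):

    if (defined($tag)) {
        # specific access to the single VLAN (or propagated from bridge/zone ACLs)
        check_vlan_access($rpcenv, $authuser, $bridge, $tag);
    } elsif (defined($trunks)) {
        # check permissions for every VLAN that is part of the trunks
        check_vlan_access($rpcenv, $authuser, $bridge, $_) for trunks_to_tags($trunks);
    } else {
        # no tag: require access to the full bridge
        check_bridge_access($rpcenv, $authuser, $bridge);
    }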
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
FG:
- conditionalize check for bridge
- make trunk to tags helper private for now
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This makes the description consistent with the other places that also
have bwlimit as a parameter.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>