for reuse with remote migration, where import and export happen on
different clusters connected via a websocket instead of an SSH tunnel.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this allows migrating from btrfs to other raw+size accepting storages,
provided no snapshots exist.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The first step is to allocate RBD images correctly.
The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure-coded (EC) pool.
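A minimal sketch of how such an allocation could look in the RBD
plugin; the 'data-pool' storage option name and the surrounding
variables are assumptions:

    use PVE::Tools qw(run_command);

    # sketch: data objects go to the EC pool, metadata stays in $pool
    my $cmd = ['/usr/bin/rbd', 'create', '--size', "${size}M"];
    push @$cmd, '--data-pool', $scfg->{'data-pool'} if $scfg->{'data-pool'};
    push @$cmd, "$pool/$name";
    run_command($cmd, errmsg => 'rbd create error');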
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to allow reusing this with remote migration, where parsing of the source
volid has to happen on the source node, but this call has to happen on
the target node.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Ensure that the user-provided $secret ends in a newline. Otherwise,
rados_connect fails with Input/output errors.
For consistency and possible future-proofing, also add a newline to
CephFS secrets.
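A minimal sketch of the check, assuming the secret was already read
into $secret (the variable name is illustrative):

    # rados_connect uses the key verbatim and fails with
    # Input/output errors if the trailing newline is missing
    $secret .= "\n" if $secret !~ m/\n$/;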
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
Some versions of ZFS do not automatically display the child snapshots
when '-t snapshot' is used, but require '-r' to be present
additionally[1]. And in general, it's cleaner to specify the flag
explicitly.
Because of that, commit ac5c1af led to a regression[0] in the context
of ZFS over iSCSI with zfs_get_sorted_snapshot_list. Fix it by adding
the -r flag again.
The volume_snapshot_info function is currently only used in the
context of replication and that requires a local ZFS pool, but it
would be affected by the same issue if it is ever used in the context
of ZFS over iSCSI, so also add -r there.
[0]: https://forum.proxmox.com/threads/102683/
[1]: https://forum.proxmox.com/threads/102683/post-442577
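As a concrete sketch, the listing command with the flag added could
look as follows; the output columns and sorting are assumptions based
on the helper's purpose:

    # -H: no header, -r: include child snapshots, -s creation: sort
    my $cmd = ['zfs', 'list', '-t', 'snapshot', '-H', '-r',
        '-o', 'name,guid', '-s', 'creation', $dataset];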
Fixes: 8c20d8a ("plugin: add volume_snapshot_info function")
Fixes: ac5c1af ("zfspool: add zfs_get_sorted_snapshot_list helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
There are cases where autoactivation can fail, as reported in the
community forum [0]. And it could also be that a volume was
deactivated by something outside of our control.
It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works if it's
not active), but it does not report usage information as long as
neither the pool nor any of its LVs are active. Activate the pool for
that and for being able to use the flag in status(); if the pool can't
be activated, that should also serve as a good indicator that there's
a problem with it.
Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.
[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406
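A sketch of the activation logic; the lv_state field name and the
cache layout are assumptions:

    use PVE::Tools qw(run_command);

    # only activate if the cached state from lvm_list_volumes says the
    # thin pool is inactive; usage is only reported while it's active
    if (($lvs->{$vg}->{$thinpool}->{lv_state} // '') ne 'a') {
        run_command(['/sbin/lvchange', '-ay', "$vg/$thinpool"],
            errmsg => "activating thin pool $vg/$thinpool failed");
    }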
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Update the node restrictions to reflect that the storage is no longer
available on the particular node. If the storage was only configured
for that node, remove it altogether.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
slight style fixup
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
For ZFS and directory storages, clean up the whole disk when the
layout is the usual one, to avoid leftovers.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
to avoid duplication. Current callers pass along at least one device,
but anticipate future callers that might call with an empty list. Do
nothing in that case, rather than triggering everything.
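A sketch of the shared helper; the name and the udev invocation are
assumptions based on the commit context:

    use PVE::Tools qw(run_command);

    sub trigger_udev {
        my ($devs) = @_;
        # an empty argument list would make udevadm trigger *all* devices
        return if !scalar(@$devs);
        run_command(['udevadm', 'trigger', @$devs]);
    }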
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Functionality has been added for the following storage types:
* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `rename` has been introduced to mark which storage
plugins support it.
API version and AGE have been bumped.
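In a plugin, the new feature shows up in the hash checked by
volume_has_feature, roughly like this (a sketch; the exact subkeys
vary per plugin):

    my $features = {
        # feature => in which volume states it is supported
        snapshot => { current => 1, snap => 1 },
        clone => { base => 1 },
        rename => { current => 1 },
    };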
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
auto-assigned ...-disk-123)
only the former is implemented on the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for this future feature in place
already.
adapted the ApiChangelog change to fix conflicts and added more detail
above
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
free_image doesn't need to check for protection, because that will
happen on the server.
Getting/updating notes has also been refactored to reuse the code
for the PBS API calls.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
add missing b-d and depend on libposix-strptime-perl
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid potential confusion
between the two. Also update the description for the API return value
and add an enum constraint.
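The constraint could look roughly like this (a sketch; the field name
and the exact enum values are assumptions):

    mark => {
        description => 'Whether the backup is kept, removed,'
            . ' protected or renamed.',
        type => 'string',
        enum => ['keep', 'remove', 'protected', 'renamed'],
    },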
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
A protected backup is not removed by free_image and is ignored when
pruning.
The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.
For pruning, renamed backups already behaved similarly to how
protected backups will, but there are a few reasons not to just use
that for implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.
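A minimal sketch of the new helper, assuming the marker file simply
sits next to the backup archive:

    # in PVE/Storage.pm (sketch)
    sub protection_file_path {
        my ($path) = @_;
        return "${path}.protected";
    }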
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.
This is mainly done to avoid the need to add new methods every time a
new attribute is added.
Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added if it is ever needed in the
future.
For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.
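A sketch of the base-plugin methods, illustrating the
backwards-compatible dispatch for 'notes' described above:

    sub get_volume_attribute {
        my ($class, $scfg, $storeid, $volname, $attribute) = @_;
        # forward to the old method so derived implementations are caught
        return $class->get_volume_notes($scfg, $storeid, $volname)
            if $attribute eq 'notes';
        return undef; # attribute not supported
    }

    sub update_volume_attribute {
        my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
        return $class->update_volume_notes($scfg, $storeid, $volname, $value)
            if $attribute eq 'notes';
        die "attribute '$attribute' is not supported for $volname\n";
    }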
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This avoids showing empty notes in the result of the content/{volid}
API call for volumes that do not even support notes. It's also in
preparation for the proposed get_volume_attribute generalization,
which expects undef to be returned when an attribute is not supported.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
mirroring what commit 9177cc2eda already did
for the plugin system; with QEMU 6.1 this is now a hard
requirement
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the plugins for file-based storages
* BTRFS
* CIFS
* Dir
* Glusterfs
* NFS
now allow the option 'preallocation'.
'preallocation' can have five values:
* default
* off
* metadata
* falloc
* full
see the `qemu-img` man page for what these mean exactly [0].
the default value was chosen to be
* qcow2: metadata (as previously)
* raw: off
when using 'metadata' as the preallocation mode, 'off' is used for
raw images.
[0] https://qemu.readthedocs.io/en/latest/system/images.html#disk-image-file-formats
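A sketch of how the option could be applied when allocating a
file-based image; variable names are illustrative and the fallback
logic follows the defaults described above:

    use PVE::Tools qw(run_command);

    # 'default' (or unset) keeps the previous behaviour: metadata for
    # qcow2, off for raw; raw images cannot preallocate metadata
    my $prealloc = $scfg->{preallocation} // 'default';
    $prealloc = $fmt eq 'qcow2' ? 'metadata' : 'off'
        if $prealloc eq 'default';
    $prealloc = 'off' if $fmt eq 'raw' && $prealloc eq 'metadata';
    run_command(['/usr/bin/qemu-img', 'create', '-f', $fmt,
        '-o', "preallocation=$prealloc", $path, "${size}K"]);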
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
The calls for directory and ZFS need slight adaptations. Except for
those, the only thing that needs to be done is support partitions in
the disk_is_used helper.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
In preparation to extend disk_is_used to support partitions. Without
this new check, initgpt would also allow partitions once disk_is_used
supports partitions, which is not desirable.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The disk type is already 'partition' so there's no additional
information here. And it would need to serve as a code-word for
unused partitions. The cleaner approach is to not set the usage.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
when called with a partition. Since get_disks uses the partition type
(among other things) to detect LVM and ZFS volumes, such volumes would
still be seen as in-use after wiping. Thus, also change the partition
type and simply use 0x83 "Linux filesystem".
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
For change_parttype, only GPT-partitioned disks are supported, as I
didn't see an option for sgdisk to make it also work with
MBR-partitioned disks. And while sfdisk could be used instead (or
additionally), it would be a new dependency and would, AFAICS, require
some conversion of partition type GUIDs to MBR types on our part.
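A sketch of the helper's core; deriving the partition number from the
device path is assumed to happen elsewhere:

    use PVE::Tools qw(run_command);

    # sgdisk addresses the partition by number on the parent disk;
    # 8300 is sgdisk's code for the "Linux filesystem" type
    run_command(['sgdisk', "--typecode=${partnum}:8300", $disk],
        errmsg => "changing partition type of '$disk' failed");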
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>