prefer the model from the udevadm output, because the model in
/sys/block/<device>/device/model is limited to 16 characters.
but since the model is not always in the udevadm output (e.g. for
nvme devices), also read the model from that file as a fallback.
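A minimal sketch of the fallback, assuming the udevadm output was
parsed first and a helper like PVE::Tools::file_read_firstline
($udev_info is illustrative):

    my $model = $udev_info->{model}; # may be missing, e.g. for nvme
    if (!$model) {
        my $modelfile = "/sys/block/$dev/device/model";
        if (-f $modelfile) {
            $model = PVE::Tools::file_read_firstline($modelfile);
            $model =~ s/^\s+|\s+$//g if defined($model); # strip padding
        }
    }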
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since we iterate over the entries in /sys/block,
it makes sense to use this path.
this should fix #1099
because udevadm does not accept
-n cciss!c0d0 (it only looks in /dev for this),
but does accept
-p /sys/block/cciss!c0d0
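A sketch of the resulting invocation, using PVE::Tools::run_command:

    my $data = '';
    run_command(
        ['udevadm', 'info', '-p', "/sys/block/$dev", '--query', 'all'],
        outfunc => sub { $data .= "$_[0]\n" },
    );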
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
without this, having an efidisk on a ceph storage
prevents creating another disk on the same
ceph storage: the efidisk is not detected,
so we try to allocate a new disk with the same name.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
refactored the wear level parsing into its
own function, where we can now define a
vendor <-> attribute id
mapping
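A sketch of such a mapping; the attribute ids listed are the commonly
documented wear indicators for these vendors, treat the exact values
as illustrative:

    # map a vendor (matched against the disk model) to the SMART
    # attribute id carrying the wearout indicator
    my $wearout_attributes = {
        kingston => 231, # SSD_Life_Left
        samsung  => 177, # Wear_Leveling_Count
        intel    => 233, # Media_Wearout_Indicator
        sandisk  => 233,
        crucial  => 202, # Percent_Lifetime_Used
    };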
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of parsing the smart output in two places,
give get_smart_data a flag indicating that we only want the health status.
this fixes a bug (not on the bugtracker) where
an ssd with disabled smart showed an empty string as its health
in the gui
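Sketched interface, parameter and helper names assumed:

    sub get_smart_data {
        my ($disk, $healthonly) = @_;
        my $smartdata = parse_smartctl_output($disk); # hypothetical helper
        # callers that only need the health string skip the attribute table
        return { health => $smartdata->{health} } if $healthonly;
        return $smartdata;
    }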
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the smart checks are only needed for the API call(s) that
list all disks and their status, but get_disks is also used
in disk usage checks and in the Ceph code, where the smart
status is completely irrelevant.
drop the implicit skipping of smart checks if $disk is set,
since we have an explicit parameter for this now.
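Callers that don't need the smart status can then opt out explicitly,
e.g. (parameter name assumed):

    # disk usage checks and the Ceph code don't need SMART status
    my $disks = PVE::Diskmanage::get_disks(undef, 1); # $nosmart = 1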
we never ever want get_disks to die because of a
single disk, but the nodes/xyz/disks/smart API path is
allowed to fail if a disk device is unsupported by smartctl
or something else goes wrong.
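The per-disk containment could look like this sketch:

    my $health = 'UNKNOWN';
    eval {
        my $smartdata = get_smart_data($devpath, 1); # health only
        $health = $smartdata->{health} if $smartdata->{health};
    };
    # a single broken disk must not kill the whole listing; the
    # smart API path can still propagate the error to its caller
    warn $@ if $@;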
While the mkdir option deals with the case where we don't
want to clobber a mount point with directories (like ZFS,
gluster or NFS), putting a directory storage directly onto a
mount point is still risky:
If the path exists - which it usually does even if not
mounted - the storage will be considered successfully
activated, but empty (or with unexpected content). Some
operations will then lead to unexpected problems: the
free_disk operation for instance only warns if the disk does
not exist, but does not throw an error. In this case the
configuration might be updated without the real disk being
deleted. Once it's mounted back in, later operations which
check existing disks which are not part of the current VM
configuration (like migration) might error unexpectedly.
This adds an 'is_mountpoint' option to directory storages
which assumes the directory is an externally managed mount
point (e.g. fstab or zfs) and changes activate_storage() to
throw an error if the path is not mounted.
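A sketch of the check in activate_storage(), assuming
PVE::ProcFSTools::parse_proc_mounts() returns [$device, $mountpoint,
...] entries:

    if ($scfg->{is_mountpoint}) {
        my $path = $scfg->{path};
        my $mounted = grep { $_->[1] eq $path }
            @{PVE::ProcFSTools::parse_proc_mounts()};
        die "unable to activate storage '$storeid' - directory '$path'"
            . " is not mounted\n" if !$mounted;
    }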
So far this only prevented the creation of the toplevel
directory. This does not cover all problem cases,
particularly when said directory is supposed to be a mount
point, including NFS and glusterfs besides ZFS.
The directory based storages we have already use mkpath
whenever they need to create files, and for actions on files
which are supposed to exist it's fine if it errors out.
So it should also be safe to skip the creation of standard
subdirectories in activate_storage().
Additionally NFS and glusterfs storages should also accept
the mkdir option as they otherwise may exhibit similar
issues, e.g. when an NFS storage is mounted onto a directory
inside a ZFS subvolume.
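In storage.cfg such a storage could then opt out of directory
creation (values illustrative):

    nfs: nfs-on-zfs
        server 192.168.0.10
        export /tank/export
        path /mnt/pve/nfs-on-zfs
        mkdir 0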
since the rbd images themselves are named differently than
the volumes in our config files, we need to recreate this
information from the parent relation in the ceph metadata,
otherwise list_images() might return wrong volume names/IDs
list_images is used by PVE::Storage::vdisk_free() to
check for children still referencing a base image; because
of the wrong volume id, RBDPlugin->parse_volname() does not
detect the base image of linked clones and the check fails.
this is thankfully mitigated by the protected status of the
base snapshot, but creates a rather confusing error message.
scenario (VM 701 is a linked clone of template VM 700):
$ qm config 700 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1,size=2G
$ qm config 701 | grep virtio0:
virtio0: ceph_qemu:base-700-disk-1/vm-701-disk-1,size=2G
before (pvesm list reports wrong volume ID, check fails):
$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1 raw 2147483648 700
ceph_qemu:vm-701-disk-1 raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
rbd unprotect base-700-disk-1 snap '__base__' error: snap_unprotect: can't unprotect; at least 1 child(ren) in pool rbd
after (correct volume ID, check works as intended):
$ pvesm list ceph_qemu
ceph_qemu:base-700-disk-1 raw 2147483648 700
ceph_qemu:base-700-disk-1/vm-701-disk-1 raw 2147483648 701
$ pvesm free ceph_qemu:base-700-disk-1
base volume 'base-700-disk-1' is still in use (use by 'base-700-disk-1/vm-701-disk-1')
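A sketch of the reconstruction in list_images(), assuming the parsed
rbd metadata exposes the parent as 'image@snap' (field layout
illustrative):

    my $volname = $image;
    if (my $parent = $info->{parent}) { # e.g. 'base-700-disk-1@__base__'
        my ($basename) = $parent =~ m/^(base-\d+-disk-\d+)@/;
        $volname = "$basename/$image" if $basename;
    }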
since smartctl uses the return value to encode the
disk health status (such as failure in the past),
we cannot die there, but have to parse the return code
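smartctl(8) documents its exit status as a bitmask, so a sketch of the
handling only dies on the bits that signal an invocation failure:

    my $returncode = run_command($cmd, noerr => 1, outfunc => $parser);
    # bit 0: command line did not parse, bit 1: device open failed;
    # higher bits encode health and self-test results, not errors
    die "smartctl failed for '$disk'\n" if $returncode & 0b00000011;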
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
adds a new class (intended to be used under nodes in pve-manager)
which adds three api calls: list, smart and init
list being a general list of the available disks with info
smart being a call to get the smart data from a given device
init being a call to write a gpt header to an unused disk
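The calls would then be reachable under the node path, roughly (exact
paths illustrative):

    GET  /api2/json/nodes/{node}/disks/list
    GET  /api2/json/nodes/{node}/disks/smart?disk=/dev/sdX
    POST /api2/json/nodes/{node}/disks/init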
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this adds functions for listing the disks (mostly copied from
the ceph code), checking whether a disk is a valid blockdevice and
whether it is used in a zfs pool or as an lvm pv, and an init function
(which just adds a gpt header; this is important if one wants to use
a fresh disk for ceph journals)
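The init part can be as small as this sketch, assuming sgdisk is
available:

    sub init_disk {
        my ($disk) = @_;
        # write a fresh, empty GPT (and protective MBR) to the device
        run_command(['sgdisk', '-o', $disk]);
    }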
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If you want to use different ceph storages, they may have
different option values, like ms_nocrc = true (there are others
as well). The client needs to specify these special options to
be able to connect.
This patch allows creating a ceph config file for each storeid in
/etc/pve/priv/ceph/$storeid.conf
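Such a per-storage file would just carry the special options, e.g.:

    [global]
         ms_nocrc = true

The ceph client commands can then pick it up via their -c <conffile>
option.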
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
The PVE team cannot support specialized vendor-specific storage
plugins because of a lack of hardware. But we can allow users to
add their own plugins for their storages without the need to rewrite
any PVE code, and thus make PVE updates easier for them.
The idea of this patch is to add a folder /usr/share/perl5/PVE/Storage/Custom
where users can place their plugins, and PVE will automatically load
them on start, or warn if it could not and continue. Maybe we could
even load all plugins (except PVE::Storage::Plugin itself) this way,
because the current storage plugins are not really plugins if they
need to be explicitly loaded in PVE code :-).
Custom plugins MUST have an api() method returning the version they
were designed for. If the API changes on the PVE side, the module is
simply not registered and a warning message is printed to the log, so
the user has to update the module. Until the module is updated, the
corresponding storage will just disappear from PVE, so an API change
should not cause any data damage.
This approach works (with some limitations) if a plugin works in the
generic PVE way: full control of the volume lifecycle. It will not
currently work for custom plugins like iSCSI, which need to select
pre-existing volumes. Maybe someone will add a more flexible way for
pve-manager to select input elements for storage plugins to address
this.
Currently tested with my NetApp plugin.
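A condensed sketch of the loading logic (variable and helper names
assumed, the real registration details may differ):

    my $custom_dir = '/usr/share/perl5/PVE/Storage/Custom';
    if (-d $custom_dir) {
        foreach my $file (glob("$custom_dir/*.pm")) {
            my ($name) = $file =~ m!/([^/]+)\.pm$!;
            my $module = "PVE::Storage::Custom::$name";
            eval {
                require $file;
                die "unsupported api version " . $module->api() . "\n"
                    if $module->api() != $SUPPORTED_APIVER;
                $module->register();
            };
            warn "skipping custom storage plugin '$module': $@" if $@;
        }
    }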
Signed-off-by: Dmitry Petuhov <mityapetuhov@gmail.com>
ssh(1) mentions that compression is only desirable on slow
connections.
since migration from cluster node to cluster node needs a
fast network anyway, we can drop the compression for
a speed improvement
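Illustratively, the change boils down to dropping the compression flag
from the migration ssh command:

    # before: ['/usr/bin/ssh', '-C', '-o', 'BatchMode=yes', "root\@$target"]
    # after, without -C, saving CPU time on a fast link:
    my $ssh_cmd = ['/usr/bin/ssh', '-o', 'BatchMode=yes', "root\@$target"];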
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This way we get parameter verification on monitor addresses
as well as the ability to pass multiple `--monhost`
arguments to `pvesm add`.
Since our '-list' schemas default to using commas, we now
need to properly support these, so all uses of the monhost
property now replace any of semicolon, space or comma with
the currently required separator character.
This should fix the issues reported by Alwin Antreich on the
pve-user list.
Since this schema supports both ipv6+port notations, we need
to make sure we convert to the bracket-enclosed variant.
Added a helper for this.
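A sketch of both parts, helper name and separator targets assumed:

    # accept any of ';', ',' or ' ' between monitor addresses and
    # normalize to whatever the consumer currently requires
    (my $hostlist = $scfg->{monhost}) =~ s/[;, ]+/,/g;

    # normalize a plain ipv6 address to the bracket-enclosed variant
    # (an unbracketed ipv6+port combination needs extra care, omitted)
    sub bracketize_v6 { # name illustrative
        my ($addr) = @_;
        return $addr if $addr =~ /^\[/;          # already bracketed
        my $colons = () = $addr =~ /:/g;
        return $colons >= 2 ? "[$addr]" : $addr; # >= 2 colons => ipv6
    }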