Allow specifying a separate cluster network when initializing Ceph.
The Ceph docs[0] suggest a possible performance increase and enhanced
security in environments where the public network serves peers that
are not fully trusted, which could otherwise provoke a DoS on the
cluster traffic[0].
Make this optional, but if it is passed, `network` is required too.
[0]: http://docs.ceph.com/docs/luminous/rados/configuration/network-config-ref/
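For illustration, assuming the new option ends up being exposed as
`--cluster-network` alongside the existing `--network` (the subnets
below are made-up example values):

    pveceph init --network 192.168.10.0/24 --cluster-network 10.10.10.0/24

which would roughly translate to the following in ceph.conf:

    [global]
        public network = 192.168.10.0/24
        cluster network = 10.10.10.0/24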
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Introduce mdsCount again; I now remember why I had it in v3 of my
CephFS series. Use it to disable the CephFS create button if we have
no MDS configured, as an MDS is a requirement.
Further, change the gettext for 'No XY configured' to a format string
so that it can be reused.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
As we depend on ceph-fuse elsewhere (pve-storage), it gets installed
from Debian's repositories with the Ceph 10 version.
So ensure that an up-to-date version, from our currently supported
Ceph release, gets installed when doing `pveceph install`; otherwise
you may run into issues that have already been resolved in a newer
version.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
An MDS gets transferred to the active MDS info once it got selected
as active, not once it really _is_ active; it can still be in the
'up:creating' state there. So ensure that an MDS in the 'up:active'
state can be found.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
An MDS only becomes active once a FS exists, and we need an active
MDS to be able to add a storage, as the CephFS plugin does an
immediate mount check. As an MDS needs some time to become active, we
had a problematic time window in which this mount could fail.
Wait for an MDS to reach the active state.
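A minimal sketch of such a wait loop, with a hypothetical helper for
querying the MDS state map (the actual implementation is Perl; Python
is used here only for illustration):

    import time

    def wait_for_active_mds(get_mds_states, timeout=60, interval=2):
        """Poll until some MDS reports 'up:active'. get_mds_states is a
        hypothetical callable returning a list of state strings, e.g.
        ['up:creating', 'up:standby']."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if any(s == 'up:active' for s in get_mds_states()):
                return
            time.sleep(interval)
        raise TimeoutError("no MDS reached 'up:active' in time")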
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Add aliases for the existing ones; ignore the ones for MDS and
CephFS, as they never hit any repo.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow creating a new CephFS instance and allow listing the existing
ones.
As deletion requires coordination between the active MDS and all
standby MDSes next in line, it needs a bit more work. One could mark
the MDS cluster down and stop the active one; that should work, but
as destroying is quite a sensitive operation that is not often needed
in production, I deemed it better to only document it and leave API
endpoints for this to the future.
For index/list I slightly transform the result of a RADOS `fs ls`
monitor command; this allows relatively easy display of a CephFS and
its backing metadata and data pools in a GUI.
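As an illustration of that transformation (the field names of the
`fs ls` output are from memory and may differ slightly):

    # JSON-decoded 'fs ls' monitor command result, one entry per CephFS
    fs_ls_result = [{
        'name': 'cephfs',
        'metadata_pool': 'cephfs_metadata',
        'data_pools': ['cephfs_data'],
    }]

    # flatten into one row per CephFS for easy display in a GUI grid
    index = [{
        'name': fs['name'],
        'metadata_pool': fs['metadata_pool'],
        'data_pool': fs['data_pools'][0],
    } for fs in fs_ls_result]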
While for now it's not enabled by default and marked as experimental,
this API is designed to host multiple CephFS instances - we may not
need this at all, but I did not want to limit us early on. Anybody
who likes to experiment can use it after setting the respective
ceph.conf options.
When encountering errors, try to roll back. As we verified at the
beginning that we did not reuse existing pools, destroy the ones we
created.
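A rough sketch of that rollback logic, with hypothetical callables
standing in for the real cluster operations:

    def create_fs_with_rollback(fs_name, pools, create_pool, destroy_pool,
                                create_fs):
        """create_pool/destroy_pool/create_fs wrap the respective
        (hypothetical) monitor commands."""
        created = []
        try:
            for pool in pools:
                create_pool(pool)
                created.append(pool)
            create_fs(fs_name, *pools)
        except Exception:
            # we checked up front that these pools did not exist before,
            # so destroying them on error cannot touch pre-existing data
            for pool in reversed(created):
                destroy_pool(pool)
            raise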
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
Allow creating, listing and destroying a Ceph Metadata Server (MDS)
over the API and the CLI `pveceph` tool.
Besides setting up the local systemd service template and the MDS
data directory, we also add a reference to the MDS in the ceph.conf.
We note the backing host (node) of the respective MDS and set
'mds standby for name' = 'pve' so that the PVE-created ones form a
single group. If we decide to add integration for rank/path-specific
MDS (possibly useful for a CephFS under quite a bit of load), this
may help as a starting point.
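The added ceph.conf section could look roughly like the following
(the section name and layout are illustrative; 'nodename' stands for
the backing host):

    [mds.nodename]
        host = nodename
        mds standby for name = pve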
On create, check early whether a reference already exists in
ceph.conf and abort in that case. If we only notice existing data
directories later on, we abort but do not remove them; they could
well be from an older manual setup, where it is possibly dangerous to
just remove them. Let the user handle that case themselves.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Co-authored-by: Alwin Antreich <a.antreich@proxmox.com>
We will reuse this in the future, e.g., when creating a data and
metadata pool for CephFS.
Allow passing a $rados object (to reuse it, as initializing one is
not that cheap), but also create it if it's undefined, for
convenience.
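The resulting call pattern is roughly the following (Python only for
illustration, with hypothetical helper and method names; the actual
code is Perl):

    def create_pool(name, rados=None, connect=None):
        """'rados' is an already-initialized connection to reuse; if it
        is None, a new one is created via the (hypothetical) 'connect'
        factory."""
        if rados is None:
            rados = connect()
        rados.create_pool(name)  # stand-in for the real monitor command
        return rados             # hand it back so callers can keep reusing it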
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
These are hard to translate sensibly, and it is also better for users
to have them in English, as much more can be found when searching for
those specific terms. To what would one translate CKSUM, for example?
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As requested by various users, and also for Dominik's upcoming
'restart Ceph [Mon, Mgr, ...]' series, where otherwise two buttons
just called 'Reboot' would sit exactly beside each other.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This enables the GUI part so that the user can add a PCI device via
the GUI.
We disable the button after 4 devices, since that is the maximum
supported by the backend.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This patch adds the PCIEdit window; its InputPanel uses PCISelector
and MDevSelector.
When we detect an IOMMU group of -1, we put a warning on top to
inform the user that IOMMU is not activated (but still let them add
the device regardless, so that they can use it once IOMMU is
activated).
It also puts a warning if they select a device that shares an IOMMU
group with a different device (but not the same device with a
different function). That detection is not perfect, but we cannot
really do better.
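A minimal sketch of that shared-group check, assuming device
addresses in the usual domain:bus:slot.function form (Python only for
illustration; the actual GUI code is ExtJS):

    def shares_group_with_other_device(selected, devices):
        """Warn only if another device (ignoring the function part of
        the address) sits in the same IOMMU group as the selected one."""
        sel_addr, sel_group = selected
        sel_base = sel_addr.rsplit('.', 1)[0]  # strip the function number
        return any(group == sel_group and addr.rsplit('.', 1)[0] != sel_base
                   for addr, group in devices)

    devices = [("0000:01:00.0", 5), ("0000:01:00.1", 5), ("0000:02:00.0", 5)]
    print(shares_group_with_other_device(("0000:01:00.0", 5), devices))  # True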
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If an image has a <vmid> encoded in the image name and the guest does
not exist in the cluster, we can delete it in the GUI.
Also, if a config exists on another node and the storage is local, we
can delete it.
In all other cases, deleting is not allowed.
For safety reasons, the safe-remove window is used.
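A rough sketch of those rules, with hypothetical parameter names (the
actual GUI code is ExtJS):

    def image_removable(vmid, guest_nodes, image_node, storage_is_local):
        """vmid is the ID parsed from the image name (None if none is
        encoded); guest_nodes maps existing guest IDs to the node their
        config lives on."""
        if vmid is None:
            return False
        if vmid not in guest_nodes:
            return True   # guest no longer exists anywhere in the cluster
        if storage_is_local and guest_nodes[vmid] != image_node:
            return True   # config lives on another node, image is local
        return False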
As done with the API Ceph modules:
most of this was imported by just copying, without verifying whether
all of it is actually required. Some of it lost its purpose as we
reused more from our existing module code base (e.g., pve-common) but
was never actually removed.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Most of this was imported by just copying, without verifying whether
all of it is actually required. Some of it lost its purpose as we
reused more from our existing module code base (e.g., pve-common) but
was never actually removed.
As this file contains two Perl modules, take a bit of caution when
looking at this: some things are used in one module but not the
other, so simply grep'ing for them may give false positives.
Also add the PVE::API2::Storage use statement, which was missing
here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Most of this was imported by just copying, without verifying whether
all of it is actually required. Some of it lost its purpose as we
reused more from our existing module code base (e.g., pve-common) but
was never actually removed.
As this file contains two Perl modules, take a bit of caution when
looking at this: some things are used in one module but not the
other, so simply grep'ing for them may give false positives.
Also include the missing IO::File use statement.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>