for better building of pve-manager
this moves the copyright file to debian/ and
debian/changelog.Debian to debian/changelog
we do not need debian/conffiles anymore (it gets autogenerated from
the files in ./etc/)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this is neither necessary nor useful
those files are meant to be read-only anyway, so there is no gain in
them being owned by www-data
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Otherwise the user running the tests may either create (and
end up being the owner of) the system-wide
/var/lock/pve-manager/* files, or the tests will fail (or
loop endlessly) if the user doesn't have access to them.
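
As an illustration of keeping tests away from the system-wide lock files,
here is a minimal Perl sketch that puts the lock file into a per-run
temporary directory; the directory and file names are made up for the
example and are not the actual pve-manager test code.

    # illustrative only: lock in a private temp dir instead of /var/lock/pve-manager/
    use strict;
    use warnings;
    use File::Temp qw(tempdir);
    use Fcntl qw(:flock);

    # test-local lock directory, removed automatically when the test exits
    my $lock_dir = tempdir('pve-manager-test-XXXXXX', TMPDIR => 1, CLEANUP => 1);
    my $lock_file = "$lock_dir/queue.lck";

    open(my $fh, '>>', $lock_file) or die "cannot open $lock_file: $!";
    flock($fh, LOCK_EX) or die "cannot lock $lock_file: $!";
    # ... run the code under test that previously took the system-wide lock ...
    flock($fh, LOCK_UN);
    close($fh);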
the edit window has 3 radio buttons (spice, device, port)
and a checkbox for usb3 (which gets disabled and checked
if you choose a usb3 device)
it also makes use of the help feature
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
when having an rbd storage with no user, we displayed 'admin' by default;
this patch sets '' instead, which gets overwritten by the backend if a value is set
when creating an rbd storage, the textfield still has 'admin' in
there by default
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
to reduce code duplication. This slightly changes behaviour
compared to the previous version:
only disks with the correct prefix are cleaned up, not all
disks with __replication* snapshots.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
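
A rough Perl sketch of the prefix-based filtering described in the message
above; the subroutine name, the prefix variable and the data layout are
assumptions for illustration, not the actual cleanup code.

    use strict;
    use warnings;

    # assumption: $prefix is the replication snapshot prefix of this job, and
    # $snapshots maps volume id => array of snapshot names found on the disk
    sub stale_replication_snapshots {
        my ($prefix, $snapshots) = @_;

        my @stale;
        for my $volid (sort keys %$snapshots) {
            for my $snap (@{$snapshots->{$volid}}) {
                # only snapshots carrying the correct prefix are cleaned up;
                # replication snapshots belonging to other jobs are left alone
                push @stale, [$volid, $snap] if $snap =~ /^\Q$prefix\E/;
            }
        }
        return \@stale;
    }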
/nodes/<node>/replication => list status of all jobs
/nodes/<node>/replication/<id>/status => individual job status
/nodes/<node>/replication/<id>/log => job log
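
For illustration, a hedged Perl example that queries these endpoints through
the regular REST API; host, node, job id and API token are placeholders, the
/api2/json prefix and port 8006 are the usual Proxmox API defaults, and token
authentication is just one possible way to authenticate.

    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON qw(decode_json);

    # placeholders - replace with your own host, node, job id and API token
    my $host  = 'pve.example.com';
    my $node  = 'node1';
    my $jobid = '100-0';
    my $token = 'user@pam!monitoring=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';

    my $ua = LWP::UserAgent->new(); # self-signed certs may need ssl_opts adjusted

    for my $path (
        "/nodes/$node/replication",               # list status of all jobs
        "/nodes/$node/replication/$jobid/status", # individual job status
        "/nodes/$node/replication/$jobid/log",    # job log
    ) {
        my $res = $ua->get("https://$host:8006/api2/json$path",
            'Authorization' => "PVEAPIToken=$token");
        die "$path: " . $res->status_line . "\n" if !$res->is_success;
        my $data = decode_json($res->decoded_content);
        # ... inspect $data->{data} ...
    }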
We pass a list of storages to scan for stale volumes to prepare_local_job(),
so we make sure that we only activate/scan the related storages.
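
A minimal sketch of deriving such a storage list from the replicated volumes;
the helper name is invented for the example, the volume ids use the usual
"<storeid>:<volname>" notation, and prepare_local_job() is the entry point
named above.

    use strict;
    use warnings;

    # assumption: @volids are the volume ids of the guest's replicated disks
    sub storages_for_job {
        my (@volids) = @_;

        my %seen;
        my @storeids = grep { !$seen{$_}++ }
                       map { (split(':', $_, 2))[0] } @volids;
        return \@storeids;
    }

    # this list would then be handed to prepare_local_job(), so only the
    # storages that actually hold replicated volumes get activated/scanned
    my $storage_list = storages_for_job('local-zfs:vm-100-disk-1', 'tank:vm-100-disk-2');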
Snapshot rollback may remove local replication snapshots. In that case
we still have the $conf->{parent} snapshot on both sides, so we
can use that as base snapshot.
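
A hedged Perl sketch of that base-snapshot fallback; the subroutine and its
list arguments are invented for the example, while $conf->{parent} is the
config field mentioned in the message.

    use strict;
    use warnings;

    # assumption: @$local are the local replication snapshot names (newest
    # first) and @$remote the snapshot names still present on the target side
    sub choose_base_snapshot {
        my ($conf, $local, $remote) = @_;

        my %remote = map { $_ => 1 } @$remote;

        # prefer a replication snapshot that survived on both sides
        for my $snap (@$local) {
            return $snap if $remote{$snap};
        }

        # after a rollback the replication snapshots may be gone, but the
        # $conf->{parent} snapshot is still there on both sides - use it
        my $parent = $conf->{parent};
        return $parent if defined($parent) && $remote{$parent};

        return undef; # no common base snapshot, a full sync is needed
    }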