- The SSH drivers are now named local. The SSH drivers will still be
distributed, for running VMs that use them.
- The new drivers optimize Qcow2 operations in the same way as the
shared TM driver, including:
- thin provisioning; when available, a CoW copy is made
- Qcow2 snapshots for the snapshot operations
- Some operations have been improved code-wise
- It should be a drop-in replacement for ssh
- New drivers are written in Ruby to accommodate future improvements
- By default new datastores will use "local" instead of "ssh"
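For illustration, a minimal system datastore template that selects the renamed
drivers explicitly (a sketch assuming the standard TM_MAD attribute):
  NAME   = "local_system"
  TYPE   = "SYSTEM_DS"
  TM_MAD = "local"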
co-authored-by: Ruben S. Montero <rsmontero@opennebula.org>
* Remove vm_import table from DB
* Remove imported vms actions
* Fix fsck for image and network
* onedb fsck fix running_vms only for non-backup images
+ define windows profile
OpenNebula/one#6627
* Add windows profile
* Adds OS Profiles parsing & loading
- new /profiles endpoint
- profiles are stored in /etc/fireedge/sunstone/profiles/
- YAML format only (see the sketch after this list)
* Update profile loading
* Load profile only once per step
* Add indicator for last applied profile
* Fix autocomplete controller equality comparison
* Install new 'profiles' directory
* Installs windows_optimized profile
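A minimal sketch of a profile file; only the location and the YAML format are
taken from this changelog, and the keys shown are hypothetical, not the real
schema:
  # /etc/fireedge/sunstone/profiles/example.yaml (illustrative only)
  # Key names below are assumptions, not the actual profile schema.
  general:
    hypervisor: kvm
  os-cpu:
    arch: x86_64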
Signed-off-by: Victor Hansson <vhansson@opennebula.io>
This feature lets cloud admins proxy VM connections to any service through the hypervisor. VMs use a link-local IP that is forwarded to a local proxy. This simplifies VM network requirements, e.g. for access to gateways, vaults, configuration services, etc.
Implementation:
- Use network namespaces to isolate VNET networking. The ip netns command is executed through a wrapper to limit sudo access to commands.
- Add tproxy.rb app to manage a group of daemons on HV nodes.
- Use unix sockets for communication between proxy peers. "Inner" proxy runs in the netns without any network access. "Outer" proxy handles HV connections to services.
- Use OpenNebulaNetwork.conf + 'onehost sync -f' for configuration. The proxy can be defined per network (see the sketch below).
This commit implements a transparent proxy for the OneGate service (as well as for any other TCP service).
* #6281: Disable legacy OneGateProxy
* Implement OneGateProxy in VN drivers
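A sketch of a possible per-network proxy definition in OpenNebulaNetwork.conf,
distributed with 'onehost sync -f' as described above (the attribute names are
assumptions; check the shipped configuration file for the real schema):
  :tproxy:
    # Forward the VMs' link-local OneGate port to the real service
    # (addresses and the network name are placeholders).
    - :service_port: 5030
      :remote_addr: 10.0.0.5
      :remote_port: 5030
      :networks: ['private']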
* GO api, including backup struct and VM states
* Java api
* remove snapshots on restore_success callback
* fix xmlrpc response in case VM doesn't exist
This commit implements the in-place restore of VM backups. Selected VM disks will
be replaced with the specified backup:
* A new API call has been added to the XML-RPC API (`one.vm.restore`) with
the following arguments (see the Ruby sketch after this list):
- VM ID to be restored, needs to be in **poweroff** state
- IMAGE ID of the backup to restore
- INCREMENT ID, only for incremental backups, the increment to use
(defaults to -1 to use the last increment available)
- DISK ID of the disk to restore (defaults to -1 to restore all VM
disks)
* Datastore drivers need to implement a new operation, `ls`. This new
operation takes the VM, the image information of the backup, and the
datastore information, and returns the restore URL for the disks in the backup.
* This commit includes the implementation for the qcow2 and ssh drivers;
Ceph will be implemented in a separate PR. The new driver action is
`restore host:vm_dir vm_id img_id inc_id disk_id`
* The restore operation is performed in a new state `PROLOG_RESTORE`,
rendered as `RESTORE` (`rest` in short form). The state is in RSuntone.
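A minimal Ruby sketch of invoking the new call over XML-RPC, assuming the
standard oned endpoint and the argument order listed above (credentials and
IDs are placeholders):
  require 'xmlrpc/client'   # the 'xmlrpc' gem on Ruby >= 3.0

  client  = XMLRPC::Client.new2('http://localhost:2633/RPC2')
  session = 'oneadmin:password'

  # one.vm.restore(session, vm_id, image_id, increment_id, disk_id);
  # -1 selects the last increment / all disks, as described above.
  rc = client.call('one.vm.restore', session, 42, 7, -1, -1)
  success, body = rc
  puts(success ? "Restore queued: #{body}" : "Error: #{body}")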
TODO:
- Remove any existing VM snapshot (system/disk) in the VM. Note that
snapshots are not included in a backup.
- Ceph drivers
- JAVA, GO Lang API bindings
- Sunstone interface, new state and new operation. Review new state in
RSuntone.
co-authored-by: Pavel Czerny <pczerny@opennebula.io>
Add support for the <feature> element of the virtual CPU (see [1]). It
includes:
* A new probe that gets the supported features of the hypervisor CPU
using virsh capabilities.
* Generate AUTOMATIC_REQUIREMENTS if the CPU_MODEL/FEATURES is present.
Note that a MODEL needs to be set for this to work (libvirt error otherwise is:
"XML error: Non-empty feature list specified without CPU model...")
[1] https://libvirt.org/formatdomain.html#cpu-model-and-topology
Example
--------------------------------------------------------------------------------
* Template configuration:
CPU_MODEL = [
MODEL = "host-passthrough",
FEATURES = "ss,vmx,tsc_adjust"
]
* Generated AUTOMATIC_REQUIREMENTS in the VM:
AUTOMATIC_REQUIREMENTS="(CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES) & !(PIN_POLICY = PINNED) & (KVM_CPU_FEATURES = \"*ss*\") & (KVM_CPU_FEATURES = \"*vmx*\") & (KVM_CPU_FEATURES = \"*tsc_adjust*\")"
* Generated deployment file:
<cpu mode='host-passthrough'>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='tsc_adjust'/>
</cpu>
* Information gathered by the probe:
...
MONITORING INFORMATION
ARCH="x86_64"
CGROUPS_VERSION="2"
...
KVM_CPU_FEATURES="ss,vmx,pdcm,osxsave,hypervisor,tsc_adjust,clflushopt,umip,md-clear,stibp,arch-capabilities,ssbd,xsaves,pdpe1gb,ibpb,ibrs,amd-stibp,amd-ssbd,rdctl-no,ibrs-all,skip-l1dfl-vmentry,mds-no,pschange-mc-no"
KVM_CPU_MODEL="Skylake-Client-noTSX-IBRS"
...
co-authored-by: Neal Hansen <nhansen@opennebula.io>
Includes the following changes:
- xml-schema for Backup Job and Scheduled Actions
- GO, Java api
- Deprecate onevm update-chart, delete-chart
* The commands are replaced by sched-update and sched-delete (see the
usage sketch after this list)
* Refactor method deprecate_command; it is still possible to run the
command
* Delete 'shutdown' and 'delete' commands deprecated years ago
* Fix --verbose option for sched-update and sched-delete
- Re-implementation of scheduled actions; they are now managed and executed
by oned
- Backup Job objects, API, and CLI commands
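For example, a scheduled action previously edited with update-chart is now
managed directly (IDs are placeholders; see onevm help for the exact syntax):
  $ onevm sched-update 42 0    # edit scheduled action 0 of VM 42
  $ onevm sched-delete 42 0    # delete scheduled action 0 of VM 42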
* Introduce support to follow KEEP_LAST for incremental backups.
- New increment_flatten action added for backup datastores.
- increment_flatten will consolidate KEEP_LAST increments into the
current first increment in the chain.
- increment_flatten MUST return the new chain (inc1:source1,...) and the
size of the new first increment (FULL) in the chain (see the parsing
sketch after this list)
* The downloader logic for restore has been extracted from downloader.sh to
reuse the increment-flatten logic. A new command, restic_downloader.rb,
processes restic:// pseudo-URLs.
* The restore process uses two new attributes to customize the
restore:
- NAME to be used as base name for images and VM Template
- INCREMENT_ID to restore disks from a given increment (not always the
last one)
* Common logic has been added to BackupImage class (backup.rb)
* Includes the following fixes:
- Fix when increment includes blocks larger than max qemu-io request size
- Fix IMAGE counter for quotas on backup images
- Fix rsync restore NO_IP / NO_NIC attributes
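For illustration, a small Ruby sketch that parses a chain in the
"inc1:source1,..." format quoted above (a hypothetical helper, not code from
backup.rb):
  # Split an increment_flatten chain string into id/source pairs.
  chain = '0:/backups/base.qcow2,1:/backups/inc1.qcow2'
  increments = chain.split(',').map do |pair|
    id, source = pair.split(':', 2)
    { id: Integer(id), source: source }
  end
  # => [{ id: 0, source: '/backups/base.qcow2' }, { id: 1, ... }]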
TODO:
* Mimic increment_flatten logic and restore images on the backup server
* Sunstone restore options
co-authored-by: Michal Opala <mopala@opennebula.io>
- New TransferManager::Datastore class with confine helpers
- "Confinement" methods for backup file preparation and backup:
* ionice/nice
* systemd slice
- IONICE/NICE
* Execute commands under a given nice and ionice (class 2)
* The following variables can be set:
- RESTIC_NICE
- RESTIC_IONICE
- RSYNC_NICE
- RSYNC_IONICE
- Systemd Slice
* A user slice is created for each datastore that sets:
- CPUQuota
- IOReadIOPSMax
- IOWriteIOPSMax
This requires delegation of the io/cpu/cpuset controllers to oneadmin.
Also, the VM folder needs to be local (e.g. not an NFS volume).
* Commands are passed specific environment (e.g. SSH agent socket)
* The following variables can be set (see the example below):
- RESTIC_MAX_RIOPS
- RESTIC_MAX_WIOPS
- RESTIC_CPU_QUOTA
- RSYNC_MAX_RIOPS
- RSYNC_MAX_WIOPS
- RSYNC_CPU_QUOTA
The new interface is added to the file-based (qcow2/shared/ssh) and Ceph TM
drivers.
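For example, a backup datastore could cap restic I/O and CPU with the
attributes listed above (the values, and the assumption that they go in the
datastore template, are illustrative):
  RESTIC_NICE      = "10"
  RESTIC_IONICE    = "5"    # ionice class 2 priority, as described above
  RESTIC_CPU_QUOTA = "50"   # systemd CPUQuota (percent is an assumed unit)
  RESTIC_MAX_RIOPS = "200"
  RESTIC_MAX_WIOPS = "200"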
(cherry picked from commit 276f093073)
- Add update_nic operation to all drivers, linked to dummy (no actual
operation will be performed, but the network will be updated)
- Add missing update_nic in bridge driver
co-authored-by: Pavel Czerný <pczerny@opennebula.systems>
co-authored-by: Frederick Borges <fborges@opennebula.io>
co-authored-by: Christian González <cgonzalez@opennebula.io>
* VNET updates trigger a driver action on running VMs with NICs in the
network.
* The VNET includes sets with the VM status (updated, outdated, error, and
updating), with the VMs in each state.
* VNET flags error situations with a new state UPDATE_FAILURE.
* The same procedure is applied when an AR is updated (only VMs in that
AR are updated).
* A new option in the one.vn.recover API call makes it possible to recover
or retry these VM update operations.
* The following attributes can be live-updated per VNET driver:
- PHYDEV (novlan, vlan, ovs driver)
- MTU (vlan, ovs driver)
- VLAN_ID (vlan, ovs driver)
- QINQ_TYPE (ovs driver)
- CVLANS (ovs driver)
- VLAN_TAGGED_ID (ovs driver)
- OUTER_VLAN_ID (ovs driver)
- INBOUND_AVG_BW (SG, ovs driver + KVM)
- INBOUND_PEAK_BW (SG, ovs driver + KVM)
- INBOUND_PEAK_KB (SG, ovs driver + KVM)
- OUTBOUND_AVG_BW (SG, ovs driver + KVM)
- OUTBOUND_PEAK_BW (SG, ovs driver + KVM)
- OUTBOUND_PEAK_KB (SG, ovs driver + KVM)
* New API call one.vm.updatenic allows updating individual NICs
without the need for detach/attach (only QoS supported).
* Update operations for: 802.1Q, bridge, fw, ovswitch, ovswitch_vxlan
and vxlan network drivers.
* VNET attributes (old values) stored in VNET_UPDATE to allow
implementation of update operations. The attribute is removed after a
successful update.
* Updates to CLI onevnet (--retry option) / onevm (nicupdate command); see
the session sketch below
* XSD files updated to reflect the new data model
* Ruby and JAVA bindings updated: new VNET state and recover option, new
VM API call.
* Suntone and Fireedge implementation (lease status, recover option, new
states)
TODO: Virtual Functions do not support this functionality
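An illustrative CLI session based on the commands above (IDs are placeholders
and the exact flags may differ):
  $ onevnet update 3              # triggers live update of NICs in VNET 3
  $ onevnet recover 3 --retry     # retry an update flagged UPDATE_FAILURE
  $ onevm nicupdate 42 0 qos.tmpl # update QoS of NIC 0 on VM 42 (hypothetical syntax)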