- EXTERNAL_VM_ATTR can be defined in sched.conf to add additional
VM attributes to the JSON document sent to the external scheduler. The
format is "XPATH<:NAME>" (NAME is optional; if not defined, the
original attribute name will be used).
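For illustration only, such an attribute might be set in sched.conf as follows; the XPath expressions, the LABELS alias, and the assumption that the attribute can be given more than once are all hypothetical:
EXTERNAL_VM_ATTR = "/VM/USER_TEMPLATE/LABELS:LABELS"
EXTERNAL_VM_ATTR = "/VM/TEMPLATE/NIC/NETWORK"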
- Add example server to the share folder
- Remove unused call function
- Add context to new methods
- Copy comments to context functions
- Disk method doesn't need context
- Add context to ByName functions
Signed-off-by: Peter Willis <peter.willis@cudoventures.com>
- Add state to Marketplace
- Add state to Marketplace Appliance
- Add enable method for Marketplace
- Add tests and use gocheck
co-authored-by: Pavel Czerny <pczerny@opennebula.io>
Signed-off-by: Pierre Lafievre <pierre.lafievre@iguanesolutions.com>
When a NIC_ALIAS is detached, the deactivate block is executed
incorrectly for some drivers. This can result in an unusable network for
the VM.
This commit also includes some linting.
* Set priority on Backup Job create
* Fix a bug when running backup jobs in sequential mode
* Change the update semantics to support replace mode
* Update Ruby, Golang and Java API accordingly
* F #6063: Address PR comments
- Always add a virtio-scsi controller to allow hotplug of SCSI disks
- Change DISK/QUEUES to DISK/VIRTIO_BLK_QUEUES
- Default for all disks in the VM can be set in FEATURES/VIRTIO_BLK_QUEUES
- Defaults for all domains can be set in vmm_exec.conf
Adds a SCSI controller to the VM (KVM) when a SCSI disk (target sd) is
present.
Note: < 6.6.x works because it defaults to 1 SCSI virtio queue, which adds
the controller.
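As a rough sketch, a VM template could combine these attributes as follows; the image names and queue counts are hypothetical, and placing both queue attributes under FEATURES is an assumption based on the bullets above:
FEATURES = [
  VIRTIO_SCSI_QUEUES = "4",
  VIRTIO_BLK_QUEUES  = "4"    # default for every virtio-blk disk in this VM
]
DISK = [
  IMAGE  = "scsi-data",       # hypothetical image; a sd* target makes it a SCSI disk
  TARGET = "sda"
]
DISK = [
  IMAGE  = "blk-data",        # hypothetical image on the virtio-blk bus
  TARGET = "vda",
  VIRTIO_BLK_QUEUES = "2"     # per-disk override of the FEATURES default
]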
This commit adds support for the "auto" keyword for the
VIRTIO_SCSI_QUEUES attribute. The number of queues is set to the number
of virtual CPUs in this case.
Also, a new DISK attribute, QUEUES, has been added to the VM DISK
definition to set the number of virtio queues for virtio-blk. This parameter
also supports the auto keyword to set it to the number of VCPUs.
These parameters can be set by default in vmm_exec_kvm.conf.
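A hedged sketch of what the hypervisor-wide defaults might look like in vmm_exec_kvm.conf, assuming both attributes are accepted inside that file's FEATURES section (other FEATURES entries omitted):
FEATURES = [
  VIRTIO_SCSI_QUEUES = "auto",   # one virtio-scsi queue per VCPU
  VIRTIO_BLK_QUEUES  = "auto"    # one virtio-blk queue per VCPU
]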
Add support for the <feature> element of the virtual CPU (see [1]). It
includes:
* A new probe that gets the supported features of the hypervisor CPU
using virsh capabilities.
* Generation of AUTOMATIC_REQUIREMENTS when CPU_MODEL/FEATURES is present.
Note that a MODEL needs to be set for this to work (libvirt error otherwise is:
"XML error: Non-empty feature list specified without CPU model...")
[1] https://libvirt.org/formatdomain.html#cpu-model-and-topology
Example
--------------------------------------------------------------------------------
* Template configuration:
  CPU_MODEL = [
    MODEL    = "host-passthrough",
    FEATURES = "ss,vmx,tsc_adjust"
  ]
* Generated AUTOMATIC_REQUIREMENTS in the VM:
AUTOMATIC_REQUIREMENTS="(CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES) & !(PIN_POLICY = PINNED) & (KVM_CPU_FEATURES = \"*ss*\") & (KVM_CPU_FEATURES = \"*vmx*\") & (KVM_CPU_FEATURES = \"*tsc_adjust*\")"
* Generated deployment file:
  <cpu mode='host-passthrough'>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='tsc_adjust'/>
  </cpu>
* Information gathered by the probe:
...
MONITORING INFORMATION
ARCH="x86_64"
CGROUPS_VERSION="2"
...
KVM_CPU_FEATURES="ss,vmx,pdcm,osxsave,hypervisor,tsc_adjust,clflushopt,umip,md-clear,stibp,arch-capabilities,ssbd,xsaves,pdpe1gb,ibpb,ibrs,amd-stibp,amd-ssbd,rdctl-no,ibrs-all,skip-l1dfl-vmentry,mds-no,pschange-mc-no"
KVM_CPU_MODEL="Skylake-Client-noTSX-IBRS"
...
co-authored-by: Neal Hansen <nhansen@opennebula.io>
The new URL format is:
- restic://<datastore_id>/<bj_id>/<id>:<snapshot_id>,.../<file_name>
- rsync://<datastore_id>/<bj_id>/<id>:<snapshot_id>,.../<file_name>
bj_id can be empty
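For illustration only (all IDs, the snapshot hash and the file name are hypothetical), a backup reference in this format could look like:
- restic://104/12/0:3fa2bc1d/disk.0.0
- rsync://104//0:5/disk.0.0        (empty bj_id; multiple <id>:<snapshot_id> pairs would be comma-separated)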
This commit also adapts some drivers to the new BACKUP format.
This makes it possible to use the same repository for all VMs in the same
backup job, for example to deduplicate all VMs in the backup job.
This commit changes the driver interface for the backup operation, as the
backup job ID is now passed:
BACKUP host:remote_dir DISK_ID:..:DISK_ID vm_uuid bj_id vm_id ds_id
When the backup job is not defined, it will be '-'.
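Purely as an illustration, with a hypothetical host, directory, UUID and IDs, a driver call under the new interface might look like:
BACKUP host01:/var/lib/one/datastores/0/45/backup 0:1 9f2c4b7a-1e0d-4c3b-a2f1-6d8c9e5b7701 12 45 104
With no backup job defined, the same call would carry '-' in place of 12.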
The connection to an external scheduler module is configured in sched.conf:
EXTERNAL_SCHEDULER = [
  SERVER  = "http://localhost:4567",
  PROXY   = "",
  TIMEOUT = 10
]
The API POSTs on '/' the list of VMs, their pre-selected list of
candidate hosts based on REQUIREMENTS, along with the VM information
(CAPACITY, TEMPLATE and USER_TEMPLATE).
Example:
{
"VMS": [
{
"CAPACITY": {
"CPU": 1.5,
"DISK_SIZE": 1024,
"MEMORY": 131072
},
"HOST_IDS": [
3,
4,
5
],
"ID": 32,
"STATE": "PENDING",
"TEMPLATE": {
"AUTOMATIC_DS_REQUIREMENTS": "(\"CLUSTERS/ID\" @> 0)",
"AUTOMATIC_NIC_REQUIREMENTS": "(\"CLUSTERS/ID\" @> 0)",
"AUTOMATIC_REQUIREMENTS": "(CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES) & !(PIN_POLICY = PINNED)",
"CPU": "1.5",
"MEMORY": "128",
...
},
"USER_TEMPLATE": {}
},
{
"CAPACITY": {
"CPU": 1.5,
"DISK_SIZE": 1024,
"MEMORY": 131072
},
"HOST_IDS": [
3,
4,
5
],
"ID": 33,
"STATE": "PENDING",
"TEMPLATE": {
...
},
"USER_TEMPLATE": {}
}
]
}
The scheduler needs to respond to this POST action with a simple list of
the allocations for each VM:
{
"VMS": [
{
"ID": 32,
"HOST_ID": 2
},
{
"ID": 33,
"HOST_ID": 0
}
]
}
This commit vendorizes nlohmann-json (MIT license).
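As an illustration (not part of the commit), a minimal external scheduler could be a small HTTP service that simply allocates each VM to its first candidate host. The sketch below is written in Go against the request and response documents shown above; the first-host policy is an assumption, and the port matches the SERVER example in sched.conf.
```
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Shapes of the JSON documents exchanged on '/': only the fields
// needed for placement are decoded, the rest are ignored.
type vmRequest struct {
	ID      int   `json:"ID"`
	HostIDs []int `json:"HOST_IDS"`
}

type schedRequest struct {
	VMs []vmRequest `json:"VMS"`
}

type vmAllocation struct {
	ID     int `json:"ID"`
	HostID int `json:"HOST_ID"`
}

type schedResponse struct {
	VMs []vmAllocation `json:"VMS"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		var req schedRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// Trivial placement policy: pick the first pre-selected candidate host.
		resp := schedResponse{VMs: []vmAllocation{}}
		for _, vm := range req.VMs {
			if len(vm.HostIDs) == 0 {
				continue // no candidates, leave the VM unscheduled
			}
			resp.VMs = append(resp.VMs, vmAllocation{ID: vm.ID, HostID: vm.HostIDs[0]})
		}

		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp)
	})

	// Listen on the port used in the SERVER example above.
	log.Fatal(http.ListenAndServe(":4567", nil))
}
```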
Before:
```
if (!event.empty())
{
hm->trigger_send_event(event);
}
```
After:
```
if (!nd.is_cache())
{
if (!event.empty())
{
hm->trigger_send_event(event);
}
}
```
When Nebula runs as a cache, the HookManager isn't initialized; therefore `hm` is null.
(cherry picked from commit 9e6d755d73)