/* -------------------------------------------------------------------------- */
/* Copyright 2002-2023, OpenNebula Project, OpenNebula Systems                */
/*                                                                            */
/* Licensed under the Apache License, Version 2.0 (the "License"); you may    */
/* not use this file except in compliance with the License. You may obtain    */
/* a copy of the License at                                                   */
/*                                                                            */
/* http://www.apache.org/licenses/LICENSE-2.0                                 */
/*                                                                            */
/* Unless required by applicable law or agreed to in writing, software        */
/* distributed under the License is distributed on an "AS IS" BASIS,          */
/* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   */
/* See the License for the specific language governing permissions and        */
/* limitations under the License.                                             */
/* -------------------------------------------------------------------------- */

#ifndef VIRTUAL_MACHINE_H_
#define VIRTUAL_MACHINE_H_

#include "VirtualMachineTemplate.h"
#include "VirtualMachineDisk.h"
#include "VirtualMachineNic.h"
#include "VirtualMachineMonitorInfo.h"
#include "PoolObjectSQL.h"
#include "History.h"
#include "Image.h"
#include "NebulaLog.h"
F #5516: New backup interface for OpenNebula

co-authored-by: Frederick Borges <fborges@opennebula.io>
co-authored-by: Neal Hansen <nhansen@opennebula.io>
co-authored-by: Daniel Clavijo Coca <dclavijo@opennebula.io>
co-authored-by: Pavel Czerný <pczerny@opennebula.systems>

BACKUP INTERFACE
================
* Backups are exposed through a special Datastore (BACKUP_DS) and Image
  (BACKUP) type. These new types can only be used for backing up VMs.
  This approach makes it possible to:
  - Implement tier-based backup policies (backups made on different
    locations).
  - Leverage the access control and quota systems.
  - Support different storage and backup technologies.

* Backup interface for the VMs (see the example after this list):
  - A VM configures backups with BACKUP_CONFIG. This attribute can be set
    in the VM template or updated with the updateconf API call. It can
    include:
    + BACKUP_VOLATILE: whether or not to back up volatile disks.
    + FS_FREEZE: how the FS is frozen for running VMs (qemu-agent,
      suspend or none). When possible, backups are crash consistent.
    + KEEP_LAST: keep only a given number of backups.
  - Backups are initiated by the one.vm.backup API call, which requires
    the target Datastore to perform the backup (one-shot). This is
    exposed by the onevm backup command.
  - Backups can be made periodic through scheduled actions.
  - The backup configuration is updated with the one.vm.updateconf API call.
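
  For illustration, a minimal sketch of this configuration and of the
  one-shot call (attribute names as described above; the values, datastore
  ID and VM ID are just placeholders):

    BACKUP_CONFIG = [
      BACKUP_VOLATILE = "NO",
      FS_FREEZE       = "NONE",
      KEEP_LAST       = "3"
    ]

    onevm backup -d 100 63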

* Restore interface:
  - Restores are initiated by the one.image.restore API call. This is
    exposed by the oneimage restore command.
  - Restores include configurable options for the VM template:
    + NO_IP: do not preserve IP addresses (but keep the NICs and network
      mappings).
    + NO_NIC: do not preserve network mappings.
  - Other template attributes:
    + Clean PCI devices, including network configuration in the case of
      TYPE=NIC attributes. By default it removes SHORT_ADDRESS and leaves
      the "auto" selection attributes.
    + Clean NUMA_NODE: removes node id and cpu sets, but keeps the NUMA nodes.
  - It is possible to restore single files stored in the repository by
    using the backup-specific URL.

* Sunstone (Ruby version) has been updated to expose these features.

BACKUP DRIVERS & IMPLEMENTATION
===============================
* The backup operation is implemented by a combination of 3 driver operations:
  - VMM. New (internal oned <-> one_vmm_exec.rb) operation to orchestrate
    backups for RUNNING VMs.
  - TM. This commit introduces 2 new operations (and their corresponding
    _live variants):
    + pre_backup(_live): prepares the disks to be backed up in the
      repository. It is specific to the driver: (i) ceph uses the export
      operation; (ii) qcow2/raw uses snapshot-create-as and fs_freeze as
      needed.
    + post_backup(_live): performs cleaning operations, i.e. removes KVM
      snapshots or tmp dirs.
  - DATASTORE. Each backup technology is represented by its corresponding
    driver, which needs to implement:
    + backup: takes the VM disks in file (qcow2) format and stores them in
      the backup repository.
    + restore: takes a backup image and restores the associated disks
      and VM template.
    + monitor: gathers the available space in the repository.
    + rm: removes existing backups.
    + stat: returns the "restored" size of a disk stored in a backup.
    + downloader pseudo-URL handler, in the form:
      <backup_proto>://<driver_snapshot_id>/<disk filename>

BACKUP MANAGEMENT
=================
Backup actions may potentially take some time, leaving some vmm_exec threads
in use for a long time and blocking other VMM operations. Backups are planned
by the scheduler through the sched action interface.

Two attributes have been added to sched.conf (see the example below):
  * MAX_BACKUPS: max active backup operations in the cloud. No more
    backups will be started beyond this limit.
  * MAX_BACKUPS_HOST: max number of backups per host.
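
  A minimal sched.conf sketch using these two attributes (the limits shown
  are arbitrary example values):

    # Upper bounds for concurrent backup operations
    MAX_BACKUPS      = 5
    MAX_BACKUPS_HOST = 2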

* Fix the onevm CLI to properly show and manage schedule actions. The
  --schedule option now supports "now", as well as relative times
  (+<seconds_from_stime>), e.g.:

    onevm backup --schedule now -d 100 63

* Backup is added to VM_ADMIN_ACTIONS in oned.conf. Regular users need
  to use the batch interface or request specific permissions.

Internal restructure of the Scheduler:
  - All of the sched_actions interface is now in the SchedActionsXML class
    and files. This class uses references to the VM XML, and MUST be used in
    the same lifetime scope.
  - XMLRPC API calls for sched actions have been moved to
    ScheduledActionXML.cc as static functions.
  - VirtualMachineActionPool includes counters for active backups (total
    and per host).

SUPPORTED PLATFORMS
===================
* hypervisor: KVM
* TM: qcow2/shared/ssh, ceph
* backup: restic, rsync

Notes on Ceph
* Ceph backups are performed in the following steps:
  1. A snapshot of each disk is taken (group snapshots cannot be used, as
     it seems we cannot export the disks afterwards).
  2. Disks are exported to a file.
  3. The file is converted to qcow2 format.
  4. Disk files are uploaded to the backup repo.

TODO:
* Confirm that crash-consistent snapshots cannot be used in Ceph.

TODO:
* Check if using the VM dir instead of the full path is better to accommodate
  DS migrations, i.e.:
  - Current path: /var/lib/one/datastores/100/53/backup/disk.0
  - Proposal: 53/backup/disk.0

RESTIC DRIVER
=============
Developed together with this feature; it is part of the EE edition.
* It supports the SFTP protocol. The following attributes are supported
  (see the datastore sketch after this list):
  - RESTIC_SFTP_SERVER
  - RESTIC_SFTP_USER: only if different from oneadmin
  - RESTIC_PASSWORD
  - RESTIC_IONICE: run restic under a given ionice priority (class 2)
  - RESTIC_NICE: run restic under a given nice value
  - RESTIC_BWLIMIT: limit restic upload/download bandwidth
  - RESTIC_COMPRESSION: Restic 0.14 implements compression (three modes:
    off, auto, max). This requires repository version 2. By default,
    auto is used (average compression without too much CPU usage).
  - RESTIC_CONNECTIONS: sets the number of concurrent connections to a
    backend (5 by default). For high-latency backends this number can be
    increased.
* downloader URL: restic://<datastore_id>/<snapshot_id>/<file_name>, where
  snapshot_id is the restic snapshot hash. Used to recover single disk
  images from a backup. These URLs support:
  - RESTIC_CONNECTIONS
  - RESTIC_BWLIMIT
  - RESTIC_IONICE
  - RESTIC_NICE
  These options need to be defined in the associated datastore.
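
  A minimal sketch of a backup datastore template using this driver (the
  server, password and tuning values below are illustrative placeholders,
  and a real definition may need additional driver attributes not listed in
  this description):

    NAME               = "restic-backups"
    TYPE               = "BACKUP_DS"
    RESTIC_SFTP_SERVER = "backups.example.org"
    RESTIC_PASSWORD    = "a-repository-password"
    RESTIC_COMPRESSION = "auto"
    RESTIC_CONNECTIONS = "5"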

RSYNC DRIVER
============
An rsync driver is included as part of the CE distribution. It uses the
rsync tool to store backups on a remote server through SSH:
* The following attributes are supported to configure the backup
  datastore (see the sketch after this list):
  - RSYNC_HOST
  - RSYNC_USER
  - RSYNC_ARGS: arguments for the rsync operation (-aS by default)
* downloader URL: rsync://<ds_id>/<vmid>/<hash>/<file> can be used to recover
  single files from an existing backup (RSYNC_HOST and RSYNC_USER need to be
  set in datastore ds_id).
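
  A minimal sketch of a backup datastore using this driver (host and user
  are illustrative placeholders, and a real definition may need additional
  driver attributes not listed in this description):

    NAME       = "rsync-backups"
    TYPE       = "BACKUP_DS"
    RSYNC_HOST = "backups.example.org"
    RSYNC_USER = "oneadmin"
    RSYNC_ARGS = "-aS"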

EMULATOR_CPUS
=============
This commit also includes a feature not related to backups:
* Add EMULATOR_CPUS (KVM). This host (or cluster) attribute defines the
  CPU IDs where the emulator threads will be pinned. If this value is
  not defined, the allocated CPUs will be used when using a PIN policy.
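
  For example, a host (or cluster) template could pin the emulator threads
  to the first two CPUs of the node (the CPU list below is an arbitrary
  example):

    EMULATOR_CPUS = "0,1"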

Related follow-up commits:
  * F OpenNebula/one#5516: adding rsync backup driver
  * F OpenNebula/one#5516: update install.sh, add vmid to source, some polish
  * F OpenNebula/one#5516: cleanup
  * F OpenNebula/one#5516: update downloader, default args, size check
# include "Backups.h"
2008-06-17 16:27:32 +00:00
# include <time.h>
2012-06-15 12:28:20 +02:00
# include <set>
2008-06-17 16:27:32 +00:00
# include <sstream>
2011-06-01 12:41:46 +02:00
class AuthRequest ;
2015-05-19 18:41:23 +02:00
class Snapshots ;
2019-07-01 17:52:47 +02:00
class HostShareCapacity ;
2011-06-01 12:41:46 +02:00
2008-06-17 16:27:32 +00:00
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
/**
* The Virtual Machine class . It represents a VM . . .
*/
class VirtualMachine : public PoolObjectSQL
{
public:
    // -------------------------------------------------------------------------
    // VM States
    // -------------------------------------------------------------------------
    /**
     *  Global Virtual Machine state
     */
    enum VmState
    {
        INIT            = 0,
        PENDING         = 1,
        HOLD            = 2,
        ACTIVE          = 3,
        STOPPED         = 4,
        SUSPENDED       = 5,
        DONE            = 6,
        //FAILED        = 7,
        POWEROFF        = 8,
        UNDEPLOYED      = 9,
        CLONING         = 10,
        CLONING_FAILURE = 11
    };

    /**
     *  Virtual Machine state associated to the Life-cycle Manager
     */
    enum LcmState
    {
        LCM_INIT            = 0,
        PROLOG              = 1,
        BOOT                = 2,
        RUNNING             = 3,
        MIGRATE             = 4,
        SAVE_STOP           = 5,
        SAVE_SUSPEND        = 6,
        SAVE_MIGRATE        = 7,
        PROLOG_MIGRATE      = 8,
        PROLOG_RESUME       = 9,
        EPILOG_STOP         = 10,
        EPILOG              = 11,
        SHUTDOWN            = 12,
        //CANCEL            = 13,
        //FAILURE           = 14,
        CLEANUP_RESUBMIT    = 15,
        UNKNOWN             = 16,
        HOTPLUG             = 17,
        SHUTDOWN_POWEROFF   = 18,
        BOOT_UNKNOWN        = 19,
        BOOT_POWEROFF       = 20,
        BOOT_SUSPENDED      = 21,
        BOOT_STOPPED        = 22,
        CLEANUP_DELETE      = 23,
        HOTPLUG_SNAPSHOT    = 24,
        HOTPLUG_NIC         = 25,
        HOTPLUG_SAVEAS      = 26,
        HOTPLUG_SAVEAS_POWEROFF   = 27,
        HOTPLUG_SAVEAS_SUSPENDED  = 28,
        SHUTDOWN_UNDEPLOY   = 29,
        EPILOG_UNDEPLOY     = 30,
        PROLOG_UNDEPLOY     = 31,
        BOOT_UNDEPLOY       = 32,
        HOTPLUG_PROLOG_POWEROFF   = 33,
        HOTPLUG_EPILOG_POWEROFF   = 34,
        BOOT_MIGRATE        = 35,
        BOOT_FAILURE        = 36,
        BOOT_MIGRATE_FAILURE      = 37,
        PROLOG_MIGRATE_FAILURE    = 38,
        PROLOG_FAILURE      = 39,
        EPILOG_FAILURE      = 40,
        EPILOG_STOP_FAILURE = 41,
        EPILOG_UNDEPLOY_FAILURE   = 42,
        PROLOG_MIGRATE_POWEROFF   = 43,
        PROLOG_MIGRATE_POWEROFF_FAILURE = 44,
        PROLOG_MIGRATE_SUSPEND    = 45,
        PROLOG_MIGRATE_SUSPEND_FAILURE  = 46,
        BOOT_UNDEPLOY_FAILURE     = 47,
        BOOT_STOPPED_FAILURE      = 48,
        PROLOG_RESUME_FAILURE     = 49,
        PROLOG_UNDEPLOY_FAILURE   = 50,
        DISK_SNAPSHOT_POWEROFF         = 51,
        DISK_SNAPSHOT_REVERT_POWEROFF  = 52,
        DISK_SNAPSHOT_DELETE_POWEROFF  = 53,
        DISK_SNAPSHOT_SUSPENDED        = 54,
        //DISK_SNAPSHOT_REVERT_SUSPENDED = 55,
        DISK_SNAPSHOT_DELETE_SUSPENDED = 56,
        DISK_SNAPSHOT        = 57,
        //DISK_SNAPSHOT_REVERT = 58,
        DISK_SNAPSHOT_DELETE = 59,
        PROLOG_MIGRATE_UNKNOWN         = 60,
        PROLOG_MIGRATE_UNKNOWN_FAILURE = 61,
        DISK_RESIZE          = 62,
        DISK_RESIZE_POWEROFF      = 63,
        DISK_RESIZE_UNDEPLOYED    = 64,
        HOTPLUG_NIC_POWEROFF      = 65,
        HOTPLUG_RESIZE            = 66,
        HOTPLUG_SAVEAS_UNDEPLOYED = 67,
        HOTPLUG_SAVEAS_STOPPED    = 68,
        BACKUP                    = 69,
        BACKUP_POWEROFF           = 70
    };

    static const int MAX_VNC_PASSWD_LENGTH = 8;

    static const int MAX_SPICE_PASSWD_LENGTH = 59;

    /**
     *  Functions to convert to/from string the VM states
     */
    static int vm_state_from_str(std::string& st, VmState& state);

    static std::string& vm_state_to_str(std::string& st, VmState state);

    static int lcm_state_from_str(std::string& st, LcmState& state);

    static std::string& lcm_state_to_str(std::string& st, LcmState state);

    virtual ~VirtualMachine();

    /**
     *  Returns the VM state as a string, using the LCM state if the current
     *  state is ACTIVE.
     *    @return the state string
     */
    std::string state_str();

    /**
     *  Returns the VM state (Dispatch Manager)
     *    @return the VM state
     */
    VmState get_state() const
    {
        return state;
    };

    VmState get_prev_state() const
    {
        return prev_state;
    };

    /**
     *  Returns the VM state (Life-cycle Manager)
     *    @return the VM state
     */
    LcmState get_lcm_state() const
    {
        return lcm_state;
    };

    LcmState get_prev_lcm_state() const
    {
        return prev_lcm_state;
    };

    /**
     *  Sets the VM state
     *    @param s state
     */
    void set_state(VmState s)
    {
        std::string st;

        state = s;

        log("VM", Log::INFO, "New state is " + vm_state_to_str(st, s));
    };

    /**
     *  Sets the VM LCM state
     *    @param s state
     */
    void set_state(LcmState s)
    {
        std::string st;

        lcm_state = s;

        log("VM", Log::INFO, "New LCM state is " + lcm_state_to_str(st, s));
    };

    /**
     *  Sets the previous state to the current one
     */
    void set_prev_state()
    {
        prev_state     = state;
        prev_lcm_state = lcm_state;
    };

    /**
     *  Test if the VM has changed state since last time prev state was set
     *    @return true if the VM changed state
     */
    bool has_changed_state() const
    {
        return (prev_lcm_state != lcm_state || prev_state != state);
    }

    /**
     *  Sets the re-scheduling flag
     *    @param do_sched set or unset the re-schedule flag
     */
    void set_resched(bool do_sched)
    {
        resched = do_sched ? 1 : 0;
    };

    // -------------------------------------------------------------------------
    // Log & Print
    // -------------------------------------------------------------------------
    /**
     *  Writes a log message in vm.log. The class lock should be locked and
     *  the VM MUST be obtained through the VirtualMachinePool get() method.
     */
    void log(
            const char *              module,
            const Log::MessageType    type,
            const std::ostringstream& message) const
    {
        if (_log != 0)
        {
            _log->log(module, type, message.str().c_str());
        }
    };

    /**
     *  Writes a log message in vm.log. The class lock should be locked and
     *  the VM MUST be obtained through the VirtualMachinePool get() method.
     */
    void log(
            const char *            module,
            const Log::MessageType  type,
            const char *            message) const
    {
        if (_log != 0)
        {
            _log->log(module, type, message);
        }
    };

    /**
     *  Writes a log message in vm.log. The class lock should be locked and
     *  the VM MUST be obtained through the VirtualMachinePool get() method.
     */
    void log(
            const char *            module,
            const Log::MessageType  type,
            const std::string&      message) const
    {
        log(module, type, message.c_str());
    };

    // ------------------------------------------------------------------------
    // Dynamic Info
    // ------------------------------------------------------------------------
    /**
     *  Updates VM dynamic information (id).
     *    @param _deploy_id the VMM driver specific id
     */
    void set_deploy_id(const std::string& _deploy_id)
    {
        deploy_id = _deploy_id;
    };

    /**
     *  @return the monitoring info
     */
    VirtualMachineMonitorInfo& get_info()
    {
        return monitoring;
    }

    /**
     *  Read monitoring info from the DB
     */
    void load_monitoring();

    /**
     *  Returns the deployment ID
     *    @return the VMM driver specific ID
     */
    const std::string& get_deploy_id() const
    {
        return deploy_id;
    };

    /**
     *  Sets the VM exit time
     *    @param et VM exit time (when it arrived to the DONE/FAILED states)
     */
    void set_exit_time(time_t et)
    {
        etime = et;
    };

    /**
     *  Sets the KERNEL OS attribute (path to the kernel file). Used when
     *  the template is using a FILE Datastore for it
     *    @param kernel path to the kernel (in the remote host)
     */
    void set_kernel(const std::string& kernel)
    {
        VectorAttribute * os = obj_template->get("OS");

        if (os == nullptr)
        {
            return;
        }

        os->replace("KERNEL", kernel);
    };

    /**
     *  Sets the INITRD OS attribute (path to the initrd file). Used when
     *  the template is using a FILE Datastore for it
     *    @param initrd path to the initrd (in the remote host)
     */
    void set_initrd(const std::string& initrd)
    {
        VectorAttribute * os = obj_template->get("OS");

        if (os == nullptr)
        {
            return;
        }

        os->replace("INITRD", initrd);
    };

F #6492: Index PCI passthrough devices with bus

If the q35 machine type is detected, the slot of the PCI device is set to 0
and the bus to pci_id + 1.

Q35 machine models use pcie-root-ports to attach PCI devices. Each PCI port
is selected by the bus parameter of the PCI address, and it does not accept
a slot number greater than 0.

Example: a VM with 2 X710 VFs is defined in OpenNebula as:

  PCI=[
    ADDRESS="0000:44:0a:0",
    BUS="44",
    CLASS="0200",
    DEVICE="154c",
    DOMAIN="0000",
    FUNCTION="0",
    NUMA_NODE="0",
    PCI_ID="0",
    SHORT_ADDRESS="44:0a.0",
    SLOT="0a",
    VENDOR="8086",
    VM_ADDRESS="01:00.0",
    VM_BUS="0x01",
    VM_DOMAIN="0x0000",
    VM_FUNCTION="0",
    VM_SLOT="0000" ]

  PCI=[
    ADDRESS="0000:44:0a:1",
    BUS="44",
    CLASS="0200",
    DEVICE="154c",
    DOMAIN="0000",
    FUNCTION="1",
    NUMA_NODE="0",
    PCI_ID="1",
    SHORT_ADDRESS="44:0a.1",
    SLOT="0a",
    VENDOR="8086",
    VM_ADDRESS="02:00.0",
    VM_BUS="0x02",
    VM_DOMAIN="0x0000",
    VM_FUNCTION="0",
    VM_SLOT="0000" ]

Each PCI VF is attached to a different pcie-root-port, selected with the
VM_BUS parameter:

  00:02.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
  00:02.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port

The PCI topology is:

  -[0000:00]-+-00.0  Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
             +-01.0  Cirrus Logic GD 5446
             +-02.0-[01]----00.0  Intel Corporation Ethernet Virtual Function 700 Series
             +-02.1-[02]----00.0  Intel Corporation Ethernet Virtual Function 700 Series
             +-02.2-[03-04]----00.0-[04]--
             +-02.3-[05]----00.0  Red Hat, Inc. Virtio network device
             +-02.4-[06]----00.0  Red Hat, Inc. Virtio SCSI
             +-02.5-[07]----00.0  Red Hat, Inc. QEMU XHCI Host Controller
             +-02.6-[08]----00.0  Red Hat, Inc. Virtio console
             +-02.7-[09]----00.0  Red Hat, Inc. Virtio memory balloon
             +-03.0-[0a]--
             +-03.1-[0b]--
             +-03.2-[0c]--
             +-03.3-[0d]--
             +-03.4-[0e]--
             +-03.5-[0f]--
             +-03.6-[10]--
             +-03.7-[11]--
             +-1f.0  Intel Corporation 82801IB (ICH9) LPC Interface Controller
             +-1f.2  Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode]
             \-1f.3  Intel Corporation 82801I (ICH9 Family) SMBus Controller

    /**
     *  Tests if the VM machine type (OS/MACHINE) contains the given string
     *    @param machine_type substring to look for (e.g. "q35")
     *    @return true if the MACHINE attribute contains machine_type
     */
    bool test_machine_type(const std::string& machine_type) const
    {
        VectorAttribute * os = obj_template->get("OS");

        if (os == nullptr)
        {
            return false;
        }

        const std::string machine = os->vector_value("MACHINE");

        return machine.find(machine_type) != std::string::npos;
    }

    // ------------------------------------------------------------------------
    // Access to VM locations
    // ------------------------------------------------------------------------
    /**
     *  Returns the remote VM directory. The VM remote dir is in the form:
     *  $DATASTORE_LOCATION/$SYSTEM_DS_ID/$VM_ID. The system_dir stores
     *  disks for a running VM in the target host.
     *    @return the remote system directory for the VM
     */
    const std::string& get_system_dir() const
    {
        return history->system_dir;
    }

    /**
     *  Returns the remote VM directory for the previous host. It may be
     *  different if a system DS migration was performed.
     *  The hasPreviousHistory() function MUST be called before this one.
     *    @return the remote system directory for the VM
     */
    const std::string& get_previous_system_dir() const
    {
        return previous_history->system_dir;
    };

    // ------------------------------------------------------------------------
    // History
    // ------------------------------------------------------------------------
    /**
     *  Adds a new history record and writes it in the database.
     */
    void add_history(
            int                 hid,
            int                 cid,
            const std::string&  hostname,
            const std::string&  vmm_mad,
            const std::string&  tm_mad,
            int                 ds_id);

    /**
     *  Duplicates the last history record. Only the host related fields are
     *  affected (i.e. no counter is copied nor initialized).
     */
    void cp_history();

    /**
     *  Duplicates the previous history record. Only the host related fields
     *  are affected (i.e. no counter is copied nor initialized).
     */
    void cp_previous_history();

    /**
     *  Checks if the VM has a valid history record. This function
     *  MUST be called before using any history related function.
     *    @return true if the VM has a record
     */
    bool hasHistory() const
    {
        return (history != 0);
    };

    /**
     *  Checks if the VM has a valid previous history record. This function
     *  MUST be called before using any previous_history related function.
     *    @return true if the VM has a previous record
     */
    bool hasPreviousHistory() const
    {
        return (previous_history != 0);
    };

    bool is_history_open() const
    {
        return (history != 0) && (history->etime == 0);
    }

    bool is_previous_history_open() const
    {
        return (previous_history != 0) && (previous_history->etime == 0);
    }

    /**
     *  Returns the VMM driver name for the current host. The hasHistory()
     *  function MUST be called before this one.
     *    @return the VMM mad name
     */
    const std::string& get_vmm_mad() const
    {
        return history->vmm_mad_name;
    };

    /**
     *  Returns the VMM driver name for the previous host. The
     *  hasPreviousHistory() function MUST be called before this one.
     *    @return the VMM mad name
     */
    const std::string& get_previous_vmm_mad() const
    {
        return previous_history->vmm_mad_name;
    };

    /**
     *  Returns the datastore ID of the system DS for the host. The
     *  hasHistory() function MUST be called before this one.
     *    @return the ds id
     */
    int get_ds_id() const
    {
        return history->ds_id;
    };

    /**
     *  Returns the datastore ID of the system DS for the previous host.
     *  The hasPreviousHistory() function MUST be called before this one.
     *    @return the ds id
     */
    int get_previous_ds_id() const
    {
        return previous_history->ds_id;
    };

    /**
     *  Returns the TM driver name for the current host. The hasHistory()
     *  function MUST be called before this one.
     *    @return the TM mad name
     */
    const std::string& get_tm_mad() const
    {
        return history->tm_mad_name;
    };

    /**
     *  Returns the TM driver name for the previous host. The
     *  hasPreviousHistory() function MUST be called before this one.
     *    @return the TM mad name
     */
    const std::string& get_previous_tm_mad() const
    {
        return previous_history->tm_mad_name;
    };

    /**
     *  Returns the transfer filename. The transfer file is in the form:
     *          $ONE_LOCATION/var/vms/$VM_ID/transfer.$SEQ
     *  or, in case that OpenNebula is installed in root:
     *          /var/lib/one/vms/$VM_ID/transfer.$SEQ
     *  The hasHistory() function MUST be called before this one.
     *    @return the transfer filename
     */
    const std::string& get_transfer_file() const
    {
        return history->transfer_file;
    };

    /**
     *  Returns the deployment filename. The deployment file is in the form:
     *          $ONE_LOCATION/var/vms/$VM_ID/deployment.$SEQ
     *  or, in case that OpenNebula is installed in root:
     *          /var/lib/one/vms/$VM_ID/deployment.$SEQ
     *  The hasHistory() function MUST be called before this one.
     *    @return the deployment file path
     */
    const std::string& get_deployment_file() const
    {
        return history->deployment_file;
    };

    /**
     *  Returns the context filename. The context file is in the form:
     *          $ONE_LOCATION/var/vms/$VM_ID/context.sh
     *  or, in case that OpenNebula is installed in root:
     *          /var/lib/one/vms/$VM_ID/context.sh
     *  The hasHistory() function MUST be called before this one.
     *    @return the context file path
     */
    const std::string& get_context_file() const
    {
        return history->context_file;
    }

    /**
     *  Returns the token filename. The token file is in the form:
     *          $ONE_LOCATION/var/vms/$VM_ID/token.txt
     *  or, in case that OpenNebula is installed in root:
     *          /var/lib/one/vms/$VM_ID/token.txt
     *  The hasHistory() function MUST be called before this one.
     *    @return the token file path
     */
    const std::string& get_token_file() const
    {
        return history->token_file;
    }

    /**
     *  Returns the remote deployment filename. The file is in the form:
     *          $DS_LOCATION/$SYSTEM_DS/$VM_ID/deployment.$SEQ
     *  The hasHistory() function MUST be called before this one.
     *    @return the deployment filename
     */
    const std::string& get_remote_deployment_file() const
    {
        return history->rdeployment_file;
    };

    /**
     *  Returns the checkpoint filename for the current host. The checkpoint
     *  file is in the form:
     *          $DS_LOCATION/$SYSTEM_DS/$VM_ID/checkpoint
     *  The hasHistory() function MUST be called before this one.
     *    @return the checkpoint filename
     */
    const std::string& get_checkpoint_file() const
    {
        return history->checkpoint_file;
    };

    /**
     *  Returns the checkpoint filename for the previous host.
     *  The hasPreviousHistory() function MUST be called before this one.
     *    @return the checkpoint filename
     */
    const std::string& get_previous_checkpoint_file() const
    {
        return previous_history->checkpoint_file;
    };

    /**
     *  Returns the hostname for the current host. The hasHistory()
     *  function MUST be called before this one.
     *    @return the hostname
     */
    const std::string& get_hostname() const
    {
        return history->hostname;
    };

    /**
     *  Returns whether the host is a public cloud, based on the system DS
     *  and tm_mad. The hasHistory() function MUST be called before this one.
     *    @return true if the host is a public cloud
     */
    bool get_host_is_cloud() const
    {
        return ((history->ds_id == -1) && history->tm_mad_name.empty());
    };

    /**
     *  Updates the current hostname. The hasHistory()
     *  function MUST be called before this one.
     *    @param hostname New hostname
     */
    void set_hostname(const std::string& hostname)
    {
        history->hostname = hostname;
    };

    /**
     *  Returns the hostname for the previous host. The hasPreviousHistory()
     *  function MUST be called before this one.
     *    @return the hostname
     */
    const std::string& get_previous_hostname() const
    {
        return previous_history->hostname;
    };

    /**
     *  Returns the action that closed the current history record. The
     *  hasHistory() function MUST be called before this one.
     *    @return the action that closed the current history record
     */
    VMActions::Action get_action() const
    {
        return history->action;
    };

    /**
     *  Returns the action that closed the history record in the previous host
     *    @return the action that closed the history record in the previous host
     */
    VMActions::Action get_previous_action() const
    {
        return previous_history->action;
    };

    /**
     *  Get the host id where the VM is or is going to execute. The
     *  hasHistory() function MUST be called before this one.
     */
    int get_hid() const
    {
        return history->hid;
    }

    /**
     *  Get the host id where the VM was executing. The hasPreviousHistory()
     *  function MUST be called before this one.
     */
    int get_previous_hid() const
    {
        return previous_history->hid;
    }

    /**
     *  Get the cluster id where the VM is or is going to execute. The
     *  hasHistory() function MUST be called before this one.
     */
    int get_cid() const
    {
        return history->cid;
    }

    /**
     *  Get the cluster id where the VM was executing. The
     *  hasPreviousHistory() function MUST be called before this one.
     */
    int get_previous_cid() const
    {
        return previous_history->cid;
    }

    /**
     *  Sets the start time of a VM.
     *    @param _stime time when the VM started
     */
    void set_stime(time_t _stime)
    {
        history->stime = _stime;
    };

    /**
     *  Sets the VM info (with monitoring info) in the history record
     */
    void set_vm_info()
    {
        load_monitoring();

        to_xml_extended(history->vm_info, 0, false);
    };

    /**
     *  Sets the VM info (with monitoring info) in the previous history record
     */
    void set_previous_vm_info()
    {
        to_xml_extended(previous_history->vm_info, 0, false);
    };

    /**
     *  Sets the end time of a VM.
     *    @param _etime time when the VM finished
     */
    void set_etime(time_t _etime)
    {
        history->etime = _etime;
    };

    /**
     *  Gets the end time of a VM
     */
    time_t get_etime()
    {
        return history->etime;
    }

    /**
     *  Sets the end time of a VM in the previous Host.
     *    @param _etime time when the VM finished
     */
    void set_previous_etime(time_t _etime)
    {
        previous_history->etime = _etime;
    };

    /**
     *  Sets the start time of the VM prolog.
     *    @param _stime time when the prolog started
     */
    void set_prolog_stime(time_t _stime)
    {
        history->prolog_stime = _stime;
    };

    /**
     *  Sets the end time of the VM prolog.
     *    @param _etime time when the prolog finished
     */
    void set_prolog_etime(time_t _etime)
    {
        history->prolog_etime = _etime;
    };

    /**
     *  Sets the start time of the VM running state.
     *    @param _stime time when the running state started
     */
    void set_running_stime(time_t _stime)
    {
        history->running_stime = _stime;
    };

    /**
     *  Gets the running start time for the VM
     */
    time_t get_running_stime() const
    {
        return history->running_stime;
    }

    /**
     *  Sets the end time of the VM running state.
     *    @param _etime time when the running state finished
     */
    void set_running_etime(time_t _etime)
    {
        history->running_etime = _etime;
    };

    /**
     *  Gets the running end time for the VM
     */
    time_t get_running_etime() const
    {
        return history->running_etime;
    }

    /**
     *  Sets the end time of the VM running state in the previous host.
     *    @param _etime time when the running state finished
     */
    void set_previous_running_etime(time_t _etime)
    {
        previous_history->running_etime = _etime;
    };

    /**
     *  Sets the start time of the VM epilog.
     *    @param _stime time when the epilog started
     */
    void set_epilog_stime(time_t _stime)
    {
        history->epilog_stime = _stime;
    };

    /**
     *  Sets the end time of the VM epilog.
     *    @param _etime time when the epilog finished
     */
    void set_epilog_etime(time_t _etime)
    {
        history->epilog_etime = _etime;
    };

    /**
     *  Sets the action that closed the history record
     *    @param action that closed the history record
     */
    void set_action(VMActions::Action action, int uid, int gid, int req_id)
    {
        history->action = action;

        history->uid    = uid;
        history->gid    = gid;

        history->req_id = req_id;
    };

    void set_internal_action(VMActions::Action action)
    {
        history->action = action;

        history->uid    = -1;
        history->gid    = -1;

        history->req_id = -1;
    };

    void clear_action()
    {
        history->action = VMActions::NONE_ACTION;

        history->uid    = -1;
        history->gid    = -1;

        history->req_id = -1;
    }

    void set_previous_action(VMActions::Action action, int uid, int gid, int rid)
    {
        previous_history->action = action;

        previous_history->uid    = uid;
        previous_history->gid    = gid;

        previous_history->req_id = rid;
    };

    /**
     *  Releases the previous VNC port when a VM is migrated to another
     *  cluster (GRAPHICS/PREVIOUS_PORT present)
     */
    void release_previous_vnc_port();

    /**
     *  Frees the current PORT from the **current** cluster and sets it to
     *  PREVIOUS_PORT (which is allocated in the previous cluster). This
     *  function is called when the migration fails.
     */
    void rollback_previous_vnc_port();

    // ------------------------------------------------------------------------
    // Template & Object Representation
    // ------------------------------------------------------------------------
    /**
     *  Function to print the VirtualMachine object into a string in
     *  XML format
     *    @param xml the resulting XML string
     *    @return a reference to the generated string
     */
    std::string& to_xml(std::string& xml) const override
    {
        return to_xml_extended(xml, 1, false);
    }

    /**
     *  Function to print the VirtualMachine object into a string in
     *  XML format, with reduced information
     *    @param xml the resulting XML string
     *    @return a reference to the generated string
     */
    std::string& to_xml_short(std::string& xml);

    /**
     *  Function to print the VirtualMachine object into a string in
     *  XML format, with extended information (full history records)
     *    @param xml the resulting XML string
     *    @return a reference to the generated string
     */
    std::string& to_xml_extended(std::string& xml) const
    {
        return to_xml_extended(xml, 2, true);
    }

    /**
     *  Rebuilds the object from an xml formatted string
     *    @param xml_str The xml-formatted string
     *
     *    @return 0 on success, -1 otherwise
     */
    int from_xml(const std::string& xml_str) override;

    /**
     *  Factory method for virtual machine templates
     */
    std::unique_ptr<Template> get_new_template() const override
    {
        return std::make_unique<VirtualMachineTemplate>();
    }

    /**
     *  Returns a copy of the VirtualMachineTemplate
     *    @return A copy of the VirtualMachineTemplate
     */
    std::unique_ptr<VirtualMachineTemplate> clone_template() const
    {
        return std::make_unique<VirtualMachineTemplate>(*obj_template);
    }

    /**
     *  Returns a copy of the VirtualMachine User Template
     *    @return A copy of the VirtualMachine User Template
     */
    std::unique_ptr<VirtualMachineTemplate> clone_user_template() const
    {
        return std::make_unique<VirtualMachineTemplate>(*user_obj_template);
    }

    /**
     *  This function replaces the *user template*.
     *    @param tmpl_str new contents
     *    @param keep_restricted If true, the restricted attributes of the
     *    current template will override the new template
     *    @param error string describing the error if any
     *    @return 0 on success
     */
    int replace_template(const std::string& tmpl_str, bool keep_restricted,
                         std::string& error) override;

    /**
     *  Append new attributes to the *user template*.
     *    @param tmpl_str new contents
     *    @param keep_restricted If true, the restricted attributes of the
     *    current template will override the new template
     *    @param error string describing the error if any
     *    @return 0 on success
     */
    int append_template(const std::string& tmpl_str, bool keep_restricted,
                        std::string& error) override;

    /**
     *  This function gets an attribute from the user template
     *    @param name of the attribute
     *    @param value of the attribute
     */
    template<typename T>
    bool get_user_template_attribute(const std::string& name, T& value) const
    {
        return user_obj_template->get(name, value);
    }

    /**
     *  Sets an error message with timestamp in the template
     *    @param message Message string
     */
    void set_template_error_message(const std::string& message) override;

    /**
     *  Sets an error message with timestamp in the template
     *    @param name of the error attribute
     *    @param message Message string
     */
    void set_template_error_message(const std::string& name,
                                    const std::string& message) override;

    /**
     *  Deletes the error message from the template
     */
    void clear_template_error_message() override;
    // ------------------------------------------------------------------------
    // Timers & Requirements
    // ------------------------------------------------------------------------
    /**
     * @return time when the VM was created (in epoch)
     */
    time_t get_stime() const
    {
        return stime;
    }
    /**
     * Get the VM physical capacity requirements for the host.
     * @param sr the HostShareCapacity to store the capacity request.
     */
    void get_capacity(HostShareCapacity& sr) const;
    /**
     * Adds automatic placement requirements: Datastore and Cluster
     * @param cluster_ids set of viable clusters for this VM
     * @param error_str Returns the error reason, if any
     * @return 0 on success
     */
    int automatic_requirements(std::set<int>& cluster_ids, std::string& error_str);
    /**
     * Resize the VM capacity
     * @param cpu new CPU value
     * @param memory new memory value
     * @param vcpu new VCPU value
     * @param error description if any
     * @return 0 on success
     */
    int resize(float cpu, long int memory, unsigned int vcpu, std::string& error);

    /**
     * Store old values of resize parameters, to be able to revert in case of failure
     * @param cpu - old cpu value
     * @param memory - old memory value
     * @param vcpu - old vcpu value
     */
    void store_resize(float cpu, long int memory, unsigned int vcpu);

    /**
     * Clear resize parameters
     */
    void reset_resize();
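    /* Sketch of the intended resize flow (illustrative only; the actual callers
     * live in the managers, not in this header). old_* and new_* values are
     * hypothetical:
     *
     *   std::string error;
     *
     *   vm->store_resize(old_cpu, old_memory, old_vcpu);  // remember previous values
     *
     *   if (vm->resize(new_cpu, new_memory, new_vcpu, error) != 0)
     *   {
     *       vm->set_template_error_message(error);        // could revert with stored values
     *   }
     *   else
     *   {
     *       vm->reset_resize();                           // new values confirmed
     *   }
     */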
    /**
     * Parse TOPOLOGY and NUMA_NODE
     * @param tmpl template of the virtual machine
     * @param error if any
     *
     * @return 0 on success
     */
    static int parse_topology(Template* tmpl, std::string& error);

    /**
     * @return true if the VM is being deployed with a pinned policy
     */
    bool is_pinned() const;
    /**
     * @return true if the Virtual Machine is in a state where running quotas apply
     */
    bool is_running_quota() const;

    /**
     * Fill a template only with the necessary attributes to update the quotas
     * @param quota_tmpl template that will be filled
     * @param basic_quota true to add basic quota attributes (from Template and User Template)
     * @param running_quota true to add RUNNING_ quota attributes (for Template and User Template)
     */
    void get_quota_template(VirtualMachineTemplate& quota_tmpl, bool basic_quota, bool running_quota);
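    /* Illustrative sketch: building the quota usage template for this VM
     * (assumes a VirtualMachine* vm; the quota accounting itself is done
     * elsewhere):
     *
     *   VirtualMachineTemplate quota_tmpl;
     *
     *   vm->get_quota_template(quota_tmpl, true, vm->is_running_quota());
     */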
    // ------------------------------------------------------------------------
    // Virtual Machine Disks
    // ------------------------------------------------------------------------
    /**
     * Releases all disk images taken by this Virtual Machine
     * @param quotas disk space to free from image datastores
     * @param check_state to update image state based on VM state
     */
    void release_disk_images(std::vector<Template*>& quotas, bool check_state);

    /**
     * @return reference to the VirtualMachine disks
     */
    VirtualMachineDisks& get_disks()
    {
        return disks;
    }

    /**
     * @return a pointer to the given disk
     */
    VirtualMachineDisk* get_disk(int disk_id) const
    {
        return disks.get_disk(disk_id);
    }
    // ------------------------------------------------------------------------
    // Virtual Machine Nics
    // ------------------------------------------------------------------------
    /**
     * Get a NIC by its id
     * @param nic_id of the NIC
     */
    VirtualMachineNic* get_nic(int nic_id) const
    {
        return nics.get_nic(nic_id);
    }
    /**
     * Returns a set of the security group IDs in use in this VM.
     * @param sgs a set of security group IDs
     */
    void get_security_groups(std::set<int>& sgs)
    {
        nics.get_security_groups(sgs);
    }
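    /* Illustrative sketch: collecting the security groups referenced by the
     * VM NICs (assumes a VirtualMachine* vm):
     *
     *   std::set<int> sgs;
     *
     *   vm->get_security_groups(sgs);
     *
     *   for (int sg_id : sgs)
     *   {
     *       // e.g. schedule a rule update for each security group
     *   }
     */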
    /**
     * Releases all network leases taken by this Virtual Machine
     */
    void release_network_leases()
    {
        nics.release_network_leases(oid);
    }
    /**
     * Update NIC with values from its Virtual Network
     * @param vnid ID of the network with updated attributes
     * @return 0 on success, -1 on error
     */
    int nic_update(int vnid);

    /**
     * Update a NIC with new values
     * @param nic_id of the NIC to update
     * @param new_nic NIC with the updated attributes
     * @param live true to also update the NIC in the running VM
     * @return 0 on success, -1 on error
     */
    int nic_update(int nic_id, VirtualMachineNic* new_nic, bool live);
    /**
     * Remove the rules associated to the given security group
     * @param sgid the security group ID
     */
    void remove_security_group(int sgid);
    // ------------------------------------------------------------------------
    // Virtual Machine Groups
    // ------------------------------------------------------------------------
    /**
     * Remove this VM from its role and VM group if any
     */
    void release_vmgroup();

    // ------------------------------------------------------------------------
    // Imported VM interface
    // ------------------------------------------------------------------------
    /**
     * Check if the VM is imported
     */
    bool is_imported() const;

    /**
     * Return state of the VM right before import
     */
    std::string get_import_state() const;

    /**
     * Checks if the current VM MAD supports the given action for imported VMs
     * @param action VM action to check
     * @return true if the current VM MAD supports the given action for imported VMs
     */
    bool is_imported_action_supported(VMActions::Action action) const;
    // ------------------------------------------------------------------------
    // Virtual Router related functions
    // ------------------------------------------------------------------------
    /**
     * Returns the Virtual Router ID if this VM is a VR, or -1
     * @return VR ID or -1
     */
    int get_vrouter_id() const;

    /**
     * Returns true if this VM is a Virtual Router
     * @return true if this VM is a Virtual Router
     */
    bool is_vrouter() const;
    // ------------------------------------------------------------------------
    // Context related functions
    // ------------------------------------------------------------------------
    /**
     * Writes the context file for this VM, and gets the paths to be included
     * in the context block device (CBD)
     * @param files space separated list of paths to be included in the CBD
     * @param disk_id CONTEXT/DISK_ID attribute value
     * @param password Password to encrypt the token, if it is set
     * @return -1 in case of error, 0 if the VM has no context, 1 on success
     */
    int generate_context(std::string& files, int& disk_id,
                         const std::string& password);
    /**
     * Returns the CREATED_BY template attribute, or the uid if it does not exist
     * @return uid
     */
    int get_created_by_uid() const;
    /**
     * Updates the configuration attributes based on a template; the state of
     * the virtual machine is checked to assure operation consistency
     * @param tmpl with the new attributes, including: OS, RAW, FEATURES,
     *   CONTEXT, INPUT, BACKUP_CONFIG, CPU_MODEL and GRAPHICS.
     * @param err description if any
     * @param append true to append, false to replace
     *
     * @return -1 (error), 0 (context change), 1 (no context changed)
     */
    int updateconf(VirtualMachineTemplate* tmpl, std::string& err, bool append);
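    /* Illustrative sketch of an updateconf call (the parsing helper and the
     * error handling shown are assumptions, not mandated by this header):
     *
     *   VirtualMachineTemplate tmpl;
     *   std::string            err;
     *
     *   if (tmpl.parse_str_or_xml(user_provided_str, err) != 0 ||
     *       vm->updateconf(&tmpl, err, true) != 0)        // true = append mode
     *   {
     *       // reject the request and report err
     *   }
     */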
2018-03-26 18:58:04 +02:00
/**
2021-06-10 09:55:21 +02:00
* Check if the template includes any restricted attribute , different from
* this VM template .
* @ param template to look for for restricted . The resulting tgt template
* will have the same restricted Attributes as this VM .
* @ param ra the restricted attribute found to be different
* @ return true if a different restricted is found
2018-03-26 18:58:04 +02:00
*/
2023-09-18 16:17:59 +02:00
bool check_restricted ( std : : string & ra , VirtualMachineTemplate * tgt , bool append ) const
2021-03-15 12:22:32 +01:00
{
2023-09-18 16:17:59 +02:00
return tgt - > check_restricted ( ra , obj_template . get ( ) , append ) ;
2021-03-15 12:22:32 +01:00
}
2018-03-26 18:58:04 +02:00
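    /* Illustrative sketch: rejecting a non-admin update that touches restricted
     * attributes (tmpl and err are hypothetical variables of the caller):
     *
     *   std::string ra;
     *
     *   if (vm->check_restricted(ra, &tmpl, false))
     *   {
     *       err = "Template includes a restricted attribute " + ra;
     *       // abort the update
     *   }
     */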
    // -------------------------------------------------------------------------
    // "Save as" Disk related functions (save_as hot)
    // -------------------------------------------------------------------------
    /**
     * Mark the disk that is going to be "saved as"
     * @param disk_id of the VM
     * @param snap_id of the disk to save, -1 to select the active snapshot
     * @param img_id The image id used by the disk
     * @param size The disk size. This may be different to the original
     *   image size
     * @param err_str describing the error if any
     * @return -1 if the image cannot be saved as, 0 on success
     */
    int set_saveas_disk(int disk_id, int snap_id, int& img_id, long long& size,
                        std::string& err_str)
    {
        return disks.set_saveas(disk_id, snap_id, img_id, size, err_str);
    }
    /**
     * Set save attributes for the disk
     * @param disk_id Index of the disk to save
     * @param source to save the disk
     * @param img_id ID of the image this disk will be saved to
     */
    int set_saveas_disk(int disk_id, const std::string& source, int img_id)
    {
        if (lcm_state != HOTPLUG_SAVEAS &&
            lcm_state != HOTPLUG_SAVEAS_SUSPENDED &&
            lcm_state != HOTPLUG_SAVEAS_POWEROFF &&
            lcm_state != HOTPLUG_SAVEAS_UNDEPLOYED &&
            lcm_state != HOTPLUG_SAVEAS_STOPPED)
        {
            return -1;
        }

        return disks.set_saveas(disk_id, source, img_id);
    }
    /**
     * Sets the corresponding state to save the disk.
     * @return 0 if the VM can be saved
     */
    int set_saveas_state();

    /**
     * Clears the save state, moving the VM to the original state.
     * @return 0 if the VM was in a saveas state
     */
    int clear_saveas_state();
    /**
     * Clears the SAVE_AS_* attributes of the disk being saved as
     * @return the ID of the image this disk will be saved to or -1 if it
     *   is not found.
     */
    int clear_saveas_disk()
    {
        return disks.clear_saveas();
    }
    /**
     * Get the original image id of the disk. It also checks that the disk can
     * be saved_as.
     * @param disk_id Index of the disk to save
     * @param source of the image to save the disk to
     * @param image_id of the image to save the disk to
     * @param snap_id of the snapshot to save, if any
     * @param tm_mad in use by the disk
     * @param ds_id of the datastore in use by the disk
     * @return -1 if failure
     */
    int get_saveas_disk(int& disk_id, std::string& source, int& image_id,
                        std::string& snap_id, std::string& tm_mad, std::string& ds_id) const
    {
        return disks.get_saveas_info(disk_id, source, image_id, snap_id,
                                     tm_mad, ds_id);
    }
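    /* Sketch of the disk save-as bookkeeping exposed above (illustrative; the
     * real orchestration is done by the managers, and disk_id/snap_id are
     * hypothetical inputs):
     *
     *   int         img_id;
     *   long long   size;
     *   std::string err;
     *
     *   if (vm->set_saveas_disk(disk_id, snap_id, img_id, size, err) == 0 &&
     *       vm->set_saveas_state() == 0)
     *   {
     *       // register the target image of "size" as a copy of img_id ...
     *   }
     *   else
     *   {
     *       vm->clear_saveas_disk();
     *       vm->clear_saveas_state();
     *   }
     */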
    // ------------------------------------------------------------------------
    // Authorization related functions
    // ------------------------------------------------------------------------
    /**
     * Sets an authorization request for a VirtualMachine template based on
     * the images and networks used
     * @param uid for template owner
     * @param ar the AuthRequest object
     * @param tmpl the virtual machine template
     * @param check_lock true to check if the resources are locked
     */
    static void set_auth_request(int uid, AuthRequest& ar,
                                 VirtualMachineTemplate* tmpl, bool check_lock);
2013-03-07 22:44:18 +01:00
// -------------------------------------------------------------------------
2016-12-11 21:05:07 +01:00
// Attach Disk Interface
2013-03-07 22:44:18 +01:00
// -------------------------------------------------------------------------
2012-06-15 12:28:20 +02:00
/**
2016-04-18 16:46:02 +02:00
* Generate and attach a new DISK attribute to the VM . This method check
* that the DISK is compatible with the VM cluster allocation and disk target
* usage .
2012-06-15 12:28:20 +02:00
* @ param tmpl Template containing a single DISK vector attribute .
* @ param error_str describes the error
*
2016-04-18 16:46:02 +02:00
* @ return 0 if success
*/
2020-07-02 22:42:10 +02:00
int set_up_attach_disk ( VirtualMachineTemplate * tmpl , std : : string & error_str ) ;
2016-04-18 16:46:02 +02:00
    /**
     * Returns the disk that is waiting for an attachment action
     *
     * @return the disk waiting for an attachment action, or 0
     */
    VirtualMachineDisk* get_attach_disk() const
    {
        return disks.get_attach();
    }
    /**
     * Cleans the ATTACH = YES attribute from the disks
     */
    void clear_attach_disk()
    {
        disks.clear_attach();
    }
    /**
     * Deletes the DISK that was in the process of being attached
     *
     * @return the DISK or 0 if no disk was deleted
     */
    VirtualMachineDisk* delete_attach_disk()
    {
        VirtualMachineDisk* disk = disks.delete_attach();

        if (disk == nullptr)
        {
            return nullptr;
        }

        obj_template->remove(disk->vector_attribute());

        return disk;
    }
    /**
     * Sets the attach attribute to the given disk
     * @param disk_id of the DISK
     * @return 0 if the disk_id was found, -1 otherwise
     */
    int set_attach_disk(int disk_id)
    {
        return disks.set_attach(disk_id);
    }
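    /* Sketch of the disk hot-attach bookkeeping (illustrative; tmpl holds a
     * single DISK vector attribute and attach_failed is a hypothetical flag
     * set by the caller):
     *
     *   std::string error;
     *
     *   if (vm->set_up_attach_disk(&tmpl, error) != 0)
     *   {
     *       return;                                   // DISK not compatible
     *   }
     *
     *   // ... the hypervisor attach is requested; afterwards:
     *   if (attach_failed)
     *   {
     *       VirtualMachineDisk* disk = vm->delete_attach_disk();
     *       // release the image and quotas associated to *disk* ...
     *   }
     *   else
     *   {
     *       vm->clear_attach_disk();                  // keep it, clear ATTACH=YES
     *   }
     */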
    // -------------------------------------------------------------------------
    // Resize Disk Interface
    // -------------------------------------------------------------------------
    /**
     * Returns the disk that is going to be resized
     *
     * @return the disk or 0 if not found
     */
    VirtualMachineDisk* get_resize_disk() const
    {
        return disks.get_resize();
    }
    /**
     * Cleans the RESIZE = YES attribute from the disks
     * @param restore if true the previous disk size is restored
     */
    VirtualMachineDisk* clear_resize_disk(bool restore)
    {
        VirtualMachineDisk* disk = disks.get_resize();

        if (disk == nullptr)
        {
            return nullptr;
        }

        disk->clear_resize(restore);

        return disk;
    }
    /**
     * Prepares a disk to be resized.
     * @param disk_id of disk
     * @param size new size for the disk (needs to be greater than current)
     * @param error description if any
     *
     * @return 0 on success
     */
    int set_up_resize_disk(int disk_id, long size, std::string& error)
    {
        return disks.set_up_resize(disk_id, size, error);
    }
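    /* Sketch of the disk resize bookkeeping (illustrative; disk_id, new_size
     * and resize_failed are hypothetical values of the caller):
     *
     *   std::string error;
     *
     *   if (vm->set_up_resize_disk(disk_id, new_size, error) != 0)
     *   {
     *       return;                           // e.g. size not greater than current
     *   }
     *
     *   // ... the driver resize is requested; afterwards:
     *   vm->clear_resize_disk(resize_failed); // restore the old size on failure
     */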
    // ------------------------------------------------------------------------
    // NIC/PCI Hotplug related functions
    // ------------------------------------------------------------------------
    /**
     * Checks the attributes of a PCI device
     * @param pci the PCI vector attribute
     * @param err description if any
     */
    static int check_pci_attributes(VectorAttribute* pci, std::string& err);

    /**
     * Get PCI attribute from VM (VectorAttribute form)
     *
     * @param pci_id of the PCI device
     *
     * @return pointer to the PCI Attribute
     */
    VectorAttribute* get_pci(int pci_id);

    /**
     * Attach/Detach a PCI attribute to/from the VM; it generates the PCI_ID
     * and VM_BUS parameters.
     *
     * @param vpci attribute with the PCI information
     * @param err error string
     *
     * @return 0 on success
     */
    int attach_pci(VectorAttribute* vpci, std::string& err);

    void detach_pci(VectorAttribute* vpci);
    /**
     * Generate and attach a new NIC attribute to the VM. This method checks
     * that the NIC is compatible with the VM cluster allocation and fills SG
     * information.
     * @param tmpl Template containing a single NIC vector attribute.
     * @param error_str error reason, if any
     *
     * @return 0 on success, -1 otherwise
     */
    int set_up_attach_nic(VirtualMachineTemplate* tmpl, std::string& error_str);
    /**
     * Marks the given NIC as the one being detached (sets its attach attribute)
     * @param nic_id of the NIC
     * @return 0 if the nic_id was found, -1 otherwise
     */
    int set_detach_nic(int nic_id);
    /**
     * Cleans the ATTACH = YES attribute from the NICs
     */
    void clear_attach_nic()
    {
        nics.clear_attach();
    }
    /**
     * Deletes the NIC that was in the process of being attached/detached
     *
     * @return the deleted NIC or 0 if none was deleted
     */
    VirtualMachineNic* delete_attach_nic()
    {
        VirtualMachineNic* nic = nics.delete_attach();

        if (nic == nullptr)
        {
            return nullptr;
        }

        obj_template->remove(nic->vector_attribute());

        return nic;
    }
    /**
     * Deletes the alias of the NIC that was in the process of being attached/detached
     */
    void delete_attach_alias(VirtualMachineNic* nic);
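    /* Sketch of the NIC hot-attach bookkeeping (illustrative; tmpl holds a
     * single NIC vector attribute and attach_failed is a hypothetical flag):
     *
     *   std::string error;
     *
     *   if (vm->set_up_attach_nic(&tmpl, error) != 0)
     *   {
     *       return;                                    // NIC not compatible
     *   }
     *
     *   if (attach_failed)
     *   {
     *       VirtualMachineNic* nic = vm->delete_attach_nic();
     *
     *       vm->delete_attach_alias(nic);              // also drop its alias NICs
     *   }
     *   else
     *   {
     *       vm->clear_attach_nic();
     *   }
     */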
    // ------------------------------------------------------------------------
    // Disk Snapshot related functions
    // ------------------------------------------------------------------------
    /**
     * Return the snapshot list for the disk
     * @param did of the disk
     * @param err if any
     * @return pointer to Snapshots or 0 if not found
     */
    const Snapshots* get_disk_snapshots(int did, std::string& err) const
    {
        return disks.get_snapshots(did, err);
    }
    /**
     * Creates a new snapshot of the given disk
     * @param disk_id of the disk
     * @param name a description for this snapshot
     * @param error if any
     * @return the id of the new snapshot or -1 if error
     */
    int new_disk_snapshot(int disk_id, const std::string& name, std::string& error)
    {
        return disks.create_snapshot(disk_id, name, error);
    }
    /**
     * Renames the given snapshot of the disk
     * @param disk_id of the disk
     * @param snap_id of the snapshot
     * @param new_name of the snapshot
     * @param error_str description if any
     * @return 0 on success
     */
    int rename_disk_snapshot(int disk_id, int snap_id,
                             const std::string& new_name,
                             std::string& error_str)
    {
        return disks.rename_snapshot(disk_id, snap_id, new_name, error_str);
    }
    /**
     * Deletes all the disk snapshots for non-persistent disks and for persistent
     * disks in non-shared system datastores.
     * @param vm_quotas The SYSTEM_DISK_SIZE freed by the deleted snapshots
     * @param ds_quotas The DS SIZE freed from image datastores.
     */
    void delete_non_persistent_disk_snapshots(Template& vm_quotas,
                                              std::vector<Template*>& ds_quotas)
    {
        disks.delete_non_persistent_snapshots(vm_quotas, ds_quotas);
    }
    /**
     * Get information about the disk to take the snapshot from
     * @param ds_id id of the datastore
     * @param tm_mad used by the datastore
     * @param disk_id of the disk
     * @param snap_id of the snapshot
     */
    int get_snapshot_disk(int& ds_id, std::string& tm_mad, int& disk_id,
                          int& snap_id) const
    {
        return disks.get_active_snapshot(ds_id, tm_mad, disk_id, snap_id);
    }
    /**
     * Unset the current disk being snapshotted (or reverted)
     */
    void clear_snapshot_disk()
    {
        disks.clear_active_snapshot();
    }
    /**
     * Set the disk as being snapshotted (or reverted)
     * @param disk_id of the disk
     * @param snap_id of the target snapshot
     */
    int set_snapshot_disk(int disk_id, int snap_id)
    {
        return disks.set_active_snapshot(disk_id, snap_id);
    }
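    /* Sketch of the disk snapshot bookkeeping (illustrative; disk_id is a
     * hypothetical input and the snapshot name is arbitrary):
     *
     *   std::string error;
     *
     *   int snap_id = vm->new_disk_snapshot(disk_id, "before-upgrade", error);
     *
     *   if (snap_id != -1 && vm->set_snapshot_disk(disk_id, snap_id) == 0)
     *   {
     *       // ... the driver snapshot is requested; when it finishes:
     *       vm->clear_snapshot_disk();
     *   }
     */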
    // ------------------------------------------------------------------------
    // System Snapshot related functions
    // ------------------------------------------------------------------------
    /**
     * @return true if VM has system snapshots defined
     */
    bool has_snapshots();
    /**
     * Creates a new Snapshot attribute, and sets it to ACTIVE=YES
     *
     * @param name for the new Snapshot. If it is empty, the generated name
     *   will be placed in this param
     * @param snap_id Id of the new snapshot
     *
     * @return Created VectorAttribute with the snapshot data
     */
    VectorAttribute* new_snapshot(std::string& name, int& snap_id);
    /**
     * Sets the given Snapshot as ACTIVE=YES
     *
     * @param snap_id the snapshot ID
     *
     * @return 0 on success
     */
    int set_revert_snapshot(int snap_id);

    int set_delete_snapshot(int snap_id);

    /**
     * @return the on-going ACTION associated to the ACTIVE snapshot
     */
    std::string get_snapshot_action() const;

    VectorAttribute* get_active_snapshot() const;
    /**
     * Replaces HYPERVISOR_ID for the active SNAPSHOT
     *
     * @param hypervisor_id Id returned by the hypervisor for the newly
     *   created snapshot. The no hypervisor_id version uses the snap_id.
     */
    void update_snapshot_id(const std::string& hypervisor_id);

    void update_snapshot_id();

    /**
     * Cleans the ACTIVE=YES attribute from the snapshots
     */
    void clear_active_snapshot();

    /**
     * Deletes the SNAPSHOT that was in the process of being created
     */
    void delete_active_snapshot();
    /**
     * Deletes all SNAPSHOT attributes
     * @param snapshots Returns template with deleted snapshots
     */
    void delete_snapshots(Template& snapshots);

    /**
     * Returns size acquired on system DS by VM snapshots
     */
    static long long get_snapshots_system_size(Template* tmpl);
    // ------------------------------------------------------------------------
    // Cloning state related functions
    // ------------------------------------------------------------------------
    /**
     * Returns true if any of the disks is waiting for an image in LOCKED state
     * @return true if cloning
     */
    bool has_cloning_disks()
    {
        return disks.has_cloning();
    }
    /**
     * Returns the image IDs for the disks waiting for the LOCKED state to finish
     * @param ids image ID set
     */
    void get_cloning_image_ids(std::set<int>& ids)
    {
        disks.get_cloning_image_ids(ids);
    }
    /**
     * Clears the flag for the disks waiting for the given image
     */
    void clear_cloning_image_id(int image_id,
                                const std::string& source,
                                const std::string& format)
    {
        disks.clear_cloning_image_id(image_id, source, format);
    }
    /**
     * Get network leases with NETWORK_MODE = auto for this Virtual Machine
     * @param tmpl with the scheduling results for the auto NICs
     * @param estr description if any
     * @return 0 if success
     */
    int get_auto_network_leases(VirtualMachineTemplate* tmpl, std::string& estr);
/**
* Check if a tm_mad is valid for the Virtual Machine Disks and set
* clone_target and ln_target
* @ param tm_mad is the tm_mad for system datastore chosen
*/
2020-07-02 22:42:10 +02:00
int check_tm_mad_disks ( const std : : string & tm_mad , std : : string & error ) ;
2018-11-05 16:46:23 +01:00
2021-03-15 16:24:25 +01:00
    /**
     * Check if VM has shareable disks and vmm_mad supports them
     * @param vmm_mad the Virtual Machine Manager driver in use
     * @param error description if any
     */
    int check_shareable_disks(const std::string& vmm_mad, std::string& error);
    // ------------------------------------------------------------------------
    // Backup related functions
    // ------------------------------------------------------------------------
    /**
     * Computes the storage needed to back up the VM disks. Volatile disks are
     * included or not according to the backup configuration.
     * @param ds_quota Template to account the backup datastore usage
     * @return the backup size
     */
    long long backup_size(Template& ds_quota)
    {
        return disks.backup_size(ds_quota, _backups.do_volatile());
2022-09-09 11:46:44 +02:00
}
Backups & backups ( )
{
return _backups ;
}
2023-07-03 18:15:52 +02:00
// ------------------------------------------------------------------------
// Scheduled actions functions
// ------------------------------------------------------------------------
ObjectCollection & sched_actions ( )
{
return _sched_actions ;
}
const ObjectCollection & sched_actions ( ) const
{
return _sched_actions ;
}
2008-06-17 16:27:32 +00:00
private :
2022-12-19 15:21:12 +01:00
static const int MAX_ERROR_MSG_LENGTH = 100 ;
2008-06-17 16:27:32 +00:00
// -------------------------------------------------------------------------
// Friends
// -------------------------------------------------------------------------
friend class VirtualMachinePool ;
2020-09-10 09:08:29 +02:00
friend class PoolSQL ;
2008-06-17 16:27:32 +00:00
// *************************************************************************
// Virtual Machine Attributes
// *************************************************************************
2009-03-06 12:10:15 +00:00
2008-06-17 16:27:32 +00:00
// -------------------------------------------------------------------------
// Virtual Machine Description
// -------------------------------------------------------------------------
/**
* The state of the virtual machine .
*/
VmState state ;
2015-02-19 16:12:09 +01:00
/**
* Previous state of the virtual machine, to trigger state hooks
*/
VmState prev_state ;
2008-06-17 16:27:32 +00:00
/**
* The state of the virtual machine ( in the Life - cycle Manager ) .
*/
LcmState lcm_state ;
2015-02-19 16:12:09 +01:00
/**
* Previous state of the virtual machine, to trigger state hooks
*/
LcmState prev_lcm_state ;
2012-04-26 19:06:49 +02:00
/**
* Marks the VM to be re-scheduled
*/
int resched ;
2008-06-17 16:27:32 +00:00
/**
* Start time, when the VM entered the nebula system (in epoch)
*/
time_t stime ;
/**
* Exit time, when the VM left the nebula system (in epoch)
*/
time_t etime ;
/**
* Deployment specific identification string , as returned by the VM driver
*/
2020-07-02 22:42:10 +02:00
std : : string deploy_id ;
2008-06-17 16:27:32 +00:00
/**
2008-06-22 01:51:49 +00:00
* History record , for the current host
2008-06-17 16:27:32 +00:00
*/
History * history ;
2008-06-22 01:51:49 +00:00
/**
* History record , for the previous host
*/
History * previous_history ;
2009-03-06 12:10:15 +00:00
2011-06-25 01:29:44 +02:00
/**
* Complete set of history records for the VM
*/
2020-07-02 22:42:10 +02:00
std : : vector < History * > history_records ;
2011-06-25 01:29:44 +02:00
2015-05-19 18:41:23 +02:00
/**
2016-12-11 21:05:07 +01:00
* VirtualMachine disks
2015-05-19 18:41:23 +02:00
*/
2016-12-11 21:05:07 +01:00
VirtualMachineDisks disks ;
2015-05-19 18:41:23 +02:00
2016-12-24 01:35:33 +01:00
/**
* VirtualMachine nics
*/
VirtualMachineNics nics ;
2015-06-23 21:52:10 +02:00
/**
* User template to store custom metadata . This template can be updated
*/
2020-09-15 11:16:00 +02:00
std : : unique_ptr < VirtualMachineTemplate > user_obj_template ;
2015-06-23 21:52:10 +02:00
/**
* Monitoring information for the VM
*/
VirtualMachineMonitorInfo monitoring ;
2008-06-17 16:27:32 +00:00
/**
* Log class for the virtual machine. It writes log messages in
2010-04-10 22:16:47 +02:00
* $ ONE_LOCATION / var / $ VID / vm . log
2009-01-02 14:58:51 +00:00
* or, if OpenNebula is installed system-wide, in
2010-04-10 22:16:47 +02:00
* / var / log / one / $ VM_ID . log
2015-06-23 21:52:10 +02:00
* For the syslog it will use the predefined / var / log / locations
2008-06-17 16:27:32 +00:00
*/
2013-01-25 17:32:12 +01:00
Log * _log ;
2012-02-24 23:13:22 +01:00
2022-09-09 11:46:44 +02:00
/**
 * Backup configuration and state for this VM
 */
Backups _backups ;
2023-07-03 18:15:52 +02:00
/**
* Associated scheduled actions for this VM
*/
ObjectCollection _sched_actions ;
2008-06-17 16:27:32 +00:00
// *************************************************************************
// DataBase implementation (Private)
// *************************************************************************
/**
* Bootstraps the database table ( s ) associated to the VirtualMachine
2011-10-10 06:14:46 -07:00
* @ return 0 on success
2008-06-17 16:27:32 +00:00
*/
2018-02-02 17:03:45 +01:00
static int bootstrap ( SqlDB * db ) ;
2009-03-06 12:10:15 +00:00
2010-04-29 18:11:04 +02:00
/**
2010-06-25 13:24:54 +02:00
* Execute an INSERT or REPLACE Sql query .
2010-04-29 18:11:04 +02:00
* @ param db The SQL DB
2010-06-25 13:24:54 +02:00
* @ param replace Execute an INSERT or a REPLACE
2011-12-19 17:07:32 +01:00
* @ param error_str Returns the error reason , if any
2010-04-29 18:11:04 +02:00
* @ return 0 on success
2011-12-19 17:07:32 +01:00
*/
2020-07-02 22:42:10 +02:00
int insert_replace ( SqlDB * db , bool replace , std : : string & error_str ) ;
2009-07-09 14:34:34 +00:00
2008-06-17 16:27:32 +00:00
/**
* Updates the VM history record
* @ param db pointer to the db
2009-03-06 12:10:15 +00:00
* @ return 0 on success
2008-06-17 16:27:32 +00:00
*/
2010-04-10 22:16:47 +02:00
int update_history ( SqlDB * db )
2008-06-17 16:27:32 +00:00
{
2019-06-12 17:20:50 +02:00
if ( history = = 0 )
2008-06-17 16:27:32 +00:00
{
return - 1 ;
2019-06-12 17:20:50 +02:00
}
return history - > update ( db ) ;
2008-06-17 16:27:32 +00:00
} ;
2019-06-12 17:20:50 +02:00
/**
* Insert a new VM history record
* @ param db pointer to the db
* @ return 0 on success
*/
int insert_history ( SqlDB * db )
{
std : : string error ;
if ( history = = 0 )
{
return - 1 ;
}
return history - > insert ( db , error ) ;
}
2008-06-17 16:27:32 +00:00
/**
2008-06-22 01:51:49 +00:00
* Updates the previous history record
2008-06-17 16:27:32 +00:00
* @ param db pointer to the db
2009-03-06 12:10:15 +00:00
* @ return 0 on success
2008-06-17 16:27:32 +00:00
*/
2010-04-10 22:16:47 +02:00
int update_previous_history ( SqlDB * db )
2008-06-22 01:51:49 +00:00
{
2019-06-12 17:20:50 +02:00
if ( previous_history = = 0 )
2008-06-22 01:51:49 +00:00
{
return - 1 ;
2019-06-12 17:20:50 +02:00
}
return previous_history - > update ( db ) ;
2008-06-22 01:51:49 +00:00
} ;
2009-03-06 12:10:15 +00:00
2019-06-12 17:20:50 +02:00
/**
* Updates the VM search information .
*
* @ param db pointer to the db
* @ return 0 on success
*/
int update_search ( SqlDB * db ) ;
2016-12-12 10:26:55 +01:00
/**
* Function that renders the VM in XML format optionally including
* extended information ( all history records )
* @ param xml the resulting XML string
* @ param n_history Number of history records to include :
* 0 : none
* 1 : the last one
* 2 : all
2023-07-03 18:15:52 +02:00
* @ param sa include scheduled action information
2016-12-12 10:26:55 +01:00
* @ return a reference to the generated string
*/
2023-07-03 18:15:52 +02:00
std : : string & to_xml_extended ( std : : string & xml , int n_history , bool sa ) const ;
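// Illustrative sketch only (not part of the original interface): how this
// private helper could be invoked from within the class or a friend such as
// VirtualMachinePool. The argument values are assumptions for the example.
//
//     std::string xml;
//
//     to_xml_extended(xml, 2, true); // 2 = include all history records,
//                                    // true = include scheduled actions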
2016-12-12 10:26:55 +01:00
2020-07-02 22:42:10 +02:00
std : : string & to_json ( std : : string & json ) const ;
2019-01-30 00:10:18 +01:00
2020-07-02 22:42:10 +02:00
std : : string & to_token ( std : : string & text ) const ;
2019-01-30 00:10:18 +01:00
2009-04-02 10:14:54 +00:00
// -------------------------------------------------------------------------
// Attribute Parser
// -------------------------------------------------------------------------
2016-12-12 10:26:55 +01:00
/**
* Attributes not allowed in NIC_DEFAULT to avoid authorization bypass and
* inconsistencies for NIC_DEFAULTS
*/
static const char * NO_NIC_DEFAULTS [ ] ;
static const int NUM_NO_NIC_DEFAULTS ;
/**
* Known Virtual Router attributes , to be moved from the user template
* to the template
*/
static const char * VROUTER_ATTRIBUTES [ ] ;
static const int NUM_VROUTER_ATTRIBUTES ;
/**
* Parse a string and substitute variables ( e . g . $ NAME ) using the VM
* template values :
* @ param attribute , the string to be parsed
* @ param parsed , the resulting parsed string
* @ param error description in case of failure
* @ return 0 on success .
*/
2020-07-02 22:42:10 +02:00
int parse_template_attribute ( const std : : string & attribute ,
std : : string & parsed ,
std : : string & error ) ;
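// Minimal usage sketch (the input string and expected expansion are invented):
//
//     std::string parsed, error;
//
//     if ( parse_template_attribute("backup-$NAME", parsed, error) == 0 )
//     {
//         // for a VM named "web01", "parsed" would hold "backup-web01"
//     }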
2016-12-12 10:26:55 +01:00
/**
* Parse a file string variable ( i . e . $ FILE ) using the FILE_DS datastores .
* It should be used for OS / DS_KERNEL , OS / DS_INITRD , CONTEXT / DS_FILES .
* @ param attribute the string to be parsed
* @ param img_ids ids of the FILE images in the attribute
* @ param error description in case of failure
* @ return 0 on success .
*/
2020-07-02 22:42:10 +02:00
int parse_file_attribute ( std : : string attribute ,
std : : vector < int > & img_ids ,
std : : string & error ) ;
2016-12-12 10:26:55 +01:00
2012-11-18 00:01:43 +01:00
/**
* Generates image attributes ( DS_ID , TM_MAD , SOURCE . . . ) for KERNEL and
* INITRD files .
* @ param os attribute of the VM template
* @ param base_name of the attribute " KERNEL " , or " INITRD "
2012-12-04 23:19:08 +01:00
* @ param base_type of the image attribute KERNEL , RAMDISK
2012-11-18 00:01:43 +01:00
* @ param error_str Returns the error reason , if any
2016-12-12 10:26:55 +01:00
* @ return 0 on success
2012-11-18 00:01:43 +01:00
*/
2020-07-02 22:42:10 +02:00
int set_os_file ( VectorAttribute * os , const std : : string & base_name ,
Image : : ImageType base_type , std : : string & error_str ) ;
2016-12-12 10:26:55 +01:00
2012-11-17 02:46:03 +01:00
/**
* Parse the " OS " attribute of the template by substituting
* $ FILE variables
* @ param error_str Returns the error reason , if any
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int parse_os ( std : : string & error_str ) ;
2012-11-17 02:46:03 +01:00
2018-01-18 10:42:03 +01:00
/**
* Parse the " CPU_MODEL " attribute of the template
* @ return 0 on success
*/
2018-03-23 16:40:25 +01:00
int parse_cpu_model ( Template * tmpl ) ;
2018-01-18 10:42:03 +01:00
2014-07-10 23:32:06 +02:00
/**
2016-12-12 10:26:55 +01:00
* Parse the " NIC_DEFAULT " attribute
* @ param error_str Returns the error reason , if any
2016-03-16 14:53:05 +01:00
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int parse_defaults ( std : : string & error_str , Template * tmpl ) ;
2016-03-16 14:53:05 +01:00
/**
2016-12-12 10:26:55 +01:00
* Parse virtual router related attributes
* @ param error_str Returns the error reason , if any
2016-09-13 13:10:33 +02:00
* @ return 0 on success
2016-03-16 14:53:05 +01:00
*/
2020-07-02 22:42:10 +02:00
int parse_vrouter ( std : : string & error_str , Template * tmpl ) ;
2016-07-18 17:04:41 +02:00
2016-04-26 12:39:29 +02:00
/**
2016-12-12 10:26:55 +01:00
* Parse the " PCI " attribute of the template and check mandatory attributes
* @ param error_str Returns the error reason , if any
* @ return 0 on success
2016-04-26 12:39:29 +02:00
*/
2020-07-02 22:42:10 +02:00
int parse_pci ( std : : string & error_str , Template * tmpl ) ;
2016-04-26 12:39:29 +02:00
2014-07-10 16:24:25 +02:00
/**
2016-12-12 10:26:55 +01:00
* Parse the " SCHED_REQUIREMENTS " attribute of the template by substituting
* $ VARIABLE , $ VARIABLE [ ATTR ] and $ VARIABLE [ ATTR , ATTR = VALUE ]
2014-07-10 16:24:25 +02:00
* @ param error_str Returns the error reason , if any
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int parse_requirements ( std : : string & error_str ) ;
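// Hedged example of the substitution performed here (attribute names and
// values are hypothetical): with DEPLOY_ZONE = "eu-west" in the VM template,
//
//     SCHED_REQUIREMENTS = "ZONE_NAME = \"$DEPLOY_ZONE\""
//
// would be stored with $DEPLOY_ZONE already expanded to "eu-west".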
2014-07-10 16:24:25 +02:00
2015-12-01 16:36:50 +01:00
/**
2016-12-12 10:26:55 +01:00
* Parse the " GRAPHICS " attribute and generate a default PORT if not
* defined
2015-12-01 16:36:50 +01:00
*/
2020-07-02 22:42:10 +02:00
int parse_graphics ( std : : string & error_str , Template * tmpl ) ;
2015-12-01 16:36:50 +01:00
2023-09-20 19:09:53 +02:00
/**
* Parse the " VIDEO " attribute to verify the TYPE exists , and that the VRAM
* and RESOLUTION values are in a valid format
*/
int parse_video ( std : : string & error_str , Template * tmpl ) ;
2016-01-19 18:13:31 +01:00
/**
2016-12-12 10:26:55 +01:00
* Searches the meaningful attributes and moves them from the user template
* to the internal template
2016-01-19 18:13:31 +01:00
*/
2016-12-12 10:26:55 +01:00
void parse_well_known_attributes ( ) ;
2016-01-19 18:13:31 +01:00
2016-12-12 10:26:55 +01:00
// -------------------------------------------------------------------------
// Context related functions
// -------------------------------------------------------------------------
2015-09-02 17:30:51 +02:00
/**
2016-12-12 10:26:55 +01:00
* Generate the NETWORK related CONTEXT sections, i.e. ETH_*. This function
* is invoked whenever the context is prepared for the VM, to capture
* networking updates.
* @ param context attribute of the VM
* @ param error string if any
2019-07-26 13:45:26 +02:00
* @ param only_auto boolean to generate context only for vnets
2018-10-09 11:42:17 +02:00
* with NETWORK_MODE = auto
2015-09-02 17:30:51 +02:00
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int generate_network_context ( VectorAttribute * context , std : : string & error ,
2018-10-09 11:42:17 +02:00
bool only_auto ) ;
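// Sketch of the kind of section this produces (ETH_<NIC_ID>_* variables;
// the concrete values below are invented):
//
//     CONTEXT = [
//       ETH0_IP      = "192.168.122.10",
//       ETH0_MAC     = "02:00:c0:a8:7a:0a",
//       ETH0_NETWORK = "192.168.122.0" ]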
2016-12-12 10:26:55 +01:00
2016-12-24 01:35:33 +01:00
/**
* Deletes the NETWORK related CONTEXT section for the given nic , i . e .
* ETH_ < id >
* @ param nicid the id of the NIC
*/
void clear_nic_context ( int nicid ) ;
2018-12-04 14:41:55 +01:00
/**
* Deletes the NETWORK ALIAS related CONTEXT section for the given nic , i . e .
* ETH_ < id > _ALIAS < aliasid >
* @ param nicid the id of the NIC
* @ param aliasid the idx of the ALIAS
*/
void clear_nic_alias_context ( int nicid , int aliasidx ) ;
2016-12-12 10:26:55 +01:00
/**
* Generate the PCI related CONTEXT sections, i.e. PCI_*. This function
* also adds basic network attributes for pass-through NICs
* @ param context attribute of the VM
* @ return true if the net context was generated .
*/
bool generate_pci_context ( VectorAttribute * context ) ;
2024-01-31 17:37:27 +01:00
/**
* Deletes the PCI (non NIC) related CONTEXT section for the given PCI
* device, i.e. PCI<id>_ADDRESS
* @ param pci the PCI device attribute
*/
void clear_pci_context ( VectorAttribute * pci ) ;
/**
* Adds the PCI (non NIC) related CONTEXT section for the given PCI
* device, i.e. PCI<id>_ADDRESS
* @ param pci device to add context for
*/
void add_pci_context ( VectorAttribute * pci ) ;
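// Illustrative only (the exact value format is an assumption): the generated
// section carries the device address under PCI<id>_ADDRESS, e.g.
//
//     PCI0_ADDRESS = "0000:01:00.0"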
2016-12-12 10:26:55 +01:00
/**
* Generate the ONE_GATE token & url
* @ param context attribute of the VM
* @ param error_str describing the error
* @ return 0 if success
*/
2020-07-02 22:42:10 +02:00
int generate_token_context ( VectorAttribute * context ,
std : : string & error_str ) ;
2015-09-02 17:30:51 +02:00
2009-04-02 10:14:54 +00:00
/**
2010-03-05 19:17:52 +01:00
* Parse the " CONTEXT " attribute of the template by substituting
* $ VARIABLE , $ VARIABLE [ ATTR ] and $ VARIABLE [ ATTR , ATTR = VALUE ]
2011-03-22 18:40:03 +01:00
* @ param error_str Returns the error reason , if any
2018-10-09 11:42:17 +02:00
* @ param all_nics whether to parse the context for all vnets or only
* those with NETWORK_MODE = auto
2010-03-05 19:17:52 +01:00
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int parse_context ( std : : string & error_str , bool all_nics ) ;
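// Minimal sketch (values invented): after parsing, a user supplied section like
//
//     CONTEXT = [ HOSTNAME = "vm-$NAME" ]
//
// ends up with $NAME replaced by this VM's NAME, following the same
// $VARIABLE / $VARIABLE[ATTR] rules as the other parse_* helpers.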
2010-03-05 19:17:52 +01:00
2016-01-27 11:27:26 +01:00
/**
2016-02-07 18:01:42 +01:00
* Parses the current contents of the context vector attribute , without
* adding any attributes . Substitutes $ VARIABLE , $ VARIABLE [ ATTR ] and
* $ VARIABLE [ ATTR , ATTR = VALUE ]
* @ param context pointer to the context attribute . It will be updated to point
* to the new parsed CONTEXT
* @ param error_str description in case of error
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int parse_context_variables ( VectorAttribute * * context ,
std : : string & error_str ) ;
2016-01-27 11:27:26 +01:00
2014-11-11 16:42:49 +01:00
// -------------------------------------------------------------------------
2017-01-04 15:23:35 +01:00
// Management helpers: NIC, DISK and VMGROUP
2014-11-11 16:42:49 +01:00
// -------------------------------------------------------------------------
/**
2018-10-09 11:42:17 +02:00
* Get network leases ( no auto NICs , NETWORK_MODE ! = auto ) for this VM
2014-11-11 16:42:49 +01:00
* @ return 0 if success
*/
2020-07-02 22:42:10 +02:00
int get_network_leases ( std : : string & error_str ) ;
2014-11-11 16:42:49 +01:00
/**
2016-12-24 01:35:33 +01:00
* Get all disk images for this Virtual Machine
* @ param error_str Returns the error reason , if any
* @ return 0 if success
2016-12-12 10:26:55 +01:00
*/
2020-07-02 22:42:10 +02:00
int get_disk_images ( std : : string & error_str ) ;
2016-12-12 10:26:55 +01:00
2017-01-04 15:23:35 +01:00
/**
* Adds the VM to the VM group if needed
* @ param error_str Returns the error reason , if any
* @ return 0 if success
*/
2020-07-02 22:42:10 +02:00
int get_vmgroup ( std : : string & error ) ;
2017-01-04 15:23:35 +01:00
2015-10-23 20:23:59 +02:00
// ------------------------------------------------------------------------
// Public cloud templates related functions
// ------------------------------------------------------------------------
/**
* Gets the list of public clouds defined in this VM .
* @ param clouds list to store the cloud hypervisors in the template
* @ return the number of public cloud hypervisors
*/
2020-07-02 22:42:10 +02:00
int get_public_clouds ( std : : set < std : : string > & clouds ) const
2015-10-23 20:23:59 +02:00
{
2015-10-30 15:41:09 +01:00
get_public_clouds ( " PUBLIC_CLOUD " , clouds ) ;
2015-10-23 20:23:59 +02:00
2015-10-30 15:41:09 +01:00
return clouds . size ( ) ;
2015-10-23 20:23:59 +02:00
} ;
/**
* Same as above but specifies the attribute name to handle old versions
2015-10-30 15:41:09 +01:00
* @ param name Attribute name
* @ param clouds list to store the cloud hypervisors in the template
2015-10-23 20:23:59 +02:00
*/
2020-07-02 22:42:10 +02:00
void get_public_clouds ( const std : : string & name ,
std : : set < std : : string > & clouds ) const ;
2015-10-23 20:23:59 +02:00
/**
* Parse the public cloud attributes and substitute variable references
* with the values in the template, i.e.:
* INSTANCE_TYPE = " m1-small "
*
* PUBLIC_CLOUD = [ TYPE = " ec2 " , INSTANCE = " $INSTANCE_TYPE " . . .
*
* @ param error description if any
* @ return - 1 in case of error
*/
2020-07-02 22:42:10 +02:00
int parse_public_clouds ( std : : string & error )
2015-10-23 20:23:59 +02:00
{
int rc = parse_public_clouds ( " PUBLIC_CLOUD " , error ) ;
if ( rc = = 0 )
{
rc = parse_public_clouds ( " EC2 " , error ) ;
}
return rc ;
} ;
/**
* Same as above but specifies the attribute name to handle old versions
*/
2020-07-02 22:42:10 +02:00
int parse_public_clouds ( const char * name , std : : string & error ) ;
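// Usage note, following the example above (values illustrative): with
// INSTANCE_TYPE = "m1-small" in the template, the vector attribute
//
//     PUBLIC_CLOUD = [ TYPE = "ec2", INSTANCE = "$INSTANCE_TYPE" ]
//
// is parsed so that INSTANCE holds "m1-small"; the legacy EC2 attribute name
// is processed with the same logic to handle old versions.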
2015-05-19 18:41:23 +02:00
2019-09-03 16:31:51 +02:00
/**
2019-09-12 16:25:23 +02:00
* Encrypt all secret attributes
*/
2020-09-15 11:16:00 +02:00
void encrypt ( ) override ;
2019-09-12 16:25:23 +02:00
/**
* Decrypt all secret attributes
2019-09-03 16:31:51 +02:00
*/
2020-09-15 11:16:00 +02:00
void decrypt ( ) override ;
2019-09-03 16:31:51 +02:00
2008-06-17 16:27:32 +00:00
protected :
2009-03-06 12:10:15 +00:00
2008-06-17 16:27:32 +00:00
//**************************************************************************
// Constructor
//**************************************************************************
2009-03-06 12:10:15 +00:00
2012-10-05 13:23:44 +02:00
VirtualMachine ( int id ,
2011-06-30 11:31:00 +02:00
int uid ,
2012-10-05 13:23:44 +02:00
int gid ,
2020-07-02 22:42:10 +02:00
const std : : string & uname ,
const std : : string & gname ,
2013-01-18 18:34:51 +01:00
int umask ,
2020-09-15 11:16:00 +02:00
std : : unique_ptr < VirtualMachineTemplate > _vm_template ) ;
2008-06-17 16:27:32 +00:00
// *************************************************************************
// DataBase implementation
// *************************************************************************
2012-10-05 13:23:44 +02:00
2008-06-17 16:27:32 +00:00
/**
* Reads the Virtual Machine ( identified with its OID ) from the database .
* @ param db pointer to the db
* @ return 0 on success
*/
2019-09-03 16:31:51 +02:00
int select ( SqlDB * db ) override ;
2008-06-17 16:27:32 +00:00
/**
* Writes the Virtual Machine and its associated template in the database .
* @ param db pointer to the db
* @ return 0 on success
*/
2020-07-02 22:42:10 +02:00
int insert ( SqlDB * db , std : : string & error_str ) override ;
2008-06-17 16:27:32 +00:00
/**
* Writes / updates the Virtual Machine data fields in the database .
* @ param db pointer to the db
* @ return 0 on success
*/
2019-09-03 16:31:51 +02:00
int update ( SqlDB * db ) override
2011-03-08 17:55:14 +01:00
{
2020-07-02 22:42:10 +02:00
std : : string error_str ;
2011-12-19 17:07:32 +01:00
return insert_replace ( db , true , error_str ) ;
2011-03-08 17:55:14 +01:00
}
2010-04-10 22:16:47 +02:00
2008-06-17 16:27:32 +00:00
/**
2010-05-25 18:19:22 +02:00
* Deletes a VM from the database and all its associated information
2008-06-17 16:27:32 +00:00
* @ param db pointer to the db
2010-05-25 18:19:22 +02:00
* @ return - 1
2008-06-17 16:27:32 +00:00
*/
2019-09-03 16:31:51 +02:00
int drop ( SqlDB * db ) override
2009-03-06 12:10:15 +00:00
{
2010-05-25 18:19:22 +02:00
NebulaLog : : log ( " ONE " , Log : : ERROR , " VM Drop not implemented! " ) ;
return - 1 ;
2008-06-17 16:27:32 +00:00
}
2013-03-13 17:43:42 +01:00
2008-06-17 16:27:32 +00:00
} ;
# endif /*VIRTUAL_MACHINE_H_*/