
F #5516: New backup interface for OpenNebula

co-authored-by: Frederick Borges <fborges@opennebula.io>
co-authored-by: Neal Hansen <nhansen@opennebula.io>
co-authored-by: Daniel Clavijo Coca <dclavijo@opennebula.io>
co-authored-by: Pavel Czerný <pczerny@opennebula.systems>

BACKUP INTERFACE
=================

* Backups are exposed through special Datastore (BACKUP_DS) and
  Image (BACKUP) types. These new types can only be used for backing
  up VMs. This approach makes it possible to:

  - Implement tier based backup policies (backups made on different
    locations).

  - Leverage access control and quota systems

  - Support different storage and backup technologies

* Backup interface for the VMs:

  - VM configures backups with BACKUP_CONFIG. This attribute can be set
    in the VM template or updated with updateconf API call. It can include:

    + BACKUP_VOLATILE: To backup or not volatile disks

    + FS_FREEZE: How the FS is frozen for running VMs (qemu-agent,
      suspend or none). When possible, backups are crash consistent.

    + KEEP_LAST: keep only a given number of backups.

  - Backups are initiated by the one.vm.backup API call that requires
    the target Datastore to perform the backup (one-shot). This is
    exposed by the onevm backup command.

  - Backups can be periodic through scheduled actions.

  - Backup configuration is updated with one.vm.updateconf API call.
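A minimal BACKUP_CONFIG sketch as it could appear in a VM template (attribute
values are illustrative; FS_FREEZE accepts qemu-agent, suspend or none):

```
BACKUP_CONFIG = [
  BACKUP_VOLATILE = "NO",
  FS_FREEZE       = "NONE",
  KEEP_LAST       = "3"
]
```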

* Restore interface:

  - Restores are initiated by the one.image.restore API call. This is
    exposed by oneimage restore command.

  - Restores include configurable options for the VM template:

    + NO_IP: to not preserve IP addresses (but keep the NICs and network
      mapping)

    + NO_NIC: to not preserve network mappings

  - Other template attributes:

    + Clean PCI devices, including network configuration in case of
      TYPE=NIC attributes. By default it removes SHORT_ADDRESS and
      leaves the "auto" selection attributes.

    + Clean NUMA_NODE, removes node id and cpu sets. It keeps the NUMA node

  - It is possible to restore single files stored in the repository by
    using the backup-specific URL.
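The restore call can also be scripted through XML-RPC; the method signature
(A:siis) takes the session string, the backup image ID, the destination
datastore ID and the XML-encoded options. A minimal sketch, assuming a local
oned endpoint and that the NO_IP/NO_NIC options are wrapped in a TEMPLATE
element (the exact option encoding is an assumption):

```python
import xmlrpc.client

# Placeholder endpoint and credentials; adjust for a real deployment.
ONE_XMLRPC = "http://localhost:2633/RPC2"

def restore_backup(session, image_id, ds_id, opts_xml, endpoint=ONE_XMLRPC):
    """Invoke one.image.restore: (session, backup image ID,
    destination datastore ID, XML-encoded restore options)."""
    server = xmlrpc.client.ServerProxy(endpoint)
    return server.one.image.restore(session, image_id, ds_id, opts_xml)

# Example (requires a running oned):
# restore_backup("oneadmin:password", 42, 1,
#                "<TEMPLATE><NO_IP>YES</NO_IP></TEMPLATE>")
```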

* Sunstone (Ruby version) has been updated to expose these features.

BACKUP DRIVERS & IMPLEMENTATION
===============================

* Backup operation is implemented by a combination of 3 driver operations:

  - VMM. A new operation (internal oned <-> one_vmm_exec.rb) to
    orchestrate backups for RUNNING VMs.

  - TM. This commit introduces 2 new operations (and their
    corresponding _live variants):

    + pre_backup(_live): Prepares the disks to be backed up in the
      repository. It is specific to the driver: (i) ceph uses the export
      operation; (ii) qcow2/raw uses snapshot-create-as and fs_freeze as
      needed.
    + post_backup(_live): Performs cleanup operations, e.g. removing KVM
      snapshots or temporary dirs.

  - DATASTORE. Each backup technology is represented by its
    corresponding driver, which needs to implement:

    + backup: takes the VM disks in file (qcow2) format and stores them
      in the backup repository.

    + restore: it takes a backup image and restores the associated disks
      and VM template.

    + monitor: to gather available space in the repository

    + rm: to remove existing backups

    + stat: to return the "restored" size of a disk stored in a backup

    + downloader pseudo-URL handler: in the form
      <backup_proto>://<driver_snapshot_id>/<disk filename>
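The pseudo-URL format above can be split with standard URL parsing; a small
sketch (the protocol and snapshot ID below are made-up values):

```python
from urllib.parse import urlparse

def parse_backup_url(url):
    """Split <backup_proto>://<driver_snapshot_id>/<disk filename>
    into its three components."""
    u = urlparse(url)
    return u.scheme, u.netloc, u.path.lstrip("/")

# e.g. parse_backup_url("mybackup://3aa01f4e/disk.0")
#      -> ("mybackup", "3aa01f4e", "disk.0")
```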

BACKUP MANAGEMENT
=================

Backup actions may take a long time, tying up vmm_exec threads and
blocking other VMM operations. Backups are therefore planned by the
scheduler through the sched action interface.

Two attributes have been added to sched.conf:
  * MAX_BACKUPS: max active backup operations in the cloud. No more
    backups will be started beyond this limit.

  * MAX_BACKUPS_HOST: max number of backups per host.
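A sched.conf fragment with both limits (values are illustrative):

```
MAX_BACKUPS      = 5
MAX_BACKUPS_HOST = 2
```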

* Fix onevm CLI to properly show and manage scheduled actions.
  --schedule now also supports relative times (+<seconds_from_stime>):

  onevm backup --schedule now -d 100 63

* Backup is added to VM_ADMIN_ACTIONS in oned.conf. Regular users need
  to use the batch interface or request specific permissions.

Internal restructure of Scheduler:

- All the sched_actions interface is now in the SchedActionsXML class and
  files. This class uses references to the VM XML, and MUST be used
  within the same lifetime scope.

- XMLRPC API calls for sched actions have been moved to ScheduledActionXML.cc
  as static functions.

- VirtualMachineActionPool includes counters for active backups (total
  and per host).
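The effect of these counters together with the sched.conf limits can be
sketched as a simple admission check (an illustration of the policy described
above, not the scheduler's actual code):

```python
def can_start_backup(active_total, active_on_host, max_backups, max_backups_host):
    """Return True if a new backup may be dispatched without exceeding
    the cloud-wide (MAX_BACKUPS) or per-host (MAX_BACKUPS_HOST) limits."""
    return active_total < max_backups and active_on_host < max_backups_host
```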

SUPPORTED PLATFORMS
====================
* hypervisor: KVM
* TM: qcow2/shared/ssh, ceph
* backup: restic, rsync

Notes on Ceph

* Ceph backups are performed in the following steps:
    1. A snapshot of each disk is taken (group snapshots cannot be used as
       it seems we cannot export the disks afterwards)
    2. Disks are exported to a file
    3. The file is converted to qcow2 format
    4. Disk files are uploaded to the backup repo
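The steps above roughly map to the following commands; this helper only builds
the command strings (pool, image and snapshot names are hypothetical, and the
real driver's invocations may differ):

```python
def ceph_backup_cmds(pool, image, snap, workdir):
    """Build the per-disk command sequence: snapshot, export, convert
    to qcow2 (the final upload step is backup-driver specific)."""
    raw = f"{workdir}/{image}.raw"
    qcow2 = f"{workdir}/{image}.qcow2"
    return [
        f"rbd snap create {pool}/{image}@{snap}",
        f"rbd export {pool}/{image}@{snap} {raw}",
        f"qemu-img convert -O qcow2 {raw} {qcow2}",
    ]
```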

TODO:
  * Confirm crash consistent snapshots cannot be used in Ceph

TODO:
  * Check if using the VM dir instead of the full path is better to
    accommodate DS migrations, i.e.:
    - Current path: /var/lib/one/datastores/100/53/backup/disk.0
    - Proposal: 53/backup/disk.0

RESTIC DRIVER
=============
Developed together with this feature, the restic driver is part of the EE edition.

* It supports the SFTP protocol. The following attributes are
  supported:

  - RESTIC_SFTP_SERVER
  - RESTIC_SFTP_USER: only if different from oneadmin
  - RESTIC_PASSWORD
  - RESTIC_IONICE: Run restic under a given ionice priority (class 2)
  - RESTIC_NICE: Run restic under a given nice
  - RESTIC_BWLIMIT: Limit restic upload/download BW
  - RESTIC_COMPRESSION: Restic 0.14 implements compression (three modes:
    off, auto, max). This requires repository format version 2. By
    default, auto is used (average compression without too much CPU usage)
  - RESTIC_CONNECTIONS: Sets the number of concurrent connections to a
    backend (5 by default). For high-latency backends this number can be
    increased.

* downloader URL: restic://<datastore_id>/<snapshot_id>/<file_name>
  snapshot_id is the restic snapshot hash. It can be used to recover
  single disk images from a backup. These URLs support:

  - RESTIC_CONNECTIONS
  - RESTIC_BWLIMIT
  - RESTIC_IONICE
  - RESTIC_NICE

  These options need to be defined in the associated datastore.
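A backup datastore template using these attributes might look like this
(the NAME, server and DS_MAD values are assumptions for illustration):

```
NAME = "restic-backups"
TYPE = "BACKUP_DS"
DS_MAD = "restic"

RESTIC_SFTP_SERVER = "backups.example.com"
RESTIC_PASSWORD    = "secret"
RESTIC_COMPRESSION = "auto"
```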

RSYNC DRIVER
=============
An rsync driver is included as part of the CE distribution. It uses the
rsync tool to store backups on a remote server through SSH:

* The following attributes are supported to configure the backup
  datastore:

  - RSYNC_HOST
  - RSYNC_USER
  - RSYNC_ARGS: Arguments to perform the rsync operation (-aS by default)

* downloader URL: rsync://<ds_id>/<vmid>/<hash>/<file> can be used to recover
  single files from an existing backup (RSYNC_HOST and RSYNC_USER need
  to be set in the datastore <ds_id>).

EMULATOR_CPUS
=============

This commit includes a feature not related to backups:

* Add EMULATOR_CPUS (KVM). This host (or cluster) attribute defines the
  CPU IDs where the emulator threads will be pinned. If this value is
  not defined, the allocated CPU will be used when using a PIN policy.
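For example, a host template could pin the emulator threads with (CPU IDs
are illustrative):

```
EMULATOR_CPUS = "0,1"
```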

(cherry picked from commit a9e6a8e000e9a5a2f56f80ce622ad9ffc9fa032b)

F OpenNebula/one#5516: adding rsync backup driver

(cherry picked from commit fb52edf5d009dc02b071063afb97c6519b9e8305)

F OpenNebula/one#5516: update install.sh, add vmid to source, some polish

Signed-off-by: Neal Hansen <nhansen@opennebula.io>
(cherry picked from commit 6fc6f8a67e435f7f92d5c40fdc3d1c825ab5581d)

F OpenNebula/one#5516: cleanup

Signed-off-by: Neal Hansen <nhansen@opennebula.io>
(cherry picked from commit 12f4333b833f23098142cd4762eb9e6c505e1340)

F OpenNebula/one#5516: update downloader, default args, size check

Signed-off-by: Neal Hansen <nhansen@opennebula.io>
(cherry picked from commit 510124ef2780a4e2e8c3d128c9a42945be38a305)


(cherry picked from commit d4fcd134dc293f2b862086936db4d552792539fa)
This commit is contained in:
Ruben S. Montero 2022-09-09 11:46:44 +02:00
parent ae136f0d97
commit e433ccb85b
181 changed files with 15890 additions and 11365 deletions

include/Backups.h (new file, 208 lines)

@ -0,0 +1,208 @@
/* -------------------------------------------------------------------------- */
/* Copyright 2002-2022, OpenNebula Project, OpenNebula Systems */
/* */
/* Licensed under the Apache License, Version 2.0 (the "License"); you may */
/* not use this file except in compliance with the License. You may obtain */
/* a copy of the License at */
/* */
/* http://www.apache.org/licenses/LICENSE-2.0 */
/* */
/* Unless required by applicable law or agreed to in writing, software */
/* distributed under the License is distributed on an "AS IS" BASIS, */
/* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */
/* See the License for the specific language governing permissions and */
/* limitations under the License. */
/* -------------------------------------------------------------------------- */
#ifndef BACKUPS_H_
#define BACKUPS_H_
#include <string>
#include <map>
#include "ObjectCollection.h"
#include "Template.h"
class ObjectXML;
/**
* This class represents the backup information of a VM, it consists of two
* parts, configuration and list of backups
* The schema is as follows:
* <BACKUPS>
* <BACKUP_CONFIG>
* <KEEP_LAST> Just keep the last N backups
* <BACKUP_VOLATILE> Backup volatile disks or not
* <FS_FREEZE> FS freeze operation to perform on the VM
* <LAST_DATASTORE_ID> The datastore ID used to store the active backups(*)
* <LAST_BACKUP_ID> ID of the active backup(*)
* <LAST_BACKUP_SIZE> SIZE of the active backup(*)
* <BACKUP_IDS>
* <ID> ID of the image with a valid backup
*
* (*) refers to the active backup operation, and are only present while
* a backup is being performed
*
* Configuration attributes defaults
* - BACKUP_VOLATILE "NO"
* - FS_FREEZE "NONE"
* - KEEP_LAST (empty = keep all)
*/
class Backups
{
public:
Backups();
~Backups() = default;
// *************************************************************************
// Initialization functions
// *************************************************************************
/**
* Builds the snapshot list from its XML representation. This function
* is used when importing it from the DB.
* @param node xmlNode for the template
* @return 0 on success
*/
int from_xml(const ObjectXML* xml);
/**
* XML Representation of the Snapshots
*/
std::string& to_xml(std::string& xml) const;
/**
* Gets the BACKUP_CONFIG attribute attribute and parses the associated
* attributes:
* - BACKUP_VOLATILE
* - KEEP_LAST
* - FS_FREEZE
*
* The following attributes are stored in the configuration and refer
* only to the active backup operation
* - LAST_DATASTORE_ID
* - LAST_BACKUP_ID
* - LAST_BACKUP_SIZE
*/
int parse(std::string& error_str, Template *tmpl);
/**
* @return true if the backup needs to include volatile disks
*/
bool do_volatile() const;
/**
* Set of functions to manipulate the LAST_* attributes referring to
* the active backup operation
*/
void last_datastore_id(int ds_id)
{
config.replace("LAST_DATASTORE_ID", ds_id);
}
void last_backup_id(const std::string& id)
{
config.replace("LAST_BACKUP_ID", id);
}
void last_backup_size(const std::string& size)
{
config.replace("LAST_BACKUP_SIZE", size);
}
/* ---------------------------------------------------------------------- */
int last_datastore_id() const
{
int dst;
config.get("LAST_DATASTORE_ID", dst);
return dst;
}
std::string last_backup_id() const
{
std::string id;
config.get("LAST_BACKUP_ID", id);
return id;
}
std::string last_backup_size() const
{
std::string sz;
config.get("LAST_BACKUP_SIZE", sz);
return sz;
}
/* ---------------------------------------------------------------------- */
void last_backup_clear()
{
config.erase("LAST_DATASTORE_ID");
config.erase("LAST_BACKUP_ID");
config.erase("LAST_BACKUP_SIZE");
}
/**
* @param riids Return the backups that need to be removed to conform
* to KEEP_LAST configuration
*/
void remove_last(std::set<int> &riids) const
{
int kl;
riids.clear();
if (!config.get("KEEP_LAST", kl) || kl == 0)
{
return;
}
auto iids = ids.get_collection();
auto it = iids.cbegin();
int to_remove = iids.size() - kl;
for (int i = 0 ; i < to_remove && it != iids.cend() ; ++i, ++it)
{
riids.insert(*it);
}
}
/**
* Adds / deletes a backup from the list. Each backup is represented by
* an image in the backup datastore. The list holds the ID's of the images
*
* @return 0 on success -1 if an error adding (already present) or deleting
* (not present) occurred
*/
int add(int id)
{
return ids.add(id);
}
int del(int id)
{
return ids.del(id);
}
private:
/**
* Text representation of the backup information of the VM
*/
Template config;
/**
* Backups of the VM as a collection of Image ID
*/
ObjectCollection ids;
};
#endif /*BACKUPS_H_*/


@ -38,7 +38,8 @@ public:
{
IMAGE_DS = 0, /** < Standard datastore for disk images */
SYSTEM_DS = 1, /** < System datastore for disks of running VMs */
FILE_DS = 2 /** < File datastore for context, kernel, initrd files */
FILE_DS = 2, /** < File datastore for context, kernel, initrd files */
BACKUP_DS = 3 /** < Backup datastore for VMs */
};
/**
@ -53,6 +54,7 @@ public:
case IMAGE_DS: return "IMAGE_DS" ; break;
case SYSTEM_DS: return "SYSTEM_DS" ; break;
case FILE_DS: return "FILE_DS" ; break;
case BACKUP_DS: return "BACKUP_DS" ; break;
default: return "";
}
};


@ -455,6 +455,19 @@ public:
int detach_sg(int vid, int nicid, int sgid,
const RequestAttributes& ra, std::string& error_str);
/**
* Backup a VM
*
* @param vid the VM id
* @param bck_ds_is the ID of the datastore to save the backup
* @param ra information about the API call request
* @param error_str Error reason, if any
*
* @return 0 on success, -1 otherwise
*/
int backup(int vid, int bck_ds_id,
const RequestAttributes& ra, std::string& error_str);
//--------------------------------------------------------------------------
// DM Actions associated with a VM state transition
//--------------------------------------------------------------------------


@ -40,7 +40,8 @@ public:
DATABLOCK = 2, /** < User persistent data device */
KERNEL = 3, /** < Kernel files */
RAMDISK = 4, /** < Initrd files */
CONTEXT = 5 /** < Context files */
CONTEXT = 5, /** < Context files */
BACKUP = 6, /** < VM Backup reference */
};
/**
@ -58,6 +59,7 @@ public:
case KERNEL: return "KERNEL" ; break;
case RAMDISK: return "RAMDISK" ; break;
case CONTEXT: return "CONTEXT" ; break;
case BACKUP: return "BACKUP" ; break;
default: return "";
}
};


@ -225,6 +225,20 @@ public:
*/
int delete_image(int iid, std::string& error_str);
/**
* Restores a backup image restoring the associated disk images and VM
* template.
* @param iid id of the backup image
* @param dst_ds_id destination ds where the images will be restored
* @param opts XML encoded options for the restore operation
*
* @param result string with objects ids or error reason
*
* @return 0 on success
*/
int restore_image(int iid, int dst_ds_id, const std::string& opts,
std::string& result);
/**
* Gets the size of an image by calling the STAT action of the associated
* datastore driver.
@ -375,67 +389,35 @@ private:
// -------------------------------------------------------------------------
// Protocol implementation, processing messages from driver
// -------------------------------------------------------------------------
/**
*
*/
static void _undefined(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _stat(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _cp(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _clone(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _mkfs(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _rm(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _monitor(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _snap_delete(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _snap_revert(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _snap_flatten(std::unique_ptr<image_msg_t> msg);
/**
*
*/
void _restore(std::unique_ptr<image_msg_t> msg);
static void _log(std::unique_ptr<image_msg_t> msg);
/**
* This function is executed periodically to monitor Datastores.
* This function is executed periodically to monitor Datastores and
* check sync actions
*/
void timer_action();
};
#endif /*IMAGE_MANAGER_H*/


@ -28,6 +28,7 @@ class ImageManager;
class ClusterPool;
class HostPool;
class ImagePool;
class DatastorePool;
class SecurityGroupPool;
class VirtualMachinePool;
class VirtualMachine;
@ -145,6 +146,8 @@ public:
void trigger_resize_success(int vid);
void trigger_resize_failure(int vid);
void trigger_backup_success(int vid);
void trigger_backup_failure(int vid);
// -------------------------------------------------------------------------
// External Actions, triggered by user requests
// -------------------------------------------------------------------------
@ -202,6 +205,11 @@ private:
*/
ImagePool * ipool = nullptr;
/**
* Pointer to the Datastore Pool, to access images
*/
DatastorePool * dspool = nullptr;
/**
* Pointer to the SecurityGroup Pool
*/


@ -160,7 +160,10 @@ public:
void finalize()
{
trigger([&] {
NebulaLog::info("Lis", "Stopping " + name);
if (!name.empty())
{
NebulaLog::info("Lis", "Stopping " + name);
}
finalize_action();


@ -339,6 +339,28 @@ namespace one_util
*/
std::string uuid();
/**
* Reads a generic value from string that supports operator >>.
* @param str Input string
* @param value Numeric value converted from the str, undefined if
* the method fails
* @return true on success, false otherwise
*/
template <class T>
bool str_cast(const std::string str, T& value)
{
std::istringstream iss(str);
iss >> value;
if (iss.fail() || !iss.eof())
{
return false;
}
return true;
}
} // namespace one_util
#endif /* _NEBULA_UTIL_H_ */


@ -235,6 +235,13 @@ public:
int get_nodes(const std::string& xpath_expr,
std::vector<xmlNodePtr>& content) const;
/**
* Count number of nodes matching a given xpath_expr
* @param xpath_expr the Xpath for the elements
* @return the number of nodes found
*/
int count_nodes(const std::string& xpath_expr) const;
/**
* Adds a copy of the node as a child of the node in the xpath expression.
* The source node must be cleaned by the caller.


@ -59,6 +59,7 @@ enum class ImageManagerMessages : unsigned short int
SNAP_DELETE,
SNAP_REVERT,
SNAP_FLATTEN,
RESTORE,
LOG,
ENUM_MAX
};
@ -173,6 +174,7 @@ enum class VMManagerMessages : unsigned short int
DRIVER_CANCEL,
LOG,
RESIZE,
BACKUP,
ENUM_MAX
};


@ -182,6 +182,27 @@ protected:
RequestAttributes& att) override;
};
/* ------------------------------------------------------------------------- */
/* ------------------------------------------------------------------------- */
class ImageRestore : public RequestManagerImage
{
public:
ImageRestore():
RequestManagerImage("one.image.restore",
"Restores a VM backup", "A:siis")
{
auth_op = AuthRequest::USE;
};
~ImageRestore(){};
protected:
void request_execute(xmlrpc_c::paramList const& _paramList,
RequestAttributes& att) override;
};
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */


@ -625,4 +625,23 @@ protected:
RequestAttributes& ra) override;
};
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
class VirtualMachineBackup : public RequestManagerVirtualMachine
{
public:
VirtualMachineBackup():
RequestManagerVirtualMachine("one.vm.backup",
"Creates a new backup image for the virtual machine",
"A:sii")
{
vm_action = VMActions::BACKUP_ACTION;
}
protected:
void request_execute(xmlrpc_c::paramList const& pl,
RequestAttributes& ra) override;
};
#endif


@ -48,6 +48,11 @@ public:
virtual ~SchedAction(){};
int action_id()
{
return get_id();
}
/**
* Returns the REPEAT value of the SCHED_ACTION
* @param r repeat WEEKLY, MONTHLY, YEARLY or HOURLY


@ -71,9 +71,9 @@ public:
/**
* Wait for the AuthRequest to be completed
*/
void wait()
void wait(time_t tout = 90)
{
time_out = time(0) + 90;//Requests will expire in 1.5 minutes
time_out = time(0) + tout;//Requests will expire in 1.5 minutes
loop();
}


@ -162,6 +162,18 @@ public:
VirtualMachine * vm,
const VirtualMachineDisk * disk,
std::ostream& xfr);
/**
* Generate backup commands for each VM disk
* @param vm
* @param xfr stream to include the command.
* @param os describing error if any
*
* @return 0 on success
*/
int backup_transfer_commands(
VirtualMachine * vm,
std::ostream& xfr);
private:
/**
* Pointer to the Virtual Machine Pool, to access VMs


@ -77,7 +77,8 @@ public:
ALIAS_ATTACH_ACTION = 46, // "one.vm.attachnic"
ALIAS_DETACH_ACTION = 47, // "one.vm.detachnic"
POFF_MIGRATE_ACTION = 48, // "one.vm.migrate"
POFF_HARD_MIGRATE_ACTION = 49 // "one.vm.migrate"
POFF_HARD_MIGRATE_ACTION = 49, // "one.vm.migrate"
BACKUP_ACTION = 50 // "one.vm.backup"
};
static std::string action_to_str(Action action);


@ -25,6 +25,7 @@
#include "History.h"
#include "Image.h"
#include "NebulaLog.h"
#include "Backups.h"
#include <time.h>
#include <set>
@ -138,7 +139,9 @@ public:
HOTPLUG_NIC_POWEROFF = 65,
HOTPLUG_RESIZE = 66,
HOTPLUG_SAVEAS_UNDEPLOYED = 67,
HOTPLUG_SAVEAS_STOPPED = 68
HOTPLUG_SAVEAS_STOPPED = 68,
BACKUP = 69,
BACKUP_POWEROFF = 70
};
/**
@ -1190,7 +1193,7 @@ public:
* @param err description if any
* @param append true append, false replace
*
* @return 0 on success
* @return -1 (error), 0 (context change), 1 (no context changed)
*/
int updateconf(VirtualMachineTemplate* tmpl, std::string &err, bool append);
@ -1726,6 +1729,23 @@ public:
const std::string& sched_template,
std::string& error);
// ------------------------------------------------------------------------
// Backup related functions
// ------------------------------------------------------------------------
/**
*
*/
void max_backup_size(Template &ds_quota)
{
disks.backup_size(ds_quota, _backups.do_volatile());
}
Backups& backups()
{
return _backups;
}
private:
// -------------------------------------------------------------------------
@ -1855,6 +1875,11 @@ private:
*/
Log * _log;
/**
*
*/
Backups _backups;
// *************************************************************************
// DataBase implementation (Private)
// *************************************************************************


@ -780,6 +780,15 @@ public:
void delete_non_persistent_snapshots(Template &vm_quotas,
std::vector<Template *> &ds_quotas);
/* ---------------------------------------------------------------------- */
/* BACKUP interface */
/* ---------------------------------------------------------------------- */
/** Returns upper limit of the disk size needed to do a VM backup
* @param ds_quota The Datastore quota
*/
void backup_size(Template &ds_quota, bool do_volatile);
/**
* Marshall disks in XML format with just essential information
* @param xml string to write the disk XML description


@ -302,6 +302,11 @@ private:
*/
void _log(std::unique_ptr<vm_msg_t> msg);
/**
*
*/
void _backup(std::unique_ptr<vm_msg_t> msg);
/**
*
*/
@ -532,6 +537,13 @@ public:
* @param vid the id of the VM.
*/
void trigger_resize(int vid);
/**
* Create backup for the VM
*
* @param vid the id of the VM.
*/
void trigger_backup(int vid);
};
#endif /*VIRTUAL_MACHINE_MANAGER_H*/


@ -570,6 +570,19 @@ private:
write_drv(VMManagerMessages::UPDATESG, oid, drv_msg);
}
/**
* Sends a backup create request to the MAD:
* "BACKUP ID XML_DRV_MSG"
* @param oid the virtual machine id.
* @param drv_msg xml data for the mad operation
*/
void backup(
const int oid,
const std::string& drv_msg) const
{
write_drv(VMManagerMessages::BACKUP, oid, drv_msg);
}
/**
*
*/


@ -322,11 +322,7 @@ LIB_DIRS="$LIB_LOCATION/ruby \
$LIB_LOCATION/onecfg/lib/config/type \
$LIB_LOCATION/onecfg/lib/config/type/augeas \
$LIB_LOCATION/onecfg/lib/config/type/yaml \
$LIB_LOCATION/onecfg/lib/patch \
$LIB_LOCATION/ruby/onevmdump \
$LIB_LOCATION/ruby/onevmdump/lib \
$LIB_LOCATION/ruby/onevmdump/lib/exporters \
$LIB_LOCATION/ruby/onevmdump/lib/restorers"
$LIB_LOCATION/onecfg/lib/patch"
VAR_DIRS="$VAR_LOCATION/remotes \
$VAR_LOCATION/remotes/etc \
@ -477,6 +473,7 @@ VAR_DIRS="$VAR_LOCATION/remotes \
$VAR_LOCATION/remotes/vnm/hooks/clean \
$VAR_LOCATION/remotes/tm/ \
$VAR_LOCATION/remotes/tm/dummy \
$VAR_LOCATION/remotes/tm/lib \
$VAR_LOCATION/remotes/tm/shared \
$VAR_LOCATION/remotes/tm/fs_lvm \
$VAR_LOCATION/remotes/tm/fs_lvm_ssh \
@ -496,6 +493,8 @@ VAR_DIRS="$VAR_LOCATION/remotes \
$VAR_LOCATION/remotes/datastore/ceph \
$VAR_LOCATION/remotes/datastore/dev \
$VAR_LOCATION/remotes/datastore/vcenter \
$VAR_LOCATION/remotes/datastore/iscsi_libvirt \
$VAR_LOCATION/remotes/datastore/rsync \
$VAR_LOCATION/remotes/market \
$VAR_LOCATION/remotes/market/http \
$VAR_LOCATION/remotes/market/one \
@ -505,7 +504,6 @@ VAR_DIRS="$VAR_LOCATION/remotes \
$VAR_LOCATION/remotes/market/turnkeylinux \
$VAR_LOCATION/remotes/market/dockerhub \
$VAR_LOCATION/remotes/market/docker_registry \
$VAR_LOCATION/remotes/datastore/iscsi_libvirt \
$VAR_LOCATION/remotes/auth \
$VAR_LOCATION/remotes/auth/plain \
$VAR_LOCATION/remotes/auth/ssh \
@ -699,6 +697,7 @@ INSTALL_FILES=(
VMM_EXEC_ONE_SCRIPTS:$VAR_LOCATION/remotes/vmm/one
VMM_EXEC_EQUINIX_SCRIPTS:$VAR_LOCATION/remotes/vmm/equinix
TM_FILES:$VAR_LOCATION/remotes/tm
TM_LIB_FILES:$VAR_LOCATION/remotes/tm/lib
TM_SHARED_FILES:$VAR_LOCATION/remotes/tm/shared
TM_FS_LVM_FILES:$VAR_LOCATION/remotes/tm/fs_lvm
TM_FS_LVM_ETC_FILES:$VAR_LOCATION/remotes/etc/tm/fs_lvm/fs_lvm.conf
@ -720,6 +719,7 @@ INSTALL_FILES=(
DATASTORE_DRIVER_DEV_SCRIPTS:$VAR_LOCATION/remotes/datastore/dev
DATASTORE_DRIVER_VCENTER_SCRIPTS:$VAR_LOCATION/remotes/datastore/vcenter
DATASTORE_DRIVER_ISCSI_SCRIPTS:$VAR_LOCATION/remotes/datastore/iscsi_libvirt
DATASTORE_DRIVER_RSYNC_SCRIPTS:$VAR_LOCATION/remotes/datastore/rsync
DATASTORE_DRIVER_ETC_SCRIPTS:$VAR_LOCATION/remotes/etc/datastore
MARKETPLACE_DRIVER_HTTP_SCRIPTS:$VAR_LOCATION/remotes/market/http
MARKETPLACE_DRIVER_ETC_HTTP_SCRIPTS:$VAR_LOCATION/remotes/etc/market/http
@ -757,8 +757,6 @@ INSTALL_FILES=(
INSTALL_GEMS_SHARE_FILES:$SHARE_LOCATION
ONETOKEN_SHARE_FILE:$SHARE_LOCATION
FOLLOWER_CLEANUP_SHARE_FILE:$SHARE_LOCATION
PRE_CLEANUP_SHARE_FILE:$SHARE_LOCATION
BACKUP_VMS_SHARE_FILE:$SHARE_LOCATION
HOOK_AUTOSTART_FILES:$VAR_LOCATION/remotes/hooks/autostart
HOOK_FT_FILES:$VAR_LOCATION/remotes/hooks/ft
HOOK_RAFT_FILES:$VAR_LOCATION/remotes/hooks/raft
@ -778,10 +776,6 @@ INSTALL_FILES=(
CONTEXT_SHARE:$SHARE_LOCATION/context
DOCKERFILE_TEMPLATE:$SHARE_LOCATION/dockerhub
DOCKERFILES_TEMPLATES:$SHARE_LOCATION/dockerhub/dockerfiles
ONEVMDUMP_FILES:$LIB_LOCATION/ruby/onevmdump
ONEVMDUMP_LIB_FILES:$LIB_LOCATION/ruby/onevmdump/lib
ONEVMDUMP_LIB_EXPORTERS_FILES:$LIB_LOCATION/ruby/onevmdump/lib/exporters
ONEVMDUMP_LIB_RESTORERS_FILES:$LIB_LOCATION/ruby/onevmdump/lib/restorers
)
INSTALL_CLIENT_FILES=(
@ -994,7 +988,6 @@ BIN_FILES="src/nebula/oned \
src/cli/onelog \
src/cli/oneirb \
src/onedb/onedb \
src/onevmdump/onevmdump \
share/scripts/qemu-kvm-one-gen \
share/scripts/one"
@ -1083,6 +1076,7 @@ MADS_LIB_FILES="src/mad/sh/madcommon.sh \
src/authm_mad/one_auth_mad.rb \
src/authm_mad/one_auth_mad \
src/datastore_mad/one_datastore.rb \
src/datastore_mad/one_datastore_exec.rb \
src/datastore_mad/one_datastore \
src/market_mad/one_market.rb \
src/market_mad/one_market \
@ -1888,6 +1882,10 @@ IPAM_DRIVER_EC2_SCRIPTS="src/ipamm_mad/remotes/aws/register_address_range \
TM_FILES="src/tm_mad/tm_common.sh"
TM_LIB_FILES="src/tm_mad/lib/kvm.rb \
src/tm_mad/lib/tm_action.rb \
src/tm_mad/lib/backup.rb"
TM_SHARED_FILES="src/tm_mad/shared/clone \
src/tm_mad/shared/clone.ssh \
src/tm_mad/shared/delete \
@ -1914,7 +1912,13 @@ TM_SHARED_FILES="src/tm_mad/shared/clone \
src/tm_mad/shared/snap_revert.ssh \
src/tm_mad/shared/cpds \
src/tm_mad/shared/cpds.ssh \
src/tm_mad/shared/resize"
src/tm_mad/shared/resize \
src/tm_mad/shared/prebackup_live \
src/tm_mad/shared/prebackup \
src/tm_mad/shared/postbackup_live \
src/tm_mad/shared/postbackup"
TM_QCOW2_FILES="${TM_SHARED_FILES}"
TM_FS_LVM_FILES="src/tm_mad/fs_lvm/activate \
src/tm_mad/fs_lvm/clone \
@ -1934,7 +1938,11 @@ TM_FS_LVM_FILES="src/tm_mad/fs_lvm/activate \
src/tm_mad/fs_lvm/snap_revert \
src/tm_mad/fs_lvm/failmigrate \
src/tm_mad/fs_lvm/delete \
src/tm_mad/fs_lvm/resize"
src/tm_mad/fs_lvm/resize \
src/tm_mad/fs_lvm/prebackup_live \
src/tm_mad/fs_lvm/prebackup \
src/tm_mad/fs_lvm/postbackup_live \
src/tm_mad/fs_lvm/postbackup"
TM_FS_LVM_ETC_FILES="src/tm_mad/fs_lvm/fs_lvm.conf"
@ -1956,35 +1964,11 @@ TM_FS_LVM_SSH_FILES="src/tm_mad/fs_lvm_ssh/activate \
src/tm_mad/fs_lvm_ssh/snap_revert \
src/tm_mad/fs_lvm_ssh/failmigrate \
src/tm_mad/fs_lvm_ssh/delete \
src/tm_mad/fs_lvm_ssh/resize"
TM_QCOW2_FILES="src/tm_mad/qcow2/clone \
src/tm_mad/qcow2/clone.ssh \
src/tm_mad/qcow2/delete \
src/tm_mad/qcow2/ln \
src/tm_mad/qcow2/ln.ssh \
src/tm_mad/qcow2/monitor \
src/tm_mad/qcow2/mkswap \
src/tm_mad/qcow2/mkimage \
src/tm_mad/qcow2/mv \
src/tm_mad/qcow2/mv.ssh \
src/tm_mad/qcow2/context \
src/tm_mad/qcow2/premigrate \
src/tm_mad/qcow2/postmigrate \
src/tm_mad/qcow2/failmigrate \
src/tm_mad/qcow2/mvds \
src/tm_mad/qcow2/mvds.ssh \
src/tm_mad/qcow2/snap_create \
src/tm_mad/qcow2/snap_create.ssh \
src/tm_mad/qcow2/snap_create_live \
src/tm_mad/qcow2/snap_create_live.ssh \
src/tm_mad/qcow2/snap_delete \
src/tm_mad/qcow2/snap_delete.ssh \
src/tm_mad/qcow2/snap_revert \
src/tm_mad/qcow2/snap_revert.ssh \
src/tm_mad/qcow2/cpds \
src/tm_mad/qcow2/cpds.ssh \
src/tm_mad/qcow2/resize"
src/tm_mad/fs_lvm_ssh/resize \
src/tm_mad/fs_lvm_ssh/prebackup_live \
src/tm_mad/fs_lvm_ssh/prebackup \
src/tm_mad/fs_lvm_ssh/postbackup_live \
src/tm_mad/fs_lvm_ssh/postbackup"
TM_SSH_FILES="src/tm_mad/ssh/clone \
src/tm_mad/ssh/clone.replica \
@ -2008,7 +1992,11 @@ TM_SSH_FILES="src/tm_mad/ssh/clone \
src/tm_mad/ssh/cpds \
src/tm_mad/ssh/resize \
src/tm_mad/ssh/ssh_utils.sh \
src/tm_mad/ssh/recovery_snap_create_live"
src/tm_mad/ssh/recovery_snap_create_live \
src/tm_mad/ssh/prebackup_live \
src/tm_mad/ssh/prebackup \
src/tm_mad/ssh/postbackup_live \
src/tm_mad/ssh/postbackup"
TM_SSH_ETC_FILES="src/tm_mad/ssh/sshrc"
@ -2054,7 +2042,11 @@ TM_CEPH_FILES="src/tm_mad/ceph/clone \
src/tm_mad/ceph/monitor \
src/tm_mad/ceph/mkswap \
src/tm_mad/ceph/resize \
src/tm_mad/ceph/resize.ssh"
src/tm_mad/ceph/resize.ssh \
src/tm_mad/ceph/prebackup_live \
src/tm_mad/ceph/prebackup \
src/tm_mad/ceph/postbackup_live \
src/tm_mad/ceph/postbackup"
TM_DEV_FILES="src/tm_mad/dev/clone \
src/tm_mad/dev/ln \
@ -2189,6 +2181,19 @@ DATASTORE_DRIVER_ISCSI_SCRIPTS="src/datastore_mad/remotes/iscsi_libvirt/cp \
src/datastore_mad/remotes/iscsi_libvirt/snap_flatten \
src/datastore_mad/remotes/iscsi_libvirt/clone"
DATASTORE_DRIVER_RSYNC_SCRIPTS="src/datastore_mad/remotes/rsync/cp \
src/datastore_mad/remotes/rsync/mkfs \
src/datastore_mad/remotes/rsync/stat \
src/datastore_mad/remotes/rsync/clone \
src/datastore_mad/remotes/rsync/monitor \
src/datastore_mad/remotes/rsync/snap_delete \
src/datastore_mad/remotes/rsync/snap_revert \
src/datastore_mad/remotes/rsync/snap_flatten \
src/datastore_mad/remotes/rsync/rm \
src/datastore_mad/remotes/rsync/backup \
src/datastore_mad/remotes/rsync/restore \
src/datastore_mad/remotes/rsync/export"
DATASTORE_DRIVER_ETC_SCRIPTS="src/datastore_mad/remotes/datastore.conf"
#-------------------------------------------------------------------------------
@ -2249,22 +2254,6 @@ ONEDB_FILES="src/onedb/fsck.rb \
ONEDB_PATCH_FILES="src/onedb/patches/4.14_monitoring.rb \
src/onedb/patches/history_times.rb"
#-------------------------------------------------------------------------------
# onevmdump command, to be installed under $LIB_LOCATION
#-------------------------------------------------------------------------------
ONEVMDUMP_FILES="src/onevmdump/onevmdump.rb"
ONEVMDUMP_LIB_FILES="src/onevmdump/lib/command.rb \
src/onevmdump/lib/commons.rb"
ONEVMDUMP_LIB_EXPORTERS_FILES="src/onevmdump/lib/exporters/base.rb \
src/onevmdump/lib/exporters/file.rb \
src/onevmdump/lib/exporters/lv.rb \
src/onevmdump/lib/exporters/rbd.rb"
ONEVMDUMP_LIB_RESTORERS_FILES="src/onevmdump/lib/restorers/base.rb"
#-------------------------------------------------------------------------------
# Configuration files for OpenNebula, to be installed under $ETC_LOCATION
#-------------------------------------------------------------------------------
@@ -2368,10 +2357,6 @@ ONETOKEN_SHARE_FILE="share/onetoken/onetoken.sh"
FOLLOWER_CLEANUP_SHARE_FILE="share/hooks/raft/follower_cleanup"
PRE_CLEANUP_SHARE_FILE="share/pkgs/services/systemd/pre_cleanup"
BACKUP_VMS_SHARE_FILE="share/scripts/backup_vms"
#-------------------------------------------------------------------------------
# Start script files, to be installed under $SHARE_LOCATION/start-scripts
#-------------------------------------------------------------------------------

View File

@@ -288,6 +288,28 @@
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="BACKUPS">
<xs:complexType>
<xs:sequence>
<xs:element name="BACKUP_CONFIG" minOccurs="1" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="BACKUP_VOLATILE" type="xs:string" minOccurs="0" maxOccurs="1"/>
<xs:element name="FS_FREEZE" type="xs:string" minOccurs="0" maxOccurs="1"/>
<xs:element name="KEEP_LAST" type="xs:string" minOccurs="0" maxOccurs="1"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="BACKUP_IDS" minOccurs="1" maxOccurs="1">
<xs:complexType>
<xs:sequence>
<xs:element name="ID" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="SNAPSHOTS" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>

View File

@@ -657,12 +657,13 @@ TM_MAD = [
# -t number of threads, i.e. number of repo operations at the same time
# -d datastore mads separated by commas
# -s system datastore tm drivers, used to monitor shared system ds.
# -b backup datastore drivers
# -w Timeout in seconds to execute external commands (default unlimited)
#*******************************************************************************
DATASTORE_MAD = [
EXECUTABLE = "one_datastore",
ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter,restic -s shared,ssh,ceph,fs_lvm,fs_lvm_ssh,qcow2,vcenter"
]
#*******************************************************************************
@@ -804,7 +805,7 @@ DEFAULT_UMASK = 177
# - resched, includes resched and unresched actions
#******************************************************************************
VM_ADMIN_OPERATIONS = "migrate, delete, recover, retry, deploy, resched"
VM_ADMIN_OPERATIONS = "migrate, delete, recover, retry, deploy, resched, backup"
VM_MANAGE_OPERATIONS = "undeploy, hold, release, stop, suspend, resume, reboot,
poweroff, disk-attach, nic-attach, disk-snapshot, terminate, disk-resize,
@@ -1030,7 +1031,8 @@ USER_ENCRYPTED_ATTR = "SSH_PASSPHRASE"
# CLUSTER_ENCRYPTED_ATTR = ""
# VNET_ENCRYPTED_ATTR = ""
# DATASTORE_ENCRYPTED_ATTR = ""
DATASTORE_ENCRYPTED_ATTR = "RESTIC_PASSWORD"
#*******************************************************************************
# Inherited Attributes Configuration
@@ -1279,6 +1281,10 @@ DS_MAD_CONF = [
MARKETPLACE_ACTIONS = "export"
]
DS_MAD_CONF = [
NAME = "restic", REQUIRED_ATTRS = "RESTIC_PASSWORD", PERSISTENT_ONLY = "YES"
]
#*******************************************************************************
# MarketPlace Driver Behavior Configuration
#*******************************************************************************

View File

@@ -52,7 +52,6 @@ COMMANDS=(
'oneflow' 'Manage oneFlow Services'
'oneflow-template' 'Manage oneFlow Templates'
'onevmdump' 'Dumps VM content'
'onelog' 'Access to OpenNebula services log files'
'oneirb' 'Opens an irb session'

View File

@@ -348,18 +348,24 @@ class OneImageHelper < OpenNebulaHelper::OneHelper
CLIHelper.print_header(str_h1 % 'IMAGE TEMPLATE', false)
puts image.template_str
puts
CLIHelper.print_header('VIRTUAL MACHINES', false)
puts
vms=image.retrieve_elements('VMS/ID')
return unless vms
vms.map! {|e| e.to_i }
onevm_helper=OneVMHelper.new
onevm_helper.client=@client
onevm_helper.list_pool({ :ids=>vms, :no_pager => true }, false)
if image.type_str.casecmp('backup').zero?
puts format(str, 'BACKUP OF VM', vms[0])
else
puts
CLIHelper.print_header('VIRTUAL MACHINES', false)
puts
vms.map! {|e| e.to_i }
onevm_helper=OneVMHelper.new
onevm_helper.client=@client
onevm_helper.list_pool({ :ids=>vms, :no_pager => true },
false)
end
end
def format_snapshots(image)

View File

@@ -117,7 +117,21 @@ class OneVMHelper < OpenNebulaHelper::OneHelper
:large => '--schedule TIME',
:description => 'Schedules this action to be executed after' \
'the given time. For example: onevm resume 0 --schedule "09/23 14:15"',
:format => Time
:format => String,
:proc => lambda {|o, options|
if o[0] == '+'
options[:schedule] = o
elsif o == 'now'
options[:schedule] = Time.now.to_i
else
begin
require 'time'
options[:schedule] = Time.parse(o).to_i
rescue StandardError
STDERR.puts "Error parsing time spec: #{o}"
exit(-1)
end
end
}
}
WEEKLY = {
@@ -408,10 +422,13 @@ class OneVMHelper < OpenNebulaHelper::OneHelper
# Verbose by default
options[:verbose] = true
perform_actions(
ids, options,
"#{action} scheduled at #{options[:schedule]}"
) do |vm|
message = if options[:schedule].class == Integer
"#{action} scheduled at #{Time.at(options[:schedule])}"
else
"#{action} scheduled after #{options[:schedule]}s from start"
end
perform_actions( ids, options, message) do |vm|
str_periodic = ''
@@ -440,20 +457,11 @@ class OneVMHelper < OpenNebulaHelper::OneHelper
str_periodic << ', END_TYPE = 0'
end
sched = options[:schedule]
# If the action is scheduled a specific amount of time after VM start,
# we should preserve the + symbol
if ((sched.is_a? String) && !sched.include?('+')) ||
!(sched.is_a? String)
sched = sched.to_i
end
tmp_str = "SCHED_ACTION = ["
tmp_str << "ACTION = #{action}, "
tmp_str << "WARNING = #{warning}," if warning
tmp_str << "ARGS = \"#{options[:args]}\"," if options[:args]
tmp_str << "TIME = #{sched}"
tmp_str << "TIME = #{options[:schedule]}"
tmp_str << str_periodic << ']'
vm.sched_action_add(tmp_str)
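For a scheduled backup, the SCHED_ACTION template assembled above ends up looking like the string below. This is a sketch with illustrative values: `+3600` is a relative time spec (one hour after VM start) and `100` is a hypothetical target datastore ID passed through ARGS.

```shell
# Sketch of the SCHED_ACTION template built by the CLI (example values)
action="backup"
sched="+3600"   # relative time spec; the '+' must be preserved
args="100"      # hypothetical target datastore ID
tmp_str="SCHED_ACTION = [ACTION = ${action}, ARGS = \"${args}\", TIME = ${sched}]"
echo "$tmp_str"
```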
@@ -1342,42 +1350,31 @@ class OneVMHelper < OpenNebulaHelper::OneHelper
end
str_end unless d.nil?
end
column :DONE, '', :adjust => true do |d|
OpenNebulaHelper.time_to_str(d['DONE'], false) \
unless d.nil?
end
column :MESSAGE, '', :size => 35 do |d|
d['MESSAGE'] ? d['MESSAGE'] : '-'
end
column :CHARTER, '', :left, :adjust, :size => 15 do |d|
t1 = Time.now
t2 = d['TIME'].to_i
t2 += vm['STIME'].to_i unless d['TIME'] =~ /^[0-9].*/
t2 = Time.at(t2)
days = ((t2 - t1) / (24 * 3600)).round(2)
hours = ((t2 - t1) / 3600).round(2)
minutes = ((t2 - t1) / 60).round(2)
if days > 1
show = "In #{days} days"
elsif days <= 1 && hours > 1
show = "In #{hours} hours"
elsif minutes > 0
show = "In #{minutes} minutes"
column :STATUS, '', :left, :size => 50 do |d|
if d['DONE'] && !d['REPEAT']
"Done on #{OpenNebulaHelper.time_to_str(d['DONE'], false)}"
elsif d['MESSAGE']
"Error! #{d['MESSAGE']}"
else
show = 'Already done'
end
t1 = Time.now
t2 = d['TIME'].to_i
t2 += vm['STIME'].to_i unless d['TIME'] =~ /^[0-9].*/
wrn = d['WARNING']
if !wrn.nil? && (t1 - vm['STIME'].to_i).to_i > wrn.to_i
"#{show} *"
else
show
t2 = Time.at(t2)
days = ((t2 - t1) / (24 * 3600)).round(2)
hours = ((t2 - t1) / 3600).round(2)
minutes = ((t2 - t1) / 60).round(2)
if days > 1
"Next in #{days} days"
elsif days <= 1 && hours > 1
"Next in #{hours} hours"
elsif minutes > 0
"Next in #{minutes} minutes"
else
"Overdue!"
end
end
end
end.show([vm_hash['VM']['TEMPLATE']['SCHED_ACTION']].flatten,
@@ -1388,6 +1385,8 @@ class OneVMHelper < OpenNebulaHelper::OneHelper
vm.delete_element('/VM/TEMPLATE/SCHED_ACTION')
end
print_backups(vm, vm_hash)
if vm.has_elements?('/VM/USER_TEMPLATE')
puts
@@ -1421,6 +1420,23 @@ class OneVMHelper < OpenNebulaHelper::OneHelper
puts vm.template_str
end
def print_backups(vm, vm_hash)
if vm.has_elements?('/VM/BACKUPS/BACKUP_CONFIG')
puts
CLIHelper.print_header('%-80s' % 'BACKUP CONFIGURATION', false)
puts vm.template_like_str('BACKUPS/BACKUP_CONFIG')
end
if vm.has_elements?('/VM/BACKUPS/BACKUP_IDS')
puts
CLIHelper.print_header('%-80s' % 'VM BACKUPS', false)
ids = [vm_hash['VM']['BACKUPS']['BACKUP_IDS']['ID']].flatten
puts format('IMAGE IDS: %s', ids.join(','))
end
end
def print_numa_nodes(numa_nodes)
puts
CLIHelper.print_header('NUMA NODES', false)

View File

@@ -94,6 +94,18 @@ CommandParser::CmdParser.new(ARGV) do
:description => 'Do not add context when building from Dockerfile'
}
NO_IP = {
:name => 'no_ip',
:large => '--no_ip',
:description => 'Do not keep NIC addresses (MAC, IP and IP6)'
}
NO_NIC = {
:name => 'no_nic',
:large => '--no_nic',
:description => 'Do not keep network mappings'
}
########################################################################
# Global Options
########################################################################
@@ -410,6 +422,41 @@ CommandParser::CmdParser.new(ARGV) do
end
end
restore_desc = <<-EOT.unindent
Restore a backup image. It will restore the associated VM template to the VM template pool and
the disk images to the selected image datastore.
EOT
command :restore,
restore_desc,
:imageid,
:options => [OneDatastoreHelper::DATASTORE, NO_NIC, NO_IP] do
helper.perform_action(args[0], options, 'vm backup restored') do |o|
if options[:datastore].nil?
STDERR.puts 'A datastore to restore the backup into is mandatory:'
STDERR.puts "\t -d datastore_id | name"
exit(-1)
end
restore_opts = ''
restore_opts << "NO_NIC=\"YES\"\n" if options[:no_nic]
restore_opts << "NO_IP=\"YES\"\n" if options[:no_ip]
rc = o.restore(options[:datastore].to_i, restore_opts)
if !OpenNebula.is_error?(rc)
ids = rc.split(' ')
puts "VM Template: #{ids[0]}" if ids[0]
puts "Images: #{ids[1..-1].join(' ')}" if ids.length > 1
else
puts rc.message
exit(-1)
end
end
end
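The `--no_nic` / `--no_ip` flags are passed to the core as a small template fragment. A sketch of what the command builds when both flags are given (the flag variables here are stand-ins for the parsed CLI options):

```shell
# Sketch: restore options template sent by `oneimage restore` (both flags assumed set)
no_nic="yes"
no_ip="yes"
restore_opts=""
[ -n "$no_nic" ] && restore_opts="${restore_opts}NO_NIC=\"YES\"\n"
[ -n "$no_ip" ]  && restore_opts="${restore_opts}NO_IP=\"YES\"\n"
printf "%b" "$restore_opts"
```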
list_desc = <<-EOT.unindent
Lists Images in the pool
EOT

View File

@@ -195,32 +195,6 @@ CommandParser::CmdParser.new(ARGV) do
:description => 'lock all actions'
}
LOGGER = {
:name => 'logger',
:large => '--logger logger',
:format => String,
:description => 'Set logger to STDOUT or FILE'
}
KEEP = {
:name => 'keep',
:large => '--keep-backup',
:description => 'Keep previous backup when creating a new one'
}
ONESHOT = {
:name => 'oneshot',
:large => '--oneshot',
:description => 'Take a snapshot of the VM without saving backup info'
}
MARKET = {
:name => 'market',
:large => '--market market_id',
:format => Integer,
:description => 'Market to save oneshot'
}
NIC_ID = {
:name => 'nic_id',
:large => '--nic-id nic_id',
@@ -1375,8 +1349,8 @@ CommandParser::CmdParser.new(ARGV) do
updateconf_desc = <<-EOT.unindent
Updates the configuration of a VM. Valid states are: running, pending,
failure, poweroff, undeploy, hold or cloning.
In running state only changes in CONTEXT take effect immediately,
other values may need a VM restart.
In running state only changes in CONTEXT and BACKUP_CONFIG take effect
immediately, other values may need a VM restart.
This command accepts a template file or opens an editor, the full list of
configuration attributes are:
@@ -1387,6 +1361,7 @@ CommandParser::CmdParser.new(ARGV) do
GRAPHICS = ["TYPE", "LISTEN", "PASSWD", "KEYMAP" ]
RAW = ["DATA", "DATA_VMX", "TYPE", "VALIDATE"]
CPU_MODEL = ["MODEL"]
BACKUP_CONFIG = ["FS_FREEZE", "DATASTORE_ID", "BACKUP_VOLATILE", "FREQUENCY_SECONDS"]
CONTEXT (any value, **variable substitution will be made**)
EOT
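A minimal sketch of a BACKUP_CONFIG section accepted by `onevm updateconf` (attribute values are illustrative; per the feature description, FS_FREEZE accepts qemu-agent, suspend or none):

```
BACKUP_CONFIG = [
  BACKUP_VOLATILE = "NO",
  FS_FREEZE = "QEMU-AGENT",
  KEEP_LAST = "3"
]
```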
@@ -1411,10 +1386,15 @@ CommandParser::CmdParser.new(ARGV) do
exit(-1)
end
backup = vm.template_like_str('BACKUPS', true,
'BACKUP_CONFIG')
template = vm.template_like_str('TEMPLATE', true,
'OS | FEATURES | INPUT | '\
'GRAPHICS | RAW | CONTEXT | '\
'CPU_MODEL')
template << "\n" << backup
template = OpenNebulaHelper.editor_input(template)
end
@@ -1532,108 +1512,40 @@ CommandParser::CmdParser.new(ARGV) do
end
backup_vm_desc = <<-EOT.unindent
Creates a VM backup and stores it in the marketplace
Creates a VM backup on the given datastore
States: RUNNING, POWEROFF
EOT
command :backup,
backup_vm_desc,
:vmid,
:options => [LOGGER, KEEP, ONESHOT, MARKET] do
require 'logger'
if options.key?(:oneshot) && !options.key?(:market)
STDERR.puts 'ERROR: no market given'
:options => [OneDatastoreHelper::DATASTORE,
OneVMHelper::SCHEDULE,
OneVMHelper::WEEKLY,
OneVMHelper::MONTHLY,
OneVMHelper::YEARLY,
OneVMHelper::HOURLY,
OneVMHelper::END_TIME] do
if options[:datastore].nil?
STDERR.puts 'A datastore to store the backup is mandatory:'
STDERR.puts "\t -d datastore_id | name"
exit(-1)
end
helper.perform_action(args[0], options, 'Backup') do |vm|
vm.extend(OpenNebula::VirtualMachineExt)
if !options[:schedule].nil?
options[:args] = options[:datastore]
# Read user options
if options[:verbose]
log_to = STDOUT
elsif !options[:logger].nil?
log_to = options[:logger]
end
helper.schedule_actions([args[0]], options, @comm_name)
else
keep = options.key?(:keep)
if log_to
logger = Logger.new(log_to)
format = '%Y-%m-%d %H:%M:%S'
logger.formatter = proc do |severity, datetime, _p, msg|
"#{datetime.strftime(format)} " \
"#{severity.ljust(5)} : #{msg}\n"
end
end
binfo = {}
binfo[:market] = options[:market]
if options.key?(:oneshot)
binfo[:name] = "VM #{vm.id} BACKUP - " \
"#{Time.now.strftime('%Y%m%d_%k%M')}"
binfo[:freq] = 1
binfo[:last] = Time.now.to_i - 100
end
begin
rc = vm.backup(keep, logger, binfo)
helper.perform_action(args[0], options, 'Backup') do |vm|
rc = vm.backup(options[:datastore])
if OpenNebula.is_error?(rc)
STDERR.puts rc.message
STDERR.puts "Error creating VM backup: #{rc.message}"
exit(-1)
else
0
end
rescue StandardError => e
STDERR.puts e
exit(-1)
end
end
end
restore_vm_desc = <<-EOT.unindent
Restores a VM from a previous backup
EOT
command :restore,
restore_vm_desc,
:vmid,
:options => [OneDatastoreHelper::DATASTORE, LOGGER] do
require 'logger'
unless options[:datastore]
STDERR.puts 'ERROR: no datastore given'
exit(-1)
end
helper.perform_action(args[0], options, 'Restore') do |vm|
vm.extend(OpenNebula::VirtualMachineExt)
# If logger is specified use it, if not use STDOUT
options[:logger].nil? ? log_to = STDOUT : log_to = options[:logger]
logger = Logger.new(log_to)
format = '%Y-%m-%d %H:%M:%S'
logger.formatter = proc do |severity, datetime, _p, msg|
"#{datetime.strftime(format)} #{severity.ljust(5)} : #{msg}\n"
end
begin
rc = vm.restore(options[:datastore], logger)
if OpenNebula.is_error?(rc)
STDERR.puts rc.message
exit(-1)
else
puts "ID: #{rc}"
0
end
rescue StandardError => e
STDERR.puts e
exit(-1)
end
end
end

View File

@@ -286,6 +286,10 @@ Datastore::DatastoreType Datastore::str_to_type(string& str_type)
{
dst = FILE_DS;
}
else if ( str_type == "BACKUP_DS" )
{
dst = BACKUP_DS;
}
return dst;
}
@@ -382,6 +386,12 @@ int Datastore::set_tm_mad(string &tm_mad, string &error_str)
string orph;
if (tm_mad.empty())
{
error_str = "No TM_MAD in template.";
return -1;
}
if ( Nebula::instance().get_tm_conf_attribute(tm_mad, vatt) != 0 )
{
goto error_conf;
@@ -423,7 +433,7 @@ int Datastore::set_tm_mad(string &tm_mad, string &error_str)
remove_template_attribute("LN_TARGET");
remove_template_attribute("CLONE_TARGET");
}
else
else if (type != BACKUP_DS)
{
string st = vatt->vector_value("TM_MAD_SYSTEM");
@@ -497,12 +507,15 @@ int Datastore::set_tm_mad(string &tm_mad, string &error_str)
remove_template_attribute("SHARED");
}
if ( vatt->vector_value("ALLOW_ORPHANS", orph) == -1 )
if ( type != BACKUP_DS )
{
orph = "NO";
}
if ( vatt->vector_value("ALLOW_ORPHANS", orph) == -1 )
{
orph = "NO";
}
replace_template_attribute("ALLOW_ORPHANS", orph);
replace_template_attribute("ALLOW_ORPHANS", orph);
}
return 0;
@@ -586,6 +599,7 @@ int Datastore::set_ds_disk_type(string& s_dt, string& error)
break;
case FILE_DS:
case BACKUP_DS:
disk_type = Image::FILE;
break;
}
@@ -597,6 +611,7 @@ int Datastore::set_ds_disk_type(string& s_dt, string& error)
add_template_attribute("DISK_TYPE", Image::disk_type_to_str(disk_type));
break;
case FILE_DS:
case BACKUP_DS:
break;
}
@@ -648,14 +663,16 @@ int Datastore::insert(SqlDB *db, string& error_str)
get_template_attribute("TM_MAD", tm_mad);
if ( tm_mad.empty() == true )
if ( type != BACKUP_DS )
{
goto error_empty_tm;
if (set_tm_mad(tm_mad, error_str) != 0)
{
goto error_common;
}
}
if (set_tm_mad(tm_mad, error_str) != 0)
else
{
goto error_common;
tm_mad = "-";
}
remove_template_attribute("BASE_PATH");
@@ -678,11 +695,6 @@ int Datastore::insert(SqlDB *db, string& error_str)
goto error_common;
}
if ( tm_mad.empty() == true )
{
goto error_empty_tm;
}
//--------------------------------------------------------------------------
// Set default SAFE_DIRS & RESTRICTED_DIRS if not set
//--------------------------------------------------------------------------
@@ -711,10 +723,6 @@ error_ds:
error_str = "No DS_MAD in template.";
goto error_common;
error_empty_tm:
error_str = "No TM_MAD in template.";
goto error_common;
error_common:
NebulaLog::log("DATASTORE", Log::ERROR, error_str);
return -1;
@@ -968,15 +976,11 @@ int Datastore::post_update_template(string& error_str)
new_ds_type = type;
}
/* ---------------------------------------------------------------------- */
/* Set the TYPE of the Datastore (class & template) */
/* ---------------------------------------------------------------------- */
if ( oid == DatastorePool::SYSTEM_DS_ID )
{
type = SYSTEM_DS;
}
else
else if ( type != BACKUP_DS ) // Do not change BACKUP DS types
{
type = new_ds_type;
}
@@ -1007,7 +1011,7 @@ int Datastore::post_update_template(string& error_str)
get_template_attribute("TM_MAD", new_tm_mad);
if ( !new_tm_mad.empty() )
if (!new_tm_mad.empty() && (type != BACKUP_DS))
{
// System DS are monitored by the TM mad, reset information
if ( type == SYSTEM_DS && new_tm_mad != tm_mad )

View File

@@ -75,7 +75,8 @@ class DatastoreDriver < OpenNebulaDriver
:monitor => "MONITOR",
:snap_delete => "SNAP_DELETE",
:snap_revert => "SNAP_REVERT",
:snap_flatten=> "SNAP_FLATTEN"
:snap_flatten=> "SNAP_FLATTEN",
:restore => "RESTORE"
}
# Default System datastores for OpenNebula, override in oned.conf
@@ -100,7 +101,8 @@ class DatastoreDriver < OpenNebulaDriver
ACTION[:monitor] => nil,
ACTION[:snap_delete] => nil,
ACTION[:snap_revert] => nil,
ACTION[:snap_flatten] => nil
ACTION[:snap_flatten] => nil,
ACTION[:restore] => nil
}
}.merge!(options)
@@ -135,6 +137,7 @@ class DatastoreDriver < OpenNebulaDriver
register_action(ACTION[:snap_delete].to_sym, method("snap_delete"))
register_action(ACTION[:snap_revert].to_sym, method("snap_revert"))
register_action(ACTION[:snap_flatten].to_sym, method("snap_flatten"))
register_action(ACTION[:restore].to_sym, method("restore"))
end
############################################################################
@@ -142,27 +145,27 @@
############################################################################
def cp(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :cp, "#{drv_message} #{id}")
end
def rm(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :rm, "#{drv_message} #{id}")
end
def mkfs(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :mkfs, "#{drv_message} #{id}")
end
def stat(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :stat, "#{drv_message} #{id}")
end
def clone(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :clone, "#{drv_message} #{id}")
end
@@ -172,20 +175,25 @@ class DatastoreDriver < OpenNebulaDriver
end
def snap_delete(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :snap_delete, "#{drv_message} #{id}")
end
def snap_revert(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :snap_revert, "#{drv_message} #{id}")
end
def snap_flatten(id, drv_message)
ds, sys = get_ds_type(drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :snap_flatten, "#{drv_message} #{id}")
end
def restore(id, drv_message)
ds, _sys = get_ds_type(drv_message)
do_image_action(id, ds, :restore, "#{drv_message} #{id}")
end
private
def is_available?(ds, id, action)

View File

@@ -14,38 +14,45 @@
# limitations under the License. #
#--------------------------------------------------------------------------- #
# Module containing common functions for Exporter and Restorer classes
module Commons
require 'DriverExecHelper'
private
# This class provides an execution context for datastore driver operations
class DatastoreExecDriver
def create_tmp_folder(base_path)
prefix = 'onevmdump'
include DriverExecHelper
# Create temporary folder
rc = @cmd.run('mktemp', '-d', '-p', base_path, "#{prefix}.XXX")
# Inits the driver
def initialize
initialize_helper('datastore', {})
unless rc[2].success?
raise "Error creating temporary directory: #{rc[1]}"
@drivers = Dir["#{@local_scripts_path}/*/"].map do |d|
d.split('/')[-1]
end
end
# Executes a datastore driver action: command = [ACTION, driver_name, args...]
# Returns [result, info] from the driver script execution
def do_datastore_action(id, command, stdin = nil)
cmd = command[0].downcase
ds = command[1]
args = command[2..-1].map {|e| Shellwords.escape(e) }.join(' ')
if !@drivers.include?(ds)
return RESULT[:failure], "Datastore Driver '#{ds}' not available"
end
# Return STDOUT
rc[0].strip
end
path = File.join(@local_scripts_path, ds, cmd)
def check_state(vm)
state = vm.state_str
lcm = vm.lcm_state_str
rc = LocalCommand.run("#{path} #{args}", log_method(id), stdin)
msg = "Invalid state: #{state}"
raise msg unless self.class::VALID_STATES.include?(state)
result, info = get_info_from_execution(rc)
msg = "Invalid LCM state: #{lcm}"
raise msg unless self.class::VALID_LCM_STATES.include?(lcm)
end
def running?
@vm.lcm_state_str == 'RUNNING' || @vm.lcm_state_str == 'BACKUP'
[result, info]
end
end

View File

@@ -162,6 +162,102 @@ function s3_env
CURRENT_DATE_ISO8601="${CURRENT_DATE_DAY}T$(date -u '+%H%M%S')Z"
}
# Get restic repo information from datastore template
# It generates the repo URL as: sftp:SFTP_USER@SFTP_SERVER:BASE_PATH
# Sets the following environment variables
# - RESTIC_REPOSITORY (replaces -r in restic command)
# - RESTIC_PASSWORD (password to access the repo)
function restic_env
{
XPATH="$DRIVER_PATH/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onedatastore show -x --decrypt $1 | $XPATH \
/DATASTORE/TEMPLATE/RESTIC_SFTP_SERVER \
/DATASTORE/TEMPLATE/RESTIC_SFTP_USER \
/DATASTORE/BASE_PATH \
/DATASTORE/TEMPLATE/RESTIC_PASSWORD \
/DATASTORE/TEMPLATE/RESTIC_IONICE \
/DATASTORE/TEMPLATE/RESTIC_NICE \
/DATASTORE/TEMPLATE/RESTIC_BWLIMIT \
/DATASTORE/TEMPLATE/RESTIC_CONNECTIONS)
SFTP_SERVER="${XPATH_ELEMENTS[j++]}"
SFTP_USER="${XPATH_ELEMENTS[j++]:-oneadmin}"
BASE_PATH="${XPATH_ELEMENTS[j++]}"
PASSWORD="${XPATH_ELEMENTS[j++]}"
IONICE="${XPATH_ELEMENTS[j++]}"
NICE="${XPATH_ELEMENTS[j++]}"
BWLIMIT="${XPATH_ELEMENTS[j++]}"
CONNECTIONS="${XPATH_ELEMENTS[j++]}"
export RESTIC_REPOSITORY="sftp:${SFTP_USER}@${SFTP_SERVER}:${BASE_PATH}"
export RESTIC_PASSWORD="${PASSWORD}"
RESTIC_ONE_PRECMD=""
if [ -n "${NICE}" ]; then
RESTIC_ONE_PRECMD="nice -n ${NICE} "
fi
if [ -n "${IONICE}" ]; then
RESTIC_ONE_PRECMD="${RESTIC_ONE_PRECMD}ionice -c2 -n ${IONICE} "
fi
if [ -x "/var/lib/one/remotes/datastore/restic/restic" ]; then
RESTIC_ONE_PATH="/var/lib/one/remotes/datastore/restic/restic"
elif [ -x "/var/tmp/one/datastore/restic/restic" ]; then
RESTIC_ONE_PATH="/var/tmp/one/datastore/restic/restic"
else
RESTIC_ONE_PATH="restic"
fi
RESTIC_ONE_CMD="${RESTIC_ONE_PRECMD}${RESTIC_ONE_PATH}"
if [ -n "${BWLIMIT}" ]; then
RESTIC_ONE_CMD="${RESTIC_ONE_CMD} --limit-upload ${BWLIMIT} --limit-download ${BWLIMIT}"
fi
if [ -n "${CONNECTIONS}" ]; then
RESTIC_ONE_CMD="${RESTIC_ONE_CMD} --option sftp.connections=${CONNECTIONS}"
fi
export RESTIC_ONE_CMD
}
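Assuming example datastore attributes, the variables exported by restic_env compose as below. This is a sketch: the user, host, path and NICE value are hypothetical, and the restic binary path is simplified to `restic`.

```shell
# Sketch of restic_env's exports with example values (hypothetical host/path)
SFTP_USER="oneadmin"
SFTP_SERVER="backups.example.com"
BASE_PATH="/var/lib/one/datastores/100"
NICE="10"
RESTIC_REPOSITORY="sftp:${SFTP_USER}@${SFTP_SERVER}:${BASE_PATH}"
RESTIC_ONE_PRECMD="nice -n ${NICE} "
RESTIC_ONE_CMD="${RESTIC_ONE_PRECMD}restic"
echo "$RESTIC_REPOSITORY"
echo "$RESTIC_ONE_CMD"
```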
# Get rsync repo information from DS template
# Sets the following variables:
# - RSYNC_CMD: ssh user@host 'cat BASE_PATH (the caller appends the file path and the closing quote)
function rsync_env
{
XPATH="$DRIVER_PATH/xpath.rb --stdin"
unset i j XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <(onedatastore show -x --decrypt $1 | $XPATH \
/DATASTORE/TEMPLATE/RSYNC_HOST \
/DATASTORE/TEMPLATE/RSYNC_USER \
/DATASTORE/BASE_PATH)
RSYNC_HOST="${XPATH_ELEMENTS[j++]}"
RSYNC_USER="${XPATH_ELEMENTS[j++]}"
BASE_PATH="${XPATH_ELEMENTS[j++]}"
if [ -z "${RSYNC_HOST}" -o -z "${RSYNC_USER}" ]; then
echo "RSYNC_HOST and RSYNC_USER are required" >&2
exit -1
fi
RSYNC_CMD="ssh ${RSYNC_USER}@${RSYNC_HOST} 'cat ${BASE_PATH}"
export RSYNC_CMD
}
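Note that RSYNC_CMD is deliberately left with an unbalanced single quote; the downloader appends the backup-relative file path plus the closing quote. A sketch with example values (host, user, path and IDs are hypothetical):

```shell
# Sketch: rsync_env leaves RSYNC_CMD open-quoted; the caller closes it
RSYNC_USER="oneadmin"
RSYNC_HOST="backups.example.com"
BASE_PATH="/var/lib/one/datastores/101"
RSYNC_CMD="ssh ${RSYNC_USER}@${RSYNC_HOST} 'cat ${BASE_PATH}"
# caller side: append <vmid>/<backup_id>/<file> and the closing quote
command="${RSYNC_CMD}/12/abc123/disk.0'"
echo "$command"
```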
# Create an SHA-256 hash in hexadecimal.
# Usage:
# hash_sha256 <string>
@@ -435,6 +531,34 @@ docker://*|dockerfile://*)
file_type="application/octet-stream"
command="$VAR_LOCATION/remotes/datastore/docker_downloader.sh \"$FROM\""
;;
restic://*)
# Pseudo restic URL: restic://<datastore_id>/<snapshot_id>/<file_name>
restic_path=${FROM#restic://}
d_id=`echo ${restic_path} | cut -d'/' -f1`
s_id=`echo ${restic_path} | cut -d'/' -f2`
file=`echo ${restic_path} | cut -d'/' -f3-`
restic_env $d_id
if [ -z "$RESTIC_REPOSITORY" -o -z "$RESTIC_PASSWORD" ]; then
echo "RESTIC_REPOSITORY and RESTIC_PASSWORD are required" >&2
exit -1
fi
command="${RESTIC_ONE_CMD} dump -q ${s_id} /${file}"
;;
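The pseudo-URL splitting above can be sketched in isolation (the datastore ID, snapshot ID and file name below are made-up examples):

```shell
# Sketch: splitting restic://<datastore_id>/<snapshot_id>/<file_name>
FROM="restic://100/1a2b3c/disk.0.qcow2"   # hypothetical example URL
restic_path=${FROM#restic://}
d_id=$(echo "$restic_path" | cut -d'/' -f1)
s_id=$(echo "$restic_path" | cut -d'/' -f2)
file=$(echo "$restic_path" | cut -d'/' -f3-)
echo "$d_id $s_id $file"
```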
rsync://*)
# rsync://<ds_id>/<vm_id>/<backup_id>/<file>
rsync_path=${FROM#rsync://}
d_id=`echo ${rsync_path} | cut -d'/' -f1`
vmid=`echo ${rsync_path} | cut -d'/' -f2`
b_id=`echo ${rsync_path} | cut -d'/' -f3`
file=`echo ${rsync_path} | cut -d'/' -f4-`
rsync_env $d_id
command="${RSYNC_CMD}/${vmid}/${b_id}/${file}'"
;;
*)
if [ ! -r $FROM ]; then
echo "Cannot read from $FROM" >&2

View File

@@ -0,0 +1,134 @@
#!/usr/bin/env ruby
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
ONE_LOCATION = ENV['ONE_LOCATION']
if !ONE_LOCATION
RUBY_LIB_LOCATION = '/usr/lib/one/ruby'
GEMS_LOCATION = '/usr/share/one/gems'
VMDIR = '/var/lib/one'
CONFIG_FILE = '/var/lib/one/config'
else
RUBY_LIB_LOCATION = ONE_LOCATION + '/lib/ruby'
GEMS_LOCATION = ONE_LOCATION + '/share/gems'
VMDIR = ONE_LOCATION + '/var'
CONFIG_FILE = ONE_LOCATION + '/var/config'
end
# %%RUBYGEMS_SETUP_BEGIN%%
if File.directory?(GEMS_LOCATION)
real_gems_path = File.realpath(GEMS_LOCATION)
if !defined?(Gem) || Gem.path != [real_gems_path]
$LOAD_PATH.reject! {|l| l =~ /vendor_ruby/ }
# Suppress warnings from Rubygems
# https://github.com/OpenNebula/one/issues/5379
begin
verb = $VERBOSE
$VERBOSE = nil
require 'rubygems'
Gem.use_paths(real_gems_path)
ensure
$VERBOSE = verb
end
end
end
# %%RUBYGEMS_SETUP_END%%
$LOAD_PATH << RUBY_LIB_LOCATION
require 'CommandManager'
require 'rexml/document'
require 'securerandom'
require 'pathname'
require_relative '../../tm/lib/tm_action'
# BACKUP host:remote_dir DISK_ID:..:DISK_ID deploy_id vmid dsid
ds_xml = STDIN.read
dir = ARGV[0].split ':'
_disks = ARGV[1].split ':'
_vmuuid = ARGV[2]
vmid = ARGV[3]
_dsid = ARGV[4]
vm_host = dir[0]
vm_dir = Pathname.new(dir[1]+'/backup/').cleanpath.to_s
ds = REXML::Document.new(ds_xml).root
rsync_user = ds.elements["TEMPLATE/RSYNC_USER"].text
rsync_host = ds.elements["TEMPLATE/RSYNC_HOST"].text
base = ds.elements["BASE_PATH"].text
if ds.elements["TEMPLATE/RSYNC_ARGS"].nil?
args = '-aS'
else
args = ds.elements["TEMPLATE/RSYNC_ARGS"].text
end
path = Pathname.new(base).cleanpath.to_s
backup_id = "#{vmid}/#{SecureRandom.hex[0,6]}"
backup_path = "#{path}/#{backup_id}/"
#rc = TransferManager::Action.make_dst_path(rsync_host, backup_path)
#-------------------------------------------------------------------------------
# Compute backup total size
#-------------------------------------------------------------------------------
rc = TransferManager::Action.ssh('backup_size',
:host => vm_host,
:cmds => "du -sm #{vm_dir}",
:forward => true,
:nostdout => false)
if rc.code != 0
exit rc.code
end
backup_size = rc.stdout.split[0]
#-------------------------------------------------------------------------------
# Rsync backup files to server:
# 1. [rsync server] make backup path
# 2. [host] rsync files
#-------------------------------------------------------------------------------
rc = TransferManager::Action.ssh('make_dst_path',
:host => rsync_host,
:cmds => "mkdir -p #{backup_path}")
if rc.code != 0
exit rc.code
end
cmd = "rsync #{args} #{vm_dir}/ #{rsync_user}@#{rsync_host}:#{backup_path}"
rc = TransferManager::Action.ssh('backup',
:host => vm_host,
:cmds => cmd,
:forward => true,
:nostdout => false)
if rc.code != 0
exit rc.code
end
puts "#{backup_id} #{backup_size}"

View File

@@ -0,0 +1 @@
../common/not_supported.sh

View File

@@ -0,0 +1 @@
../common/not_supported.sh

View File

@@ -0,0 +1 @@
../common/not_supported.sh

View File

@@ -0,0 +1 @@
../common/not_supported.sh

View File

@@ -1,3 +1,5 @@
#!/bin/bash
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
@@ -14,43 +16,46 @@
# limitations under the License. #
#--------------------------------------------------------------------------- #
require 'nokogiri'
# ------------ Set up the environment to source common tools ------------
require 'lib/exporters/file'
require 'lib/exporters/rbd'
require 'lib/exporters/lv'
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
require 'lib/restorers/base'
. $LIB_LOCATION/sh/scripts_common.sh
# OneVMDump module
#
# Module for exporting VM content into a bundle file
module OneVMDump
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
def self.get_exporter(vm, config)
# Get TM_MAD from last history record
begin
last_hist_rec = Nokogiri.XML(vm.get_history_record(-1))
tm_mad = last_hist_rec.xpath('//TM_MAD').text
rescue StandardError
raise 'Cannot retrieve TM_MAD. The last history record' \
' might be corrupted or it might not exist.'
end
# -------- Get datastore arguments from OpenNebula core ------------
case tm_mad
when 'ceph'
self::RBDExporter.new(vm, config)
when 'ssh', 'shared', 'qcow2'
self::FileExporter.new(vm, config)
when 'fs_lvm', 'fs_lvm_ssh'
self::LVExporter.new(vm, config)
else
raise "Unsupported TM_MAD: '#{tm_mad}'"
end
end
DRV_ACTION=$1
ID=$2
def self.get_restorer(bundle_path, options)
BaseRestorer.new(bundle_path, options)
end
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
end
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RSYNC_HOST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RSYNC_USER \
/DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH)
unset i
RSYNC_HOST="${XPATH_ELEMENTS[i++]}"
RSYNC_USER="${XPATH_ELEMENTS[i++]}"
BASE_PATH="${XPATH_ELEMENTS[i++]}"
DF=$(ssh $RSYNC_USER@$RSYNC_HOST "df -PBM $BASE_PATH" | tail -1)
#/dev/sda1 20469M 2983M 17487M 15% /
TOTAL=$(echo $DF | awk '{print $2}')
USED=$(echo $DF | awk '{print $3}')
FREE=$(echo $DF | awk '{print $4}')
echo "USED_MB=$USED"
echo "TOTAL_MB=$TOTAL"
echo "FREE_MB=$FREE"
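The monitor script above reports capacity by running `df -PBM` on the rsync host over SSH and slicing the last output line with `awk`. A minimal Ruby sketch of the same parsing (the sample `df` line is illustrative, matching the comment in the script):

```ruby
# Parse the last line of `df -PBM <path>` (POSIX portable format, 1M blocks)
# into the USED_MB/TOTAL_MB/FREE_MB values the datastore monitor reports.
def parse_df(df_output)
  # Portable df columns: fs, total, used, available, capacity, mount point
  fields = df_output.lines.last.split

  {
    'TOTAL_MB' => fields[1].delete('M').to_i,
    'USED_MB'  => fields[2].delete('M').to_i,
    'FREE_MB'  => fields[3].delete('M').to_i
  }
end

sample = "Filesystem 1048576-blocks Used Available Capacity Mounted on\n" \
         "/dev/sda1 20469M 2983M 17487M 15% /\n"

res = parse_df(sample)
puts res['TOTAL_MB'] # 20469
puts res['FREE_MB']  # 17487
```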

View File

@ -0,0 +1,223 @@
#!/usr/bin/env ruby
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
ONE_LOCATION = ENV['ONE_LOCATION']
if !ONE_LOCATION
RUBY_LIB_LOCATION = '/usr/lib/one/ruby'
GEMS_LOCATION = '/usr/share/one/gems'
VMDIR = '/var/lib/one'
CONFIG_FILE = '/var/lib/one/config'
VAR_LOCATION = '/var/lib/one'
else
RUBY_LIB_LOCATION = ONE_LOCATION + '/lib/ruby'
GEMS_LOCATION = ONE_LOCATION + '/share/gems'
VMDIR = ONE_LOCATION + '/var'
CONFIG_FILE = ONE_LOCATION + '/var/config'
VAR_LOCATION = ONE_LOCATION + '/var'
end
SERVERADMIN_AUTH = VAR_LOCATION + '/.one/onegate_auth'
# %%RUBYGEMS_SETUP_BEGIN%%
if File.directory?(GEMS_LOCATION)
real_gems_path = File.realpath(GEMS_LOCATION)
if !defined?(Gem) || Gem.path != [real_gems_path]
$LOAD_PATH.reject! {|l| l =~ /vendor_ruby/ }
# Suppress warnings from Rubygems
# https://github.com/OpenNebula/one/issues/5379
begin
verb = $VERBOSE
$VERBOSE = nil
require 'rubygems'
Gem.use_paths(real_gems_path)
ensure
$VERBOSE = verb
end
end
end
# %%RUBYGEMS_SETUP_END%%
$LOAD_PATH << RUBY_LIB_LOCATION
require 'base64'
require 'CommandManager'
require 'rexml/document'
require 'opennebula'
require 'opennebula/server_cipher_auth'
require_relative '../../tm/lib/backup'
require_relative '../../tm/lib/tm_action'
# ------------------------------------------------------------------------------
# Get backup information:
# - vm.xml description
# - list of disks in the backup
# ------------------------------------------------------------------------------
drv_action = Base64::decode64(ARGV[0])
_request_id = ARGV[1]
rds = REXML::Document.new(drv_action).root
begin
buid = rds.elements['IMAGE/SOURCE'].text
iid = rds.elements['IMAGE/ID'].text.to_i
dsid = rds.elements['DATASTORE/ID'].text.to_i
base = rds.elements['DATASTORE/BASE_PATH'].text
rsync_host = rds.elements['DATASTORE/TEMPLATE/RSYNC_HOST'].text
rsync_user = rds.elements['DATASTORE/TEMPLATE/RSYNC_USER'].text
rescue StandardError => se
STDERR.puts "Missing datastore or image attributes: #{se.message}"
exit(1)
end
begin
username = rds.elements['TEMPLATE/USERNAME'].text
dst_ds_id = rds.elements['DESTINATION_DS_ID'].text.to_i
rescue StandardError
STDERR.puts "Cannot find USERNAME / DESTINATION_DS_ID"
exit(1)
end
rc = TransferManager::Action.ssh('list_bkp_files',
:host => "#{rsync_user}@#{rsync_host}",
:cmds => "ls #{base}/#{buid}",
:nostdout => false)
if rc.code != 0
STDERR.puts rc.stderr
exit(1)
end
disks = []
vm_xml_path = ''
rc.stdout.each_line do |l|
l = l.delete('"').strip
disks << l if l.match(/disk\.[0-9]+$/)
vm_xml_path = l if l.match(/vm\.xml$/)
end
if disks.empty? || vm_xml_path.empty?
STDERR.puts "Backup does not contain any disks or vm.xml is missing"
exit(1)
end
rc = TransferManager::Action.ssh('gather_vm_xml',
:host => "#{rsync_user}@#{rsync_host}",
:cmds => "cat #{base}/#{buid}/vm.xml",
:nostdout => false)
if rc.code != 0
STDERR.puts rc.stderr
exit(1)
end
vm_xml = rc.stdout
# ------------------------------------------------------------------------------
# Prepare an OpenNebula client to impersonate the target user
# ------------------------------------------------------------------------------
no_ip = begin
rds['TEMPLATE/NO_IP'] == "YES"
rescue StandardError
false
end
no_nic = begin
rds['TEMPLATE/NO_NIC'] == "YES"
rescue StandardError
false
end
ENV['ONE_CIPHER_AUTH'] = SERVERADMIN_AUTH
sauth = OpenNebula::ServerCipherAuth.new_client
token = sauth.login_token(Time.now.to_i + 120, username)
one_client = OpenNebula::Client.new(token)
# ------------------------------------------------------------------------------
# Create backup object templates for VM and associated disk images
# ------------------------------------------------------------------------------
restorer = TransferManager::BackupRestore.new(
:vm_xml64 => vm_xml,
:backup_id => buid,
:ds_id => dsid,
:image_id => iid,
:no_ip => no_ip,
:no_nic => no_nic,
:proto => 'rsync')
br_disks = restorer.disk_images(disks)
one_error = ""
images = []
# Create disk images
br_disks.each do |id, disk|
# Fix image name
disk[:template].gsub!(/(NAME = \"[0-9]+-)[0-9]+\//, '\1')
image = OpenNebula::Image.new(OpenNebula::Image.build_xml, one_client)
rc = image.allocate(disk[:template], dst_ds_id)
if OpenNebula.is_error?(rc)
one_error = rc.message
break
end
disk[:image_id] = image.id
images << image.id
end
if !one_error.empty?
message = "Error restoring disk image: #{one_error}"
if !images.empty?
message << " The following images were restored: #{images.join(' ')}"
end
STDERR.puts message
exit(1)
end
# Create VM template
vm_template = restorer.vm_template(br_disks)
# Fix template name
vm_template.gsub!(/(NAME = "[0-9]+-)[0-9]+\//, '\1')
tmpl = OpenNebula::Template.new(OpenNebula::Template.build_xml, one_client)
rc = tmpl.allocate(vm_template)
if OpenNebula.is_error?(rc)
message = "Error creating VM template: #{rc.message}"
if !images.empty?
message << " The following images were restored: #{images.join(' ')}"
end
STDERR.puts message
exit(1)
end
STDOUT.puts "#{tmpl.id} #{images.join(' ')}"
exit(0)
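The restore driver above rewrites the image and template names with a `gsub!` before allocating them, stripping a backup-id infix from names of the form `"<id>-<backup id>/<original name>"`. A hedged Ruby sketch of that rewrite (the helper name and the sample values are illustrative, not part of the driver):

```ruby
# Strip the "<backup id>/" infix from a restored object's NAME attribute,
# mirroring the gsub! in the rsync restore driver.
def fix_restored_name(template)
  template.gsub(/(NAME = "[0-9]+-)[0-9]+\//, '\1')
end

tmpl = 'NAME = "103-52/alpine-disk-0"'  # illustrative ids and name
puts fix_restored_name(tmpl)            # NAME = "103-alpine-disk-0"
```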

View File

@ -0,0 +1,64 @@
#!/bin/bash
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
###############################################################################
# This script is used to remove a VM image (SRC) from the image repository
###############################################################################
# ------------ Set up the environment to source common tools ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get rm and datastore arguments from OpenNebula core ------------
DRV_ACTION=$1
ID=$2
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH /DS_DRIVER_ACTION_DATA/IMAGE/SOURCE \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RSYNC_HOST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RSYNC_USER \
/DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH)
unset i
SRC="${XPATH_ELEMENTS[i++]}"
RSYNC_HOST="${XPATH_ELEMENTS[i++]}"
RSYNC_USER="${XPATH_ELEMENTS[i++]}"
BASE_PATH="${XPATH_ELEMENTS[i++]}"
# ------------ Remove the image from the repository ------------
BACKUP_PATH="${BASE_PATH}/${SRC}"
ssh_exec_and_log "$RSYNC_USER@$RSYNC_HOST" "[ -d $BACKUP_PATH ] && rm -rf $BACKUP_PATH" \
"Error deleting $BACKUP_PATH in $RSYNC_HOST"

View File

@ -0,0 +1 @@
../common/not_supported.sh

View File

@ -0,0 +1 @@
../common/not_supported.sh

View File

@ -0,0 +1 @@
../common/not_supported.sh

View File

@ -0,0 +1,66 @@
#!/bin/bash
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
DRV_ACTION=$1
IMAGE_ID=$2
# ------------ Set up the environment to source common tools ------------
if [ -z "${ONE_LOCATION}" ]; then
LIB_LOCATION=/usr/lib/one
else
LIB_LOCATION=$ONE_LOCATION/lib
fi
. $LIB_LOCATION/sh/scripts_common.sh
DRIVER_PATH=$(dirname $0)
source ${DRIVER_PATH}/../libfs.sh
# -------- Get datastore arguments from OpenNebula core ------------
DRV_ACTION=$1
ID=$2
XPATH="${DRIVER_PATH}/../xpath.rb -b $DRV_ACTION"
unset i XPATH_ELEMENTS
while IFS= read -r -d '' element; do
XPATH_ELEMENTS[i++]="$element"
done < <($XPATH /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RSYNC_HOST \
/DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/RSYNC_USER \
/DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH \
/DS_DRIVER_ACTION_DATA/IMAGE/PATH)
unset i
RSYNC_HOST="${XPATH_ELEMENTS[i++]}"
RSYNC_USER="${XPATH_ELEMENTS[i++]}"
BASE_PATH="${XPATH_ELEMENTS[i++]}"
IMG_PATH="${XPATH_ELEMENTS[i++]}"
# rsync://dsid/vmid/backupid/diskid
VM_ID=$(echo ${IMG_PATH} | cut -d'/' -f4)
BACKUP_ID=$(echo ${IMG_PATH} | cut -d'/' -f5)
DU="du -ms ${BASE_PATH}/${VM_ID}/${BACKUP_ID}/"
SIZE=$(ssh $RSYNC_USER@$RSYNC_HOST "$DU")
echo $SIZE | cut -f1
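The stat script above extracts the VM and backup ids from the image `PATH` (of the form `rsync://dsid/vmid/backupid/diskid`) with `cut -d'/' -f4,5`. The same split, sketched in Ruby with illustrative values:

```ruby
# Split a backup image PATH of the form rsync://<ds id>/<vm id>/<backup id>/<disk>
# the same way the stat script does with `cut -d'/' -f4` and `-f5`.
def parse_backup_path(path)
  # split('/') yields ["rsync:", "", "<ds id>", "<vm id>", "<backup id>", ...]
  parts = path.split('/')

  { :vm_id => parts[3], :backup_id => parts[4] }
end

info = parse_backup_path('rsync://100/42/ab12cd/disk.0') # illustrative ids
puts info[:vm_id]     # 42
puts info[:backup_id] # ab12cd
```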

View File

@ -1933,31 +1933,20 @@ int DispatchManager::disk_snapshot_create(int vid, int did, const string& name,
case VirtualMachine::POWEROFF:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::DISK_SNAPSHOT_POWEROFF);
tm->trigger_snapshot_create(vid);
break;
case VirtualMachine::SUSPENDED:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::DISK_SNAPSHOT_SUSPENDED);
tm->trigger_snapshot_create(vid);
break;
case VirtualMachine::ACTIVE:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::DISK_SNAPSHOT);
break;
default: break;
}
close_cp_history(vmpool, vm.get(), VMActions::DISK_SNAPSHOT_CREATE_ACTION, ra);
switch (state)
{
case VirtualMachine::POWEROFF:
case VirtualMachine::SUSPENDED:
tm->trigger_snapshot_create(vid);
break;
case VirtualMachine::ACTIVE:
vmm->trigger_disk_snapshot_create(vid);
break;
@ -1965,6 +1954,8 @@ int DispatchManager::disk_snapshot_create(int vid, int did, const string& name,
default: break;
}
close_cp_history(vmpool, vm.get(), VMActions::DISK_SNAPSHOT_CREATE_ACTION, ra);
vmpool->update(vm.get());
return 0;
@ -2187,16 +2178,22 @@ int DispatchManager::disk_resize(int vid, int did, long long new_size,
case VirtualMachine::POWEROFF:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::DISK_RESIZE_POWEROFF);
tm->trigger_resize(vid);
break;
case VirtualMachine::UNDEPLOYED:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::DISK_RESIZE_UNDEPLOYED);
tm->trigger_resize(vid);
break;
case VirtualMachine::ACTIVE:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::DISK_RESIZE);
vmm->trigger_disk_resize(vid);
break;
default: break;
@ -2204,20 +2201,6 @@ int DispatchManager::disk_resize(int vid, int did, long long new_size,
close_cp_history(vmpool, vm.get(), VMActions::DISK_RESIZE_ACTION, ra);
switch (state)
{
case VirtualMachine::POWEROFF:
case VirtualMachine::UNDEPLOYED:
tm->trigger_resize(vid);
break;
case VirtualMachine::ACTIVE:
vmm->trigger_disk_resize(vid);
break;
default: break;
}
vmpool->update(vm.get());
vmpool->update_search(vm.get());
@ -2533,3 +2516,70 @@ int DispatchManager::detach_sg(int vid, int nicid, int sgid,
return 0;
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
int DispatchManager::backup(int vid, int backup_ds_id,
const RequestAttributes& ra, string& error_str)
{
ostringstream oss;
auto vm = vmpool->get(vid);
if ( vm == nullptr )
{
oss << "Could not create a new backup for VM " << vid
<< ", VM does not exist";
error_str = oss.str();
return -1;
}
// -------------------------------------------------------------------------
// Set BACKUP state
// -------------------------------------------------------------------------
VirtualMachine::VmState state = vm->get_state();
switch (state)
{
case VirtualMachine::ACTIVE:
if (vm->get_lcm_state() != VirtualMachine::RUNNING)
{
oss << "Could not create a new backup for VM " << vid
<< ", wrong state " << vm->state_str() << ".";
error_str = oss.str();
return -1;
}
vm->set_state(VirtualMachine::BACKUP);
break;
case VirtualMachine::POWEROFF:
vm->set_state(VirtualMachine::ACTIVE);
vm->set_state(VirtualMachine::BACKUP_POWEROFF);
break;
default:
oss << "Could not create a new backup for VM " << vid
<< ", wrong state " << vm->state_str() << ".";
error_str = oss.str();
return -1;
}
vm->backups().last_datastore_id(backup_ds_id);
vmm->trigger_backup(vid);
vm->set_resched(false);
close_cp_history(vmpool, vm.get(), VMActions::BACKUP_ACTION, ra);
vmpool->update(vm.get());
vm.reset();
return 0;
}
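`DispatchManager::backup` above only admits VMs that are ACTIVE with LCM state RUNNING (moved to `BACKUP`) or in POWEROFF (moved to `BACKUP_POWEROFF`); every other state is rejected. A hedged Ruby sketch of that eligibility rule (helper name and symbols are illustrative, not OpenNebula API):

```ruby
# Mirror of the state check in DispatchManager::backup: return the backup
# LCM state a VM transitions to, or nil if a backup cannot start.
def backup_target_state(state, lcm_state = nil)
  case state
  when :ACTIVE
    # Running VMs get a live (possibly FS-frozen) backup
    lcm_state == :RUNNING ? :BACKUP : nil
  when :POWEROFF
    :BACKUP_POWEROFF
  else
    nil # PENDING, STOPPED, UNDEPLOYED, ... cannot be backed up
  end
end

p backup_target_state(:ACTIVE, :RUNNING)  # => :BACKUP
p backup_target_state(:POWEROFF)          # => :BACKUP_POWEROFF
p backup_target_state(:PENDING)           # => nil
```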

View File

@ -151,6 +151,11 @@ int Image::insert(SqlDB *db, string& error_str)
persistent_img = false;
erase_template_attribute("DEV_PREFIX", dev_prefix);
break;
case BACKUP:
persistent_img = true;
erase_template_attribute("DEV_PREFIX", dev_prefix);
break;
}
// ------------ SIZE --------------------
@ -180,23 +185,37 @@ int Image::insert(SqlDB *db, string& error_str)
}
else if (get_cloning_id() == -1) // !is_saving() && !is_cloning
{
if ( source.empty() && path.empty() && type != DATABLOCK && type != OS)
if (!source.empty())
{
goto error_no_path;
if (!path.empty())
{
error_str = "PATH and SOURCE cannot be both set.";
goto error_common;
}
else if (format.empty())
{
error_str = "SOURCE needs FORMAT to be set.";
goto error_common;
}
}
else if ( !source.empty() && !path.empty() )
else if (type == Image::BACKUP)
{
goto error_path_and_source;
error_str = "SOURCE cannot be empty for BACKUP images";
goto error_common;
}
/* MKFS image, FORMAT is mandatory, precedence:
* 1. TM_MAD_CONF/DRIVER in oned.conf
* 2. DRIVER in DS Template
* 3. IMAGE template
* 4. "raw" Default
*/
if ( path.empty() && (type == Image::DATABLOCK || type == Image::OS))
else if (!path.empty())
{
// It's filled by the driver (cp) based on type of file.
format = "";
}
else if (type == Image::DATABLOCK || type == Image::OS)
{
/* MKFS image, FORMAT is mandatory, precedence:
* 1. TM_MAD_CONF/DRIVER in oned.conf
* 2. DRIVER in DS Template
* 3. IMAGE template
* 4. "raw" Default
*/
DatastorePool * ds_pool = Nebula::instance().get_dspool();
string ds_driver = ds_pool->get_ds_driver(ds_id);
@ -214,14 +233,33 @@ int Image::insert(SqlDB *db, string& error_str)
}
// else format in the IMAGE template
}
else
else //CDROM, KERNEL, RAMDISK, CONTEXT
{
// It's filled by the driver depending on the type of file.
format = "";
error_str = "No PATH nor SOURCE in template.";
goto error_common;
}
}
state = LOCKED; //LOCKED till the ImageManager copies it to the Repository
// -------------------------------------------------------------------------
// State is LOCKED till the ImageManager copies it to the Repository.
// Backup Images set to READY as it is already in the repo
// -------------------------------------------------------------------------
state = LOCKED;
if (type == Image::BACKUP)
{
int vm_id;
if ( erase_template_attribute("VM_ID", vm_id) == 0 )
{
error_str = "No associated VM ID for BACKUP image.";
goto error_common;
}
state = READY;
inc_running(vm_id);
}
encrypt();
@ -233,14 +271,6 @@ int Image::insert(SqlDB *db, string& error_str)
return rc;
error_no_path:
error_str = "No PATH nor SOURCE in template.";
goto error_common;
error_path_and_source:
error_str = "Template malformed, PATH and SOURCE are mutually exclusive.";
goto error_common;
error_common:
NebulaLog::log("IMG", Log::ERROR, error_str);
return -1;
@ -760,6 +790,10 @@ int Image::set_type(string& _type, string& error)
{
type = CONTEXT;
}
else if ( _type == "BACKUP" )
{
type = BACKUP;
}
else
{
error = "Unknown type " + type;
@ -834,6 +868,10 @@ Image::ImageType Image::str_to_type(string& str_type)
{
it = CONTEXT;
}
else if ( str_type == "BACKUP" )
{
it = BACKUP;
}
return it;
}
@ -953,6 +991,11 @@ Image::DiskType Image::str_to_disk_type(string& s_disk_type)
void Image::set_state(ImageState _state)
{
if ( type == Image::BACKUP ) //Backups in READY state at creation
{
return;
}
if (_state == ERROR && (state == LOCKED_USED || state == LOCKED_USED_PERS))
{
LifeCycleManager* lcm = Nebula::instance().get_lcm();
@ -981,6 +1024,11 @@ void Image::set_state(ImageState _state)
void Image::set_state_unlock()
{
if ( type == Image::BACKUP ) //Backups in READY state at creation
{
return;
}
LifeCycleManager* lcm = Nebula::instance().get_lcm();
bool vms_notify = false;
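The `Image::insert` hunks above restructure the registration checks: `SOURCE` and `PATH` are mutually exclusive, `SOURCE` requires `FORMAT`, `BACKUP` images must carry a `SOURCE`, and without a `PATH` only `OS`/`DATABLOCK` images can be created via mkfs. A hedged Ruby sketch of that decision table (function and symbols are illustrative, not the C++ code):

```ruby
# Sketch of the registration rules from Image::insert. Returns the error
# string OpenNebula would raise, or nil when the combination is valid.
def validate_image(type, source: nil, path: nil, format: nil)
  if source
    return 'PATH and SOURCE cannot be both set.' if path
    return 'SOURCE needs FORMAT to be set.' unless format
  elsif type == :BACKUP
    return 'SOURCE cannot be empty for BACKUP images'
  elsif !path && type != :OS && type != :DATABLOCK
    # CDROM, KERNEL, RAMDISK, CONTEXT need a PATH; OS/DATABLOCK fall
    # through to the mkfs route where FORMAT is resolved by precedence.
    return 'No PATH nor SOURCE in template.'
  end

  nil # valid
end

p validate_image(:OS, path: '/tmp/img.qcow2')  # => nil
p validate_image(:BACKUP)                      # => "SOURCE cannot be empty for BACKUP images"
```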

View File

@ -96,6 +96,9 @@ int ImageManager::start()
register_action(ImageManagerMessages::SNAP_FLATTEN,
bind(&ImageManager::_snap_flatten, this, _1));
register_action(ImageManagerMessages::RESTORE,
bind(&ImageManager::_restore, this, _1));
register_action(ImageManagerMessages::LOG,
&ImageManager::_log);
@ -129,6 +132,8 @@ void ImageManager::timer_action()
mark = 0;
}
check_time_outs_action();
if ( tics < monitor_period )
{
return;
@ -194,6 +199,8 @@ void ImageManager::monitor_datastore(int ds_id)
if ( auto ds = dspool->get_ro(ds_id) )
{
ds->decrypt();
ds->to_xml(ds_data);
shared = ds->is_shared();
@ -238,6 +245,7 @@ void ImageManager::monitor_datastore(int ds_id)
case Datastore::FILE_DS:
case Datastore::IMAGE_DS:
case Datastore::BACKUP_DS:
break;
}

View File

@ -104,6 +104,7 @@ int ImageManager::acquire_image(int vm_id, Image *img, bool attach, string& erro
case Image::KERNEL:
case Image::RAMDISK:
case Image::CONTEXT:
case Image::BACKUP:
oss << "Image " << img->get_oid() << " (" << img->get_name() << ") "
<< "of type " << Image::type_to_str(img->get_type())
<< " cannot be used as DISK.";
@ -242,8 +243,9 @@ void ImageManager::release_image(int vm_id, int iid, bool failed)
case Image::KERNEL:
case Image::RAMDISK:
case Image::CONTEXT:
case Image::BACKUP:
NebulaLog::log("ImM", Log::ERROR, "Trying to release a KERNEL, "
"RAMDISK or CONTEXT image");
"RAMDISK, BACKUP or CONTEXT image");
return;
}
@ -345,8 +347,9 @@ void ImageManager::release_cloning_resource(
case Image::KERNEL:
case Image::RAMDISK:
case Image::CONTEXT:
case Image::BACKUP:
NebulaLog::log("ImM", Log::ERROR, "Trying to release a cloning "
"KERNEL, RAMDISK or CONTEXT image");
"KERNEL, RAMDISK, BACKUP or CONTEXT image");
return;
}
@ -393,6 +396,12 @@ int ImageManager::enable_image(int iid, bool to_enable, string& error_str)
return -1;
}
if ( img->get_type() == Image::BACKUP )
{
error_str = "Backup images cannot be enabled or disabled.";
return -1;
}
if ( to_enable == true )
{
switch (img->get_state())
@ -472,6 +481,8 @@ int ImageManager::delete_image(int iid, string& error_str)
if (auto ds = dspool->get_ro(ds_id))
{
ds->decrypt();
ds->to_xml(ds_data);
}
else
@ -491,7 +502,7 @@ int ImageManager::delete_image(int iid, string& error_str)
switch (img->get_state())
{
case Image::READY:
if ( img->get_running() != 0 )
if ( img->get_running() != 0 && img->get_type() != Image::BACKUP)
{
oss << "There are " << img->get_running() << " VMs using it.";
error_str = oss.str();
@ -505,7 +516,6 @@ int ImageManager::delete_image(int iid, string& error_str)
error_str = oss.str();
return -1; //Cannot remove images in use
break;
case Image::USED:
case Image::USED_PERS:
@ -515,7 +525,6 @@ int ImageManager::delete_image(int iid, string& error_str)
error_str = oss.str();
return -1; //Cannot remove images in use
break;
case Image::INIT:
case Image::DISABLED:
@ -640,6 +649,12 @@ int ImageManager::can_clone_image(int cloning_id, ostringstream& oss_error)
return -1;
}
if (img->get_type() == Image::BACKUP)
{
oss_error << "Cannot clone backup images";
return -1;
}
Image::ImageState state = img->get_state();
switch(state)
@ -682,6 +697,12 @@ int ImageManager::set_clone_state(
return -1;
}
if (img->get_type() == Image::BACKUP)
{
error = "Cannot clone backup images";
return -1;
}
switch(img->get_state())
{
case Image::READY:
@ -786,10 +807,6 @@ int ImageManager::register_image(int iid,
ostringstream oss;
string path;
string img_tmpl;
if ( imd == nullptr )
{
error = "Could not get datastore driver";
@ -801,14 +818,16 @@ int ImageManager::register_image(int iid,
if (!img)
{
error = "Image deleted during copy operation";
error = "Image deleted during register operation";
return -1;
}
string drv_msg(format_message(img->to_xml(img_tmpl), ds_data, extra_data));
path = img->get_path();
string img_tmpl;
string path = img->get_path();
if ( path.empty() == true ) //NO PATH
string drv_msg(format_message(img->to_xml(img_tmpl), ds_data, extra_data));
if ( path.empty() ) //NO PATH
{
string source = img->get_source();
@ -868,6 +887,16 @@ int ImageManager::stat_image(Template* img_tmpl,
switch (Image::str_to_type(type_att))
{
case Image::BACKUP:
if ( img_tmpl->get("SIZE", res) )
{
return 0;
}
res = "";
return -1;
case Image::CDROM:
case Image::KERNEL:
case Image::RAMDISK:
@ -903,6 +932,7 @@ int ImageManager::stat_image(Template* img_tmpl,
break;
case Image::OS:
case Image::DATABLOCK:
img_tmpl->get("SOURCE", res);
if (!res.empty()) //SOURCE in Image
@ -920,7 +950,6 @@ int ImageManager::stat_image(Template* img_tmpl,
return 0;
}
case Image::DATABLOCK:
img_tmpl->get("PATH", res);
if (res.empty())//no PATH, created using mkfs
@ -1011,6 +1040,7 @@ void ImageManager::set_image_snapshots(int iid, const Snapshots& s)
case Image::RAMDISK:
case Image::CONTEXT:
case Image::CDROM:
case Image::BACKUP:
return;
}
@ -1063,6 +1093,7 @@ void ImageManager::set_image_size(int iid, long long size)
case Image::RAMDISK:
case Image::CONTEXT:
case Image::CDROM:
case Image::BACKUP:
return;
}
@ -1107,6 +1138,8 @@ int ImageManager::delete_snapshot(int iid, int sid, string& error)
if (auto ds = dspool->get_ro(ds_id))
{
ds->decrypt();
ds->to_xml(ds_data);
}
else
@ -1128,6 +1161,13 @@ int ImageManager::delete_snapshot(int iid, int sid, string& error)
return -1;
}
if ( img->get_type() != Image::OS && img->get_type() != Image::DATABLOCK )
{
error = "IMAGES of type KERNEL, RAMDISK, BACKUP and CONTEXT do not "
"have snapshots.";
return -1;
}
if (img->get_state() != Image::READY)
{
error = "Cannot delete snapshot in state " + Image::state_to_str(img->get_state());
@ -1191,6 +1231,8 @@ int ImageManager::revert_snapshot(int iid, int sid, string& error)
if (auto ds = dspool->get_ro(ds_id))
{
ds->decrypt();
ds->to_xml(ds_data);
}
else
@ -1213,6 +1255,13 @@ int ImageManager::revert_snapshot(int iid, int sid, string& error)
return -1;
}
if ( img->get_type() != Image::OS && img->get_type() != Image::DATABLOCK )
{
error = "IMAGES of type KERNEL, RAMDISK, BACKUP and CONTEXT do not "
"have snapshots.";
return -1;
}
if (img->get_state() != Image::READY)
{
error = "Cannot revert to snapshot in state " + Image::state_to_str(img->get_state());
@ -1278,6 +1327,8 @@ int ImageManager::flatten_snapshot(int iid, int sid, string& error)
if (auto ds = dspool->get_ro(ds_id))
{
ds->decrypt();
ds->to_xml(ds_data);
}
else
@ -1300,6 +1351,13 @@ int ImageManager::flatten_snapshot(int iid, int sid, string& error)
return -1;
}
if ( img->get_type() != Image::OS && img->get_type() != Image::DATABLOCK )
{
error = "IMAGES of type KERNEL, RAMDISK, BACKUP and CONTEXT do not "
"have snapshots.";
return -1;
}
if (img->get_state() != Image::READY)
{
error = "Cannot flatten snapshot in state " + Image::state_to_str(img->get_state());
@ -1335,3 +1393,87 @@ int ImageManager::flatten_snapshot(int iid, int sid, string& error)
return 0;
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
int ImageManager::restore_image(int iid, int dst_ds_id, const std::string& txml,
std::string& result)
{
const auto* imd = get();
std::string image_data, ds_data;
int ds_id;
if ( imd == nullptr )
{
result = "Could not get datastore driver";
NebulaLog::log("ImM", Log::ERROR, result);
return -1;
}
if (auto img = ipool->get_ro(iid))
{
img->to_xml(image_data);
if ( img->get_type() != Image::BACKUP )
{
result = "Can only restore images of type BACKUP";
return -1;
}
ds_id = img->get_ds_id();
}
else
{
result = "Image does not exist";
return -1;
}
if (auto ds = dspool->get_ro(ds_id))
{
ds->decrypt();
ds->to_xml(ds_data);
}
else
{
result = "Datastore does not exist";
return -1;
}
if (auto ds = dspool->get_ro(dst_ds_id))
{
if ( ds->get_type() != Datastore::IMAGE_DS )
{
result = "Destination can only be an IMAGE datastore";
return -1;
}
}
else
{
result = "Destination datastore does not exist";
return -1;
}
ostringstream oss;
oss << "<DESTINATION_DS_ID>" << dst_ds_id << "</DESTINATION_DS_ID>"
<< txml;
SyncRequest sr;
add_request(&sr);
string drv_msg(format_message(image_data, ds_data, oss.str()));
image_msg_t msg(ImageManagerMessages::RESTORE, "", sr.id, drv_msg);
imd->write(msg);
sr.wait(180);
result = sr.message;
return sr.result ? 0 : -1;
}
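`ImageManager::restore_image` above hands the driver the backup image XML, its datastore XML, and an extra payload that prepends the destination datastore id to the user-supplied restore options. A minimal Ruby sketch of composing that extra payload (helper name is illustrative):

```ruby
# Build the extra data sent with the RESTORE driver message: the
# destination datastore id followed by the user's restore template.
def restore_extra_data(dst_ds_id, template_xml)
  "<DESTINATION_DS_ID>#{dst_ds_id}</DESTINATION_DS_ID>#{template_xml}"
end

puts restore_extra_data(1, '<TEMPLATE><NO_IP>YES</NO_IP></TEMPLATE>')
# <DESTINATION_DS_ID>1</DESTINATION_DS_ID><TEMPLATE><NO_IP>YES</NO_IP></TEMPLATE>
```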

View File

@ -279,7 +279,7 @@ void ImageManager::_mkfs(unique_ptr<image_msg_t> msg)
if (!source.empty())
{
oss << "MkFS operation succeeded but image no longer exists."
oss << "MKFS operation succeeded but image no longer exists."
<< " Source image: " << source << ", may be left in datastore";
NebulaLog::log("ImM", Log::ERROR, oss);
@ -443,11 +443,24 @@ void ImageManager::_rm(unique_ptr<image_msg_t> msg)
ostringstream oss;
int backup_vm_id = -1;
if ( auto image = ipool->get(msg->oid()) )
{
ds_id = image->get_ds_id();
source = image->get_source();
if ( image->get_type() == Image::BACKUP )
{
auto ids = image->get_running_ids();
auto first = ids.cbegin();
if (first != ids.cend())
{
backup_vm_id = *first;
}
}
rc = ipool->drop(image.get(), tmp_error);
}
else
@ -455,6 +468,20 @@ void ImageManager::_rm(unique_ptr<image_msg_t> msg)
return;
}
if ( backup_vm_id != -1 )
{
VirtualMachinePool * vmpool = Nebula::instance().get_vmpool();
if ( auto vm = vmpool->get(backup_vm_id) )
{
vm->backups().del(msg->oid());
vmpool->update(vm.get());
}
// TODO BACKUP QUOTA ROLLBACK
}
if (msg->status() != "SUCCESS")
{
goto error;
@ -767,6 +794,32 @@ void ImageManager::_snap_flatten(unique_ptr<image_msg_t> msg)
/* -------------------------------------------------------------------------- */
void ImageManager::_restore(unique_ptr<image_msg_t> msg)
{
NebulaLog::dddebug("ImM", "_restore: " + msg->payload());
if (msg->status() == "SUCCESS")
{
if (msg->payload().empty())
{
notify_request(msg->oid(), false,
"Cannot get info about restored disk images");
return;
}
NebulaLog::log("ImM", Log::INFO, "Backup successfully restored: "
+ msg->payload());
notify_request(msg->oid(), true, msg->payload());
}
else
{
notify_request(msg->oid(), false, msg->payload());
}
}
/* -------------------------------------------------------------------------- */
void ImageManager::_log(unique_ptr<image_msg_t> msg)
{
NebulaLog::log("ImM", log_type(msg->status()[0]), msg->payload());

View File

@ -143,6 +143,13 @@ int ImagePool::allocate (
goto error_types_missmatch_image;
}
break;
case Image::BACKUP:
if ( ds_type != Datastore::BACKUP_DS )
{
goto error_types_missmatch_backup;
}
break;
}
db_oid = exist(name, uid);
@ -229,6 +236,11 @@ error_types_missmatch_image:
" in an IMAGE_DS datastore";
goto error_common;
error_types_missmatch_backup:
error_str = "IMAGES of type BACKUP can only be registered"
" in a BACKUP_DS datastore";
goto error_common;
error_duplicated:
oss << "NAME is already taken by IMAGE " << db_oid << ".";
error_str = oss.str();

View File

@ -1055,6 +1055,7 @@ void LifeCycleManager::clean_up_vm(VirtualMachine * vm, bool dispose,
case VirtualMachine::SHUTDOWN_POWEROFF:
case VirtualMachine::SHUTDOWN_UNDEPLOY:
case VirtualMachine::HOTPLUG_SNAPSHOT:
case VirtualMachine::BACKUP:
vm->set_running_etime(the_time);
vmm->trigger_driver_cancel(vid);
@ -1132,6 +1133,11 @@ void LifeCycleManager::clean_up_vm(VirtualMachine * vm, bool dispose,
tm->trigger_driver_cancel(vid);
tm->trigger_epilog_delete(vm);
break;
case VirtualMachine::BACKUP_POWEROFF:
vmm->trigger_driver_cancel(vid);
tm->trigger_epilog_delete(vm);
break;
case VirtualMachine::DISK_SNAPSHOT:
@ -1378,6 +1384,18 @@ void LifeCycleManager::recover(VirtualMachine * vm, bool success,
}
break;
case VirtualMachine::BACKUP:
case VirtualMachine::BACKUP_POWEROFF:
if (success)
{
trigger_backup_success(vid);
}
else
{
trigger_backup_failure(vid);
}
break;
case VirtualMachine::SHUTDOWN:
case VirtualMachine::SHUTDOWN_POWEROFF:
case VirtualMachine::SHUTDOWN_UNDEPLOY:
@ -1729,6 +1747,8 @@ void LifeCycleManager::retry(VirtualMachine * vm)
case VirtualMachine::DISK_RESIZE_UNDEPLOYED:
case VirtualMachine::RUNNING:
case VirtualMachine::UNKNOWN:
case VirtualMachine::BACKUP:
case VirtualMachine::BACKUP_POWEROFF:
break;
}
@ -1846,6 +1866,7 @@ void LifeCycleManager::trigger_updatesg(int sgid)
case VirtualMachine::HOTPLUG_SAVEAS_STOPPED:
case VirtualMachine::HOTPLUG_PROLOG_POWEROFF:
case VirtualMachine::HOTPLUG_EPILOG_POWEROFF:
case VirtualMachine::BACKUP_POWEROFF:
is_tmpl = true;
break;
@ -1858,6 +1879,7 @@ void LifeCycleManager::trigger_updatesg(int sgid)
case VirtualMachine::DISK_SNAPSHOT:
case VirtualMachine::DISK_SNAPSHOT_DELETE:
case VirtualMachine::DISK_RESIZE:
case VirtualMachine::BACKUP:
is_update = true;
break;
}

View File

@ -45,6 +45,7 @@ void LifeCycleManager::init_managers()
vmpool = nd.get_vmpool();
hpool = nd.get_hpool();
ipool = nd.get_ipool();
dspool = nd.get_dspool();
sgpool = nd.get_secgrouppool();
clpool = nd.get_clpool();
}

View File

@ -13,6 +13,8 @@
/* See the License for the specific language governing permissions and */
/* limitations under the License. */
/* -------------------------------------------------------------------------- */
#include <time.h>
#include <stdio.h>
#include "LifeCycleManager.h"
#include "TransferManager.h"
@ -23,11 +25,11 @@
#include "ClusterPool.h"
#include "HostPool.h"
#include "ImagePool.h"
#include "DatastorePool.h"
#include "VirtualMachinePool.h"
using namespace std;
void LifeCycleManager::start_prolog_migrate(VirtualMachine* vm)
{
HostShareCapacity sr;
@ -2633,3 +2635,232 @@ void LifeCycleManager::trigger_resize_failure(int vid)
Quotas::quota_del(Quotas::VM, vm_uid, vm_gid, &deltas);
});
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
void LifeCycleManager::trigger_backup_success(int vid)
{
trigger([this, vid] {
auto vm = vmpool->get(vid);
if ( vm == nullptr )
{
return;
}
time_t the_time;
char tbuffer[80];
ostringstream oss;
string ds_name, ds_mad, ds_data;
Datastore::DatastoreType ds_type;
Image::DiskType ds_dtype;
int i_id;
string error_str;
// Store quota values
Template ds_deltas;
long long backup_size = 0;
int vm_uid = vm->get_uid();
int vm_gid = vm->get_gid();
auto& backups = vm->backups();
vm->max_backup_size(ds_deltas);
ds_deltas.add("DATASTORE", backups.last_datastore_id());
one_util::str_cast(backups.last_backup_size(), backup_size);
switch(vm->get_lcm_state())
{
case VirtualMachine::BACKUP:
vm->set_state(VirtualMachine::RUNNING);
break;
case VirtualMachine::BACKUP_POWEROFF:
vm->set_state(VirtualMachine::POWEROFF);
vm->set_state(VirtualMachine::LCM_INIT);
break;
default:
vm->log("LCM",Log::ERROR,"backup_success, VM in a wrong state");
vm.reset();
Quotas::ds_del(vm_uid, vm_gid, &ds_deltas);
return;
}
/* ------------------------------------------------------------------ */
/* Get datastore backup information */
/* ------------------------------------------------------------------ */
int ds_id = backups.last_datastore_id();
if (auto ds = dspool->get_ro(ds_id))
{
ds_name = ds->get_name();
ds_dtype = ds->get_disk_type();
ds_type = ds->get_type();
ds_mad = ds->get_ds_mad();
ds->to_xml(ds_data);
}
else
{
vm->log("LCM", Log::ERROR, "backup_success, "
"backup datastore does not exist");
vmpool->update(vm.get());
vm.reset();
Quotas::ds_del(vm_uid, vm_gid, &ds_deltas);
return;
}
/* ------------------------------------------------------------------ */
/* Create Image for the backup snapshot, add it to the VM */
/* ------------------------------------------------------------------ */
time(&the_time);
struct tm * tinfo = localtime(&the_time);
strftime (tbuffer, 80, "%d-%b %H.%M.%S", tinfo); //18-Jun 08.30.15
oss << vm->get_oid() << " " << tbuffer;
auto itmp = make_unique<ImageTemplate>();
itmp->add("NAME", oss.str());
itmp->add("SOURCE", backups.last_backup_id());
itmp->add("SIZE", backups.last_backup_size());
itmp->add("FORMAT", "raw");
itmp->add("VM_ID", vm->get_oid());
itmp->add("TYPE", Image::type_to_str(Image::BACKUP));
int rc = ipool->allocate( vm->get_uid(), vm->get_gid(), vm->get_uname(),
vm->get_gname(), 0177, move(itmp), ds_id, ds_name, ds_dtype,
ds_data, ds_type, ds_mad, "-", "", -1, &i_id, error_str);
if ( rc < 0 )
{
vm->log("LCM",Log::ERROR,"backup_success, "
"backup image allocate error: " + error_str);
vmpool->update(vm.get());
vm.reset();
Quotas::ds_del(vm_uid, vm_gid, &ds_deltas);
return;
}
std::set<int> iids;
backups.add(i_id);
backups.remove_last(iids);
backups.last_backup_clear();
vmpool->update(vm.get());
if ( iids.size() > 0 )
{
oss.str("");
oss << "Removing backup snapshots:";
for (int i : iids)
{
oss << " " << i;
}
vm->log("LCM", Log::INFO, oss.str());
}
vm.reset();
/* ------------------------------------------------------------------ */
/* Add image to the datastore and forget keep_last backups */
/* ------------------------------------------------------------------ */
if ( auto ds = dspool->get(ds_id) )
{
ds->add_image(i_id);
dspool->update(ds.get());
}
for (int i : iids)
{
if ( imagem->delete_image(i, error_str) != 0 )
{
oss.str("");
oss << "backup_success, cannot remove VM backup " << i
<< " : " << error_str;
NebulaLog::error("LCM", oss.str());
}
}
// Update quotas, count real size of the backup
long long reserved_size{0};
ds_deltas.get("SIZE", reserved_size);
ds_deltas.replace("SIZE", reserved_size - backup_size);
ds_deltas.add("IMAGES", 0);
Quotas::ds_del(vm_uid, vm_gid, &ds_deltas);
});
}
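The quota handling above is subtle: when the backup started, the datastore quota was charged for the worst-case size (`max_backup_size`), and on success only the surplus over the real backup size is released, with an `IMAGES` delta of 0 so the new BACKUP image is accounted separately. A minimal Ruby model of that release computation (illustrative only; the real accounting is the C++ `Quotas::ds_del` call above):

```ruby
# Illustrative model of the quota reconciliation in trigger_backup_success:
# the reservation was made for the worst-case size, so on success only the
# unused part of the reservation is returned to the user.
def backup_quota_release(reserved_size, real_backup_size)
  {
    'SIZE'   => reserved_size - real_backup_size, # release the surplus
    'IMAGES' => 0 # the new BACKUP image is accounted elsewhere
  }
end

backup_quota_release(2048, 1500)
# => { 'SIZE' => 548, 'IMAGES' => 0 }  (user stays charged 1500 for the backup)
```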
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
void LifeCycleManager::trigger_backup_failure(int vid)
{
trigger([this, vid] {
int vm_uid{0}, vm_gid{0};
Template ds_deltas;
if ( auto vm = vmpool->get(vid) )
{
switch(vm->get_lcm_state())
{
case VirtualMachine::BACKUP:
vm->set_state(VirtualMachine::RUNNING);
break;
case VirtualMachine::BACKUP_POWEROFF:
vm->set_state(VirtualMachine::POWEROFF);
vm->set_state(VirtualMachine::LCM_INIT);
break;
default:
vm->log("LCM", Log::ERROR, "backup_failure, VM in a wrong state");
break;
}
vm_uid = vm->get_uid();
vm_gid = vm->get_gid();
vm->max_backup_size(ds_deltas);
ds_deltas.add("DATASTORE", vm->backups().last_datastore_id());
vm->backups().last_backup_clear();
vmpool->update(vm.get());
}
// Quota rollback
Quotas::ds_del(vm_uid, vm_gid, &ds_deltas);
});
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */

View File

@@ -212,19 +212,22 @@ end
# Executes commands on a remote machine using ssh. See documentation
# for GenericCommand
class SSHCommand < GenericCommand
attr_accessor :host
attr_accessor :host, :ssh_opts
# Creates a command and runs it
def self.run(command, host, logger=nil, stdin=nil, timeout=nil)
cmd=self.new(command, host, logger, stdin, timeout)
def self.run(command, host, logger=nil, stdin=nil, timeout=nil, ssh_opts='')
cmd=self.new(command, host, logger, stdin, timeout, ssh_opts)
cmd.run
cmd
end
# This one takes another parameter. +host+ is the machine
# where the command is going to be executed
def initialize(command, host, logger=nil, stdin=nil, timeout=nil)
def initialize(command, host, logger=nil, stdin=nil, timeout=nil, ssh_opts='')
@host=host
@ssh_opts = ssh_opts
super(command, logger, stdin, timeout)
end
@@ -232,10 +235,10 @@ private
def execute
if @stdin
capture3_timeout("ssh #{@host} #{@command}",
capture3_timeout("ssh #{@ssh_opts} #{@host} #{@command}",
:pgroup => true, :stdin_data => @stdin)
else
capture3_timeout("ssh -n #{@host} #{@command}",
capture3_timeout("ssh -n #{@ssh_opts} #{@host} #{@command}",
:pgroup => true)
end
end
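The effect of the new `ssh_opts` parameter is simply extra tokens spliced into the ssh invocation. A stand-alone sketch of the command line it produces (mirroring `execute` above; host and option values are made up):

```ruby
# Mirrors SSHCommand#execute: ssh_opts is inserted verbatim between the
# ssh binary (plus -n when there is no stdin) and the target host.
def ssh_command_line(host, command, ssh_opts = '', stdin = nil)
  if stdin
    "ssh #{ssh_opts} #{host} #{command}"
  else
    "ssh -n #{ssh_opts} #{host} #{command}"
  end
end

ssh_command_line('node01', 'hostname', '-o ConnectTimeout=5')
# => "ssh -n -o ConnectTimeout=5 node01 hostname"
```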

View File

@@ -53,7 +53,8 @@ class VirtualMachineDriver < OpenNebulaDriver
:resize_disk => "RESIZEDISK",
:update_sg => "UPDATESG",
:update_conf => "UPDATECONF",
:resize => "RESIZE"
:resize => "RESIZE",
:backup => "BACKUP"
}
POLL_ATTRIBUTE = OpenNebula::VirtualMachine::Driver::POLL_ATTRIBUTE
@@ -76,23 +77,23 @@ class VirtualMachineDriver < OpenNebulaDriver
super(directory, @options)
@hosts = Array.new
@hosts = Array.new
register_action(ACTION[:deploy].to_sym, method("deploy"))
register_action(ACTION[:shutdown].to_sym, method("shutdown"))
register_action(ACTION[:reboot].to_sym, method("reboot"))
register_action(ACTION[:reset].to_sym, method("reset"))
register_action(ACTION[:cancel].to_sym, method("cancel"))
register_action(ACTION[:save].to_sym, method("save"))
register_action(ACTION[:restore].to_sym, method("restore"))
register_action(ACTION[:migrate].to_sym, method("migrate"))
register_action(ACTION[:poll].to_sym, method("poll"))
register_action(ACTION[:deploy].to_sym, method("deploy"))
register_action(ACTION[:shutdown].to_sym, method("shutdown"))
register_action(ACTION[:reboot].to_sym, method("reboot"))
register_action(ACTION[:reset].to_sym, method("reset"))
register_action(ACTION[:cancel].to_sym, method("cancel"))
register_action(ACTION[:save].to_sym, method("save"))
register_action(ACTION[:restore].to_sym, method("restore"))
register_action(ACTION[:migrate].to_sym, method("migrate"))
register_action(ACTION[:poll].to_sym, method("poll"))
register_action(ACTION[:attach_disk].to_sym, method("attach_disk"))
register_action(ACTION[:detach_disk].to_sym, method("detach_disk"))
register_action(ACTION[:snapshot_create].to_sym, method("snapshot_create"))
register_action(ACTION[:snapshot_revert].to_sym, method("snapshot_revert"))
register_action(ACTION[:snapshot_delete].to_sym, method("snapshot_delete"))
register_action(ACTION[:cleanup].to_sym, method("cleanup"))
register_action(ACTION[:cleanup].to_sym, method("cleanup"))
register_action(ACTION[:attach_nic].to_sym, method("attach_nic"))
register_action(ACTION[:detach_nic].to_sym, method("detach_nic"))
register_action(ACTION[:disk_snapshot_create].to_sym, method("disk_snapshot_create"))
@@ -100,6 +101,7 @@ class VirtualMachineDriver < OpenNebulaDriver
register_action(ACTION[:update_sg].to_sym, method("update_sg"))
register_action(ACTION[:update_conf].to_sym, method("update_conf"))
register_action(ACTION[:resize].to_sym, method("resize"))
register_action(ACTION[:backup].to_sym, method("backup"))
end
# Decodes the encoded XML driver message received from the core
@@ -234,6 +236,11 @@ class VirtualMachineDriver < OpenNebulaDriver
send_message(ACTION[:resize],RESULT[:failure],id,error)
end
def backup(id, drv_message)
error = "Action not implemented by driver #{self.class}"
send_message(ACTION[:backup],RESULT[:failure],id,error)
end
private
# Interface to handle the pending events from the ActionManager Interface
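A concrete driver is expected to override the failing `backup` default above. The following self-contained sketch uses a stub in place of `VirtualMachineDriver` (the real base class wires actions through `register_action`); the backup body itself is hypothetical:

```ruby
ACTION = { :backup => 'BACKUP' }.freeze
RESULT = { :success => 'SUCCESS', :failure => 'FAILURE' }.freeze

# Stub standing in for VirtualMachineDriver: it only records the messages
# that would be sent back to oned.
class StubDriver
  attr_reader :sent

  def initialize
    @sent = []
  end

  def send_message(action, result, id, info = '')
    @sent << [action, result, id, info]
  end

  # Shape of a real driver's backup override; the hypervisor-specific
  # export step is omitted.
  def backup(id, _drv_message)
    send_message(ACTION[:backup], RESULT[:success], id)
  rescue StandardError => e
    send_message(ACTION[:backup], RESULT[:failure], id, e.message)
  end
end

driver = StubDriver.new
driver.backup(42, '<xml/>')
driver.sent.first # => ["BACKUP", "SUCCESS", 42, ""]
```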

View File

@@ -34,12 +34,13 @@ module OpenNebula
:enable => "datastore.enable"
}
DATASTORE_TYPES=%w{IMAGE SYSTEM FILE}
DATASTORE_TYPES=%w{IMAGE SYSTEM FILE BACKUP}
SHORT_DATASTORE_TYPES = {
"IMAGE" => "img",
"SYSTEM"=> "sys",
"FILE" => "fil"
"FILE" => "fil",
"BACKUP"=> "bck"
}
DATASTORE_STATES=%w{READY DISABLED}

View File

@@ -26,20 +26,21 @@ module OpenNebula
#######################################################################
IMAGE_METHODS = {
:info => "image.info",
:allocate => "image.allocate",
:update => "image.update",
:enable => "image.enable",
:persistent => "image.persistent",
:delete => "image.delete",
:chown => "image.chown",
:chmod => "image.chmod",
:chtype => "image.chtype",
:clone => "image.clone",
:rename => "image.rename",
:snapshotdelete => "image.snapshotdelete",
:snapshotrevert => "image.snapshotrevert",
:snapshotflatten=> "image.snapshotflatten",
:info => "image.info",
:allocate => "image.allocate",
:update => "image.update",
:enable => "image.enable",
:persistent => "image.persistent",
:delete => "image.delete",
:chown => "image.chown",
:chmod => "image.chmod",
:chtype => "image.chtype",
:clone => "image.clone",
:rename => "image.rename",
:snapshotdelete => "image.snapshotdelete",
:snapshotrevert => "image.snapshotrevert",
:snapshotflatten => "image.snapshotflatten",
:restore => "image.restore",
:lock => "image.lock",
:unlock => "image.unlock"
}
@@ -61,7 +62,7 @@ module OpenNebula
"LOCKED_USED_PERS" => "lock"
}
IMAGE_TYPES=%w{OS CDROM DATABLOCK KERNEL RAMDISK CONTEXT}
IMAGE_TYPES=%w{OS CDROM DATABLOCK KERNEL RAMDISK CONTEXT BACKUP}
SHORT_IMAGE_TYPES={
"OS" => "OS",
@@ -69,7 +70,8 @@ module OpenNebula
"DATABLOCK" => "DB",
"KERNEL" => "KL",
"RAMDISK" => "RD",
"CONTEXT" => "CX"
"CONTEXT" => "CX",
"BACKUP" => "BK"
}
DISK_TYPES=%w{FILE CD_ROM BLOCK RBD}
@@ -229,7 +231,7 @@ module OpenNebula
# @return [nil, OpenNebula::Error] nil in case of success, Error
# otherwise
def rename(name)
return call(IMAGE_METHODS[:rename], @pe_id, name)
call(IMAGE_METHODS[:rename], @pe_id, name)
end
# Deletes Image from snapshot
@@ -238,7 +240,7 @@
#
# @return [nil, OpenNebula::Error] nil in case of success or Error
def snapshot_delete(snap_id)
return call(IMAGE_METHODS[:snapshotdelete], @pe_id, snap_id)
call(IMAGE_METHODS[:snapshotdelete], @pe_id, snap_id)
end
# Reverts Image state to a previous snapshot
@@ -247,7 +249,7 @@
#
# @return [nil, OpenNebula::Error] nil in case of success or Error
def snapshot_revert(snap_id)
return call(IMAGE_METHODS[:snapshotrevert], @pe_id, snap_id)
call(IMAGE_METHODS[:snapshotrevert], @pe_id, snap_id)
end
# Flattens an image snapshot
@@ -256,7 +258,15 @@
#
# @return [nil, OpenNebula::Error] nil in case of success or Error
def snapshot_flatten(snap_id)
return call(IMAGE_METHODS[:snapshotflatten], @pe_id, snap_id)
call(IMAGE_METHODS[:snapshotflatten], @pe_id, snap_id)
end
# Restore the VM backup stored by the image
#
# @param dst_id [Integer] Datastore destination ID
# @param restore_opts [String] Template with additional restore options
def restore(dst_id, restore_opts)
@client.call(IMAGE_METHODS[:restore], @pe_id, dst_id, restore_opts)
end
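The `restore_opts` argument is a template string; per the commit description it can carry `NO_IP` and `NO_NIC`. A small helper to build it might look like the following (the attribute names come from the commit message, the helper itself is illustrative and not part of the OCA):

```ruby
# Builds the restore options template accepted by Image#restore.
# NO_IP / NO_NIC are the options described in the commit message.
def restore_opts(no_ip: false, no_nic: false)
  opts = []
  opts << 'NO_IP="YES"'  if no_ip
  opts << 'NO_NIC="YES"' if no_nic
  opts.join("\n")
end

restore_opts(no_ip: true) # => 'NO_IP="YES"'
```

With it, restoring backup image 42 into datastore 1 while dropping IP addresses would read `image.restore(1, restore_opts(no_ip: true))`.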
#######################################################################

View File

@@ -56,7 +56,8 @@ module OpenNebula
:scheddelete => "vm.scheddelete",
:schedupdate => "vm.schedupdate",
:attachsg => "vm.attachsg",
:detachsg => "vm.detachsg"
:detachsg => "vm.detachsg",
:backup => "vm.backup"
}
VM_STATE=%w{INIT PENDING HOLD ACTIVE STOPPED SUSPENDED DONE FAILED
@@ -132,6 +133,8 @@ module OpenNebula
HOTPLUG_RESIZE
HOTPLUG_SAVEAS_UNDEPLOYED
HOTPLUG_SAVEAS_STOPPED
BACKUP
BACKUP_POWEROFF
}
SHORT_VM_STATES={
@@ -216,7 +219,9 @@ module OpenNebula
"HOTPLUG_NIC_POWEROFF" => "hotp",
"HOTPLUG_RESIZE" => "hotp",
"HOTPLUG_SAVEAS_UNDEPLOYED" => "hotp",
"HOTPLUG_SAVEAS_STOPPED" => "hotp"
"HOTPLUG_SAVEAS_STOPPED" => "hotp",
"BACKUP" => "back",
"BACKUP_POWEROFF" => "back",
}
HISTORY_ACTION=%w{none migrate live-migrate shutdown shutdown-hard
@@ -228,6 +233,7 @@ module OpenNebula
snapshot-resize snapshot-delete snapshot-revert disk-saveas
disk-snapshot-revert recover retry monitor disk-snapshot-rename
alias-attach alias-detach poweroff-migrate poweroff-hard-migrate
backup
}
EXTERNAL_IP_ATTRS = [
@@ -779,6 +785,15 @@ module OpenNebula
sched_template)
end
# Generate a backup for the VM (backup config must be set)
#
# @param ds_id [Integer] Id of the datastore to save the backup
# @return [Integer, OpenNebula::Error] ID of the resulting BACKUP image
# in case of success, Error otherwise.
def backup(ds_id)
return @client.call(VM_METHODS[:backup], @pe_id, ds_id)
end
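`backup` is a thin wrapper over the new `one.vm.backup` XML-RPC method. The sketch below replaces the real OCA client with a stub that records the call instead of contacting oned (the returned image ID is made up):

```ruby
# Stub client: records XML-RPC calls instead of talking to oned.
class StubClient
  attr_reader :calls

  def initialize
    @calls = []
  end

  def call(method, *args)
    @calls << [method, *args]
    123 # pretend oned returned the ID of the new BACKUP image
  end
end

client   = StubClient.new
image_id = client.call('vm.backup', 7, 100) # VM 7, backup datastore 100

client.calls.first # => ["vm.backup", 7, 100]
```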
########################################################################
# Helpers to get VirtualMachine information
########################################################################

View File

@@ -286,160 +286,6 @@ module OpenNebula::VirtualMachineExt
raise
end
#-------------------------------------------------------------------
# Backs up a VM. TODO Add final description
# @param keep [Bool]
# @param logger[Logger]
# @param binfo[Hash] Oneshot
#-------------------------------------------------------------------
def backup(keep = false, logger = nil, binfo = nil)
# --------------------------------------------------------------
# Check backup consistency
# --------------------------------------------------------------
rc = info
raise rc.message if OpenNebula.is_error?(rc)
binfo.merge!(backup_info) do |_key, old_val, new_val|
new_val.nil? ? old_val : new_val
end
raise 'No backup information' if binfo.nil?
raise 'No frequency defined' unless valid?(binfo[:freq])
raise 'No marketplace defined' unless valid?(binfo[:market])
return if Time.now.to_i - binfo[:last].to_i < binfo[:freq].to_i
# --------------------------------------------------------------
# Save VM as new template
# --------------------------------------------------------------
logger.info 'Saving VM as template' if logger
tid = save_as_template(
binfo[:name], '', :poweroff => true, :logger => logger
)
tmp = OpenNebula::Template.new_with_id(tid, @client)
rc = tmp.info
raise rc.message if OpenNebula.is_error?(rc)
# --------------------------------------------------------------
# Import template into Marketplace & update VM info
# --------------------------------------------------------------
logger.info "Importing template #{tmp.id} to marketplace "\
"#{binfo[:market]}" if logger
tmp.extend(OpenNebula::TemplateExt)
rc, ids = tmp.mp_import(binfo[:market], true, binfo[:name],
:wait => true, :logger => logger)
raise rc.message if OpenNebula.is_error?(rc)
logger.info "Imported app ids: #{ids.join(',')}" if logger
rc = update(backup_attr(binfo, ids), true)
if OpenNebula.is_error?(rc)
raise 'Could not update the backup reference: ' \
" #{rc.message}. New backup ids are #{ids.join(',')}."
end
# --------------------------------------------------------------
# Cleanup
# --------------------------------------------------------------
backup_cleanup(keep, logger, binfo, tmp)
rescue Error, StandardError => e
backup_cleanup(keep, logger, binfo, tmp)
logger.fatal(e.inspect) if logger
raise
end
#-------------------------------------------------------------------
# Restores VM information from previous backup
#
# @param datastore [Integer] Datastore ID to import app backup
# @param logger [Logger] Logger instance to print debug info
#
# @return [Integer] VM ID
#-------------------------------------------------------------------
def restore(datastore, logger = nil)
rc = info
if OpenNebula.is_error?(rc)
raise "Error getting VM: #{rc.message}"
end
logger.info 'Reading backup information' if logger
backup_ids = backup_info[:apps]
# highest (=last) of the app ids is the template id
app_id = backup_ids.last
app = OpenNebula::MarketPlaceApp.new_with_id(app_id, @client)
rc = app.info
if OpenNebula.is_error?(rc)
raise "Can not find appliance #{app_id}: #{rc.message}."
end
if logger
logger.info "Restoring VM #{self['ID']} from " \
"saved appliance #{app_id}"
end
app.extend(OpenNebula::MarketPlaceAppExt)
exp = app.export(:dsid => Integer(datastore),
:name => "#{self['NAME']} - RESTORED")
if OpenNebula.is_error?(exp)
raise "Can not restore app: #{exp.message}."
end
# Check possible errors when exporting apps
exp[:image].each do |image|
next unless OpenNebula.is_error?(image)
raise "Error restoring image: #{image.message}."
end
template = exp[:vmtemplate].first
if OpenNebula.is_error?(template)
raise "Error restoring template: #{template.message}."
end
if logger
logger.info(
"Backup restored, VM template: #{exp[:vmtemplate]}, " \
"images: #{exp[:image]}"
)
logger.info(
"Instantiating the template #{exp[:vmtemplate]}"
)
end
tmpl = OpenNebula::Template.new_with_id(template, @client)
rc = tmpl.instantiate
if OpenNebula.is_error?(rc)
raise "Can not instantiate the template: #{rc.message}."
end
rc
rescue Error, StandardError => e
logger.fatal(e.inspect) if logger
raise
end
####################################################################
# Private extended interface
####################################################################
@@ -457,58 +303,6 @@ module OpenNebula::VirtualMachineExt
!att.nil?
end
#-------------------------------------------------------------------
# Get backup information from the VM
#-------------------------------------------------------------------
def backup_info
base = '//USER_TEMPLATE/BACKUP'
binfo = {}
app_ids = self["#{base}/MARKETPLACE_APP_IDS"] || ''
binfo[:name] = "#{self['NAME']} - BACKUP " \
" - #{Time.now.strftime('%Y%m%d_%k%M')}"
binfo[:freq] = self["#{base}/FREQUENCY_SECONDS"]
binfo[:last] = self["#{base}/LAST_BACKUP_TIME"]
binfo[:market] = Integer(self["#{base}/MARKETPLACE_ID"])
binfo[:apps] = app_ids.split(',')
binfo
rescue StandardError
binfo
end
#-------------------------------------------------------------------
# Generate backup information string
#-------------------------------------------------------------------
def backup_attr(binfo, ids)
'BACKUP=[' \
" MARKETPLACE_APP_IDS = \"#{ids.join(',')}\"," \
" FREQUENCY_SECONDS = \"#{binfo[:freq]}\"," \
" LAST_BACKUP_TIME = \"#{Time.now.to_i}\"," \
" MARKETPLACE_ID = \"#{binfo[:market]}\" ]"
end
#-------------------------------------------------------------------
# Cleanup backup leftovers in case of failure
#-------------------------------------------------------------------
def backup_cleanup(keep, logger, binfo, template)
if template
logger.info "Deleting template #{template.id}" if logger
template.delete(true)
end
binfo[:apps].each do |id|
logger.info "Deleting appliance #{id}" if logger
papp = OpenNebula::MarketPlaceApp.new_with_id(id, @client)
papp.delete
end if !keep && binfo[:apps]
end
end
end

View File

@@ -1,146 +0,0 @@
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
require 'open3'
# Command helper class
class Command
def initialize(remote_user = nil, remote_host = nil)
@local = true
@host = nil
@user = nil
# Set up host if provided
if !remote_host.nil? && !remote_host.empty?
@host = remote_host
@local = false
end
# Return if no user provided
return if remote_user.nil? || remote_user.empty?
# Fail if user provided but not host
raise 'Remote user provided, but host is empty.' unless @host
@user = remote_user
end
# Executes a command either locally or remotely depending on the
# configuration.
#
# The command and each arg will be wrapped by single quotes to avoid
# injection attacks.
#
# @return [String, String, Process::Status] the standard output,
# standard error and
# status returned by Open3.capture3
def run(cmd, *args)
if cmd.split.size > 1
raise 'Cannot run cmd for security reasons. ' \
'Check run_insecure method'
end
cmd = "'#{cmd}'"
args = quote_args(args)
run_insecure(cmd, args)
end
# Executes a command either locally or remotely depending on the
# configuration, redirecting stdout and stderr to the paths
# contained in the corresponding variables.
#
# The command and each arg will be wrapped by single quotes to avoid
# injection attacks.
#
# @return [String, String, Process::Status] the standard output,
# standard error and
# status returned by Open3.capture3
def run_redirect_output(cmd, stdout, stderr, *args)
cmd = "'#{cmd}'"
args = quote_args(args)
args << "> #{stdout}" if !stdout.nil? && !stdout.empty?
args << "2> #{stderr}" if !stderr.nil? && !stderr.empty?
run_insecure(cmd, args)
end
# Executes a command either locally or remotely depending on the
# configuration, redirecting stdout and stderr to the paths
# contained in the corresponding variables.
#
# This method won't validate the input, hence the command execution
# is prone to injection attacks. Ensure both cmd and arguments have
# been validated before using this method and when possible use secure
# methods instead.
#
# @return [String, String, Process::Status] the standard output,
# standard error and
# status returned by Open3.capture3
def run_insecure(cmd, *args)
if @local
run_local(cmd, args)
else
run_ssh(@user, @host, cmd, args)
end
end
private
# Executes a command locally
# @return [String, String, Process::Status] the standard output,
# standard error and
# status returned by
# Open3.capture3
def run_local(cmd, *args)
cmd_str = "#{cmd} #{args.join(' ')}"
Open3.capture3(cmd_str)
end
# Executes a command remotely via SSH
# @return [String, String, Process::Status] the standard output,
# standard error and
# status returned by
# Open3.capture3
def run_ssh(user, host, cmd, *args)
ssh_usr = ''
ssh_usr = "-l \"#{user}\"" if !user.nil? && !user.empty?
# TODO, should we make this configurable?
ssh_opts = '-o ForwardAgent=yes -o ControlMaster=no ' \
'-o ControlPath=none -o StrictHostKeyChecking=no'
cmd_str = "ssh #{ssh_opts} #{ssh_usr} '#{host}' "
cmd_str << "bash -s <<EOF\n"
cmd_str << "export LANG=C\n"
cmd_str << "export LC_ALL=C\n"
cmd_str << "#{cmd} #{args.join(' ')}\n"
cmd_str << 'EOF'
Open3.capture3(cmd_str)
end
def quote_args(args)
args.map do |arg|
"'#{arg}'"
end
end
end

View File

@@ -1,133 +0,0 @@
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
require_relative '../command'
require_relative '../commons'
# Base class with the exporters interface
class BaseExporter
# --------------------------------------------------------------------------
# Default configuration options
# --------------------------------------------------------------------------
DEFAULT_CONF = {
# workaround: using "//" as libvirt needs the path to match exactly
:ds_location => '/var/lib/one//datastores',
:remote_host => nil,
:remote_user => nil,
:destination_path => nil,
:destination_host => nil,
:destination_user => nil
}
VALID_STATES = %w[ACTIVE
POWEROFF
UNDEPLOYED]
VALID_LCM_STATES = %w[LCM_INIT
RUNNING
BACKUP
BACKUP_POWEROFF
BACKUP_UNDEPLOYED]
def initialize(vm, config)
@vm = vm
@config = DEFAULT_CONF.merge(config)
# Will raise an error if invalid state/lcm_state
check_state(@vm)
# Check if the action needs to be live
@live = running?
# Get System DS ID from last history record
begin
last_hist_rec = Nokogiri.XML(@vm.get_history_record(-1))
@sys_ds_id = Integer(last_hist_rec.xpath('//DS_ID').text)
rescue StandardError
raise 'Cannot retrieve system DS ID. The last history record' \
' might be corrupted or it might not exist.'
end
# Get Command
@cmd = Command.new(@config[:remote_user], @config[:remote_host])
# Build VM folder path
@vm_path = "#{@config[:ds_location]}/#{@sys_ds_id}/#{@vm.id}"
@tmp_path = create_tmp_folder(@vm_path)
end
def export
# Export disks
@vm.retrieve_xmlelements('//DISK/DISK_ID').each do |disk_id|
disk_id = Integer(disk_id.text)
if @live
export_disk_live(disk_id)
else
export_disk_cold(disk_id)
end
end
# Dump VM xml to include it in the final bundle
@cmd.run_redirect_output('echo', "#{@tmp_path}/vm.xml", nil, @vm.to_xml)
create_bundle
ensure
cleanup
end
def cleanup
@cmd.run('rm', '-rf', @tmp_path)
end
private
include Commons
def gen_bundle_name
return "'#{@config[:destination_path]}'" if @config[:destination_path]
timestamp = Time.now.strftime('%s')
"/tmp/onevmdump-#{@vm.id}-#{timestamp}.tar.gz"
end
def create_bundle
bundle_name = gen_bundle_name
cmd = "tar -C #{@tmp_path} -czS"
if @config[:destination_host].nil? || @config[:destination_host].empty?
dst = " -f #{bundle_name} ."
else
destination_user = @config[:destination_user]
if !destination_user.nil? && !destination_user.empty?
usr = "-l '#{destination_user}'"
end
dst = " - . | ssh '#{@config[:destination_host]}' #{usr}" \
" \"cat - > #{bundle_name}\""
end
cmd << dst
rc = @cmd.run_insecure(cmd)
raise "Error creating bundle file: #{rc[1]}" unless rc[2].success?
bundle_name
end
end

View File

@ -1,137 +0,0 @@
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
require_relative 'base'
# OneVMDump module
#
# Module for exporting VM content into a bundle file
module OneVMDump
# FileExporter class
class FileExporter < BaseExporter
private
#######################################################################
# Export methods used by base
#######################################################################
def export_disk_live(disk_id)
type = file_type(disk_path(disk_id))
case type
when :qcow2
export_qcow2_live(disk_id)
when :raw, :cdrom
export_raw_live(disk_id)
end
end
def export_disk_cold(disk_id)
type = file_type(disk_path(disk_id))
case type
when :qcow2
export_qcow2_cold(disk_id)
when :raw, :cdrom
export_raw_cold(disk_id)
end
end
#######################################################################
# RAW export methods
#######################################################################
def export_raw_cold(disk_id)
path = disk_path(disk_id)
dst_path = "#{@tmp_path}/backup.#{File.basename(path)}"
@cmd.run('cp', path, dst_path)
end
alias export_raw_live export_raw_cold
#######################################################################
# QCOW2 export methods
#######################################################################
def export_qcow2_live(disk_id)
path = disk_path(disk_id)
# blockcopy:
# Copy a disk backing image chain to a destination.
dst_path = "#{@tmp_path}/backup.#{File.basename(path)}"
@cmd.run('touch', dst_path) # Create file to set ownership
rc = @cmd.run('virsh', '-c', 'qemu:///system', 'blockcopy',
"one-#{@vm.id}", '--path', path, '--dest',
dst_path, '--wait', '--finish')
raise "Error exporting '#{path}': #{rc[1]}" unless rc[2].success?
end
def export_qcow2_cold(disk_id)
path = disk_path(disk_id)
dst_path = "#{@tmp_path}/backup.#{File.basename(path)}"
rc = @cmd.run('qemu-img', 'convert', '-q', '-O', 'qcow2',
path, dst_path)
raise "Error exporting '#{path}': #{rc[1]}" unless rc[2].success?
end
#######################################################################
# Helpers
#######################################################################
# Returns the file type of the given path (it will follow symlinks)
#
# Supported types:
# - :qcow2
# - :cdrom
# - :raw
#
def file_type(path)
real_path = path
real_path = "#{@vm_path}/#{readlink(path)}" if symlink?(path)
raw_type = @cmd.run('file', real_path)[0].strip
case raw_type
when /^.*QEMU QCOW2 Image.*$/
:qcow2
when /^.*CD-ROM.*$/
:cdrom
else
:raw
end
end
def disk_path(disk_id)
"#{@vm_path}/disk.#{disk_id}"
end
def symlink?(path)
@cmd.run('test', '-L', path)[2].success?
end
def readlink(path)
@cmd.run('readlink', path)[0].strip
end
end
end

View File

@ -1,112 +0,0 @@
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
require_relative 'base'
# OneVMDump module
#
# Module for exporting VM content into a bundle file
module OneVMDump
# LVExporter class
# It exports the content from LVs into a bundle
class LVExporter < BaseExporter
private
def export_disk_live(disk_id)
# Freeze filesystem
rc = @cmd.run('virsh', '-c', 'qemu:///system', 'domfsfreeze',
"one-#{@vm.id}")
unless rc[2].success?
raise "Error freezing domain: #{rc[1]}"
end
# Take LV snapshot
# TODO: Create a snapshot sized proportionally to the disk?
begin
lv = get_device(disk_id)
snap_name = "#{File.basename(lv)}-backup-snap"
rc = @cmd.run('sudo', 'lvcreate', '-s', '-L', '1G', '-n',
snap_name, lv)
ensure
@cmd.run('virsh', '-c', 'qemu:///system', 'domfsthaw',
"one-#{@vm.id}")
end
unless rc[2].success?
raise "Error creating snapshot for #{lv}: #{rc[1]}"
end
# Dump content
snap_lv = "#{File.dirname(lv)}/#{snap_name}"
dst_path = dst_path(disk_id)
rc = @cmd.run('dd', "if=#{snap_lv}", "of=#{dst_path}")
unless rc[2].success?
raise "Error writing '#{snap_lv}' content into #{dst_path}:" \
" #{rc[1]}"
end
ensure
@cmd.run('sudo', 'lvremove', '-f', snap_lv) if snap_lv
end
def export_disk_cold(disk_id)
device = get_device(disk_id)
dst_path = dst_path(disk_id)
active = check_active(device)
# Activate LV
if !active
rc = @cmd.run('lvchange', '-ay', device)
msg = "Error activating '#{device}': #{rc[1]}"
raise msg unless rc[2].success?
end
# Dump content
rc = @cmd.run('dd', "if=#{device}", "of=#{dst_path}")
unless rc[2].success?
raise "Error writing '#{device}' content into" \
" #{dst_path}: #{rc[1]}"
end
ensure
# Ensure LV is in the same state as before
@cmd.run('lvchange', '-an', device) unless active
end
########################################################################
# Helpers
########################################################################
def get_device(disk_id)
"/dev/vg-one-#{@sys_ds_id}/lv-one-#{@vm.id}-#{disk_id}"
end
def check_active(device)
# The LV device node only exists while the LV is active
@cmd.run('test', '-e', device)[2].success?
end
def dst_path(disk_id)
"#{@tmp_path}/backup.disk.#{disk_id}"
end
end
end
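
The `export_disk_live` flow above (freeze, snapshot, thaw, dump, remove snapshot) leans on Ruby's `begin`/`ensure` to guarantee the FS is thawed no matter how snapshotting went, and the snapshot removed no matter how the dump went. A minimal sketch of that cleanup pattern, with a `log` array as a hypothetical stand-in for running the virsh/lvm commands:

```ruby
# Sketch of the nested begin/ensure cleanup used by export_disk_live.
# `log` records which commands would have run; `fail_at` simulates errors.
def export_live(log, fail_at: nil)
  log << 'domfsfreeze'
  begin
    raise 'lvcreate failed' if fail_at == :snapshot
    log << 'lvcreate'
  ensure
    log << 'domfsthaw' # FS is always thawed, even if lvcreate raised
  end
  snap = 'snap-lv'
  raise 'dd failed' if fail_at == :dump
  log << 'dd'
ensure
  log << 'lvremove' if snap # snapshot always removed once created
end

log = []
begin
  export_live(log, fail_at: :dump)
rescue RuntimeError
  # the dump failed, but the domain was thawed and the snapshot removed
end
```

Note that when the snapshot itself fails, the outer `ensure` sees `snap` as nil and skips the `lvremove`, mirroring the `if snap_lv` guard in the code above.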

View File

@ -1,90 +0,0 @@
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
require_relative 'base'
# OneVMDump module
#
# Module for exporting VM content into a bundle file
module OneVMDump
# RBDExporter class
class RBDExporter < BaseExporter
private
def export_disk_live(disk_id)
# Freeze filesystem
rc = @cmd.run('virsh', '-c', 'qemu:///system', 'domfsfreeze',
"one-#{@vm.id}")
unless rc[2].success?
raise "Error freezing domain: #{rc[1]}"
end
export_disk_cold(disk_id)
ensure
@cmd.run('virsh', '-c', 'qemu:///system', 'domfsthaw',
"one-#{@vm.id}")
end
def export_disk_cold(disk_id)
path = disk_path(disk_id)
# Export rbd
# Assume rbd version 2
cmd = rbd_cmd(disk_id)
cmd.append('export', path, "#{@tmp_path}/backup.disk.#{disk_id}")
rc = @cmd.run(cmd[0], *cmd[1..-1])
raise "Error exporting '#{path}': #{rc[1]}" unless rc[2].success?
end
########################################################################
# Helpers
########################################################################
def rbd_cmd(disk_id)
cmd = ['rbd']
disk_xpath = "//DISK[DISK_ID = #{disk_id}]"
# rubocop:disable Layout/LineLength
ceph_user = @vm.retrieve_xmlelements("#{disk_xpath}/CEPH_USER")[0].text rescue nil
ceph_key = @vm.retrieve_xmlelements("#{disk_xpath}/CEPH_KEY")[0].text rescue nil
ceph_conf = @vm.retrieve_xmlelements("#{disk_xpath}/CEPH_CONF")[0].text rescue nil
cmd.append('--id', ceph_user) if !ceph_user.nil? && !ceph_user.empty?
cmd.append('--keyfile', ceph_key) if !ceph_key.nil? && !ceph_key.empty?
cmd.append('--conf', ceph_conf) if !ceph_conf.nil? && !ceph_conf.empty?
# rubocop:enable Layout/LineLength
cmd
end
def disk_path(disk_id)
disk_xpath = "//DISK[DISK_ID = #{disk_id}]"
source = @vm.retrieve_xmlelements("#{disk_xpath}/SOURCE")[0].text
if source.nil? || source.empty?
raise "Error retrieving source from disk #{disk_id}"
end
"#{source}-#{@vm.id}-#{disk_id}"
end
end
end
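
`rbd_cmd` above appends credential flags only when the corresponding `CEPH_*` disk attributes are present and non-empty. A sketch of that conditional flag building, with a plain `attrs` hash standing in for the `//DISK[DISK_ID = n]/CEPH_*` XML lookups:

```ruby
# Sketch of rbd_cmd's optional-flag handling. `attrs` is a hypothetical
# hash standing in for the CEPH_USER/CEPH_KEY/CEPH_CONF XML lookups.
FLAG_MAP = {
  'CEPH_USER' => '--id',
  'CEPH_KEY'  => '--keyfile',
  'CEPH_CONF' => '--conf'
}.freeze

def rbd_cmd(attrs)
  cmd = ['rbd']
  FLAG_MAP.each do |attr, flag|
    val = attrs[attr]
    # skip the flag entirely when the attribute is missing or empty
    cmd.append(flag, val) unless val.nil? || val.empty?
  end
  cmd
end
```

Keeping the command as an array (rather than a shell string) is what lets the caller pass it safely to `@cmd.run(cmd[0], *cmd[1..-1])` without quoting concerns.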

View File

@ -1,284 +0,0 @@
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
require 'uri'
require 'ffi-rzmq'
require 'opennebula'
require_relative '../command'
require_relative '../commons'
# Base class with the restorer interface
class BaseRestorer
# --------------------------------------------------------------------------
# Constants
# --------------------------------------------------------------------------
VALID_STATES = %w[DONE
POWEROFF
UNDEPLOYED]
VALID_LCM_STATES = %w[LCM_INIT]
# --------------------------------------------------------------------------
# Default configuration options
# --------------------------------------------------------------------------
DEFAULT_CONF = {
:tmp_location => '/var/tmp/one',
:restore_nics => false
}
def initialize(bundle_path, config)
@config = DEFAULT_CONF.merge(config)
@bundle_path = bundle_path
@bundle_name = File.basename(bundle_path)
@cmd = Command.new(nil, nil)
end
def restore
client = OpenNebula::Client.new(nil, @config[:endpoint])
tmp_path = decompress_bundle
vm_xml = OpenNebula::XMLElement.build_xml(
File.read("#{tmp_path}/vm.xml"),
'VM'
)
vm = OpenNebula::VirtualMachine.new(vm_xml, client)
@config[:tmpl_name] = "backup-#{vm.id}" unless @config[:tmpl_name]
new_disks = []
# Register each bundle disk as a new image
Dir.glob("#{tmp_path}/backup.disk.*") do |disk|
disk_id = Integer(disk.split('.')[-1])
disk_xpath = "/VM//DISK[DISK_ID = #{disk_id}]"
ds_id = Integer(vm_xml.xpath("#{disk_xpath}/DATASTORE_ID").text)
type = vm_xml.xpath("#{disk_xpath}/IMAGE_TYPE").text
img_tmpl = ''
img_tmpl << "NAME=\"#{@config[:tmpl_name]}-disk-#{disk_id}\"\n"
img_tmpl << "TYPE=\"#{type}\"\n"
img_tmpl << "PATH=\"#{disk}\"\n"
img = OpenNebula::Image.new(OpenNebula::Image.build_xml, client)
rc = img.allocate(img_tmpl, ds_id)
img = nil if OpenNebula.is_error?(rc)
new_disks << {
:id => disk_id,
:img => img
}
end
wait_disks_ready(new_disks, client)
# Important to use XML from the backup
tmpl_info = generate_template(vm_xml, new_disks)
tmpl = OpenNebula::Template.new(OpenNebula::Template.build_xml, client)
rc = tmpl.allocate(tmpl_info)
if OpenNebula.is_error?(rc)
# roll back image creation if one fails
new_disks.each do |new_disk|
new_disk[:img].delete unless new_disk[:img].nil?
end
raise "Error creating VM Template from backup: #{rc.message}"
end
tmpl.id
ensure
# cleanup
@cmd.run('rm', '-rf', tmp_path) if !tmp_path.nil? && !tmp_path.empty?
end
private
include Commons
def decompress_bundle
tmp_path = create_tmp_folder(@config[:tmp_location])
rc = @cmd.run('tar', '-C', tmp_path, '-xf', @bundle_path)
raise "Error decompressing bundle file: #{rc[1]}" unless rc[2].success?
tmp_path
end
def generate_template(vm_xml, new_disks)
template = ''
begin
template << "NAME = \"#{@config[:tmpl_name]}\"\n"
template << "CPU = \"#{vm_xml.xpath('TEMPLATE/CPU').text}\"\n"
template << "VCPU = \"#{vm_xml.xpath('TEMPLATE/VCPU').text}\"\n"
template << "MEMORY = \"#{vm_xml.xpath('TEMPLATE/MEMORY').text}\"\n"
template << "DESCRIPTION = \"VM restored from backup.\"\n"
# Add disks
disk_black_list = Set.new(%w[ALLOW_ORPHANS CLONE CLONE_TARGET
CLUSTER_ID DATASTORE DATASTORE_ID
DEV_PREFIX DISK_ID
DISK_SNAPSHOT_TOTAL_SIZE DISK_TYPE
DRIVER IMAGE IMAGE_ID IMAGE_STATE
IMAGE_UID IMAGE_UNAME LN_TARGET
OPENNEBULA_MANAGED ORIGINAL_SIZE
PERSISTENT READONLY SAVE SIZE SOURCE
TARGET TM_MAD TYPE])
new_disks.each do |disk|
disk_xpath = "/VM//DISK[DISK_ID = #{disk[:id]}]/*"
disk_tmpl = "DISK = [\n"
vm_xml.xpath(disk_xpath).each do |item|
# Add every attribute but image related ones
next if disk_black_list.include?(item.name)
disk_tmpl << "#{item.name} = \"#{item.text}\",\n"
end
disk_tmpl << "IMAGE_ID = #{disk[:img].id}]\n"
template << disk_tmpl
end
# Add NICs
nic_black_list = Set.new(%w[AR_ID BRIDGE BRIDGE_TYPE CLUSTER_ID IP
IP6 IP6_ULA IP6_GLOBAL NAME NETWORK_ID
NIC_ID TARGET VLAN_ID VN_MAD])
if @config[:restore_nics]
%w[NIC NIC_ALIAS].each do |type|
vm_xml.xpath("/VM//#{type}").each do |nic|
nic_tmpl = "#{type} = [\n"
nic.xpath('./*').each do |item|
next if nic_black_list.include?(item.name)
nic_tmpl << "#{item.name} = \"#{item.text}\",\n"
end
# remove ',\n' for last elem
template << nic_tmpl[0..-3] << "]\n"
end
end
end
###########################################################
# TODO: evaluate what else should be copied from the original VM
###########################################################
rescue StandardError => e
msg = 'Error parsing VM information. '
msg << "#{e.message}\n#{e.backtrace}" if @config[:debug]
raise msg
end
template
end
def wait_disks_ready(disks, client)
context = ZMQ::Context.new(1)
subscriber = context.socket(ZMQ::SUB)
poller = ZMQ::Poller.new
poller.register(subscriber, ZMQ::POLLIN)
uri = URI(@config[:endpoint])
error = false
subscriber.connect("tcp://#{uri.host}:2101")
# Subscribe for every IMAGE
imgs_set = Set.new
disks.each do |disk|
if disk[:img].nil?
error = true
next
end
# subscribe to wait until every image is ready
img_id = disk[:img].id
%w[READY ERROR].each do |i|
key = "EVENT IMAGE #{img_id}/#{i}/"
subscriber.setsockopt(ZMQ::SUBSCRIBE, key)
end
imgs_set.add(Integer(img_id))
end
# Wait until every image is ready (or limit retries)
retries = 60
key = ''
content = ''
while !imgs_set.empty? && retries > 0
if retries % 10 == 0
# Check manually in case the event is missed
imgs_set.clone.each do |id|
img = OpenNebula::Image.new_with_id(id, client)
img.info
next unless %w[READY ERROR].include?(img.state_str)
error = true if img.state_str.upcase == 'ERROR'
imgs_set.delete(id)
end
end
break if imgs_set.empty?
# 60 retries * 60 sec poll timeout => 1h worst case
if !poller.poll(60 * 1000).zero?
subscriber.recv_string(key)
subscriber.recv_string(content)
match = key.match(%r{EVENT IMAGE (?<img_id>\d+)/(?<state>\S+)/})
img_id = Integer(match[:img_id])
%w[READY ERROR].each do |i|
key = "EVENT IMAGE #{img_id}/#{i}/"
subscriber.setsockopt(ZMQ::UNSUBSCRIBE, key)
end
error = true if match[:state] == 'ERROR'
imgs_set.delete(img_id)
else
retries -= 1
end
end
raise 'Error allocating new images.' if error
ensure
# Close socket
subscriber.close
# Rollback - remove every image if error
if error
disks.each do |disk|
disk[:img].delete unless disk[:img].nil?
end
end
end
end
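
`wait_disks_ready` subscribes to per-image keys of the form `EVENT IMAGE <id>/<state>/` and later parses incoming keys with a regex to decide which image to unsubscribe. A pure-Ruby sketch of that key round-trip (no ZMQ involved; the key format is assumed to match the oned event stream naming used above):

```ruby
# Build the subscription key wait_disks_ready registers for each image/state
def event_key(img_id, state)
  "EVENT IMAGE #{img_id}/#{state}/"
end

# Parse a received key back into [image_id, state], or nil on no match
def parse_event_key(key)
  m = key.match(%r{EVENT IMAGE (?<img_id>\d+)/(?<state>\S+)/})
  m && [Integer(m[:img_id]), m[:state]]
end
```

Subscribing to both `READY` and `ERROR` keys per image, then unsubscribing as each event arrives, is what lets the loop shrink `imgs_set` until every image has settled.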

View File

@ -1,261 +0,0 @@
#!/usr/bin/ruby
# -------------------------------------------------------------------------- #
# Copyright 2002-2022, OpenNebula Project, OpenNebula Systems #
# #
# Licensed under the Apache License, Version 2.0 (the "License"); you may #
# not use this file except in compliance with the License. You may obtain #
# a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#--------------------------------------------------------------------------- #
############################################################################
# Set up Frontend libraries location
############################################################################
ONE_LOCATION = ENV['ONE_LOCATION']
if !ONE_LOCATION
RUBY_LIB_LOCATION = '/usr/lib/one/ruby'
GEMS_LOCATION = '/usr/share/one/gems'
VMDIR = '/var/lib/one'
CONFIG_FILE = '/var/lib/one/config'
LOG_FILE = '/var/log/one/host_error.log'
else
RUBY_LIB_LOCATION = ONE_LOCATION + '/lib/ruby'
GEMS_LOCATION = ONE_LOCATION + '/share/gems'
VMDIR = ONE_LOCATION + '/var'
CONFIG_FILE = ONE_LOCATION + '/var/config'
LOG_FILE = ONE_LOCATION + '/var/host_error.log'
end
# %%RUBYGEMS_SETUP_BEGIN%%
if File.directory?(GEMS_LOCATION)
real_gems_path = File.realpath(GEMS_LOCATION)
if !defined?(Gem) || Gem.path != [real_gems_path]
$LOAD_PATH.reject! {|l| l =~ /vendor_ruby/ }
# Suppress warnings from Rubygems
# https://github.com/OpenNebula/one/issues/5379
begin
verb = $VERBOSE
$VERBOSE = nil
require 'rubygems'
Gem.use_paths(real_gems_path)
ensure
$VERBOSE = verb
end
end
end
# %%RUBYGEMS_SETUP_END%%
$LOAD_PATH << RUBY_LIB_LOCATION
$LOAD_PATH << RUBY_LIB_LOCATION + '/onevmdump'
############################################################################
# Required libraries
############################################################################
require 'nokogiri'
require 'optparse'
require 'opennebula'
require 'onevmdump'
############################################################################
# Constants
############################################################################
HELP_MSG = <<~EOF
COMMANDS
\texport\t\tExports a VM into a bundle file.
\trestore\t\tRestores a VM from a bundle file.
\tCheck 'onevmdump COMMAND --help' for more information on a specific command.
EOF
############################################################################
# Parameters initialization
############################################################################
options = {
:endpoint => 'http://localhost:2633/RPC2'
}
############################################################################
# General options parser
############################################################################
global_parser = OptionParser.new do |opts|
opts.banner = 'Usage: onevmdump [options] [COMMAND [options]]'
desc = 'Run it on debug mode'
opts.on('-D', '--debug', desc) do |v|
options[:debug] = v
end
desc = 'OpenNebula endpoint (default http://localhost:2633/RPC2)'
opts.on('--endpoint=ENDPOINT', desc) do |v|
options[:endpoint] = v
end
opts.separator ''
opts.separator HELP_MSG
end
############################################################################
# Command options parser
############################################################################
commands_parsers = {
'export' => OptionParser.new do |opts|
opts.banner = 'Usage: onevmdump export [options] <vm_id>'
options[:lock] = true
desc = 'Path where the bundle will be created (default /tmp)'
opts.on('-dPATH', '--destination-path=PATH', desc) do |v|
options[:destination_path] = v
end
desc = 'Destination host for the bundle'
opts.on('--destination-host=HOST', desc) do |v|
options[:destination_host] = v
end
desc = 'Avoid locking the VM while doing the backup (other security' \
' measures should be taken to avoid unexpected status changes)'
opts.on('-L', '--no-lock', desc) do |v|
options[:lock] = v # as parameter is *no*-lock if set v == false
end
desc = 'Remote user for accessing destination host via SSH'
opts.on('--destination-user=USER', desc) do |v|
options[:destination_user] = v
end
desc = 'Remote host, used when the VM storage is not available from ' \
'the current node'
opts.on('-hHOST', '--remote-host=HOST', desc) do |v|
options[:remote_host] = v
end
desc = 'Remote user for accessing remote host via SSH'
opts.on('-lUSER', '--remote-user=USER', desc) do |v|
options[:remote_user] = v
end
desc = 'Instead of retrieving the VM XML by querying the endpoint, it ' \
'will be read from STDIN. The VM ID will be automatically ' \
'retrieved from the XML'
opts.on(nil, '--stdin', desc) do |v|
options[:stdin] = v
end
end,
'restore' => OptionParser.new do |opts|
opts.banner = 'Usage: onevmdump restore [options] <backup_file>'
desc = 'Automatically instantiate the resulting VM Template'
opts.on('--instantiate', desc) do |v|
options[:instantiate] = v
end
desc = 'Name for the resulting VM Template'
opts.on('-nNAME', '--name=NAME', desc) do |v|
options[:tmpl_name] = v
end
desc = 'Force restore of original VM NICs'
opts.on('--restore-nics', desc) do |v|
options[:restore_nics] = v
end
end
}
############################################################################
# Options parsing
############################################################################
begin
global_parser.order!
command = ARGV.shift
raise 'A valid command must be provided' if command.nil? || command.empty?
raise "Invalid command: #{command}" if commands_parsers[command].nil?
commands_parsers[command].parse!
rescue StandardError => e
STDERR.puts "ERROR parsing commands: #{e.message}"
exit(-1)
end
############################################################################
# Main Program
#
# TODO
# - Multithreading for multiple disks (async/await?)
# - Add incremental
############################################################################
begin
client = OpenNebula::Client.new(nil, options[:endpoint])
case command
when 'export'
if options[:stdin]
vm_xml = OpenNebula::XMLElement.build_xml(STDIN.read, 'VM')
vm = OpenNebula::VirtualMachine.new(vm_xml, client)
vm.lock(4) if options[:lock]
else
begin
vm_id = Integer(ARGV[0])
rescue ArgumentError, TypeError
raise 'A VM ID must be provided.'
end
vm = OpenNebula::VirtualMachine.new_with_id(vm_id, client)
vm.lock(4) if options[:lock]
rc = vm.info
if OpenNebula.is_error?(rc)
raise "Error getting VM info: #{rc.message}"
end
end
# Export VM. The VM folder will be used as temporary storage
exporter = OneVMDump.get_exporter(vm, options)
bundle_location = exporter.export
puts bundle_location
when 'restore'
raise 'Bundle path must be provided' if ARGV[0].nil? || ARGV[0].empty?
tmpl = OneVMDump.get_restorer(ARGV[0], options).restore
if options[:instantiate]
rc = OpenNebula::Template.new_with_id(tmpl, client).instantiate
if OpenNebula.is_error?(rc)
raise "Error instantiating VM template: #{rc.message}"
end
puts "VM Restored: #{rc}"
else
puts "VM Template restored: #{tmpl}"
end
end
rescue StandardError => e
STDERR.puts e.message
STDERR.puts e.backtrace if options[:debug]
exit(-1)
ensure
vm.unlock if options[:lock] && !vm.nil?
end
exit(0)
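
The CLI above parses global options with `order!` (which stops consuming at the first non-option word), shifts that word off as the command, and then hands the remaining arguments to the command-specific parser. A reduced sketch of this two-level dispatch, with option names simplified from the real ones:

```ruby
require 'optparse'

# Two-level parsing as in onevmdump: global options, command word,
# then command-specific options.
def parse_cli(argv)
  options = {}
  global = OptionParser.new do |o|
    o.on('-D', '--debug') { options[:debug] = true }
  end
  commands = {
    'export' => OptionParser.new do |o|
      o.on('-d PATH', '--destination-path=PATH') do |v|
        options[:destination_path] = v
      end
    end,
    'restore' => OptionParser.new do |o|
      o.on('-n NAME', '--name=NAME') { |v| options[:tmpl_name] = v }
    end
  }
  global.order!(argv) # stops at the first non-option word
  command = argv.shift
  raise "Invalid command: #{command}" if commands[command].nil?
  commands[command].parse!(argv)
  [command, options, argv]
end
```

Using `order!` for the global pass (instead of `parse!`) is what keeps options that come after the command word, such as `-d`, untouched for the second parser.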

View File

@ -63,6 +63,7 @@ const EString<ImageManagerMessages> image_msg_t::_type_str({
{"SNAP_DELETE", ImageManagerMessages::SNAP_DELETE},
{"SNAP_REVERT", ImageManagerMessages::SNAP_REVERT},
{"SNAP_FLATTEN", ImageManagerMessages::SNAP_FLATTEN},
{"RESTORE", ImageManagerMessages::RESTORE},
{"LOG", ImageManagerMessages::LOG},
});
@ -142,6 +143,7 @@ const EString<VMManagerMessages> vm_msg_t::_type_str({
{"DRIVER_CANCEL", VMManagerMessages::DRIVER_CANCEL},
{"LOG", VMManagerMessages::LOG},
{"RESIZE", VMManagerMessages::RESIZE},
{"BACKUP", VMManagerMessages::BACKUP},
});
template<>

View File

@ -345,6 +345,7 @@ void RequestManager::register_xml_methods()
xmlrpc_c::methodPtr vm_sched_update(new RequestManagerSchedUpdate());
xmlrpc_c::methodPtr vm_attachsg(new VirtualMachineAttachSG());
xmlrpc_c::methodPtr vm_detachsg(new VirtualMachineDetachSG());
xmlrpc_c::methodPtr vm_backup(new VirtualMachineBackup());
xmlrpc_c::methodPtr vm_pool_acct(new VirtualMachinePoolAccounting());
xmlrpc_c::methodPtr vm_pool_monitoring(new VirtualMachinePoolMonitoring());
@ -477,6 +478,7 @@ void RequestManager::register_xml_methods()
xmlrpc_c::methodPtr image_snap_delete(new ImageSnapshotDelete());
xmlrpc_c::methodPtr image_snap_revert(new ImageSnapshotRevert());
xmlrpc_c::methodPtr image_snap_flatten(new ImageSnapshotFlatten());
xmlrpc_c::methodPtr image_restore(new ImageRestore());
// Datastore Methods
xmlrpc_c::methodPtr datastore_enable(new DatastoreEnable());
@ -582,6 +584,7 @@ void RequestManager::register_xml_methods()
RequestManagerRegistry.addMethod("one.vm.schedupdate", vm_sched_update);
RequestManagerRegistry.addMethod("one.vm.attachsg", vm_attachsg);
RequestManagerRegistry.addMethod("one.vm.detachsg", vm_detachsg);
RequestManagerRegistry.addMethod("one.vm.backup", vm_backup);
RequestManagerRegistry.addMethod("one.vmpool.info", vm_pool_info);
RequestManagerRegistry.addMethod("one.vmpool.infoextended", vm_pool_info_extended);
@ -796,6 +799,7 @@ void RequestManager::register_xml_methods()
RequestManagerRegistry.addMethod("one.image.snapshotdelete", image_snap_delete);
RequestManagerRegistry.addMethod("one.image.snapshotrevert", image_snap_revert);
RequestManagerRegistry.addMethod("one.image.snapshotflatten", image_snap_flatten);
RequestManagerRegistry.addMethod("one.image.restore", image_restore);
RequestManagerRegistry.addMethod("one.image.lock", image_lock);
RequestManagerRegistry.addMethod("one.image.unlock", image_unlock);

View File

@ -396,14 +396,14 @@ void ImageAllocate::request_execute(xmlrpc_c::paramList const& params,
MarketPlacePool * marketpool = nd.get_marketpool();
MarketPlaceAppPool * apppool = nd.get_apppool();
Template img_usage;
Template img_usage;
Image::DiskType ds_disk_type;
int app_id;
int market_id;
int app_id;
int market_id;
long long avail;
long long avail;
bool ds_check;
bool persistent_attr;
@ -429,9 +429,9 @@ void ImageAllocate::request_execute(xmlrpc_c::paramList const& params,
ds_type = ds->get_type();
if ( ds_type == Datastore::SYSTEM_DS )
if ( ds_type == Datastore::SYSTEM_DS || ds_type == Datastore::BACKUP_DS)
{
att.resp_msg = "New images cannot be allocated in a system datastore.";
att.resp_msg = "New images can only be allocated in a files or image datastore.";
failure_response(ALLOCATE, att);
return;
@ -439,15 +439,18 @@ void ImageAllocate::request_execute(xmlrpc_c::paramList const& params,
ds->get_permissions(ds_perms);
ds_name = ds->get_name();
ds_name = ds->get_name();
ds_check = ds->get_avail_mb(avail) && check_capacity;
ds_mad = ds->get_ds_mad();
tm_mad = ds->get_tm_mad();
ds_disk_type = ds->get_disk_type();
ds_check = ds->get_avail_mb(avail) && check_capacity;
ds_persistent_only = ds->is_persistent_only();
ds_mad = ds->get_ds_mad();
tm_mad = ds->get_tm_mad();
ds->get_template_attribute("DRIVER", ds_driver);
ds->decrypt();
ds->to_xml(ds_data);
}
else
@ -462,7 +465,7 @@ void ImageAllocate::request_execute(xmlrpc_c::paramList const& params,
// --------------- Get the SIZE for the Image, (DS driver) -----------------
if ( tmpl->get("FROM_APP", app_id ) )
if ( tmpl->get("FROM_APP", app_id) )
{
// This image comes from a MarketPlaceApp. Get the Market info and
// the size.
@ -504,7 +507,30 @@ void ImageAllocate::request_execute(xmlrpc_c::paramList const& params,
}
else
{
rc = imagem->stat_image(tmpl.get(), ds_data, size_str);
if ( tmpl->get("FROM_BACKUP_DS", app_id) )
{
string bck_ds_data;
if ( auto ds = dspool->get_ro(app_id) )
{
ds->decrypt();
ds->to_xml(bck_ds_data);
}
else
{
att.resp_msg = "Could not get associated backup datastore.";
failure_response(INTERNAL, att);
return;
}
rc = imagem->stat_image(tmpl.get(), bck_ds_data, size_str);
}
else
{
rc = imagem->stat_image(tmpl.get(), ds_data, size_str);
}
if ( rc == -1 )
{
@ -666,7 +692,7 @@ Request::ErrorCode TemplateAllocate::pool_allocate(
/* -------------------------------------------------------------------------- */
bool TemplateAllocate::allocate_authorization(
xmlrpc_c::paramList const& paramList,
xmlrpc_c::paramList const& paramList,
Template * tmpl,
RequestAttributes& att,
PoolObjectAuth * cluster_perms)

View File

@ -147,7 +147,10 @@ Request::ErrorCode ImagePersistent::request_execute(
case Image::RAMDISK:
case Image::CONTEXT:
att.resp_msg = "KERNEL, RAMDISK and CONTEXT must be non-persistent";
return ACTION;
case Image::BACKUP:
att.resp_msg = "BACKUP images must be persistent";
return ACTION;
}
@ -238,6 +241,11 @@ void ImageChangeType::request_execute(xmlrpc_c::paramList const& paramList,
return;
}
break;
case Image::BACKUP:
att.resp_msg = "Cannot change type for BACKUP images.";
failure_response(ACTION, att);
return;
}
rc = image->set_type(type, att.resp_msg);
@ -325,7 +333,8 @@ Request::ErrorCode ImageClone::request_execute(
case Image::KERNEL:
case Image::RAMDISK:
case Image::CONTEXT:
att.resp_msg = "KERNEL, RAMDISK and CONTEXT cannot be cloned.";
case Image::BACKUP:
att.resp_msg = "KERNEL, RAMDISK, BACKUP and CONTEXT cannot be cloned.";
return ACTION;
}
@ -594,3 +603,57 @@ void ImageSnapshotFlatten::request_execute(xmlrpc_c::paramList const& paramList,
success_response(snap_id, att);
}
/* ------------------------------------------------------------------------- */
/* ------------------------------------------------------------------------- */
void ImageRestore::request_execute(xmlrpc_c::paramList const& paramList,
RequestAttributes& att)
{
int image_id = xmlrpc_c::value_int(paramList.getInt(1));
int dst_ds_id = xmlrpc_c::value_int(paramList.getInt(2));
string opt_tmp = xmlrpc_c::value_string(paramList.getString(3));
Nebula& nd = Nebula::instance();
DatastorePool * dspool = nd.get_dspool();
ImageManager * imagem = nd.get_imagem();
if ( basic_authorization(image_id, att) == false )
{
return;
}
ErrorCode ec = basic_authorization(dspool, dst_ds_id,
PoolObjectSQL::DATASTORE, att);
if ( ec != SUCCESS)
{
failure_response(ec, att);
return;
}
Template tmpl;
string txml;
int rc = tmpl.parse_str_or_xml(opt_tmp, att.resp_msg);
if ( rc != 0 )
{
failure_response(INTERNAL, att);
return;
}
tmpl.replace("USERNAME", att.uname);
rc = imagem->restore_image(image_id, dst_ds_id, tmpl.to_xml(txml),
att.resp_msg);
if ( rc < 0 )
{
failure_response(ACTION, att);
return;
}
success_response(att.resp_msg, att);
}

View File

@ -1506,6 +1506,7 @@ void VirtualMachineDiskSaveas::request_execute(
case Image::KERNEL:
case Image::RAMDISK:
case Image::CONTEXT:
case Image::BACKUP:
goto error_image_type;
}
@ -3393,8 +3394,10 @@ void VirtualMachineUpdateConf::request_execute(
}
}
if (vm->updateconf(uc_tmpl.get(), att.resp_msg,
update_type == 1 ? true : false ) != 0 )
rc = vm->updateconf(uc_tmpl.get(), att.resp_msg, update_type == 1 ? true : false);
// rc = -1 (error), 0 (context changed), 1 (no context change)
if ( rc == -1 )
{
failure_response(INTERNAL, att);
@ -3435,7 +3438,7 @@ void VirtualMachineUpdateConf::request_execute(
// Apply the change for running VM
if (state == VirtualMachine::VmState::ACTIVE &&
lcm_state == VirtualMachine::LcmState::RUNNING)
lcm_state == VirtualMachine::LcmState::RUNNING && rc == 0)
{
auto dm = Nebula::instance().get_dm();
@ -3801,3 +3804,109 @@ void VirtualMachineDetachSG::request_execute(
success_response(vm_id, att);
}
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
void VirtualMachineBackup::request_execute(
xmlrpc_c::paramList const& paramList, RequestAttributes& att)
{
Nebula& nd = Nebula::instance();
DispatchManager * dm = nd.get_dm();
DatastorePool * dspool = nd.get_dspool();
VirtualMachinePool * vmpool = static_cast<VirtualMachinePool *>(pool);
PoolObjectAuth vm_perms;
PoolObjectAuth ds_perms;
Template quota_tmpl;
ostringstream oss;
// ------------------------------------------------------------------------
// Get request parameters
// ------------------------------------------------------------------------
int vm_id = xmlrpc_c::value_int(paramList.getInt(1));
int backup_ds_id = xmlrpc_c::value_int(paramList.getInt(2));
// ------------------------------------------------------------------------
// Get VM & Backup Information
// ------------------------------------------------------------------------
if ( auto vm = vmpool->get(vm_id) )
{
vm->get_permissions(vm_perms);
vm->max_backup_size(quota_tmpl);
}
else
{
att.resp_id = vm_id;
failure_response(NO_EXISTS, att);
return;
}
if ( auto ds = dspool->get_ro(backup_ds_id) )
{
if (ds->get_type() != Datastore::BACKUP_DS)
{
att.resp_msg = "Datastore needs to be of type BACKUP";
failure_response(ACTION, att);
return;
}
ds->get_permissions(ds_perms);
}
else
{
att.resp_obj = PoolObjectSQL::DATASTORE;
att.resp_id = backup_ds_id;
failure_response(NO_EXISTS, att);
return;
}
// ------------------------------------------------------------------------
// Authorize request (VM and Datastore access)
// ------------------------------------------------------------------------
bool auth = vm_authorization(vm_id, 0, 0, att, 0, &ds_perms, 0);
if (auth == false)
{
return;
}
// -------------------------------------------------------------------------
// Check backup datastore quotas (size or number of backups)
//
// Reserves maximal possible quota size for the backup, the value is updated
// after backup success notification from driver
// -------------------------------------------------------------------------
quota_tmpl.add("DATASTORE", backup_ds_id);
quota_tmpl.add("IMAGES", 1);
RequestAttributes att_quota(vm_perms.uid, vm_perms.gid, att);
if ( !quota_authorization(&quota_tmpl, Quotas::DATASTORE, att_quota, att_quota.resp_msg) )
{
failure_response(AUTHORIZATION, att_quota);
return;
}
// ------------------------------------------------------------------------
// Create backup
// ------------------------------------------------------------------------
if (dm->backup(vm_id, backup_ds_id, att, att.resp_msg) < 0)
{
quota_rollback(&quota_tmpl, Quotas::DATASTORE, att_quota);
failure_response(INTERNAL, att);
return;
}
success_response(vm_id, att);
return;
}

View File

@ -91,6 +91,11 @@
# overhead of checkpoint files:
# system_ds_usage = system_ds_usage + memory_system_ds_scale * memory
#
# MAX_BACKUP: Maximum number of concurrent backup operations in the cloud.
# The scheduler will not start pending scheduled backups beyond this limit
#
# MAX_BACKUP_HOST: Maximum number of active backup operations per host.
#
#*******************************************************************************
MESSAGE_SIZE = 1073741824
@ -105,6 +110,9 @@ MAX_VM = 5000
MAX_DISPATCH = 30
MAX_HOST = 1
MAX_BACKUP = 5
MAX_BACKUP_HOST = 2
LIVE_RESCHEDS = 0
COLD_MIGRATE_MODE = 0

View File

@ -0,0 +1,73 @@
/* -------------------------------------------------------------------------- */
/* Copyright 2002-2022, OpenNebula Project, OpenNebula Systems */
/* */
/* Licensed under the Apache License, Version 2.0 (the "License"); you may */
/* not use this file except in compliance with the License. You may obtain */
/* a copy of the License at */
/* */
/* http://www.apache.org/licenses/LICENSE-2.0 */
/* */
/* Unless required by applicable law or agreed to in writing, software */
/* distributed under the License is distributed on an "AS IS" BASIS, */
/* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */
/* See the License for the specific language governing permissions and */
/* limitations under the License. */
/* -------------------------------------------------------------------------- */
#ifndef SCHEDULED_ACTION_XML_H_
#define SCHEDULED_ACTION_XML_H_
#include <queue>
#include <map>
#include <vector>
#include "ScheduledAction.h"
class VirtualMachineActionsPoolXML;
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
class SchedActionsXML : public SchedActions
{
public:
SchedActionsXML(Template * vm):SchedActions(vm){};
int do_actions(int vmid, time_t stime);
};
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
class BackupActions
{
public:
BackupActions(int max, int maxh):max_backups(max), max_backups_host(maxh){};
void add(int vmid, int hid, time_t stime, SchedActionsXML& sas);
void dispatch(VirtualMachineActionsPoolXML* vmapool);
private:
int max_backups;
int max_backups_host;
struct VMBackupAction
{
VMBackupAction():backup("SCHED_ACTION"){};
/** ID of the VM and action**/
int vm_id;
int action_id;
/** Pending backup operation **/
VectorAttribute backup;
};
std::map<int, std::vector<VMBackupAction>> host_backups;
};
#endif /* SCHEDULED_ACTION_XML_H_ */

View File

@ -228,6 +228,16 @@ private:
*/
bool diff_vnets;
/**
* Max number of active backups
*/
int max_backups;
/**
* Max number of active backups per host
*/
int max_backups_host;
/**
* oned runtime configuration values
*/

View File

@ -119,8 +119,8 @@ protected:
int get_suitable_nodes(std::vector<xmlNodePtr>& content) const override
{
// Pending or ((running or unknown) and resched))
return get_nodes("/VM_POOL/VM[STATE=1 or "
"((STATE=8 or (LCM_STATE=3 or LCM_STATE=16)) and RESCHED=1)]", content);
return get_nodes("/VM_POOL/VM[STATE=1 or ((STATE=8 or "
"(LCM_STATE=3 or LCM_STATE=16)) and RESCHED=1)]", content);
}
virtual void add_object(xmlNodePtr node);
@ -163,32 +163,34 @@ public:
*/
int set_up();
/**
* Calls one.vm.action
*
* @param vid The VM id
* @param action Action argument (terminate, hold, release...)
* @param args Action arguments
* @param error_msg Error reason, if any
*
* @return 0 on success, -1 otherwise
*/
int action(int vid,
const std::string &action,
const std::string &args,
std::string &error_msg) const;
int active_backups()
{
return _active_backups;
}
int host_backups(int host_id)
{
return backups_host[host_id];
}
void add_backup(int host_id)
{
backups_host[host_id]++;
_active_backups++;
}
protected:
/**
* Total backup operations in progress
*/
mutable int _active_backups;
int get_suitable_nodes(std::vector<xmlNodePtr>& content) const override
{
std::ostringstream oss;
/**
* Backup operations per host
*/
mutable std::map<int, int> backups_host;
oss << "/VM_POOL/VM/TEMPLATE/SCHED_ACTION[(TIME < " << time(0)
<< " and (not(DONE > 0) or boolean(REPEAT))) or ( TIME[starts-with(text(),\"+\")] and not(DONE>0) ) ]/../..";
return get_nodes(oss.str().c_str(), content);
}
int get_suitable_nodes(std::vector<xmlNodePtr>& content) const override;
};
/* -------------------------------------------------------------------------- */

View File

@ -128,6 +128,10 @@ public:
//--------------------------------------------------------------------------
// Get Methods for VirtualMachineXML class
//--------------------------------------------------------------------------
int get_state() const { return state; };
int get_lcm_state() const { return lcm_state; };
int get_oid() const { return oid; };
int get_uid() const { return uid; };
@ -399,6 +403,11 @@ public:
return xml_str;
}
VirtualMachineTemplate * get_template()
{
return vm_template.get();
}
/**
* Get scheduled actions of the VM
*
@ -489,6 +498,9 @@ protected:
int hid;
int dsid;
int state;
int lcm_state;
bool resched;
bool resume;
bool active;

View File

@ -33,7 +33,8 @@ source_files=[
'DatastorePoolXML.cc',
'DatastoreXML.cc',
'VirtualNetworkPoolXML.cc',
'VirtualNetworkXML.cc']
'VirtualNetworkXML.cc',
'ScheduledActionXML.cc']
# Build library
sched_env.StaticLibrary(lib_name, source_files)

View File

@ -0,0 +1,500 @@
/* -------------------------------------------------------------------------- */
/* Copyright 2002-2022, OpenNebula Project, OpenNebula Systems */
/* */
/* Licensed under the Apache License, Version 2.0 (the "License"); you may */
/* not use this file except in compliance with the License. You may obtain */
/* a copy of the License at */
/* */
/* http://www.apache.org/licenses/LICENSE-2.0 */
/* */
/* Unless required by applicable law or agreed to in writing, software */
/* distributed under the License is distributed on an "AS IS" BASIS, */
/* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */
/* See the License for the specific language governing permissions and */
/* limitations under the License. */
/* -------------------------------------------------------------------------- */
#include "ScheduledActionXML.h"
#include "Client.h"
#include "VirtualMachinePoolXML.h"
#include "VirtualMachine.h"
#include <string>
#include <stdexcept>
/* ************************************************************************** */
/* ************************************************************************** */
/* Helper functions to parse the ARGS attribute of scheduled actions */
/* ************************************************************************** */
/* ************************************************************************** */
/**
* Parses a string value into the given type
*
* @param val_s string value
* @param val parsed value
*
* @return 0 on success, -1 otherwise
*/
template<typename T>
static int from_str(const std::string& val_s, T& val)
{
if (val_s.empty())
{
return -1;
}
std::istringstream iss(val_s);
iss >> val;
if (iss.fail() || !iss.eof())
{
return -1;
}
return 0;
}
template<>
int from_str(const std::string& val_s, std::string& val)
{
if (val_s.empty())
{
return -1;
}
val = val_s;
return 0;
}
/**
* Parses tokens into values of the given types
*
* @param tokens string tokens to parse
* @param value variable of the given type that receives the parsed token
*
* @return 0 on success, -1 otherwise
*/
template<typename T>
static int parse_args(std::queue<std::string>& tokens, T& value)
{
if (tokens.empty())
{
return -1;
}
int rc = from_str(tokens.front(), value);
tokens.pop();
return rc;
}
template<typename T, typename... Args>
static int parse_args(std::queue<std::string>& tokens, T& value, Args&... args)
{
if (tokens.empty())
{
return -1;
}
int rc = from_str(tokens.front(), value);
tokens.pop();
if ( rc != 0 )
{
return -1;
}
return parse_args(tokens, args...);
}
/* ************************************************************************** */
/* ************************************************************************** */
/* XMLRPC API Interface for executing & updating scheduled actions */
/* ************************************************************************** */
/* ************************************************************************** */
static bool update_action(int vmid, SchedAction* action, std::string& error)
{
xmlrpc_c::value result;
error.clear();
try
{
std::ostringstream oss;
oss << "<TEMPLATE>";
action->to_xml(oss);
oss << "</TEMPLATE>";
Client::client()->call("one.vm.schedupdate", "iis", &result, vmid,
action->action_id(), oss.str().c_str());
}
catch (std::exception const& e)
{
return false;
}
std::vector<xmlrpc_c::value> values =
xmlrpc_c::value_array(result).vectorValueValue();
bool rc = xmlrpc_c::value_boolean(values[0]);
if (!rc)
{
error = xmlrpc_c::value_string(values[1]);
}
return rc;
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
static int action_call(int vmid, SchedAction *sa, const std::string& aname,
std::string& error)
{
static std::set<std::string> valid_actions { "terminate", "terminate-hard",
"undeploy", "undeploy-hard", "hold", "release", "stop", "suspend",
"resume", "reboot", "reboot-hard", "poweroff", "poweroff-hard",
"snapshot-create", "snapshot-revert", "snapshot-delete",
"disk-snapshot-create", "disk-snapshot-revert", "disk-snapshot-delete",
"backup" };
if ( valid_actions.find(aname) == valid_actions.end() )
{
error = aname + " is not supported.";
return -1;
}
std::string args_st = sa->vector_value("ARGS");
xmlrpc_c::value result;
std::stringstream ss(args_st);
std::queue<std::string> args;
std::string tmp_arg;
while (getline(ss, tmp_arg, ','))
{
args.push(tmp_arg);
}
try
{
if (aname == "snapshot-create")
{
std::string name = "";
if (parse_args(args, name) != 0)
{
error = "Missing or malformed ARGS for: snapshot-create."
" Format: snapshot-name";
return -1;
}
Client::client()->call("one.vm.snapshotcreate", "is", &result, vmid,
name.c_str());
}
else if (aname == "snapshot-revert")
{
int snapid = 0;
if (parse_args(args, snapid) != 0)
{
error = "Missing or malformed ARGS for: snapshot-revert."
" Format: snapshot-id";
return -1;
}
Client::client()->call("one.vm.snapshotrevert", "ii", &result, vmid,
snapid);
}
else if (aname == "snapshot-delete")
{
int snapid = 0;
if (parse_args(args, snapid) != 0)
{
error = "Missing or malformed ARGS for: snapshot-delete."
" Format: snapshot-id";
return -1;
}
Client::client()->call("one.vm.snapshotdelete", "ii", &result, vmid,
snapid);
}
else if (aname == "disk-snapshot-create")
{
int diskid = 0;
std::string name = "";
if (parse_args(args, diskid, name) != 0)
{
error = "Missing or malformed ARGS for: disk-snapshot-create."
" Format: disk-id, snapshot-name";
return -1;
}
Client::client()->call("one.vm.disksnapshotcreate", "iis", &result,
vmid, diskid, name.c_str());
}
else if (aname == "disk-snapshot-revert")
{
int diskid = 0, snapid = 0;
if (parse_args(args, diskid, snapid) != 0)
{
error = "Missing or malformed ARGS for: disk-snapshot-revert."
" Format: disk-id, snapshot-id";
return -1;
}
Client::client()->call("one.vm.disksnapshotrevert", "iii", &result,
vmid, diskid, snapid);
}
else if (aname == "disk-snapshot-delete")
{
int diskid = 0, snapid = 0;
if (parse_args(args, diskid, snapid) != 0)
{
error = "Missing or malformed ARGS for: disk-snapshot-delete."
" Format: disk-id, snapshot-id";
return -1;
}
Client::client()->call("one.vm.disksnapshotdelete", "iii", &result,
vmid, diskid, snapid);
}
else if (aname == "backup")
{
int dsid;
if (parse_args(args, dsid) != 0)
{
error = "Missing or malformed ARGS for: backup."
" Format: datastore-id";
return -1;
}
Client::client()->call("one.vm.backup", "ii", &result, vmid, dsid);
}
else
{
Client::client()->call("one.vm.action", "si", &result,
aname.c_str(), vmid);
}
}
catch (std::exception const& e)
{
return -1;
}
std::vector<xmlrpc_c::value> values =
xmlrpc_c::value_array(result).vectorValueValue();
if (!xmlrpc_c::value_boolean(values[0]))
{
error = xmlrpc_c::value_string(values[1]);
return -1;
}
return 0;
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
static void scheduled_action(int vmid, SchedAction* action, std::string aname)
{
std::ostringstream oss;
oss << "Executing action '" << aname << "' for VM " << vmid << " : ";
std::string error;
if (action_call(vmid, action, aname, error) == 0)
{
time_t done_time = time(0);
time_t next_time;
action->remove("MESSAGE");
action->replace("DONE", done_time);
do
{
next_time = action->next_action();
} while ( next_time < done_time && next_time != -1 );
oss << "Success.";
}
else
{
std::ostringstream oss_aux;
std::string time_str = one_util::log_time(time(0));
oss_aux << time_str << " : " << error;
action->replace("MESSAGE", oss_aux.str());
oss << "Failure. " << error;
}
if (!update_action(vmid, action, error))
{
std::ostringstream oss_aux;
oss_aux << "Unable to update scheduled action: " << error;
NebulaLog::warn("SCHED", oss_aux.str());
}
NebulaLog::log("SCHED", Log::INFO, oss);
}
/* ************************************************************************** */
/* Scheduled Backups Dispatch */
/* ************************************************************************** */
void BackupActions::add(int vmid, int hid, time_t stime, SchedActionsXML& sas)
{
struct VMBackupAction vm_ba;
vm_ba.vm_id = vmid;
SchedAction* first_action = nullptr;
// Only the first due backup (the one with the lowest time) will be executed
for (auto action: sas)
{
std::string action_st = action->vector_value("ACTION");
one_util::tolower(action_st);
if ( action_st != "backup" || !action->is_due(stime))
{
continue;
}
if (!first_action ||
first_action->get_time(stime) > action->get_time(stime))
{
first_action = action;
}
}
if ( first_action == nullptr )
{
return;
}
std::ostringstream oss;
oss << "Found pending backup for VM " << vmid;
NebulaLog::log("SCHED", Log::INFO, oss);
vm_ba.backup = *first_action->vector_attribute();
vm_ba.action_id = first_action->action_id();
auto it = host_backups.find(hid);
if ( it == host_backups.end() )
{
host_backups.insert(std::pair<int, std::vector<VMBackupAction>>(hid,
{ vm_ba }));
}
else
{
it->second.push_back(std::move(vm_ba));
}
return;
}
void BackupActions::dispatch(VirtualMachineActionsPoolXML * vmapool)
{
for (auto& hba : host_backups)
{
if ( vmapool->active_backups() >= max_backups )
{
std::ostringstream oss;
oss << "Reached max number of active backups (" << max_backups << ")";
NebulaLog::log("SCHED", Log::INFO, oss);
break;
}
if ( vmapool->host_backups(hba.first) >= max_backups_host )
{
std::ostringstream oss;
oss << "Reached max number of active backups (" << max_backups_host
<< ") in host " << hba.first;
NebulaLog::log("SCHED", Log::INFO, oss);
continue;
}
for (auto& vba: hba.second)
{
SchedAction sa(&vba.backup, vba.action_id);
scheduled_action(vba.vm_id, &sa, "backup");
vmapool->add_backup(hba.first);
}
}
}
/* ************************************************************************** */
/* Scheduled Actions Dispatch */
/* ************************************************************************** */
int SchedActionsXML::do_actions(int vmid, time_t stime)
{
SchedAction* first_action = nullptr;
std::string first_aname = "";
// Only the first due action (the one with the lowest time) will be executed
// Backups are filtered out to schedule them separately
for (auto action : *this)
{
if (!action->is_due(stime))
{
continue;
}
std::string aname = action->vector_value("ACTION");
one_util::tolower(aname);
if (aname == "backup")
{
continue;
}
if (!first_action ||
first_action->get_time(stime) > action->get_time(stime))
{
first_action = action;
first_aname = aname;
}
}
if (!first_action)
{
return 0;
}
scheduled_action(vmid, first_action, first_aname);
return 0;
}

View File

@ -21,95 +21,6 @@
using namespace std;
/* -------------------------------------------------------------------------- */
/**
* Parses value from string to given type
*
* @param val_s string value
* @param val parsed value
*
* @return 0 on success, -1 otherwise
*/
/* -------------------------------------------------------------------------- */
template<typename T>
static int from_str(const string& val_s, T& val)
{
if (val_s.empty())
{
return -1;
}
istringstream iss(val_s);
iss >> val;
if (iss.fail() || !iss.eof())
{
return -1;
}
return 0;
}
template<>
int from_str(const string& val_s, string& val)
{
if (val_s.empty())
{
return -1;
}
val = val_s;
return 0;
}
/* -------------------------------------------------------------------------- */
/**
* Parses tokens into values of the given types
*
* @param tokens values to parse
* @param value given type to parse it
*
* @return 0 on success, -1 otherwise
*/
/* -------------------------------------------------------------------------- */
template<typename T>
static int parse_args(queue<string>& tokens, T& value)
{
if (tokens.empty())
{
return -1;
}
int rc = from_str(tokens.front(), value);
tokens.pop();
return rc;
}
/* -------------------------------------------------------------------------- */
template<typename T, typename... Args>
static int parse_args(queue<string>& tokens, T& value, Args&... args)
{
if (tokens.empty())
{
return -1;
}
int rc = from_str(tokens.front(), value);
tokens.pop();
if ( rc != 0 )
{
return -1;
}
return parse_args(tokens, args...);
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
@ -362,7 +273,19 @@ int VirtualMachineActionsPoolXML::set_up()
oss << " " << it->first;
}
NebulaLog::log("VM",Log::DEBUG,oss);
oss << "\nActive backup operations. Total: " << _active_backups << "\n";
oss << right << setw(8) << "Host ID" << " "
<< right << setw(8) << "Backups" << " "
<< endl << setw(18) << setfill('-') << "-" << setfill(' ') << endl;
for (auto i: backups_host)
{
oss << right << setw(8) << i.first << " "
<< right << setw(8) << i.second << "\n";
}
NebulaLog::log("VM", Log::DEBUG, oss);
}
return rc;
@ -371,163 +294,47 @@ int VirtualMachineActionsPoolXML::set_up()
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */
int VirtualMachineActionsPoolXML::action(
int vid,
const string& action,
const string& args,
string& error_msg) const
int VirtualMachineActionsPoolXML::get_suitable_nodes(
std::vector<xmlNodePtr>& content) const
{
xmlrpc_c::value result;
bool success;
std::vector<xmlNodePtr> nodes;
queue<string> sargs;
string tmp_arg;
_active_backups = get_nodes("/VM_POOL/VM[LCM_STATE=69 or LCM_STATE=70]", nodes);
stringstream ss(args);
backups_host.clear();
while (getline(ss, tmp_arg, ','))
for ( auto& node: nodes)
{
sargs.push(tmp_arg);
int hid = -1;
if ( node == 0 || node->children == 0 || node->children->next == 0 )
{
continue;
}
ObjectXML vmxml(node);
vmxml.xpath(hid, "/VM/HISTORY_RECORDS/HISTORY/HID", -1);
if ( hid == -1 )
{
continue;
}
backups_host[hid]++;
}
try
{
if (action == "snapshot-create")
{
string name = "";
free_nodes(nodes);
int rc = parse_args(sargs, name);
std::ostringstream oss;
if (rc != 0)
{
error_msg = "Missing or malformed ARGS for: snapshot-create."
" Format: snapshot-name";
return -1;
}
oss << "/VM_POOL/VM/TEMPLATE/SCHED_ACTION[(TIME < " << time(0)
<< " and (not(DONE > 0) or boolean(REPEAT))) or "
<< "( TIME[starts-with(text(),\"+\")] and not(DONE>0) ) ]/../..";
client->call("one.vm.snapshotcreate",
"is",
&result,
vid,
name.c_str());
}
else if (action == "snapshot-revert")
{
int snapid = 0;
int rc = parse_args(sargs, snapid);
if (rc != 0)
{
error_msg = "Missing or malformed ARGS for: snapshot-revert."
" Format: snapshot-id";
return -1;
}
client->call("one.vm.snapshotrevert", "ii", &result, vid, snapid);
}
else if (action == "snapshot-delete")
{
int snapid = 0;
int rc = parse_args(sargs, snapid);
if (rc != 0)
{
error_msg = "Missing or malformed ARGS for: snapshot-delete."
" Format: snapshot-id";
return -1;
}
client->call("one.vm.snapshotdelete", "ii", &result, vid, snapid);
}
else if (action == "disk-snapshot-create")
{
int diskid = 0;
string name = "";
int rc = parse_args(sargs, diskid, name);
if (rc != 0)
{
error_msg = "Missing or malformed ARGS for: disk-snapshot-create."
" Format: disk-id, snapshot-name";
return -1;
}
client->call("one.vm.disksnapshotcreate",
"iis",
&result,
vid,
diskid,
name.c_str());
}
else if (action == "disk-snapshot-revert")
{
int diskid = 0, snapid = 0;
int rc = parse_args(sargs, diskid, snapid);
if (rc != 0)
{
error_msg = "Missing or malformed ARGS for: disk-snapshot-revert."
" Format: disk-id, snapshot-id";
return -1;
}
client->call("one.vm.disksnapshotrevert",
"iii",
&result,
vid,
diskid,
snapid);
}
else if (action == "disk-snapshot-delete")
{
int diskid = 0, snapid = 0;
int rc = parse_args(sargs, diskid, snapid);
if (rc != 0)
{
error_msg = "Missing or malformed ARGS for: disk-snapshot-delete."
" Format: disk-id, snapshot-id";
return -1;
}
client->call("one.vm.disksnapshotdelete",
"iii",
&result,
vid,
diskid,
snapid);
}
else
{
client->call("one.vm.action", "si", &result, action.c_str(), vid);
}
}
catch (exception const& e)
{
return -1;
}
vector<xmlrpc_c::value> values =
xmlrpc_c::value_array(result).vectorValueValue();
success = xmlrpc_c::value_boolean(values[0]);
if (!success)
{
error_msg = xmlrpc_c::value_string( values[1] );
return -1;
}
return 0;
return get_nodes(oss.str().c_str(), content);
}
/* -------------------------------------------------------------------------- */
/* -------------------------------------------------------------------------- */

View File

@ -52,8 +52,10 @@ void VirtualMachineXML::init_attributes()
xpath(uid, "/VM/UID", -1);
xpath(gid, "/VM/GID", -1);
xpath(tmp, "/VM/STATE", -1);
active = tmp == 3;
xpath(state, "/VM/STATE", 0);
xpath(lcm_state, "/VM/LCM_STATE", 0);
active = state == 3;
xpath(tmp, "/VM/RESCHED", 0);
resched = tmp == 1;
@ -670,6 +672,7 @@ int VirtualMachineXML::parse_action_name(string& action_st)
&& action_st != "disk-snapshot-create"
&& action_st != "disk-snapshot-revert"
&& action_st != "disk-snapshot-delete"
&& action_st != "backup"
// Compatibility with 4.x
&& action_st != "shutdown"

View File

@ -34,7 +34,9 @@
#include "NebulaLog.h"
#include "PoolObjectAuth.h"
#include "NebulaUtil.h"
#include "ScheduledAction.h"
#include "ScheduledActionXML.h"
#include "VirtualMachine.h"
using namespace std;
@ -135,6 +137,10 @@ void Scheduler::start()
conf.get("DIFFERENT_VNETS", diff_vnets);
conf.get("MAX_BACKUPS", max_backups);
conf.get("MAX_BACKUPS_HOST", max_backups_host);
// -----------------------------------------------------------
// Log system & Configuration File
// -----------------------------------------------------------
@ -1689,102 +1695,36 @@ void Scheduler::dispatch()
int Scheduler::do_scheduled_actions()
{
VirtualMachineXML* vm;
BackupActions backups(max_backups, max_backups_host);
const map<int, ObjectXML*> vms = vmapool->get_objects();
const map<int, ObjectXML*> vms = vmapool->get_objects();
for (auto vm_it=vms.begin(); vm_it != vms.end(); vm_it++)
{
vm = static_cast<VirtualMachineXML *>(vm_it->second);
VirtualMachineXML* vm = static_cast<VirtualMachineXML *>(vm_it->second);
SchedActions sas = vm->get_actions();
/* -------------- Check VM scheduled actions ------------------------ */
SchedAction* first_action = nullptr;
SchedActionsXML sactions(vm->get_template());
for (auto action : sas)
{
auto stime = vm->get_stime();
if (!action->is_due(stime))
{
continue;
}
sactions.do_actions(vm->get_oid(), vm->get_stime());
if (!first_action ||
first_action->get_time(stime) > action->get_time(stime))
{
// Only first is_due action with lower time will be executed
first_action = action;
}
}
/* ---------------- Get VM scheduled backups ------------------------ */
if (!first_action)
VirtualMachine::VmState state = static_cast<VirtualMachine::VmState>(vm->get_state());
VirtualMachine::LcmState lstate = static_cast<VirtualMachine::LcmState>(vm->get_lcm_state());
if ((state != VirtualMachine::ACTIVE || lstate != VirtualMachine::RUNNING)
&& (state != VirtualMachine::POWEROFF))
{
continue;
}
ostringstream oss;
string error_msg;
string action_st = first_action->vector_value("ACTION");
int rc = VirtualMachineXML::parse_action_name(action_st);
oss << "Executing action '" << action_st << "' for VM "
<< vm->get_oid() << " : ";
if ( rc != 0 )
{
error_msg = "This action is not supported.";
}
else
{
string args_st = first_action->vector_value("ARGS");
rc = vmapool->action(vm->get_oid(), action_st, args_st, error_msg);
if (rc == 0)
{
time_t done_time = time(0);
time_t next_time;
first_action->remove("MESSAGE");
first_action->replace("DONE", done_time);
do
{
next_time = first_action->next_action();
} while ( next_time < done_time && next_time != -1 );
oss << "Success.";
}
}
if ( rc != 0 )
{
ostringstream oss_aux;
string time_str = one_util::log_time(time(0));
oss_aux << time_str << " : " << error_msg;
first_action->replace("MESSAGE", oss_aux.str());
oss << "Failure. " << error_msg;
}
if (!vm->update_sched_action(first_action))
{
ostringstream oss;
first_action->to_xml(oss);
NebulaLog::warn("SCHED", string("Unable to update sched action: ")
+ oss.str());
}
NebulaLog::log("VM", Log::INFO, oss);
backups.add(vm->get_oid(), vm->get_hid(), vm->get_stime(), sactions);
}
backups.dispatch(vmapool);
return 0;
}

View File

@ -46,6 +46,8 @@ void SchedulerTemplate::set_conf_default()
# LIVE_RESCHEDS
# COLD_MIGRATE_MODE
# LOG
# MAX_BACKUPS
# MAX_BACKUPS_HOST
#-------------------------------------------------------------------------------
*/
set_conf_single("MESSAGE_SIZE", "1073741824");
@ -58,6 +60,9 @@ void SchedulerTemplate::set_conf_default()
set_conf_single("LIVE_RESCHEDS", "0");
set_conf_single("COLD_MIGRATE_MODE", "0");
set_conf_single("MAX_BACKUPS", "5");
set_conf_single("MAX_BACKUPS_HOST", "2");
//DEFAULT_SCHED
vvalue.clear();
vvalue.insert(make_pair("POLICY","1"));
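The two scheduler defaults registered above surface as plain attributes in sched.conf; a minimal fragment using the shipped default values:

```
# Maximum number of backup operations dispatched simultaneously
MAX_BACKUPS = 5

# Maximum number of simultaneous backup operations per host
MAX_BACKUPS_HOST = 2
```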

File diff suppressed because it is too large

View File

@ -3,244 +3,247 @@ link_logo:
text_link_logo:
confirm_vms: false
enabled_tabs:
- provision-tab
- settings-tab
- provision-tab
- settings-tab
features:
# True to show showback monthly reports, and VM cost
showback: true
# True to show showback monthly reports, and VM cost
showback: true
# Allows to change the security groups for each network interface
# on the VM creation dialog
secgroups: true
# Allows to change the security groups for each network interface
# on the VM creation dialog
secgroups: true
# True to hide the CPU setting in the VM creation dialog
instantiate_hide_cpu: false
# True to hide the CPU setting in the VM creation dialog
instantiate_hide_cpu: false
# False to not scale the CPU.
# An integer value would be used as a multiplier as follows:
# CPU = instantiate_cpu_factor * VCPU
# Set it to 1 to tie CPU and vCPU.
instantiate_cpu_factor: false
# False to not scale the CPU.
# An integer value would be used as a multiplier as follows:
# CPU = instantiate_cpu_factor * VCPU
# Set it to 1 to tie CPU and vCPU.
instantiate_cpu_factor: false
# True to show the option to make an instance persistent
instantiate_persistent: true
# True to show the option to make an instance persistent
instantiate_persistent: true
# True to allow to create machines to cloud users
cloud_vm_create: true
# True to allow to create machines to cloud users
cloud_vm_create: true
# True to show the monitoring info (VM & VRouters)
show_monitoring_info: true
# True to show the monitoring info (VM & VRouters)
show_monitoring_info: true
# True to show the attributes info (VM & VRouters)
show_attributes_info: true
# True to show the attributes info (VM & VRouters)
show_attributes_info: true
# True to show the vCenter info (VM & VRouters)
show_vcenter_info: true
# True to show the vCenter info (VM & VRouters)
show_vcenter_info: true
# True to show advanced options
show_attach_disk_advanced: true
# True to show advanced options
show_attach_disk_advanced: true
show_attach_nic_advanced: true
show_attach_nic_advanced: true
# True to show the network configuration to instantiate Service template
show_vnet_instantiate_flow: true
# True to show the network configuration to instantiate Service template
show_vnet_instantiate_flow: true
# True to show schedule actions section to instantiate VM
show_sched_actions_instantiate: true
# True to show schedule actions section to instantiate VM
show_sched_actions_instantiate: true
# True to show boot order section to instantiate VM
show_boot_order: true
# True to show boot order section to instantiate VM
show_boot_order: true
tabs:
provision-tab:
panel_tabs:
vm_info_tab: false
vm_capacity_tab: true
vm_storage_tab: true
vm_network_tab: true
vm_snapshot_tab: true
vm_placement_tab: false
vm_actions_tab: true
vm_conf_tab: false
vm_template_tab: false
vm_log_tab: false
provision_tabs:
flows: true
templates: true
actions: &provisionactions
# In the cloud view, delete is the equivalent
# of 'onetemplate chmod --recursive'
Template.chmod: false
provision-tab:
panel_tabs:
vm_info_tab: false
vm_capacity_tab: true
vm_storage_tab: true
vm_network_tab: true
vm_snapshot_tab: true
vm_placement_tab: false
vm_actions_tab: true
vm_conf_tab: false
vm_template_tab: false
vm_log_tab: false
provision_tabs:
flows: true
templates: true
actions: &provisionactions
# In the cloud view, delete is the equivalent
# of 'onetemplate chmod --recursive'
Template.chmod: false
# In the cloud view, delete is the equivalent
# of 'onetemplate delete --recursive'
Template.delete: true
VM.rename: true
VM.resume: true
VM.reboot: true
VM.reboot_hard: true
VM.poweroff: true
VM.poweroff_hard: true
VM.undeploy: false
VM.undeploy_hard: false
VM.terminate: true
VM.terminate_hard: true
VM.resize: true
VM.disk_resize: true
VM.attachdisk: true
VM.detachdisk: true
VM.disk_saveas: true
VM.attachnic: true
VM.detachnic: true
VM.snapshot_create: true
VM.snapshot_revert: true
VM.snapshot_delete: true
VM.disk_snapshot_create: true
VM.disk_snapshot_revert: true
VM.disk_snapshot_rename: true
VM.disk_snapshot_delete: true
VM.save_as_template: true
VM.migrate_poff: false
VM.migrate_poff_hard: false
VM.lockU: true
VM.unlock: true
VM.startvnc: true
VM.startvmrc: true
VM.startspice: true
VM.vnc: true
VM.ssh: true
VM.rdp: true
VM.save_rdp: true
VM.save_virt_viewer: true
VM.updateconf: true
VM.attachsg: true
VM.detachsg: true
VM.instantiate_name: true
dashboard:
# Connected user's quotas
quotas: true
# Overview of connected user's VMs
vms: true
# Group's quotas
groupquotas: false
# Overview of group's VMs
groupvms: false
create_vm:
# True to allow capacity (CPU, MEMORY, VCPU) customization
capacity_select: true
# True to allow NIC customization
network_select: true
# True to allow vmgroup customization
vmgroup_select: true
# True to allow DISK size customization
disk_resize: true
# True to allow datastore customization
datastore_select: true
settings-tab:
panel_tabs:
settings_info_tab: false
settings_config_tab: true
settings_quotas_tab: true
settings_accounting_tab: true
settings_showback_tab: true
actions:
# Buttons for settings_info_tab
User.update_password: true
User.login_token: true
User.two_factor_auth: true
# Buttons for settings_config_tab
Settings.change_language: true
Settings.change_password: true
Settings.change_view: true
Settings.ssh_key: true
Settings.login_token: true
# Edit button in settings_quotas_tab
User.quotas_dialog: false
vms-tab:
actions: *provisionactions
images-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Datastore
#- 6 # Size
- 7 # Type
#- 8 # Registration time
#- 9 # Persistent
- 10 # Status
- 11 # #VMs
#- 12 # Target
vnets-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
- 5 # Status
#- 6 # Reservation
- 7 # Cluster
#- 8 # Bridge
#- 9 # Leases
#- 10 # VLAN ID
secgroups-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
#- 5 # Labels
vmgroup-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Vms
#- 6 # Labels
#- 7 # Search data
datastores-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
#- 5 # Capacity
#- 6 # Cluster
#- 7 # Basepath
#- 8 # TM
#- 9 # DS
#- 10 # Type
- 11 # Status
#- 12 # Labels
#- 13 # Search data
templates-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Owner
- 3 # Group
- 4 # Name
- 5 # Registration time
#- 6 # Labels
#- 7 # Search data
actions: *provisionactions
template_creation_tabs:
general: true
storage: true
network: true
os_booting: true
features: true
input_output: true
context: true
actions: true
scheduling: false
hybrid: false
vmgroup: true
other: true
numa: true
# In the cloud view, delete is the equivalent
# of 'onetemplate delete --recursive'
Template.delete: true
VM.rename: true
VM.resume: true
VM.reboot: true
VM.reboot_hard: true
VM.poweroff: true
VM.poweroff_hard: true
VM.undeploy: false
VM.undeploy_hard: false
VM.terminate: true
VM.terminate_hard: true
VM.resize: true
VM.disk_resize: true
VM.attachdisk: true
VM.detachdisk: true
VM.disk_saveas: true
VM.attachnic: true
VM.detachnic: true
VM.snapshot_create: true
VM.snapshot_revert: true
VM.snapshot_delete: true
VM.disk_snapshot_create: true
VM.disk_snapshot_revert: true
VM.disk_snapshot_rename: true
VM.disk_snapshot_delete: true
VM.save_as_template: true
VM.migrate_poff: false
VM.migrate_poff_hard: false
VM.lockU: true
VM.unlock: true
VM.startvnc: true
VM.startvmrc: true
VM.startspice: true
VM.vnc: true
VM.ssh: true
VM.rdp: true
VM.save_rdp: true
VM.save_virt_viewer: true
VM.updateconf: true
VM.attachsg: true
VM.detachsg: true
VM.instantiate_name: true
VM.backup_dialog: true
VM.backup: true
dashboard:
# Connected user's quotas
quotas: true
# Overview of connected user's VMs
vms: true
# Group's quotas
groupquotas: false
# Overview of group's VMs
groupvms: false
create_vm:
# True to allow capacity (CPU, MEMORY, VCPU) customization
capacity_select: true
# True to allow NIC customization
network_select: true
# True to allow vmgroup customization
vmgroup_select: true
# True to allow DISK size customization
disk_resize: true
# True to allow datastore customization
datastore_select: true
settings-tab:
panel_tabs:
settings_info_tab: false
settings_config_tab: true
settings_quotas_tab: true
settings_accounting_tab: true
settings_showback_tab: true
actions:
# Buttons for settings_info_tab
User.update_password: true
User.login_token: true
User.two_factor_auth: true
# Buttons for settings_config_tab
Settings.change_language: true
Settings.change_password: true
Settings.change_view: true
Settings.ssh_key: true
Settings.login_token: true
# Edit button in settings_quotas_tab
User.quotas_dialog: false
vms-tab:
actions: *provisionactions
images-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Datastore
#- 6 # Size
- 7 # Type
#- 8 # Registration time
#- 9 # Persistent
- 10 # Status
- 11 # #VMs
#- 12 # Target
vnets-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
- 5 # Status
#- 6 # Reservation
- 7 # Cluster
#- 8 # Bridge
#- 9 # Leases
#- 10 # VLAN ID
secgroups-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
#- 5 # Labels
vmgroup-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Vms
#- 6 # Labels
#- 7 # Search data
datastores-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
#- 5 # Capacity
#- 6 # Cluster
#- 7 # Basepath
#- 8 # TM
#- 9 # DS
#- 10 # Type
- 11 # Status
#- 12 # Labels
#- 13 # Search data
templates-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Owner
- 3 # Group
- 4 # Name
- 5 # Registration time
#- 6 # Labels
#- 7 # Search data
actions: *provisionactions
template_creation_tabs:
general: true
storage: true
network: true
os_booting: true
features: true
input_output: true
context: true
actions: true
scheduling: false
hybrid: false
vmgroup: true
other: true
numa: true
backup: true

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -3,255 +3,258 @@ link_logo:
text_link_logo:
confirm_vms: false
enabled_tabs:
- provision-tab
- settings-tab
features:
# True to show showback monthly reports, and VM cost
showback: true
# Allows to change the security groups for each network interface
# on the VM creation dialog
secgroups: true
# True to hide the CPU setting in the VM creation dialog
instantiate_hide_cpu: false
# False to not scale the CPU.
# An integer value would be used as a multiplier as follows:
# CPU = instantiate_cpu_factor * VCPU
# Set it to 1 to tie CPU and vCPU.
instantiate_cpu_factor: false
# True to show the option to make an instance persistent
instantiate_persistent: true
# True to allow to create machines to cloud users
cloud_vm_create: true
# True to show the monitoring info (VM & VRouters)
show_monitoring_info: true
# True to show the attributes info (VM & VRouters)
show_attributes_info: true
# True to show the vCenter info (VM & VRouters)
show_vcenter_info: true
# True to show advanced options
show_attach_disk_advanced: true
show_attach_nic_advanced: true
# True to show the network configuration to instantiate Service template
show_vnet_instantiate_flow: true
# True to show schedule actions section to instantiate VM
show_sched_actions_instantiate: true
# True to show boot order section to instantiate VM
show_boot_order: true
tabs:
provision-tab:
panel_tabs:
vm_info_tab: false
vm_capacity_tab: true
vm_storage_tab: true
vm_network_tab: true
vm_snapshot_tab: true
vm_placement_tab: false
vm_actions_tab: true
vm_conf_tab: false
vm_template_tab: false
vm_log_tab: false
provision_tabs:
flows: true
templates: true
actions: &provisionactions
# In the cloud view, delete is the equivalent
# of 'onetemplate chmod --recursive'
Template.chmod: false
# In the cloud view, delete is the equivalent
# of 'onetemplate delete --recursive'
Template.delete: true
VM.rename: true
VM.resume: true
VM.reboot: true
VM.reboot_hard: true
VM.poweroff: true
VM.poweroff_hard: true
VM.undeploy: false
VM.undeploy_hard: false
VM.terminate: true
VM.terminate_hard: true
VM.resize: true
VM.disk_resize: true
VM.attachdisk: true
VM.detachdisk: true
VM.disk_saveas: true
VM.attachnic: true
VM.detachnic: true
VM.snapshot_create: true
VM.snapshot_revert: true
VM.snapshot_delete: true
VM.disk_snapshot_create: true
VM.disk_snapshot_revert: true
VM.disk_snapshot_rename: true
VM.disk_snapshot_delete: true
VM.migrate_poff: false
VM.migrate_poff_hard: false
VM.save_as_template: true
VM.lockU: true
VM.unlock: true
VM.startvnc: true
VM.startvmrc: true
VM.startspice: true
VM.vnc: true
VM.ssh: true
VM.rdp: true
VM.save_rdp: true
VM.save_virt_viewer: true
VM.updateconf: true
VM.attachsg: true
VM.detachsg: true
VM.instantiate_name: true
VM.backup_dialog: true
VM.backup: true
dashboard:
# Connected user's quotas
quotas: true
# Overview of connected user's VMs
vms: true
# Group's quotas
groupquotas: false
# Overview of group's VMs
groupvms: false
create_vm:
# True to allow capacity (CPU, MEMORY, VCPU) customization
capacity_select: true
# True to allow NIC customization
network_select: true
# True to allow vmgroup customization
vmgroup_select: true
# True to allow DISK size customization
disk_resize: true
# True to allow datastore customization
datastore_select: true
settings-tab:
panel_tabs:
settings_info_tab: false
settings_config_tab: true
settings_quotas_tab: true
settings_accounting_tab: true
settings_showback_tab: true
actions:
# Buttons for settings_info_tab
User.update_password: true
User.login_token: true
User.two_factor_auth: true
# Buttons for settings_config_tab
Settings.change_language: true
Settings.change_password: true
Settings.change_view: true
Settings.ssh_key: true
Settings.login_token: true
Settings.two_factor_auth: true
# Edit button in settings_quotas_tab
User.quotas_dialog: false
vms-tab:
actions: *provisionactions
images-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Datastore
#- 6 # Size
- 7 # Type
#- 8 # Registration time
#- 9 # Persistent
- 10 # Status
- 11 # #VMs
#- 12 # Target
vnets-tab:
# Allows to instantiate a service with vnets
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
- 5 # Status
#- 6 # Reservation
- 7 # Cluster
#- 8 # Bridge
#- 9 # Leases
#- 10 # VLAN ID
vnets-templates-tab:
# Allows to instantiate a service with vnets templates
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
#- 6 # Cluster
secgroups-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
#- 5 # Labels
vmgroup-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Vms
#- 6 # Labels
#- 7 # Search data
datastores-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
#- 5 # Capacity
#- 6 # Cluster
#- 7 # Basepath
#- 8 # TM
#- 9 # DS
#- 10 # Type
- 11 # Status
#- 12 # Labels
#- 13 # Search data
templates-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Owner
- 3 # Group
- 4 # Name
- 5 # Registration time
#- 6 # Labels
#- 7 # Search data
actions: *provisionactions
template_creation_tabs:
general: true
storage: true
network: true
os_booting: true
features: true
input_output: true
context: true
actions: true
scheduling: false
hybrid: false
vmgroup: true
other: true
numa: true
backup: true

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -3,246 +3,249 @@ link_logo:
text_link_logo:
confirm_vms: false
enabled_tabs:
- provision-tab
- settings-tab
features:
# True to show showback monthly reports, and VM cost
showback: true
# Allows to change the security groups for each network interface
# on the VM creation dialog
secgroups: false
# True to hide the CPU setting in the VM creation dialog
instantiate_hide_cpu: false
# False to not scale the CPU.
# An integer value would be used as a multiplier as follows:
# CPU = instantiate_cpu_factor * VCPU
# Set it to 1 to tie CPU and vCPU.
instantiate_cpu_factor: false
# True to show the option to make an instance persistent
instantiate_persistent: true
# True to allow to create machines to cloud users
cloud_vm_create: true
# True to show the monitoring info (VM & VRouters)
show_monitoring_info: true
# True to show the attributes info (VM & VRouters)
show_attributes_info: true
# True to show the vCenter info (VM & VRouters)
show_vcenter_info: true
# True to show advanced options
show_attach_disk_advanced: true
show_attach_nic_advanced: true
# True to show the network configuration to instantiate Service template
show_vnet_instantiate_flow: true
# True to show schedule actions section to instantiate VM
show_sched_actions_instantiate: true
# True to show boot order section to instantiate VM
show_boot_order: true
tabs:
provision-tab:
panel_tabs:
vm_info_tab: false
vm_capacity_tab: true
vm_storage_tab: true
vm_network_tab: true
vm_snapshot_tab: true
vm_placement_tab: false
vm_actions_tab: true
vm_conf_tab: false
vm_template_tab: false
vm_log_tab: false
provision_tabs:
flows: true
templates: true
actions: &provisionactions
# In the cloud view, delete is the equivalent
# of 'onetemplate chmod --recursive'
Template.chmod: false
# In the cloud view, delete is the equivalent
# of 'onetemplate delete --recursive'
Template.delete: true
VM.rename: true
VM.resume: true
VM.reboot: true
VM.reboot_hard: true
VM.poweroff: true
VM.poweroff_hard: true
VM.undeploy: false
VM.undeploy_hard: false
VM.terminate: true
VM.terminate_hard: true
VM.resize: true
VM.disk_resize: true
VM.attachdisk: true
VM.detachdisk: true
VM.disk_saveas: true
VM.attachnic: true
VM.detachnic: true
VM.snapshot_create: true
VM.snapshot_revert: true
VM.snapshot_delete: true
VM.disk_snapshot_create: true
VM.disk_snapshot_revert: true
VM.disk_snapshot_rename: true
VM.disk_snapshot_delete: true
VM.migrate_poff: false
VM.migrate_poff_hard: false
VM.save_as_template: true
VM.lockU: true
VM.unlock: true
VM.startvnc: true
VM.startvmrc: true
VM.startspice: true
VM.vnc: true
VM.ssh: true
VM.rdp: true
VM.save_rdp: true
VM.save_virt_viewer: true
VM.updateconf: true
VM.attachsg: true
VM.detachsg: true
VM.instantiate_name: true
VM.backup_dialog: false
VM.backup: false
dashboard:
# Connected user's quotas
quotas: true
# Overview of connected user's VMs
vms: true
# Group's quotas
groupquotas: false
# Overview of group's VMs
groupvms: false
create_vm:
# True to allow capacity (CPU, MEMORY, VCPU) customization
capacity_select: true
# True to allow NIC customization
network_select: true
# True to allow vmgroup customization
vmgroup_select: true
# True to allow DISK size customization
disk_resize: true
# True to allow datastore customization
datastore_select: true
settings-tab:
panel_tabs:
settings_info_tab: false
settings_config_tab: true
settings_quotas_tab: true
settings_accounting_tab: true
settings_showback_tab: true
actions:
# Buttons for settings_info_tab
User.update_password: true
User.login_token: true
User.two_factor_auth: true
# Buttons for settings_config_tab
Settings.change_language: true
Settings.change_password: true
Settings.change_view: true
Settings.ssh_key: true
Settings.login_token: true
Settings.two_factor_auth: true
# Edit button in settings_quotas_tab
User.quotas_dialog: false
vms-tab:
actions: *provisionactions
images-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Datastore
#- 6 # Size
- 7 # Type
#- 8 # Registration time
#- 9 # Persistent
- 10 # Status
- 11 # #VMs
#- 12 # Target
vnets-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
- 5 # Status
#- 6 # Reservation
- 7 # Cluster
#- 8 # Bridge
#- 9 # Leases
#- 10 # VLAN ID
secgroups-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
#- 3 # Owner
- 4 # Group
#- 5 # Labels
vmgroup-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
- 5 # Vms
#- 6 # Labels
#- 7 # Search data
datastores-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Name
- 3 # Owner
- 4 # Group
#- 5 # Capacity
#- 6 # Cluster
#- 7 # Basepath
#- 8 # TM
#- 9 # DS
#- 10 # Type
- 11 # Status
#- 12 # Labels
#- 13 # Search data
templates-tab:
table_columns:
- 0 # Checkbox
- 1 # ID
- 2 # Owner
- 3 # Group
- 4 # Name
- 5 # Registration time
#- 6 # Labels
#- 7 # Search data
actions: *provisionactions
template_creation_tabs:
general: true
storage: true
network: true
os_booting: true
features: true
input_output: true
context: true
actions: true
scheduling: false
hybrid: false
vmgroup: true
other: true
numa: true
backup: false

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -65,6 +65,7 @@ module OpenNebulaJSON
when "snapshot_delete" then self.snapshot_delete(action_hash['params'])
when "lock" then lock(action_hash['params']['level'].to_i)
when "unlock" then unlock()
when "restore" then restore(action_hash['params'])
else
error_msg = "#{action_hash['perform']} action not " <<
" available for this resource"
@@ -137,5 +138,10 @@ module OpenNebulaJSON
def snapshot_delete(params=Hash.new)
super(params['snapshot_id'].to_i)
end
def restore(params=Hash.new)
restore_opts = params['restore_opts'] || ""
super(params['dst_id'].to_i, restore_opts)
end
end
end
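The new `restore` entries above forward a Sunstone action hash to the `one.image.restore` API call, coercing `dst_id` (the target datastore) to an integer and defaulting `restore_opts` to an empty string. A minimal self-contained sketch of that dispatch path; the class name here is an illustrative stub, not the actual Sunstone class:

```ruby
class ImageRestoreStub
  attr_reader :calls

  def initialize
    @calls = []
  end

  # Mirrors the wrapper above: 'restore_opts' defaults to an empty string
  # and 'dst_id' is coerced to an integer
  def restore(params = {})
    restore_opts = params['restore_opts'] || ""
    @calls << [:restore, params['dst_id'].to_i, restore_opts]
  end

  # Simplified form of the perform_action case statement in the diff
  def perform_action(action_hash)
    case action_hash['perform']
    when "restore" then restore(action_hash['params'])
    else
      raise ArgumentError,
            "#{action_hash['perform']} action not available for this resource"
    end
  end
end

img = ImageRestoreStub.new
img.perform_action('perform' => 'restore',
                   'params'  => { 'dst_id' => '100', 'restore_opts' => 'NO_IP="YES"' })
img.calls.first # => [:restore, 100, "NO_IP=\"YES\""]
```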


@@ -134,6 +134,8 @@ module OpenNebulaJSON
sg_attach(action_hash['params'])
when 'sg_detach'
sg_detach(action_hash['params'])
when 'backup'
backup(action_hash['params']['dst_id'].to_i)
else
error_msg = "#{action_hash['perform']} action not " \
' available for this resource'
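The `backup` action above takes only the destination datastore ID; per-VM behaviour comes from the BACKUP_CONFIG attributes (BACKUP_VOLATILE, FS_FREEZE, KEEP_LAST) described in the commit message. A hedged sketch of normalizing such a configuration; the helper name, defaults, and mode list are assumptions drawn from the commit text, not OpenNebula code:

```ruby
# Hypothetical helper (not part of the OpenNebula codebase) normalizing
# BACKUP_CONFIG: FS_FREEZE is one of none/qemu-agent/suspend per the
# commit message
FS_FREEZE_MODES = %w[NONE QEMU-AGENT SUSPEND].freeze

def normalize_backup_config(conf = {})
  freeze_mode = (conf['FS_FREEZE'] || 'NONE').upcase
  unless FS_FREEZE_MODES.include?(freeze_mode)
    raise ArgumentError, "unknown FS_FREEZE mode: #{freeze_mode}"
  end

  {
    'BACKUP_VOLATILE' => conf.fetch('BACKUP_VOLATILE', 'NO').to_s.upcase == 'YES',
    'FS_FREEZE'       => freeze_mode,
    # KEEP_LAST = 0 is treated here as "keep every backup"
    'KEEP_LAST'       => conf.fetch('KEEP_LAST', 0).to_i
  }
end

cfg = normalize_backup_config('FS_FREEZE' => 'qemu-agent', 'KEEP_LAST' => '3')
```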


@@ -135,6 +135,7 @@ require.config({
"tabs/datastores-tab",
"tabs/images-tab",
"tabs/files-tab",
"tabs/backups-tab",
"tabs/marketplaces-tab",
"tabs/marketplaceapps-tab",
"tabs/network-top-tab",


@@ -73,6 +73,7 @@ define(function(require) {
"Host": Host,
"Image": Image,
"File": Image,
"Backup": Image,
"Network": Network,
"VNTemplate": VNTemplate,
"Role": Role,


@@ -18,16 +18,18 @@ define(function(require) {
var OpenNebulaAction = require('./action');
var Config = require('sunstone-config');
var OpenNebulaHelper = require('./helper');
var Locale = require('utils/locale');
var RESOURCE = "DATASTORE";
var STATES_STR = [
Locale.tr("ON"),
Locale.tr("OFF")];
var TYPES_STR = [
Locale.tr("IMAGE"),
Locale.tr("SYSTEM"),
Locale.tr("FILE"),
Locale.tr("BACKUP")
];
var STATES = {
@@ -38,7 +40,8 @@ define(function(require) {
var TYPES = {
IMAGE_DS : 0,
SYSTEM_DS : 1,
FILE_DS : 2,
BACKUP_DS : 3
};
var dsMadIndex = {};
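The TYPES map above gains BACKUP_DS = 3, matching the new BACKUP entry in TYPES_STR. A small sketch (in Ruby for consistency with the other examples; the constant and helper names are illustrative, mirroring the JS constants) of the index-to-label translation the Sunstone tables rely on:

```ruby
# Numeric datastore types, mirroring the TYPES map in datastore.js
DS_TYPES = {
  'IMAGE_DS'  => 0,
  'SYSTEM_DS' => 1,
  'FILE_DS'   => 2,
  'BACKUP_DS' => 3   # new in this change
}.freeze

# Display labels, mirroring TYPES_STR
DS_TYPE_STR = %w[IMAGE SYSTEM FILE BACKUP].freeze

# Translate the numeric TYPE stored in the datastore into its label
def ds_type_label(type_id)
  DS_TYPE_STR.fetch(type_id.to_i) { 'UNKNOWN' }
end

ds_type_label(DS_TYPES['BACKUP_DS']) # => "BACKUP"
```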


@@ -39,7 +39,8 @@ define(function(require) {
Locale.tr("DATABLOCK"),
Locale.tr("KERNEL"),
Locale.tr("RAMDISK"),
Locale.tr("CONTEXT"),
Locale.tr("BACKUP")
];
var STATES_COLOR = [
@@ -76,7 +77,8 @@ define(function(require) {
DATABLOCK : 2,
KERNEL : 3,
RAMDISK : 4,
CONTEXT : 5,
BACKUP : 6
};
var Image = {
@@ -170,6 +172,10 @@
},
"unlock" : function(params) {
OpenNebulaAction.simple_action(params, RESOURCE, "unlock");
},
"restore" : function(params) {
var action_obj = params.data.extra_param ? params.data.extra_param : {};
OpenNebulaAction.simple_action(params, RESOURCE, "restore", action_obj)
}
}

Some files were not shown because too many files have changed in this diff