// pve-storage-dir.adoc
Directory Backend
-----------------
Storage pool type: `dir`
{pve} can use local directories or locally mounted shares for
storage. A directory is a file level storage, so you can store any
content type like virtual disk images, containers, templates, ISO images
or backup files.
NOTE: You can mount additional storages via standard Linux '/etc/fstab',
and then define a directory storage for that mount point. This way you
can use any file system supported by Linux.
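For example, you could add an entry to '/etc/fstab' and then define a
matching directory storage for its mount point. The device, file system
type, mount point and storage name below are just placeholders for
illustration:

----
# /etc/fstab entry (example values)
/dev/sdb1 /mnt/data ext4 defaults 0 2
----

The matching storage definition could then look like:

----
dir: data
path /mnt/data
content images,rootdir
----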
This backend assumes that the underlying directory is POSIX
compatible, but nothing else. This implies that you cannot create
snapshots at the storage level. But there exists a workaround for VM
images using the `qcow2` file format, because that format supports
snapshots internally.
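For example, you could allocate a new disk image in `qcow2` format with
`pvesm`, so that snapshots become possible even on a plain directory
storage. The VMID and image name are placeholders:

# pvesm alloc local <VMID> vm-<VMID>-disk1.qcow2 4G --format qcow2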
TIP: Some storage types do not support `O_DIRECT`, so you can't use
cache mode `none` with such storages. Simply use cache mode
`writeback` instead.
We use a predefined directory layout to store different content types
into different sub-directories. This layout is used by all file level
storage backends.
.Directory layout
[width="100%",cols="d,m",options="header"]
|===========================================================
|Content type |Subdir
|VM images |images/<VMID>/
|ISO images |template/iso/
|Container templates |template/cache/
|Backup files |dump/
|===========================================================
Configuration
~~~~~~~~~~~~~
This backend supports all common storage properties, and adds an
additional property called `path` to specify the directory. This
needs to be an absolute file system path.
.Configuration Example ('/etc/pve/storage.cfg')
----
dir: backup
path /mnt/backup
content backup
maxfiles 7
----
The above configuration defines a storage pool called `backup`. That pool
can be used to store up to 7 backups (`maxfiles 7`) per VM. The real
path for the backup files is '/mnt/backup/dump/...'.
File naming conventions
~~~~~~~~~~~~~~~~~~~~~~~
This backend uses a well defined naming scheme for VM images:
vm-<VMID>-<NAME>.<FORMAT>
`<VMID>`::
This specifies the owner VM.
`<NAME>`::
This can be an arbitrary name (`ascii`) without white spaces. The
backend uses `disk[N]` as default, where `[N]` is replaced by an
integer to make the name unique.
`<FORMAT>`::
Specifies the image format (`raw|qcow2|vmdk`).
When you create a VM template, all VM images are renamed to indicate
that they are now read-only, and can be used as a base image for clones:
base-<VMID>-<NAME>.<FORMAT>
NOTE: Such base images are used to generate cloned images. So it is
important that those files are read-only, and never get modified. The
backend changes access mode to `0444`, and sets the immutable flag
(`chattr +i`) if the storage supports that.
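If you want to verify those protections on a given base image, something
like the following could be used; the path is just an example following
the naming scheme above:

# stat --format '%a' /var/lib/vz/images/<VMID>/base-<VMID>-disk1.raw
# lsattr /var/lib/vz/images/<VMID>/base-<VMID>-disk1.raw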
Storage Features
~~~~~~~~~~~~~~~~
As mentioned above, most file systems do not support snapshots out
of the box. To work around that problem, this backend is able to use
`qcow2` internal snapshot capabilities.
The same applies to clones. The backend uses the `qcow2` base image
feature to create clones.
.Storage features for backend `dir`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |no |qcow2 |qcow2
|==============================================================================
Examples
~~~~~~~~
Please use the following command to allocate a 4GB image on storage `local`:
# pvesm alloc local 100 vm-100-disk10.raw 4G
Formatting '/var/lib/vz/images/100/vm-100-disk10.raw', fmt=raw size=4294967296
successfully created 'local:100/vm-100-disk10.raw'
NOTE: The image name must conform to the above naming conventions.
The real file system path is shown with:
# pvesm path local:100/vm-100-disk10.raw
/var/lib/vz/images/100/vm-100-disk10.raw
And you can remove the image with:
# pvesm free local:100/vm-100-disk10.raw

// pve-storage-glusterfs.adoc
GlusterFS Backend
-----------------
Storage pool type: `glusterfs`
GlusterFS is a scalable network file system. The system uses a modular
design, runs on commodity hardware, and can provide a highly available
enterprise storage at low costs. Such a system is capable of scaling to
several petabytes, and can handle thousands of clients.
NOTE: After a node/brick crash, GlusterFS does a full 'rsync' to make
sure data is consistent. This can take a very long time with large
files, so this backend is not suitable to store large VM images.
Configuration
~~~~~~~~~~~~~
The backend supports all common storage properties, and adds the
following GlusterFS specific options:
`server`::
GlusterFS volfile server IP or DNS name.
`server2`::
Backup volfile server IP or DNS name.
`volume`::
GlusterFS Volume.
`transport`::
GlusterFS transport: `tcp`, `unix` or `rdma`
.Configuration Example ('/etc/pve/storage.cfg')
----
glusterfs: Gluster
server 10.2.3.4
server2 10.2.3.5
volume glustervol
content images,iso
----
File naming conventions
~~~~~~~~~~~~~~~~~~~~~~~
The directory layout and the file naming conventions are inherited
from the `dir` backend.
Storage Features
~~~~~~~~~~~~~~~~
The storage provides a file level interface, but no native
snapshot/clone implementation.
.Storage features for backend `glusterfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images vztmpl iso backup |raw qcow2 vmdk |yes |qcow2 |qcow2
|==============================================================================

// pve-storage-iscsi.adoc
http://www.open-iscsi.org/[Open-iSCSI] initiator
------------------------------------------------
Storage pool type: `iscsi`
iSCSI is a widely employed technology used to connect to storage
servers. Almost all storage vendors support iSCSI. There are also open
source iSCSI target solutions available,
e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
Debian.
To use this backend, you need to install the 'open-iscsi'
package. This is a standard Debian package, but it is not installed by
default to save resources.
# apt-get install open-iscsi
Low-level iSCSI management tasks can be done using the 'iscsiadm' tool.
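For example, target discovery and login for a portal could look like
this; the portal address is just a placeholder, see `man iscsiadm` for
details:

# iscsiadm -m discovery -t sendtargets -p 10.10.10.1
# iscsiadm -m node --login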
Configuration
~~~~~~~~~~~~~
The backend supports the common storage properties `content`, `nodes`,
`disable`, and the following iSCSI specific properties:
portal::
iSCSI portal (IP or DNS name with optional port).
target::
iSCSI target.
.Configuration Example ('/etc/pve/storage.cfg')
----
iscsi: mynas
portal 10.10.10.1
target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
content none
----
TIP: If you want to use LVM on top of iSCSI, it makes sense to set
`content none`. That way it is not possible to create VMs using iSCSI
LUNs directly.
File naming conventions
~~~~~~~~~~~~~~~~~~~~~~~
The iSCSI protocol does not define an interface to allocate or delete
data. Instead, that needs to be done on the target side and is vendor
specific. The target simply exports existing volumes as numbered LUNs. So
{pve} iSCSI volume names just encode some information about the LUN as
seen by the Linux kernel.
Storage Features
~~~~~~~~~~~~~~~~
iSCSI is a block level storage type, and provides no management
interface. So it is usually best to export one big LUN, and set up LVM
on top of that LUN. You can then use the LVM plugin to manage the
storage on that iSCSI LUN.
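A minimal sketch of that setup, assuming the exported LUN shows up as
'/dev/sdb' on the {pve} node, is to initialize the LUN as an LVM physical
volume and create a volume group on it:

# pvcreate /dev/sdb
# vgcreate vgiscsi /dev/sdb

The resulting volume group can then be referenced by the `vgname`
property of an `lvm` storage definition.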
.Storage features for backend `iscsi`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images none |raw |yes |no |no
|==============================================================================
Examples
~~~~~~~~
Scan a remote iSCSI portal, and return a list of possible targets:
pvesm iscsiscan -portal <HOST[:PORT]>

// pve-storage-iscsidirect.adoc
User Mode iSCSI Backend
-----------------------
Storage pool type: `iscsidirect`
This backend provides basically the same functionality as the
Open-iSCSI backend, but uses a user-level library (package 'libiscsi2')
to implement it.
It should be noted that there are no kernel drivers involved, so this
can be viewed as a performance optimization. But this comes with the
drawback that you cannot use LVM on top of such an iSCSI LUN. So you need
to manage all space allocations on the storage server side.
Configuration
~~~~~~~~~~~~~
The user mode iSCSI backend uses the same configuration options as the
Open-iSCSI backend.
.Configuration Example ('/etc/pve/storage.cfg')
----
iscsidirect: faststore
portal 10.10.10.1
target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
----
Storage Features
~~~~~~~~~~~~~~~~
NOTE: This backend works with VMs only. Containers cannot use this
driver.
.Storage features for backend `iscsidirect`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images |raw |yes |no |no
|==============================================================================

// pve-storage-lvm.adoc
LVM Backend
-----------
Storage pool type: `lvm`
LVM is a thin software layer on top of hard disks and partitions. It
can be used to split available disk space into smaller logical
volumes. LVM is widely used on Linux and makes managing hard drives
easier.
Another use case is to put LVM on top of a big iSCSI LUN. That way you
can easily manage space on that iSCSI LUN, which would not be possible
otherwise, because the iSCSI specification does not define a
management interface for space allocation.
Configuration
~~~~~~~~~~~~~
The LVM backend supports the common storage properties `content`, `nodes`,
`disable`, and the following LVM specific properties:
`vgname`::
LVM volume group name. This must point to an existing volume group.
`base`::
Base volume. This volume is automatically activated before accessing
the storage. This is mostly useful when the LVM volume group resides
on a remote iSCSI server.
`saferemove`::
Zero-out data when removing LVs. When removing a volume, this makes
sure that all data gets erased.
`saferemove_throughput`::
Wipe throughput ('cstream -t' parameter value).
.Configuration Example ('/etc/pve/storage.cfg')
----
lvm: myspace
vgname myspace
content rootdir,images
----
File naming conventions
~~~~~~~~~~~~~~~~~~~~~~~
The backend uses basically the same naming conventions as the ZFS pool
backend.
vm-<VMID>-<NAME> // normal VM images
Storage Features
~~~~~~~~~~~~~~~~
LVM is a typical block storage, but this backend does not support
snapshots and clones. Unfortunately, normal LVM snapshots are quite
inefficient, because they interfere with all writes on the whole volume
group during snapshot time.
One big advantage is that you can use it on top of a shared storage,
for example an iSCSI LUN. The backend itself implements proper
cluster-wide locking.
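A configuration for such a shared setup could look like the following
sketch. The iSCSI target, the LUN volume name used as `base`, and the
volume group name are assumptions and need to match your environment:

----
iscsi: mynas
portal 10.10.10.1
target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
content none

lvm: myiscsilvm
vgname vgiscsi
base mynas:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
content rootdir,images
shared 1
----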
TIP: The newer LVM-thin backend allows snapshots and clones, but does
not support shared storage.
.Storage features for backend `lvm`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |possible |no |no
|==============================================================================
Examples
~~~~~~~~
List available volume groups:
# pvesm lvmscan

// pve-storage-nfs.adoc
NFS Backend
-----------
Storage pool type: `nfs`
The NFS backend is based on the directory backend, so it shares most
properties. The directory layout and the file naming conventions are
the same. The main advantage is that you can directly configure the
NFS server properties, so the backend can mount the share
automatically. There is no need to modify '/etc/fstab'. The backend
can also test if the server is online, and provides a method to query
the server for exported shares.
Configuration
~~~~~~~~~~~~~
The backend supports all common storage properties, except the shared
flag, which is always set. Additionally, the following properties are
used to configure the NFS server:
server::
Server IP or DNS name. To avoid DNS lookup delays, it is usually
preferable to use an IP address instead of a DNS name - unless you
have a very reliable DNS server, or list the server in the local
`/etc/hosts` file.
export::
NFS export path (as listed by `pvesm nfsscan`).
You can also set NFS mount options:
path::
The local mount point (defaults to '/mnt/pve/`<STORAGE_ID>`/').
options::
NFS mount options (see `man nfs`).
.Configuration Example ('/etc/pve/storage.cfg')
----
nfs: iso-templates
path /mnt/pve/iso-templates
server 10.0.0.10
export /space/iso-templates
options vers=3,soft
content iso,vztmpl
----
TIP: After an NFS request times out, NFS requests are retried
indefinitely by default. This can lead to unexpected hangs on the
client side. For read-only content, it is worth considering the NFS
`soft` option, which limits the number of retries to three.
Storage Features
~~~~~~~~~~~~~~~~
NFS does not support snapshots, but the backend uses `qcow2` features
to implement snapshots and cloning.
.Storage features for backend `nfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |yes |qcow2 |qcow2
|==============================================================================
Examples
~~~~~~~~
You can get a list of exported NFS shares with:
# pvesm nfsscan <server>

// pve-storage-rbd.adoc
Ceph RADOS Block Devices (RBD)
------------------------------
Storage pool type: `rbd`
http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement a feature rich block level
storage, and you get the following advantages:
* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available
NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the same
node is possible.
Configuration
~~~~~~~~~~~~~
This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:
monhost::
List of monitor daemon IPs.
pool::
Ceph pool name.
username::
RBD user Id.
krbd::
Access rbd through the krbd kernel module. This is required if you want to
use the storage for containers.
.Configuration Example ('/etc/pve/storage.cfg')
----
rbd: ceph3
monhost 10.1.1.20 10.1.1.21 10.1.1.22
pool ceph3
content images
username admin
----
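If the pool should also be used for containers, a configuration enabling
the `krbd` option could look like this sketch (the storage name is an
assumption):

----
rbd: ceph3-ct
monhost 10.1.1.20 10.1.1.21 10.1.1.22
pool ceph3
content rootdir
username admin
krbd 1
----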
TIP: You can use the 'rbd' utility to do low-level management tasks.
Authentication
~~~~~~~~~~~~~~
If you use cephx authentication, you need to copy the keyfile from
Ceph to the Proxmox VE host.
Create the directory '/etc/pve/priv/ceph' with
mkdir /etc/pve/priv/ceph
Then copy the keyring
scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
Storage Features
~~~~~~~~~~~~~~~~
The `rbd` backend is a block level storage, and implements full
snapshot and clone functionality.
.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

// pve-storage-zfspool.adoc
Local ZFS Pool Backend
----------------------
Storage pool type: `zfspool`
This backend allows you to access local ZFS pools (or ZFS filesystems
inside such pools).
Configuration
~~~~~~~~~~~~~
The backend supports the common storage properties `content`, `nodes`,
`disable`, and the following ZFS specific properties:
pool::
Select the ZFS pool/filesystem. All allocations are done within that
pool.
blocksize::
Set ZFS blocksize parameter.
sparse::
Use ZFS thin-provisioning. A sparse volume is a volume whose
reservation is not equal to the volume size.
.Configuration Example ('/etc/pve/storage.cfg')
----
zfspool: vmdata
pool tank/vmdata
content rootdir,images
sparse
----
File naming conventions
~~~~~~~~~~~~~~~~~~~~~~~
The backend uses the following naming scheme for VM images:
vm-<VMID>-<NAME> // normal VM images
base-<VMID>-<NAME> // template VM image (read-only)
subvol-<VMID>-<NAME> // subvolumes (ZFS filesystem for containers)
`<VMID>`::
This specifies the owner VM.
`<NAME>`::
This can be an arbitrary name (`ascii`) without white spaces. The
backend uses `disk[N]` as default, where `[N]` is replaced by an
integer to make the name unique.
Storage Features
~~~~~~~~~~~~~~~~
ZFS is probably the most advanced storage type regarding snapshot and
cloning. The backend uses ZFS datasets for both VM images (format
`raw`) and container data (format `subvol`). ZFS properties are
inherited from the parent dataset, so you can simply set defaults
on the parent dataset.
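For example, setting a property on the configured pool/filesystem is
enough for all volumes created below it to inherit that setting (pool
name as in the configuration example above):

# zfs set compression=on tank/vmdata
# zfs get -r compression tank/vmdata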
.Storage features for backend `zfspool`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw subvol |no |yes |yes
|==============================================================================
Examples
~~~~~~~~
It is recommended to create an extra ZFS file system to store your VM images:
# zfs create tank/vmdata
To enable compression on that newly allocated filesystem:
# zfs set compression=on tank/vmdata
You can get a list of available ZFS filesystems with:
# pvesm zfsscan

// pvesm.adoc
include::attributes.txt[]
[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
NAME
----
pvesm - Proxmox VE Storage Manager
SYNOPSIS
--------
include::pvesm.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
endif::manvolnum[]
The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.
One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be extended to include further storage types in the future.
Storage Types
-------------
There are basically two different classes of storage types:
Block level storage::
Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ..) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.
File level storage::
They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.
.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description |PVE type |Level |Shared|Snapshots|Stable
|ZFS (local) |zfspool |file |no |yes |yes
|Directory |dir |file |no |no |yes
|NFS |nfs |file |yes |no |yes
|GlusterFS |glusterfs |file |yes |no |yes
|LVM |lvm |block |no |no |yes
|LVM-thin |lvmthin |block |no |yes |beta
|iSCSI/kernel |iscsi |block |yes |no |yes
|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
|Ceph/RBD |rbd |block |yes |yes |yes
|Sheepdog |sheepdog |block |yes |yes |beta
|DRBD9 |drbd |block |yes |yes |beta
|ZFS over iSCSI |zfs |block |yes |yes |yes
|=========================================================
TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.
Storage Configuration
---------------------
All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
Storage Pools
~~~~~~~~~~~~~
Each storage pool has a `<type>`, and is uniquely identified by its `<STORAGE_ID>`. A pool configuration looks like this:
----
<type>: <STORAGE_ID>
<property> <value>
<property> <value>
...
----
NOTE: There is one special local storage pool named `local`. It refers to
directory '/var/lib/vz' and is automatically generated at installation
time.
The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.
.Default storage configuration ('/etc/pve/storage.cfg')
====
dir: local
path /var/lib/vz
content backup,iso,vztmpl,images,rootdir
maxfiles 3
====
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~
A few storage properties are common among different storage types. A
configuration sketch using some of them is shown after the list below.
nodes::
List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.
content::
A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.
images:::
KVM-Qemu VM images.
rootdir:::
Container data (root directories).
vztmpl:::
Container templates.
backup:::
Backup files ('vzdump').
iso:::
ISO images.
shared::
Mark storage as shared.
disable::
You can use this flag to disable the storage completely.
maxfiles::
Maximal number of backup files per VM. Use `0` for unlimited.
format::
Default image format (`raw|qcow2|vmdk`).
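For example, a directory storage that is only usable on two specific
nodes and keeps at most three backups per VM could be configured like
this sketch (storage, path and node names are assumptions):

----
dir: archive
path /mnt/archive
content backup,iso
maxfiles 3
nodes node1,node2
----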
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.
Volumes
-------
We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:
local:230/example-image.raw
local:iso/debian-501-amd64-netinst.iso
local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz
iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
To get the filesystem path for a `<VOLUME_ID>` use:
pvesm path <VOLUME_ID>
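For example, assuming the default `local` directory storage, the ISO
volume listed above would resolve to a path below '/var/lib/vz':

# pvesm path local:iso/debian-501-amd64-netinst.iso
/var/lib/vz/template/iso/debian-501-amd64-netinst.iso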
Volume Ownership
~~~~~~~~~~~~~~~~
There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.
When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
Using the Command Line Interface
--------------------------------
It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do
any of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.
Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.
Examples
~~~~~~~~
Add storage pools
pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
pvesm add dir <STORAGE_ID> --path <PATH>
pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
Disable storage pools
pvesm set <STORAGE_ID> --disable 1
Enable storage pools
pvesm set <STORAGE_ID> --disable 0
Change/set storage options
pvesm set <STORAGE_ID> <OPTIONS>
pvesm set <STORAGE_ID> --shared 1
pvesm set local --format qcow2
pvesm set <STORAGE_ID> --content iso
Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.
pvesm remove <STORAGE_ID>
Allocate volumes
pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]
Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.
pvesm alloc local <VMID> '' 4G
Free volumes
pvesm free <VOLUME_ID>
WARNING: This really destroys all volume data.
List storage status
pvesm status
List storage contents
pvesm list <STORAGE_ID> [--vmid <VMID>]
List volumes allocated by VMID
pvesm list <STORAGE_ID> --vmid <VMID>
List iso images
pvesm list <STORAGE_ID> --iso
List container templates
pvesm list <STORAGE_ID> --vztmpl
Show filesystem path for a volume
pvesm path <VOLUME_ID>
// backend documentation
include::pve-storage-dir.adoc[]
include::pve-storage-nfs.adoc[]
include::pve-storage-glusterfs.adoc[]
include::pve-storage-zfspool.adoc[]
include::pve-storage-lvm.adoc[]
include::pve-storage-iscsi.adoc[]
include::pve-storage-iscsidirect.adoc[]
include::pve-storage-rbd.adoc[]
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]