mirror of git://git.proxmox.com/git/pve-docs.git synced 2025-06-03 09:06:03 +03:00

Add documentation for CIFS Storage Plugin.

This commit is contained in:
Wolfgang Link 2018-04-05 14:08:24 +02:00 committed by Thomas Lamprecht
parent dfe26b436c
commit de14ebffd2
5 changed files with 108 additions and 3 deletions


@ -106,6 +106,7 @@ We currently support the following Network storage types:
* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* CIFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS
@ -125,7 +126,7 @@ running Containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.
KVM live backup works for all storage types including VM images on
-NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
+NFS, CIFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
optimized for storing VM backups quickly and effectively (sparse files, out
of order data, minimized I/O).

pve-storage-cifs.adoc (new file)

@ -0,0 +1,99 @@
CIFS Backend
------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CIFS
endif::wiki[]
Storage pool type: `cifs`
The CIFS backend is based on the directory backend, so it shares most
properties. The directory layout and the file naming conventions are
the same. The main advantage is that you can directly configure the
CIFS server, so the backend can mount the share automatically in
the whole cluster. There is no need to modify `/etc/fstab`. The backend
can also test if the server is online, and provides a method to query
the server for exported shares.
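For orientation, the mount the backend performs on each node is roughly equivalent to the manual CIFS mount below. This is an illustrative sketch only; the exact options Proxmox VE passes may differ, and `<server>`, `<share>`, `<username>` and `<STORAGE_ID>` are placeholders:

```
# Roughly what the backend does on every cluster node (illustrative
# sketch, not the exact options used by Proxmox VE):
mount -t cifs //<server>/<share> /mnt/pve/<STORAGE_ID> \
      -o username=<username>,vers=3.0
```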
Configuration
~~~~~~~~~~~~~
The backend supports all common storage properties, except the shared
flag, which is always set. Additionally, the following properties are
used to configure the CIFS server:
server::
Server IP or DNS name. To avoid DNS lookup delays, it is usually
preferable to use an IP address instead of a DNS name - unless you
have a very reliable DNS server, or list the server in the local
`/etc/hosts` file.
share::
CIFS share (as listed by `pvesm cifsscan`).
Optional properties:
username::
If not present, the username `guest` is used.
password::
The user password.
It will be saved in a file in a private directory (`/etc/pve/priv/<STORAGE_ID>.cred`).
domain::
Sets the domain (workgroup) of the user.
smbversion::
SMB protocol version (default is `3`).
SMB1 is not supported due to security issues.
path::
The local mount point (defaults to `/mnt/pve/<STORAGE_ID>/`).
.Configuration Example (`/etc/pve/storage.cfg`)
----
cifs: backup
path /mnt/pve/backup
server 10.0.0.11
share VMData
content backup
username anna
smbversion 3
----
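The stanza format shown above is a `<type>: <storage id>` header followed by indented `key value` lines. As an illustration of that layout only, here is a small parser sketch; `parse_storage_cfg` is a hypothetical helper, not the parser Proxmox VE actually uses (that lives in the pve-storage Perl code):

```python
# Minimal sketch of reading a storage.cfg stanza like the example above.
def parse_storage_cfg(text):
    storages = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line[0].isspace():           # stanza header, e.g. "cifs: backup"
            stype, sid = line.split(":", 1)
            current = {"type": stype.strip()}
            storages[sid.strip()] = current
        elif current is not None:           # indented "key value" property line
            key, _, value = line.strip().partition(" ")
            current[key] = value.strip()
    return storages

cfg = """\
cifs: backup
        path /mnt/pve/backup
        server 10.0.0.11
        share VMData
        content backup
        username anna
        smbversion 3
"""
print(parse_storage_cfg(cfg)["backup"]["share"])   # VMData
```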
Storage Features
~~~~~~~~~~~~~~~~
CIFS does not support snapshots on the storage level, but the backend
uses `qcow2` features to implement snapshots and cloning.
.Storage features for backend `cifs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |yes |qcow2 |qcow2
|==============================================================================
Examples
~~~~~~~~
You can get a list of exported CIFS shares with:
# pvesm cifsscan <server> [--username <username>] [--password]
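A share found this way can then be added as a storage. The command below is a sketch based on the configuration properties described above; the storage ID and option values are placeholders:

```
# pvesm add cifs <STORAGE_ID> --server <server> --share <share> \
    --username <username> --content backup
```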
ifdef::wiki[]
See Also
~~~~~~~~
* link:/wiki/Storage[Storage]
endif::wiki[]


@ -71,6 +71,7 @@ snapshots and clones.
|ZFS (local) |zfspool |file |no |yes |yes
|Directory |dir |file |no |no^1^ |yes
|NFS |nfs |file |yes |no^1^ |yes
|CIFS |cifs |file |yes |no^1^ |yes
|GlusterFS |glusterfs |file |yes |no^1^ |yes
|LVM |lvm |block |no^2^ |no |yes
|LVM-thin |lvmthin |block |no |yes |yes
@ -370,6 +371,8 @@ See Also
* link:/wiki/Storage:_NFS[Storage: NFS]
* link:/wiki/Storage:_CIFS[Storage: CIFS]
* link:/wiki/Storage:_RBD[Storage: RBD]
* link:/wiki/Storage:_ZFS[Storage: ZFS]
@ -386,6 +389,8 @@ include::pve-storage-dir.adoc[]
include::pve-storage-nfs.adoc[]
include::pve-storage-cifs.adoc[]
include::pve-storage-glusterfs.adoc[]
include::pve-storage-zfspool.adoc[]


@ -163,7 +163,7 @@ On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
-whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
+whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.
* the *QEMU image format* is a copy on write format which allows snapshots, and


@ -111,7 +111,7 @@ started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
-the backup is an NFS server, you should set `--tmpdir` to reside on a
+the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a many fold performance
improvement. Use of a local `tmpdir` is also required if you want to
backup a local container using ACLs in suspend mode if the backup