[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.
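
The following is a rough sketch of how such a hyper-converged setup
could be bootstrapped with the 'pveceph' tool; the network and pool
name below are placeholders and must be adjusted to your environment
(OSD setup is omitted here):

----
pveceph install                        # install the Ceph packages
pveceph init --network 10.10.10.0/24   # placeholder cluster network
pveceph createmon                      # create the first monitor
pveceph createpool rbd                 # placeholder pool name
----
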
[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Access rbd through the krbd kernel module. This is required if you want to
use the storage for containers.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
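
If Ceph runs directly on the {pve} nodes, `monhost` and `username`
can be left out, as the backend then uses the local cluster
configuration. A minimal sketch of such an entry, with a hypothetical
storage ID `ceph-local` and `krbd` enabled for container use:

----
rbd: ceph-local
        pool rbd
        content images rootdir
        krbd 1
----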

TIP: You can use the `rbd` utility to do low-level management tasks.
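
For example, to list all images in a pool and inspect one of them
(the pool and image names below are placeholders):

----
rbd -p ceph-external ls                # list all images in the pool
rbd info ceph-external/vm-100-disk-1   # show details for one image
----
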
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.
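
For the `ceph-external` storage defined above, the keyring would thus
be copied to `/etc/pve/priv/ceph/ceph-external.keyring`.
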
If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================
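
As a quick sanity check, a volume can be allocated on the storage
with `pvesm`; the storage ID, VM ID and size here are examples only:

----
pvesm alloc ceph-external 100 vm-100-disk-1 4G   # allocate a 4 GiB raw image for VM 100
pvesm list ceph-external                         # list volumes on the storage
----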

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]