[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]
Storage pool type: `rbd`
https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich, block-level
storage, and you get the following advantages:
* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available
NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.
[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~
This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd`-specific properties:
monhost::
List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.
pool::
Ceph pool name.
username::
RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.
krbd::
Enforce access to rados block devices through the krbd kernel module. Optional.
NOTE: Containers will use `krbd` independent of the option value.
.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----
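If Ceph runs on the {pve} cluster itself, `monhost` and `username` can be
left out, since the backend can use the local Ceph configuration. A minimal
sketch of such a configuration, assuming a local pool named `ceph-local`
already exists:

----
rbd: ceph-local
        pool ceph-local
        content images rootdir
----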
TIP: You can use the `rbd` utility to do low-level management tasks.
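For instance, to list all images on the pool from the external example above
and inspect one of them (the image name `vm-100-disk-0` is only a
placeholder; actual names depend on your guests):

----
rbd ls -p ceph-external
rbd info ceph-external/vm-100-disk-0
----

When run against an external cluster, `rbd` may additionally need to be
pointed at the right monitors and keyring, for example with its `-m` and
`--keyring` options.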
Authentication
~~~~~~~~~~~~~~
If you use `cephx` authentication, you need to copy the keyfile from your
external Ceph cluster to a Proxmox VE host.
Create the directory `/etc/pve/priv/ceph` with

----
mkdir /etc/pve/priv/ceph
----

Then copy the keyring

----
scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----
The keyring must be named to match your `<STORAGE_ID>`. For the storage
`ceph-external` from the example above, this would be
`/etc/pve/priv/ceph/ceph-external.keyring`. Copying the keyring generally
requires root privileges.
If Ceph is installed locally on the PVE cluster, this is done automatically by
`pveceph` or in the GUI.
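Afterwards you can check that {pve} can reach the storage, here assuming the
storage ID `ceph-external` from the configuration example:

----
pvesm status --storage ceph-external
pvesm list ceph-external
----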
Storage Features
~~~~~~~~~~~~~~~~
The `rbd` backend provides block-level storage and implements full
snapshot and clone functionality; a short example follows the table below.
.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================
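For example, the snapshot capability can be exercised through the guest
tools, assuming a VM with ID `100` keeps its disks on this storage (the
snapshot name `before-update` is arbitrary):

----
qm snapshot 100 before-update
qm rollback 100 before-update
----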
ifdef::wiki[]
See Also
~~~~~~~~
* link:/wiki/Storage[Storage]
endif::wiki[]