[[storage_cephfs]]
Ceph Filesystem (CephFS)
------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CephFS
endif::wiki[]

Storage pool type: `cephfs`

CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.

TIP: {pve} can xref:chapter_pveceph[manage Ceph setups], which makes
configuring a CephFS storage easier. As modern hardware offers a lot of
processing power and RAM, running storage services and VMs on the same node is
possible without a significant performance impact.
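
For example, on a node of a {pve}-managed Ceph setup, a CephFS with a matching
storage entry could be created in one step (a minimal sketch; the filesystem
name `cephfs` is just an example, see xref:chapter_pveceph[] for details):

----
# pveceph fs create --name cephfs --add-storage
----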

To use the CephFS storage plugin, you must replace the stock Debian Ceph client
by adding our xref:sysadmin_package_repositories_ceph[Ceph repository].
Once added, run `apt update`, followed by `apt dist-upgrade`, to get the
newest packages.
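
For example, on each node's shell:

----
# apt update
# apt dist-upgrade
----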

WARNING: Please ensure that there are no other Ceph repositories configured.
Otherwise, the installation will fail, or you will end up with mixed package
versions on the node, leading to unexpected behavior.

[[storage_cephfs_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, as well as the following `cephfs`-specific properties:

fs-name::

Name of the Ceph FS.

monhost::

List of monitor daemon addresses. Optional, only needed if Ceph is not running
on the {pve} cluster.

path::

The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.

username::

Ceph user ID. Optional, only needed if Ceph is not running on the {pve}
cluster, where it defaults to `admin`.

subdir::

CephFS subdirectory to mount. Optional, defaults to `/`.

fuse::

Access CephFS through FUSE, instead of the kernel client. Optional, defaults
to `0`.
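
If Ceph runs on the {pve} cluster itself, neither `monhost` nor `username`
need to be set, and an entry could look like the following sketch, where the
storage ID `cephfs-local` and the content types are just examples:

----
cephfs: cephfs-local
        path /mnt/pve/cephfs-local
        content backup,vztmpl,iso
        fs-name cephfs
----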

.Configuration example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
        fs-name cephfs
----
NOTE: Don't forget to set up the client's secret key file, if cephx was not
disabled.

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the secret from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the secret available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
----

Then use the `pvesm` CLI tool to configure the external CephFS storage. Use
the `--keyring` parameter, which needs to be a path to the secret file that
you copied. For example:

----
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
----
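
Afterwards, you can verify that the new storage is online, for example with
`pvesm status` (using the storage ID you chose for `<name>`):

----
# pvesm status --storage <name>
----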

When configuring an external CephFS storage via the GUI, you can copy and
paste the secret into the appropriate field.

The secret is only the key itself, as opposed to the `rbd` backend, which also
contains a `[client.userid]` section.
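
For illustration, a CephFS secret file contains only a single line with the
key itself, while an `rbd` keyring file wraps the key in a section (a
shortened placeholder key is shown):

----
[client.userid]
        key = AQD8PLACEHOLDERKEY==
----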

The secret will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.secret
----

A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the
command below, where `userid` is the client ID that has been configured to
access the cluster. For further information on Ceph user management, see the
Ceph docs.footnoteref:[cephusermgmt]

----
# ceph auth get-key client.userid > cephfs.secret
----

Storage Features
~~~~~~~~~~~~~~~~

The `cephfs` backend is a POSIX-compliant filesystem, on top of a Ceph cluster.

.Storage features for backend `cephfs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|vztmpl iso backup snippets |none |yes |yes^[1]^ |no
|==============================================================================
^[1]^ While no known bugs exist, snapshots are not yet guaranteed to be stable,
as they lack sufficient testing.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]