[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the
same physical nodes within a cluster for both computing (processing
VMs and containers) and replicated storage. The traditional silos of
compute and storage resources can be wrapped up into a single
hyper-converged appliance. Separate storage networks (SANs) and
connections via network attached storage (NAS) disappear. With the
integration of Ceph, an open source software-defined storage platform,
{pve} has the ability to run and manage Ceph storage directly on the
hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability. For smaller
deployments, it is possible to install a Ceph server for RADOS Block
Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.


Precondition
------------
To build a Proxmox Ceph Cluster there should be at least three
(preferably identical) servers for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed
network setup is also an option if there are no 10Gb switches
available, see the {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].

Also check the recommendations from
http://docs.ceph.com/docs/master/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------
On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
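
After the installation finishes you can verify the result, for example by
checking the repository entry and the installed Ceph version (the exact
output depends on the Ceph release shipped for your {pve} version):

[source,bash]
----
# show the repository entry written by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
# confirm the Ceph binaries are installed
ceph --version
----
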
Creating initial Ceph configuration
-----------------------------------

[thumbnail="gui-ceph-config.png"]

After installation of packages, you need to create an initial Ceph
configuration on just one node, based on your network (`10.10.10.0/24`
in the following example) dedicated for Ceph:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.
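
For illustration, you can inspect the generated configuration and the
symbolic link on any cluster node (the exact contents depend on your
network settings):

[source,bash]
----
# the cluster-wide configuration managed via pmxcfs
cat /etc/pve/ceph.conf
# symbolic link used by the Ceph command line tools
ls -l /etc/ceph/ceph.conf
----
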
[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="gui-ceph-monitor.png"]

On each node where a monitor is requested (three monitors are recommended),
create it by using the "Ceph" item in the GUI or run:

[source,bash]
----
pveceph createmon
----
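
Once the monitors are running, it can be helpful to verify that they have
formed a quorum, for example with the standard Ceph status commands:

[source,bash]
----
# overall cluster health, including the monitor quorum
ceph -s
# summary of the known monitors
ceph mon stat
----
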
[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="gui-ceph-osd-status.png"]

Create the OSDs either via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----
If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd(x)`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
----

Example: Use /dev/sdf as data disk (4TB) and /dev/sdb as the dedicated SSD
journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (creating data and journal partitions), creates
filesystems and starts the OSD; afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your
nodes (4 OSDs on each node).

It should be noted that this command refuses to initialize a disk when
it detects existing data. So if you want to overwrite a disk you
should remove existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended if you expect good performance.
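
After the OSDs have been created, their placement and status can be
reviewed, for instance with:

[source,bash]
----
# show how the OSDs are distributed across the nodes
ceph osd tree
# brief utilization and status overview of all OSDs
ceph osd df
----
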
[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="gui-ceph-pools.png"]

The standard installation creates the pool 'rbd' by default; additional
pools can be created via the GUI.
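
Pools can also be created on the command line. As an illustration, the
generic Ceph tooling can be used; the pool name and placement group count
below are examples only and should be chosen to match your setup:

[source,bash]
----
# create a pool named 'vm-disks' with 64 placement groups
ceph osd pool create vm-disks 64
# list all existing pools
ceph osd lspools
----
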
Ceph Client
-----------

[thumbnail="gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg` which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
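
Once the keyring is in place, you can check that the new storage is usable,
for example with the {pve} storage manager (`my-ceph-storage` is the example
storage ID from above):

[source,bash]
----
# list the copied keyring
ls -l /etc/pve/priv/ceph/
# show the status of all configured storages, including the new RBD storage
pvesm status
----
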
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]