
consistently capitalize Ceph

Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This commit is contained in:
Matthias Heiserer 2022-11-09 12:58:21 +01:00 committed by Thomas Lamprecht
parent 58141566d7
commit f226da0ef4
3 changed files with 7 additions and 7 deletions

View File

@@ -48,9 +48,9 @@ Hyper-Converged Infrastructure: Storage
infrastructure. You can, for example, deploy and manage the following two
storage technologies by using the web interface only:
-- *ceph*: a both self-healing and self-managing shared, reliable and highly
+- *Ceph*: a both self-healing and self-managing shared, reliable and highly
scalable storage system. Checkout
-xref:chapter_pveceph[how to manage ceph services on {pve} nodes]
+xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
- *ZFS*: a combined file system and logical volume manager with extensive
protection against data corruption, various RAID modes, fast and cheap

View File

@@ -109,9 +109,9 @@ management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/ope
Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Connecting to an external ceph storage doesn't always allow setting
+Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
-`ceph.conf` beside the ceph keyring to change the ceph client configuration for
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.
The ceph.conf needs to have the same name as the storage.
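For illustration only (not part of this commit), a minimal sketch of such a per-storage client config. It assumes an external RBD storage named `ceph-external`, with the external cluster's keyring stored as `/etc/pve/priv/ceph/ceph-external.keyring`; the matching `ceph.conf` is then placed beside it under the same base name:

----
# /etc/pve/priv/ceph/ceph-external.conf  (same base name as the storage ID)
[client]
        # example client-side option that is not set in the external
        # cluster's config DB
        rbd_cache = true
----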

View File

@@ -636,7 +636,7 @@ pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
----
TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters, not managed by the local {pve} cluster.
Destroy Pools
~~~~~~~~~~~~~
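As a usage sketch for the TIP above (illustration only, not part of this commit), assuming an external cluster reachable via three monitors, a keyring copied to `/root/rbd.keyring`, and hypothetical pool names:

----
pvesm add rbd ext-rbd --pool rbd-replicated --data-pool rbd-ec \
  --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
  --keyring /root/rbd.keyring
----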
@@ -761,7 +761,7 @@ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
-|<root>|which crush root it should belong to (default ceph root "default")
+|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===
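For context (illustration only, not part of this commit), a sketch of how such a rule could be created and assigned to a pool; the rule and pool names are made up:

----
# replicated rule on the default root, host as failure domain, SSD OSDs only
ceph osd crush rule create-replicated ssd-replicated default host ssd
# let an existing pool use the new rule
ceph osd pool set my-pool crush_rule ssd-replicated
----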
@@ -943,7 +943,7 @@ servers.
pveceph fs destroy NAME --remove-storages --remove-pools
----
+
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
the storages from pve config.
After these steps, the CephFS should be completely removed and if you have