docs: update config.toml, storage.md, digital-rebar.md

This PR is an omnibus edition of Steve's PRs. It includes three small
changes to the docs:

- A change to config.toml (trailing slashes on the version URLs) so links avoid unnecessary redirects
- Updates to the Digital Rebar doc that fix a few links and typos
- Updates that fix typos and wording in storage.md

Signed-off-by: Spencer Smith <spencer.smith@talos-systems.com>
Authored by Spencer Smith on 2022-03-31 21:01:31 -04:00, committed by Andrey Smirnov
parent 25d19131d3
commit 3889a58397
5 changed files with 51 additions and 59 deletions

config.toml

@@ -125,47 +125,47 @@ offlineSearch = false
prism_syntax_highlighting = false
[[params.versions]]
url = "/v1.1"
url = "/v1.1/"
version = "v1.1 (pre-release)"
[[params.versions]]
url = "/v1.0"
url = "/v1.0/"
version = "v1.0 (latest)"
[[params.versions]]
url = "/v0.14"
url = "/v0.14/"
version = "v0.14"
[[params.versions]]
url = "/v0.13"
url = "/v0.13/"
version = "v0.13"
[[params.versions]]
url = "/v0.12"
url = "/v0.12/"
version = "v0.12"
[[params.versions]]
url = "/v0.11"
url = "/v0.11/"
version = "v0.11"
[[params.versions]]
url = "/v0.10"
url = "/v0.10/"
version = "v0.10"
[[params.versions]]
url = "/v0.9"
url = "/v0.9/"
version = "v0.9"
[[params.versions]]
url = "/v0.8"
url = "/v0.8/"
version = "v0.8"
[[params.versions]]
url = "/v0.7"
url = "/v0.7/"
version = "v0.7"
[[params.versions]]
url = "/v0.6"
url = "/v0.6/"
version = "v0.6"
# User interface configuration
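
The config.toml change above is the same one-character fix for every version entry: linking to the canonical directory URL (`/v1.0/` rather than `/v1.0`) lets the site serve the page directly instead of answering with a redirect first. A quick way to compare the two forms, assuming the docs are hosted at www.talos.dev (an assumed hostname, not stated in this PR):

```bash
# Without the trailing slash the server typically answers with a 301/308
# redirect to the canonical directory URL; with the slash it serves the page
# directly. (www.talos.dev is assumed; substitute the actual docs host.)
curl -sI https://www.talos.dev/v1.0 | head -n 1
curl -sI https://www.talos.dev/v1.0/ | head -n 1
```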

digital-rebar.md

@@ -5,10 +5,10 @@ description: "In this guide we will create an Kubernetes cluster with 1 worker n
## Prerequisites
- 3 nodes (please see [hardware requirements](../../guides/getting-started#system-requirements))
- 3 nodes (please see [hardware requirements]({{< relref "../introduction/system-requirements/">}}))
- Loadbalancer
- Digital Rebar Server
- Talosctl access (see [talosctl setup](../../guides/getting-started/talosctl))
- Talosctl access (see [talosctl setup]({{< relref "../introduction/getting-started/#talosctl">}}))
## Creating a Cluster
@@ -49,7 +49,7 @@ worker.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
Digital Rebar has a build-in fileserver, which means we can use this feature to expose the talos configuration files.
Digital Rebar has a built-in fileserver, which means we can use this feature to expose the talos configuration files.
We will place `controlplane.yaml`, and `worker.yaml` into Digital Rebar file server by using the `drpcli` tools.
Copy the generated files from the step above into your Digital Rebar installation.
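
For reference, publishing the two generated files usually looks something like the sketch below; the exact `drpcli` invocation is an assumption and is not part of this diff.

```bash
# Assumed drpcli usage for the built-in fileserver; check
# `drpcli files upload --help` on your installation for the exact form.
drpcli files upload controlplane.yaml as controlplane.yaml
drpcli files upload worker.yaml as worker.yaml
```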
@@ -122,7 +122,7 @@ It's important to have a corresponding SHA256 hash matching the boot.tar.gz
#### Bootenv BootParams
We're using some of Digital Rebar build in templating to make sure the machine gets the correct role assigned.
We're using some of Digital Rebar built in templating to make sure the machine gets the correct role assigned.
`talos.platform=metal talos.config={{ .ProvisionerURL }}/files/{{.Param \"talos/role\"}}.yaml"`
@@ -135,7 +135,7 @@ The `{{.Param \"talos/role\"}}` then gets populated with one of the above roles.
### Boot the Machines
In the UI of Digital Rebar you need to select the machines you want te provision.
In the UI of Digital Rebar you need to select the machines you want to provision.
Once selected, you need to assign to following:
- Profile
@@ -144,7 +144,7 @@ Once selected, you need to assign to following:
This will provision the Stage and Bootenv with the talos values.
Once this is done, you can boot the machine.
To understand the boot process, we have a higher level overview located at [metal overview](../overview).
To understand the boot process, we have a higher level overview located at [metal overview](../../reference/platform/).
### Bootstrap Etcd

storage.md

@@ -17,14 +17,13 @@ It is easy and automatic.
## Storage Clusters
> **Talos** recommends having a separate disks (apart from the Talos install disk) to be used for storage.
> **Sidero Labs** recommends having separate disks (apart from the Talos install disk) to be used for storage.
Redundancy in storage is usually very important.
Scaling capabilities, reliability, speed, maintenance load, and ease of use are all factors you must consider when managing your own storage.
Redundancy, scaling capabilities, reliability, speed, maintenance load, and ease of use are all factors you must consider when managing your own storage.
Running a storage cluster can be a very good choice when managing your own storage, and there are two project we recommend, depending on your situation.
Running a storage cluster can be a very good choice when managing your own storage, and there are two projects we recommend, depending on your situation.
If you need vast amounts of storage composed of more than a dozen or so disks, just use Rook to manage Ceph.
If you need vast amounts of storage composed of more than a dozen or so disks, we recommend you use Rook to manage Ceph.
Also, if you need _both_ mount-once _and_ mount-many capabilities, Ceph is your answer.
Ceph also bundles in an S3-compatible object store.
The down side of Ceph is that there are a lot of moving parts.
@@ -40,9 +39,7 @@ If your storage needs are small enough to not need Ceph, use Mayastor.
[Ceph](https://ceph.io) is the grandfather of open source storage clusters.
It is big, has a lot of pieces, and will do just about anything.
It scales better than almost any other system out there, open source or proprietary, being able to easily add and remove storage over time with no downtime, safely and easily.
It comes bundled with RadosGW, an S3-compatible object store.
It comes with CephFS, a NFS-like clustered filesystem.
And of course, it comes with RBD, a block storage system.
It comes bundled with RadosGW, an S3-compatible object store; CephFS, a NFS-like clustered filesystem; and RBD, a block storage system.
With the help of [Rook](https://rook.io), the vast majority of the complexity of Ceph is hidden away by a very robust operator, allowing you to control almost everything about your Ceph cluster from fairly simple Kubernetes CRDs.
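
To make the "fairly simple Kubernetes CRDs" point concrete, a minimal CephCluster resource looks roughly like the sketch below; the image tag, namespace, and field values are illustrative assumptions rather than anything taken from this PR.

```yaml
# Rough sketch of a Rook CephCluster CR; all values are placeholders.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: true
    useAllDevices: true
```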
@@ -69,22 +66,22 @@ It is much less complicated to set up than Ceph, but you probably wouldn't want
Mayastor is new, maybe _too_ new.
If you're looking for something well-tested and battle-hardened, this is not it.
If you're looking for something lean, future-oriented, and simpler than Ceph, it might be a great choice.
However, if you're looking for something lean, future-oriented, and simpler than Ceph, it might be a great choice.
### Video Walkthrough
#### Video Walkthrough
To see a live demo of this section, see the video below:
<iframe width="560" height="315" src="https://www.youtube.com/embed/q86Kidk81xE" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Prep Nodes
#### Prep Nodes
Either during initial cluster creation or on running worker nodes, several machine config values should be edited.
This information is gathered from the Mayastor [documentation](https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster).
We need to set the `vm.nr_hugepages` sysctl and add `openebs.io/engine=mayastor` labels to the nodes which are meant to be storage nodes
(This information is gathered from the Mayastor [documentation](https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster).)
We need to set the `vm.nr_hugepages` sysctl and add `openebs.io/engine=mayastor` labels to the nodes which are meant to be storage nodes.
This can be done with `talosctl patch machineconfig` or via config patches during `talosctl gen config`.
Some examples are shown below, modify as needed.
Some examples are shown below: modify as needed.
Using gen config
@@ -104,18 +101,17 @@ talosctl patch --mode=no-reboot machineconfig -n <node ip> --patch '[{"op": "add
talosctl -n <node ip> service kubelet restart
```
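
For context, a machine config patch along the following lines sets the hugepages sysctl and applies the Mayastor node label through kubelet arguments; it is a sketch based on the Talos machine config schema, not the exact patch elided in the hunk above.

```yaml
# Sketch only: the hugepage count and the use of kubelet extraArgs for the
# node label are assumptions, not taken from this diff.
machine:
  sysctls:
    vm.nr_hugepages: "1024"
  kubelet:
    extraArgs:
      node-labels: openebs.io/engine=mayastor
```

Depending on your workflow, the same settings can go in during `talosctl gen config` via `--config-patch` or onto a running node with `talosctl patch machineconfig`, as the doc describes.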
### Deploy Mayastor
#### Deploy Mayastor
Continue setting up [Mayastor](https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor) using the official documentation.
## NFS
NFS is an old pack animal long past its prime.
NFS is slow, has all kinds of bottlenecks involving contention, distributed locking, single points of service, and more.
However, it is supported by a wide variety of systems.
You don't want to use it unless you have to, but unfortunately, that "have to" is too frequent.
NFS is slow, has all kinds of bottlenecks involving contention, distributed locking, single points of service, and more.
The NFS client is part of the [`kubelet` image](https://github.com/talos-systems/kubelet) maintained by the Talos team.
This means that the version installed in your running `kubelet` is the version of NFS supported by Talos.
You can reduce some of the contention problems by parceling Persistent Volumes from separate underlying directories.
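
The "separate underlying directories" suggestion can be pictured as PersistentVolumes that point at different exports of the same server; the sketch below is illustrative only, with placeholder addresses and paths.

```yaml
# Illustrative only: two PVs backed by separate directories on one NFS
# server, so their workloads contend less with each other.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-a
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /exports/volume-a
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-b
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /exports/volume-b
```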
@@ -133,7 +129,7 @@ One of the most popular open source add-on object stores is [MinIO](https://min.
## Others (iSCSI)
The most common remaining systems involve iSCSI in one form or another.
This includes things like the original OpenEBS, Rancher's Longhorn, and many proprietary systems.
These include the original OpenEBS, Rancher's Longhorn, and many proprietary systems.
Unfortunately, Talos does _not_ support iSCSI-based systems.
iSCSI in Linux is facilitated by [open-iscsi](https://github.com/open-iscsi/open-iscsi).
This system was designed long before containers caught on, and it is not well

digital-rebar.md

@@ -5,10 +5,10 @@ description: "In this guide we will create an Kubernetes cluster with 1 worker n
## Prerequisites
- 3 nodes (please see [hardware requirements](../../guides/getting-started#system-requirements))
- 3 nodes (please see [hardware requirements]({{< relref "../introduction/system-requirements/">}}))
- Loadbalancer
- Digital Rebar Server
- Talosctl access (see [talosctl setup](../../guides/getting-started/talosctl))
- Talosctl access (see [talosctl setup]({{< relref "../introduction/getting-started/#talosctl">}}))
## Creating a Cluster
@@ -49,7 +49,7 @@ worker.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
Digital Rebar has a build-in fileserver, which means we can use this feature to expose the talos configuration files.
Digital Rebar has a built-in fileserver, which means we can use this feature to expose the talos configuration files.
We will place `controlplane.yaml`, and `worker.yaml` into Digital Rebar file server by using the `drpcli` tools.
Copy the generated files from the step above into your Digital Rebar installation.
@@ -122,7 +122,7 @@ It's important to have a corresponding SHA256 hash matching the boot.tar.gz
#### Bootenv BootParams
We're using some of Digital Rebar build in templating to make sure the machine gets the correct role assigned.
We're using some of Digital Rebar built in templating to make sure the machine gets the correct role assigned.
`talos.platform=metal talos.config={{ .ProvisionerURL }}/files/{{.Param \"talos/role\"}}.yaml"`
@@ -135,7 +135,7 @@ The `{{.Param \"talos/role\"}}` then gets populated with one of the above roles.
### Boot the Machines
In the UI of Digital Rebar you need to select the machines you want te provision.
In the UI of Digital Rebar you need to select the machines you want to provision.
Once selected, you need to assign to following:
- Profile
@@ -144,7 +144,7 @@ Once selected, you need to assign to following:
This will provision the Stage and Bootenv with the talos values.
Once this is done, you can boot the machine.
To understand the boot process, we have a higher level overview located at [metal overview](../overview).
To understand the boot process, we have a higher level overview located at [metal overview](../../reference/platform/).
### Bootstrap Etcd

storage.md

@@ -17,14 +17,13 @@ It is easy and automatic.
## Storage Clusters
> **Talos** recommends having a separate disks (apart from the Talos install disk) to be used for storage.
> **Sidero Labs** recommends having separate disks (apart from the Talos install disk) to be used for storage.
Redundancy in storage is usually very important.
Scaling capabilities, reliability, speed, maintenance load, and ease of use are all factors you must consider when managing your own storage.
Redundancy, scaling capabilities, reliability, speed, maintenance load, and ease of use are all factors you must consider when managing your own storage.
Running a storage cluster can be a very good choice when managing your own storage, and there are two project we recommend, depending on your situation.
Running a storage cluster can be a very good choice when managing your own storage, and there are two projects we recommend, depending on your situation.
If you need vast amounts of storage composed of more than a dozen or so disks, just use Rook to manage Ceph.
If you need vast amounts of storage composed of more than a dozen or so disks, we recommend you use Rook to manage Ceph.
Also, if you need _both_ mount-once _and_ mount-many capabilities, Ceph is your answer.
Ceph also bundles in an S3-compatible object store.
The down side of Ceph is that there are a lot of moving parts.
@@ -40,9 +39,7 @@ If your storage needs are small enough to not need Ceph, use Mayastor.
[Ceph](https://ceph.io) is the grandfather of open source storage clusters.
It is big, has a lot of pieces, and will do just about anything.
It scales better than almost any other system out there, open source or proprietary, being able to easily add and remove storage over time with no downtime, safely and easily.
It comes bundled with RadosGW, an S3-compatible object store.
It comes with CephFS, a NFS-like clustered filesystem.
And of course, it comes with RBD, a block storage system.
It comes bundled with RadosGW, an S3-compatible object store; CephFS, a NFS-like clustered filesystem; and RBD, a block storage system.
With the help of [Rook](https://rook.io), the vast majority of the complexity of Ceph is hidden away by a very robust operator, allowing you to control almost everything about your Ceph cluster from fairly simple Kubernetes CRDs.
@@ -69,22 +66,22 @@ It is much less complicated to set up than Ceph, but you probably wouldn't want
Mayastor is new, maybe _too_ new.
If you're looking for something well-tested and battle-hardened, this is not it.
If you're looking for something lean, future-oriented, and simpler than Ceph, it might be a great choice.
However, if you're looking for something lean, future-oriented, and simpler than Ceph, it might be a great choice.
### Video Walkthrough
#### Video Walkthrough
To see a live demo of this section, see the video below:
<iframe width="560" height="315" src="https://www.youtube.com/embed/q86Kidk81xE" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Prep Nodes
#### Prep Nodes
Either during initial cluster creation or on running worker nodes, several machine config values should be edited.
This information is gathered from the Mayastor [documentation](https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster).
We need to set the `vm.nr_hugepages` sysctl and add `openebs.io/engine=mayastor` labels to the nodes which are meant to be storage nodes
(This information is gathered from the Mayastor [documentation](https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster).)
We need to set the `vm.nr_hugepages` sysctl and add `openebs.io/engine=mayastor` labels to the nodes which are meant to be storage nodes.
This can be done with `talosctl patch machineconfig` or via config patches during `talosctl gen config`.
Some examples are shown below, modify as needed.
Some examples are shown below: modify as needed.
Using gen config
@@ -104,18 +101,17 @@ talosctl patch --mode=no-reboot machineconfig -n <node ip> --patch '[{"op": "add
talosctl -n <node ip> service kubelet restart
```
### Deploy Mayastor
#### Deploy Mayastor
Continue setting up [Mayastor](https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor) using the official documentation.
## NFS
NFS is an old pack animal long past its prime.
NFS is slow, has all kinds of bottlenecks involving contention, distributed locking, single points of service, and more.
However, it is supported by a wide variety of systems.
You don't want to use it unless you have to, but unfortunately, that "have to" is too frequent.
NFS is slow, has all kinds of bottlenecks involving contention, distributed locking, single points of service, and more.
The NFS client is part of the [`kubelet` image](https://github.com/talos-systems/kubelet) maintained by the Talos team.
This means that the version installed in your running `kubelet` is the version of NFS supported by Talos.
You can reduce some of the contention problems by parceling Persistent Volumes from separate underlying directories.
@@ -133,7 +129,7 @@ One of the most popular open source add-on object stores is [MinIO](https://min.
## Others (iSCSI)
The most common remaining systems involve iSCSI in one form or another.
This includes things like the original OpenEBS, Rancher's Longhorn, and many proprietary systems.
These include the original OpenEBS, Rancher's Longhorn, and many proprietary systems.
Unfortunately, Talos does _not_ support iSCSI-based systems.
iSCSI in Linux is facilitated by [open-iscsi](https://github.com/open-iscsi/open-iscsi).
This system was designed long before containers caught on, and it is not well