docs: move docs repo to talos repo (#770)
Signed-off-by: Andrew Rynhard <andrew@andrewrynhard.com>
docs/content/components/_index.md (new file)
---
title: "Components"
date: 2018-10-29T19:40:55-07:00
draft: false
---

In this section, we will discuss the various components that make up Talos.

docs/content/components/containerd.md (new file)
---
title: containerd
menu:
  docs:
    parent: components
---

[Containerd](https://github.com/containerd/containerd) provides the container runtime used to launch workloads on both Talos and Kubernetes.

Talos services are namespaced under the `system` namespace in containerd, whereas Kubernetes services are namespaced under the `k8s.io` namespace.

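As a quick illustration using the `osctl` client (described later in this documentation), the two namespaces can be inspected separately:

```bash
# List containers in the `system` namespace (Talos services)
osctl ps

# List containers in the `k8s.io` namespace (Kubernetes)
osctl ps -k
```
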
docs/content/components/init.md (new file)
---
title: "init"
date: 2018-10-29T19:40:55-07:00
draft: false
weight: 20
menu:
  docs:
    parent: 'components'
    weight: 20
---

A common theme throughout the design of Talos is minimalism.
We believe strongly in the UNIX philosophy that each program should do one job well.
The `init` included in Talos is one example of this.

We wanted to create a focused `init` that had one job: run Kubernetes.
To that end, `init` is relatively static in that it does not allow for arbitrary user-defined services.
Only the services necessary to run Kubernetes and manage the node are available.
These include:

- [containerd](/docs/components/containerd)
- [kubeadm](/docs/components/kubeadm)
- [kubelet](https://kubernetes.io/docs/concepts/overview/components/)
- [networkd](/docs/components/networkd)
- [ntpd](/docs/components/ntpd)
- [osd](/docs/components/osd)
- [proxyd](/docs/components/proxyd)
- [trustd](/docs/components/trustd)
- [udevd](/docs/components/udevd)

docs/content/components/kernel.md (new file)
---
title: "kernel"
date: 2018-10-29T19:40:55-07:00
draft: false
weight: 10
menu:
  docs:
    parent: 'components'
    weight: 10
---

The kernel included with Talos is configured according to the recommendations outlined in the Kernel Self Protection Project ([KSPP](http://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project)).

docs/content/components/kubeadm.md (new file)
---
title: "kubeadm"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'components'
---

[`kubeadm`](https://github.com/kubernetes/kubernetes/tree/master/cmd/kubeadm) handles the installation and configuration of Kubernetes.
This is done to stay as close as possible to upstream Kubernetes best practices and recommendations.
Because Talos integrates natively with `kubeadm`, the development and operational ecosystem is familiar to all Kubernetes users.

Kubeadm configuration is defined in the userdata under the `services.kubeadm` section.

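For example, a minimal `services.kubeadm` entry in the userdata looks like the following sketch (the full schema is covered in the User Data reference):

```yaml
services:
  kubeadm:
    configuration: |
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: InitConfiguration
      ...
```
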
docs/content/components/networkd.md (new file)
---
title: networkd
menu:
  docs:
    parent: components
---

`networkd` handles all of the host-level network configuration.
Configuration is defined under the `networking` key.

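For example, a device configured via DHCP would look like the following sketch in the userdata (see the User Data reference for the full schema):

```yaml
networking:
  os:
    devices:
      - interface: eth0
        dhcp: true
```
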
docs/content/components/ntpd.md (new file)
---
title: ntpd
menu:
  docs:
    parent: components
---

`ntpd` handles host time synchronization.

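By default `ntpd` consumes from pool.ntp.org; the server can be overridden in the userdata (see the User Data reference):

```yaml
services:
  ntp:
    server: <ntp server>
```
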
docs/content/components/osctl.md (new file)
---
title: "osctl"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'components'
---

The `osctl` CLI is the client to the [osd](/docs/components/osd) service running on every node.
`osctl` should provide enough functionality to be a replacement for typical interactive shell operations.
With it you can do things like:

- `osctl logs <service>` - retrieve container logs
- `osctl restart <service>` - restart a service
- `osctl reboot` - reboot a node
- `osctl dmesg` - retrieve kernel logs
- `osctl ps` - view running containers
- `osctl top` - view node resources
- `osctl services` - view the status of Talos services

docs/content/components/osd.md (new file)
---
title: "osd"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'components'
---

Talos is unique in that it has no concept of host-level access.
There is no SSH daemon.
There is no interactive console session.
There are no shells installed.
Only what is required to run Kubernetes is present.
Furthermore, there is no way to run any custom processes at the host level.

To make this work, we needed an out-of-band tool for managing the nodes.
In an ideal world, the system would be self-healing and we would never have to touch it.
But, in the real world, this does not happen.
We still need a way to handle operational scenarios that may arise.

The `osd` daemon provides a way to do just that.
Designed around the Principle of Least Privilege, `osd` exposes an API for node management to cluster administrators.

Interactions with `osd` are handled via [osctl](/docs/components/osctl), which communicates over gRPC.

docs/content/components/proxyd.md (new file)
---
title: "proxyd"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'components'
---

High availability is crucial for production-quality Kubernetes clusters.
The `proxyd` component is a simple yet powerful reverse proxy that adapts to the environment Talos is deployed in and provides load balancing across all API servers.

docs/content/components/trustd.md (new file)
---
title: "trustd"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'components'
---

Security is one of the highest priorities within Talos.
Operating a Kubernetes cluster requires a certain level of trust between nodes.
For example, orchestrating the bootstrap of a highly available control plane requires the distribution of sensitive PKI data.

To that end, we created `trustd`.
Based on the concept of a Root of Trust, `trustd` is a simple daemon responsible for establishing trust within the system.
Once trust is established, various methods become available to the trustee.
It can, for example, accept a write request from another node to place a file on disk.

We imagine that the number of available methods will grow as Talos gets tested in the real world.

docs/content/components/udevd.md (new file)
---
title: "udevd"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'components'
---

`udevd` handles kernel device notifications and sets up the necessary links in `/dev`.

docs/content/configuration/_index.md (new file)
---
title: "Configuration"
date: 2018-10-29T19:40:55-07:00
draft: false
---

In this section, we will step through the configuration of a Talos-based Kubernetes cluster.
There are three major components we will configure:

- `osd` and `osctl`
- the master nodes
- the worker nodes

docs/content/configuration/masters.md (new file)
---
title: "Masters"
date: 2018-10-29T19:40:55-07:00
draft: false
weight: 20
menu:
  docs:
    parent: 'configuration'
    weight: 20
---

Configuring master nodes in a Talos Kubernetes cluster is a two-part process:

- configuring the Talos-specific options
- and configuring the Kubernetes-specific options

To get started, create a YAML file that we will use in the following steps:

```bash
touch <node-name>.yaml
```

## Configuring Talos

### Injecting the Talos PKI

Using `osctl`, and the output from the `osd` configuration [documentation]({{< ref "osd.md" >}}), inject the generated PKI into the configuration file:

```bash
osctl inject os --crt <organization>.crt --key <organization>.key <node-name>.yaml
```

You should see the following fields populated:

```yaml
security:
  os:
    ca:
      crt: <base 64 encoded root public certificate>
      key: <base 64 encoded root private key>
  ...
```

This process only needs to be performed on your initial node's configuration file.

### Configuring `trustd`

Each master node participates as a Root of Trust in the cluster.
The responsibilities of `trustd` include:

- certificates as a service
- and Kubernetes PKI distribution amongst the master nodes

Authentication between `trustd` and a client is, for now, a simple username and password combination.
Having these credentials gives a client the power to request a certificate that identifies itself.
In `<node-name>.yaml`, add the following:

```yaml
security:
  ...
services:
  ...
  trustd:
    username: '<username>'
    password: '<password>'
  ...
```

## Configuring Kubernetes

### Generating the Root CA

To create the root CA for the Kubernetes cluster, run:

```bash
osctl gen ca --rsa --hours <hours> --organization <kubernetes-organization>
```

{{% note %}}The `--rsa` flag is required for the generation of the Kubernetes CA.{{% /note %}}

### Injecting the Kubernetes PKI

Using `osctl`, inject the generated PKI into the configuration file:

```bash
osctl inject kubernetes --crt <kubernetes-organization>.crt --key <kubernetes-organization>.key <node-name>.yaml
```

You should see the following fields populated:

```yaml
security:
  ...
  kubernetes:
    ca:
      crt: <base 64 encoded root public certificate>
      key: <base 64 encoded root private key>
  ...
```

### Configuring Kubeadm

The configuration of the `kubeadm` service is done in two parts:

- supplying the Talos-specific options
- supplying the `kubeadm` `InitConfiguration`

#### Talos Specific Options

```yaml
services:
  ...
  init:
    cni: <flannel|calico>
  ...
```

#### Kubeadm Specific Options

```yaml
services:
  ...
  kubeadm:
    ...
    configuration: |
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: InitConfiguration
      ...
  ...
```

> See the official [documentation](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/) for the options available in `InitConfiguration`.

In the end, you should have something that looks similar to the following:

```yaml
version: ""
security:
  os:
    ca:
      crt: <base 64 encoded root public certificate>
      key: <base 64 encoded root private key>
  kubernetes:
    ca:
      crt: <base 64 encoded root public certificate>
      key: <base 64 encoded root private key>
services:
  init:
    cni: <flannel|calico>
  kubeadm:
    configuration: |
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: InitConfiguration
      apiEndpoint:
        advertiseAddress: <master ip>
        bindPort: 6443
      bootstrapTokens:
      - token: '<kubeadm token>'
        ttl: 0s
      ---
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: ClusterConfiguration
      controlPlaneEndpoint: <master ip>:443
      networking:
        dnsDomain: cluster.local
        podSubnet: <pod subnet>
        serviceSubnet: <service subnet>
  trustd:
    username: '<username>'
    password: '<password>'
```

docs/content/configuration/osd.md (new file)
---
title: "osd"
date: 2018-11-03T17:14:49-07:00
draft: false
weight: 10
menu:
  docs:
    identifier: "osd-configuration"
    parent: 'configuration'
    weight: 10
---

The `osd` service enforces a high level of security by utilizing mutual TLS for authentication and authorization.
In this section we will configure mutual TLS by generating the certificates for the servers (`osd`) and clients (`osctl`).

### Cluster Owners

We recommend that the configuration of `osd` be performed by a cluster owner.
A cluster owner should be a person of authority within an organization: perhaps a director, manager, or senior member of a team.
They are responsible for storing the root CA and distributing the PKI to authorized cluster administrators.

### Cluster Administrators

Authorization to use `osctl` should be granted to a person fit for cluster administration.
As a cluster administrator, the user gains access to the out-of-band management tools offered by Talos.

## Configuring `osd`

To configure `osd`, we will need:

- static IP addresses for each node that will participate as a master
- and a root CA

The following steps should be performed by a cluster owner.

### Generating the Root CA

The root CA can be generated by running:

```bash
osctl gen ca --hours <hours> --organization <organization>
```

The cluster owner should store the generated private key (`<organization>.key`) in a safe place that only other cluster owners have access to.
The public certificate (`<organization>.crt`) should be made available to cluster administrators because, as we will see shortly, it is required to configure `osctl`.

{{% note %}}The `--rsa` flag should _not_ be specified for the generation of the `osd` CA.{{% /note %}}

### Generating the Identity Certificates

Talos provides automation for generating each node's certificate.

## Configuring `osctl`

To configure `osctl`, we will need:

- the root CA we generated above
- and a certificate, signed by the root CA, specific to the user

The process for setting up `osctl` is shared between a cluster owner and a user requesting to become a cluster administrator.

### Generating the User Certificate

The user requesting cluster administration access runs the following:

```bash
osctl gen key --name <user>
osctl gen csr --ip 127.0.0.1 --key <user>.key
```

Now, the cluster owner must generate a certificate from the above CSR.
To do this, the user requesting access submits the CSR to the cluster owner, and the cluster owner runs the following:

```bash
osctl gen crt --hours <hours> --ca <organization> --csr <user>.csr --name <user>
```

The generated certificate is then sent to the requesting user over a secure channel.

### The Configuration File

With all of the above steps done, the new cluster administrator can now create the configuration file for `osctl`.
First, base64 encode the root certificate and the user's certificate and key:

```bash
cat <organization>.crt | base64
cat <user>.crt | base64
cat <user>.key | base64
```

Now, create `~/.talos/config` with the following contents:

```yaml
context: <context>
contexts:
  <context>:
    target: <node-ip>
    ca: <base 64 encoded root public certificate>
    crt: <base 64 encoded user public certificate>
    key: <base 64 encoded user private key>
```

docs/content/configuration/userdata.md (new file)
---
title: User Data
date: 2019-06-21T19:40:55-07:00
draft: false
weight: 20
menu:
  docs:
    parent: 'configuration'
    weight: 20
---

Talos User Data is responsible for the host and Kubernetes configuration.
Talos user data is independent of cloud-config / cloud-init.

## Version

Version represents the Talos userdata configuration version.
It denotes the schema of the configuration file.

```yaml
version: "1"
```

## Security

Security contains all of the certificate information for Talos.

### OS

OS handles the certificate configuration for Talos components (osd, trustd, etc.).

#### CA

OS.CA contains the certificate/key pair.

```yaml
security:
  os:
    ca:
      crt: <base64 encoded x509 pem certificate>
      key: <base64 encoded x509 pem certificate key>
```

### Kubernetes

Kubernetes handles the certificate configuration for Kubernetes components (API server).

#### CA

Kubernetes.CA contains the certificate/key pair for the API server.

```yaml
security:
  kubernetes:
    ca:
      crt: <base64 encoded x509 pem certificate>
      key: <base64 encoded x509 pem certificate key>
```

#### SA

Kubernetes.SA contains the certificate/key pair for the default service account.
This item is optional; if it is not provided, a certificate/key pair will be generated.

```yaml
security:
  kubernetes:
    sa:
      crt: <base64 encoded x509 pem certificate>
      key: <base64 encoded x509 pem certificate key>
```

#### FrontProxy

Kubernetes.FrontProxy contains the certificate/key pair for the [Front Proxy](https://kubernetes.io/docs/tasks/access-kubernetes-api/setup-extension-api-server/).
This item is optional; if it is not provided, a certificate/key pair will be generated.

```yaml
security:
  kubernetes:
    frontproxy:
      crt: <base64 encoded x509 pem certificate>
      key: <base64 encoded x509 pem certificate key>
```

#### Etcd

Kubernetes.Etcd contains the certificate/key pair for [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd).
This item is optional; if it is not provided, a certificate/key pair will be generated.

```yaml
security:
  kubernetes:
    etcd:
      crt: <base64 encoded x509 pem certificate>
      key: <base64 encoded x509 pem certificate key>
```

## Networking

Networking allows for the customization of the host networking.

**Note** Bonding is currently not supported.

### OS

OS contains a list of host networking devices and their respective configurations.

#### Devices

```yaml
networking:
  os:
    devices:
      - interface: eth0
        cidr: <ip/mask>
        dhcp: bool
        routes:
          - network: <ip/mask>
            gateway: <ip>
```

##### Interface

This is the name of the interface that should be configured.

##### CIDR

CIDR is used to assign a static IP address to the interface.

**Note** This option is mutually exclusive with DHCP.

##### DHCP

DHCP is used to specify that this device should be configured via DHCP.

The following DHCP options are supported:

```
OptionHostName
OptionClasslessStaticRouteOption
OptionDNSDomainSearchList
OptionNTPServers
```

**Note** This option is mutually exclusive with CIDR.

##### Routes

Routes is used to specify static routes that may be necessary.
This parameter is optional.

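Putting the device options together, a statically addressed interface with a single static route might look like the following sketch (addresses are placeholders):

```yaml
networking:
  os:
    devices:
      - interface: eth0
        cidr: 192.168.1.10/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
```
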
## Services

### Init

Init allows for the customization of the CNI plugin.
This translates to additional host mounts.

```yaml
services:
  init:
    cni: [flannel|calico]
```

**Note** This option will be deprecated.

### Kubelet

#### ExtraMounts

Kubelet.ExtraMounts allows you to specify additional host mounts that should be presented to the kubelet.

```yaml
services:
  kubelet:
    extraMounts:
      - < opencontainers/runtime-spec/mounts >
```

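Each entry follows the [OCI runtime specification](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts) mount format. A hypothetical extra mount might look like this:

```yaml
services:
  kubelet:
    extraMounts:
      # Bind-mount a host directory into the kubelet's mount namespace
      - destination: /var/lib/example
        type: bind
        source: /var/lib/example
        options:
          - rbind
          - rw
```
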
### Kubeadm

#### Configuration

Kubeadm.Configuration contains the various kubeadm configs as a block of YAML documents.

```yaml
services:
  kubeadm:
    configuration: |
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: InitConfiguration
      ...
      ---
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: ClusterConfiguration
      ...
      ---
      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      ...
      ---
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      kind: KubeProxyConfiguration
      ...
```

#### ExtraArgs

Kubeadm.ExtraArgs contains an additional list of arguments that can be passed to kubeadm.

```yaml
services:
  kubeadm:
    extraArgs:
      - some arg
      - some arg
      ...
```

#### IgnorePreflightErrors

Kubeadm.IgnorePreflightErrors is a list of kubeadm preflight errors to ignore.

```yaml
services:
  kubeadm:
    ignorePreflightErrors:
      - Swap
      - SystemVerification
      ...
```

#### InitToken

Kubeadm.InitToken denotes that this node should bootstrap the Kubernetes cluster.
The token is a UUIDv1 token, which means it includes a timestamp of when it was generated.
The token has a one-hour TTL, during which the node will perform a `kubeadm init` to bootstrap the cluster.
The token can be generated via `osctl gen token`.

This token should only be specified on a single master node.

```yaml
services:
  kubeadm:
    initToken: d4171920-80f1-11e9-aeb1-acde48001122
```

### Trustd

#### Token

Trustd.Token can be used for authentication with trustd.

```yaml
services:
  trustd:
    token: a9u3hjikoof.ADa
```

**Note** Token is mutually exclusive with Username and Password.

#### Username

Trustd.Username is part of the username/password combination used for authentication with trustd.
The values defined here are the credentials trustd will use.

```yaml
services:
  trustd:
    username: trusty
```

**Note** Username/Password are mutually exclusive with Token.

#### Password

Trustd.Password is part of the username/password combination used for authentication with trustd.
The values defined here are the credentials trustd will use.

```yaml
services:
  trustd:
    password: mypass
```

**Note** Username/Password are mutually exclusive with Token.

#### Endpoints

The endpoints denote the other trustd instances.
All trustd instances should be listed here.
These are typically your master nodes.

```yaml
services:
  trustd:
    endpoints:
      - endpoint
```

#### CertSANs

CertSANs is a list of additional subject alternative names (SANs) for the `trustd` certificate.

```yaml
services:
  trustd:
    certSANs:
      - san
```

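Putting the trustd options together, a master node's userdata might carry something like the following sketch (all values are placeholders):

```yaml
services:
  trustd:
    username: '<username>'
    password: '<password>'
    endpoints:
      - <master-1 ip>
      - <master-2 ip>
      - <master-3 ip>
    certSANs:
      - <master ip>
```
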
### NTP

#### Server

NTP.Server allows you to customize which NTP server to use.
By default it consumes from pool.ntp.org.

```yaml
services:
  ntp:
    server: <ntp server>
```

## Install

Install is primarily used in bare metal situations.
It defines the disk layout and installation properties.

### Boot

#### Device

The device name to use for the `/boot` partition.
This should be specified as the unpartitioned block device.
If this parameter is omitted, the value of `install.root.device` is used.

```yaml
install:
  boot:
    device: <name of device to use>
```

#### Size

The size of the `/boot` partition in bytes.
If this parameter is omitted, a default value of 512MB will be used.

```yaml
install:
  boot:
    size: <size in bytes>
```

#### Kernel

This parameter can be used to specify a custom kernel to use.
If this parameter is omitted, the most recent Talos release will be used (fetched from GitHub releases).

```yaml
install:
  boot:
    kernel: <path or url to vmlinuz>
```

**Note** The asset **must** be named `vmlinuz`.

#### Initramfs

This parameter can be used to specify a custom initramfs to use.
If this parameter is omitted, the most recent Talos release will be used (fetched from GitHub releases).

```yaml
install:
  boot:
    initramfs: <path or url to initramfs.xz>
```

**Note** The asset **must** be named `initramfs.xz`.

### Root

#### Device

The device name to use for the `/` partition.
This should be specified as the unpartitioned block device.

```yaml
install:
  root:
    device: <name of device to use>
```

#### Size

The size of the `/` partition in bytes.
If this parameter is omitted, a default value of 2GB will be used.

```yaml
install:
  root:
    size: <size in bytes>
```

#### Rootfs

This parameter can be used to specify a custom root filesystem to use.
If this parameter is omitted, the most recent Talos release will be used (fetched from GitHub releases).

```yaml
install:
  root:
    rootfs: <path or url to rootfs.tar.gz>
```

**Note** The asset **must** be named `rootfs.tar.gz`.

### Data

#### Device

The device name to use for the `/var` partition.
This should be specified as the unpartitioned block device.
If this parameter is omitted, the value of `install.root.device` is used.

```yaml
install:
  data:
    device: <name of device to use>
```

#### Size

The size of the `/var` partition in bytes.
If this parameter is omitted, a default value of 1GB will be used.
This partition will auto-extend to consume the remainder of the unpartitioned space on the disk.

```yaml
install:
  data:
    size: <size in bytes>
```

### Wipe

Wipe denotes whether the disk should be wiped (zeros written) before it is partitioned.

```yaml
install:
  wipe: <bool>
```

### Force

Force allows the partitioning to proceed if there is already a filesystem detected.

```yaml
install:
  force: <bool>
```

### ExtraDevices

ExtraDevices allows for the extension of the partitioning scheme on the specified device.
These new partitions will be formatted as `xfs` filesystems.

```yaml
install:
  extraDevices:
    - device: sdb
      partitions:
        - size: 2048000000
          mountpoint: /var/lib/etcd
```

#### Device

ExtraDevices.Device specifies a device to use for additional host mountpoints.

#### Partitions

##### Size

Size specifies the size in bytes of the new partition.

##### MountPoint

MountPoint specifies where the partition should be mounted.

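Putting the install options together, a hypothetical bare-metal install section combining the fields above might look like the following sketch (all device names and sizes are placeholders):

```yaml
install:
  wipe: true
  force: true
  boot:
    device: <boot device>
    size: 512000000
  root:
    device: <root device>
    size: 2048000000
  data:
    size: 1024000000
  extraDevices:
    - device: sdb
      partitions:
        - size: 2048000000
          mountpoint: /var/lib/etcd
```

Note that `data.device` is omitted here, so it defaults to `install.root.device` as described above.
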
docs/content/configuration/workers.md (new file)
---
title: "Workers"
date: 2018-10-29T19:40:55-07:00
draft: false
weight: 30
menu:
  docs:
    parent: 'configuration'
    weight: 30
---

Configuring the worker nodes is much simpler than configuring the master nodes.
Using the `trustd` API, worker nodes submit a CSR and, if authenticated, receive a valid `osd` certificate.
Similarly, using a `kubeadm` token, the node joins an existing cluster.

We need to specify:

- the `osd` public certificate
- `trustd` credentials and endpoints
- and a `kubeadm` `JoinConfiguration`

```yaml
version: ""
...
services:
  kubeadm:
    configuration: |
      apiVersion: kubeadm.k8s.io/v1beta1
      kind: JoinConfiguration
      ...
  trustd:
    username: <username>
    password: <password>
    endpoints:
      - <master-1>
      ...
      - <master-n>
```

> See the official [documentation](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/) for the options available in `JoinConfiguration`.

docs/content/guides/_index.md (new file)
---
title: "Examples"
date: 2018-10-29T19:40:55-07:00
draft: false
---

One of the primary goals of Talos is a consistent experience regardless of _where_ you are operating.
In the following sections we will cover how to deploy Talos to well-known platforms.

docs/content/guides/aws.md (new file)
---
title: "AWS"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'guides'
---

First, create the AMI:

```bash
docker run \
  --rm \
  --volume $HOME/.aws/credentials:/root/.aws/credentials \
  --env AWS_DEFAULT_PROFILE=${PROFILE} \
  --env AWS_DEFAULT_REGION=${REGION} \
  talos-systems/talos:latest ami -var regions=${COMMA_SEPARATED_LIST_OF_REGIONS}
```

Once the AMI is created, you can start an EC2 instance using the AMI ID.
Provide the proper configuration as the instance's user data.

> An official Terraform module is currently being developed; stay tuned!

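In the meantime, launching a node with the AWS CLI might look something like the following sketch (the AMI ID, instance type, and userdata file are placeholders):

```bash
# Launch a single instance from the Talos AMI, passing the node configuration as user data
aws ec2 run-instances \
  --image-id <talos ami id> \
  --instance-type m5.large \
  --user-data file://master-1.yaml
```
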
docs/content/guides/bare_metal.md (new file)
---
title: Bare Metal
date: 2019-06-21T06:25:46-08:00
draft: false
menu:
  docs:
    parent: 'guides'
---

## Generate configuration

When considering Talos for production usage, the best way to get started is with `osctl config generate`.

Talos requires 3 static IPs, one for each of the master nodes.
After allocating these addresses, you can generate the necessary configs with the following command:

```bash
osctl config generate <cluster name> <master-1 ip>,<master-2 ip>,<master-3 ip>
```

This will generate five files: `master-{1,2,3}.yaml`, `worker.yaml`, and `talosconfig`.
The master and worker config files contain just enough config to bootstrap your cluster, and can be further customized as necessary.
These config files should be supplied as machine userdata, or hosted at some internally accessible URL so they can be downloaded during machine bootup.
When specifying a remote location to download userdata from, pass the kernel parameter `talos.autonomy.io/userdata=http://myurl.com`.

An iPXE server such as [matchbox](https://github.com/poseidon/matchbox) is recommended.

## Cluster interaction

After the machines have booted up, you'll want to manage your Talos config file.
The default location the `osctl` tool looks for configuration is `~/.talos/config`.
The location can also be specified at runtime via `osctl --talosconfig myconfigfile`.
In the previous step, the Talos configuration was generated in your working directory as `talosconfig`.

By default, the Talos configuration points to a single node.
This can be overridden at runtime via the `--target <ip>` flag so you can point to another node in your cluster.

Next, we'll need to generate the kubeconfig for our cluster.
This can be achieved via the `osctl kubeconfig` command.

## Finalizing Kubernetes Setup

Once your machines boot up, you will want to apply a Pod Security Policy (PSP).
A basic example can be found [here](https://raw.githubusercontent.com/talos-systems/talos/master/hack/dev/manifests/psp.yaml), or you can create your own.

Following this, you'll want to apply a CNI plugin.
Take note of the kubeadm `networking.podSubnet` parameter and ensure the network range matches up.

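As a rough sketch, the finalization steps might look like this (the manifest URLs are the same ones used in the Getting Started guide; substitute your own PSP and CNI manifests as needed):

```bash
# Generate a kubeconfig for the cluster in the current directory
osctl kubeconfig > kubeconfig

# Apply the example Pod Security Policy
kubectl --kubeconfig ./kubeconfig apply -f https://raw.githubusercontent.com/talos-systems/talos/master/hack/dev/manifests/psp.yaml

# Apply a CNI plugin (flannel shown here)
kubectl --kubeconfig ./kubeconfig apply -f https://raw.githubusercontent.com/talos-systems/talos/master/hack/dev/manifests/flannel.yaml
```
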
docs/content/guides/gcloud.md (new file)
---
title: "Google Cloud"
date: 2019-02-19
draft: false
menu:
  docs:
    parent: 'guides'
---

First, create the Google Cloud compatible image:

```bash
make image-gcloud
```

Upload the image with:

```bash
gsutil cp /path/to/talos/build/gcloud/talos.tar.gz gs://<gcloud bucket name>
```

Create a custom Google Cloud image with:

```bash
gcloud compute images create talos --source-uri=gs://<gcloud bucket name>/talos.tar.gz --guest-os-features=VIRTIO_SCSI_MULTIQUEUE
```

Create an instance in Google Cloud, making sure to create a `user-data` key in the "Metadata" section, with a value of your full Talos node configuration.

{{% note %}}Further exploration is needed to see if we can use the "Startup script" section instead.{{% /note %}}

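For illustration, a hypothetical instance creation from the command line might look like the following (machine type, zone, and userdata file are placeholders):

```bash
# Create an instance from the custom image, attaching the node configuration as the user-data metadata key
gcloud compute instances create talos-master-1 \
  --image talos \
  --machine-type n1-standard-2 \
  --zone us-central1-a \
  --metadata-from-file user-data=master-1.yaml
```
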
docs/content/guides/getting_started.md (new file)
---
title: Getting Started
date: 2019-06-21T06:25:46-08:00
draft: false
menu:
  docs:
    parent: 'guides'
    weight: 1
---

The quickest way to get started with Talos is to test out the local Docker setup.
This will bring up an environment with three masters and one worker node.

## Environment

Before we get started, you'll want to make sure you have Docker installed and running, as well as the most recent `osctl` release.
The latter can be found on the [Talos Releases](https://github.com/talos-systems/talos/releases) page.

## Bring up the Docker Environment

```bash
osctl cluster create
```

Startup times can vary, but it typically takes ~45s-1min for the environment to be available.

## Apply PSP and CNI

Once the environment is available, the pod security policies will need to be applied to allow the control plane to come up.
Following that, the default CNI (flannel) configuration will be applied.

```bash
# Fix up kubeconfig to use localhost since we're connecting to a local docker instance
osctl kubeconfig | sed -e 's/10.5.0.2:/127.0.0.1:6/' > kubeconfig

# Apply PSP
kubectl --kubeconfig ./kubeconfig apply -f https://raw.githubusercontent.com/talos-systems/talos/master/hack/dev/manifests/psp.yaml

# Apply CNI
kubectl --kubeconfig ./kubeconfig apply -f https://raw.githubusercontent.com/talos-systems/talos/master/hack/dev/manifests/flannel.yaml

# Fix loop detection for docker DNS
kubectl --kubeconfig ./kubeconfig apply -f https://raw.githubusercontent.com/talos-systems/talos/master/hack/dev/manifests/coredns.yaml
```

## Interact with the environment

Once the environment is available, you should be able to make use of `osctl` and `kubectl` commands.
You can view the currently running containers via `osctl ps` and `osctl ps -k`.
You can view logs of running containers via `osctl logs <container>` or `osctl logs -k <container>`.

**Note** We only set up port forwarding to master-1, so other nodes will not be directly accessible.

docs/content/guides/kvm.md (new file)
---
title: "KVM"
date: 2018-10-29T19:40:55-07:00
draft: false
menu:
  docs:
    parent: 'guides'
---

## Creating a Master Node

On the KVM host, install a master node to an available block device:

```bash
docker run \
  --rm \
  --privileged \
  --volume /dev:/dev \
  talos-systems/talos:latest image -b /dev/sdb -f -p bare-metal -u http://${IP}:8080/master.yaml
```

{{% note %}}`http://${IP}:8080/master.yaml` should be reachable by the VM and contain a valid master configuration file.{{% /note %}}

Now, create the VM:

```bash
virt-install \
  -n master \
  --description "Kubernetes master node." \
  --os-type=Linux \
  --os-variant=generic \
  --virt-type=kvm \
  --cpu=host \
  --vcpus=2 \
  --ram=4096 \
  --disk path=/dev/sdb \
  --network bridge=br0,model=e1000,mac=52:54:00:A8:4C:E1 \
  --graphics none \
  --boot hd \
  --rng /dev/random
```

## Creating a Worker Node

On the KVM host, install a worker node to an available block device:

```bash
docker run \
  --rm \
  --privileged \
  --volume /dev:/dev \
  talos-systems/talos:latest image -b /dev/sdc -f -p bare-metal -u http://${IP}:8080/worker.yaml
```

{{% note %}}`http://${IP}:8080/worker.yaml` should be reachable by the VM and contain a valid worker configuration file.{{% /note %}}

Now, create the VM:

```bash
virt-install \
  -n worker \
  --description "Kubernetes worker node." \
  --os-type=Linux \
  --os-variant=generic \
  --virt-type=kvm \
  --cpu=host \
  --vcpus=2 \
  --ram=4096 \
  --disk path=/dev/sdc \
  --network bridge=br0,model=e1000,mac=52:54:00:B9:5D:F2 \
  --graphics none \
  --boot hd \
  --rng /dev/random
```

docs/content/guides/xen.md (new file)
---
title: "Xen"
date: 2018-11-06T06:25:46-08:00
draft: false
menu:
  docs:
    parent: 'guides'
---

## Creating a Master Node

On `Dom0`, install Talos to an available block device:

```bash
docker run \
  --rm \
  --privileged \
  --volume /dev:/dev \
  talos-systems/talos:latest image -b /dev/sdb
```

Save the following as `/etc/xen/master.cfg`:

```python
name = "master"

builder = 'hvm'
bootloader = "/bin/pygrub"
firmware_override = "/usr/lib64/xen/boot/hvmloader"

vcpus = 2
memory = 4096
serial = "pty"

kernel = "/var/lib/xen/talos/vmlinuz"
ramdisk = "/var/lib/xen/talos/initramfs.xz"
disk = [ 'phy:/dev/sdb,xvda,w', ]
vif = [ 'mac=52:54:00:A8:4C:E1,bridge=xenbr0,model=e1000', ]
extra = "consoleblank=0 console=hvc0 console=tty0 console=ttyS0,9600 talos.platform=bare-metal talos.userdata=http://${IP}:8080/master.yaml"
```

{{% note %}}`http://${IP}:8080/master.yaml` should be reachable by the VM and contain a valid master configuration file.{{% /note %}}

Now, create the VM:

```bash
xl create /etc/xen/master.cfg
```

## Creating a Worker Node

On `Dom0`, install Talos to an available block device:

```bash
docker run \
  --rm \
  --privileged \
  --volume /dev:/dev \
  talos-systems/talos:latest image -b /dev/sdc
```

Save the following as `/etc/xen/worker.cfg`:

```python
name = "worker"

builder = 'hvm'
bootloader = "/bin/pygrub"
firmware_override = "/usr/lib64/xen/boot/hvmloader"

vcpus = 2
memory = 4096
serial = "pty"

kernel = "/var/lib/xen/talos/vmlinuz"
ramdisk = "/var/lib/xen/talos/initramfs.xz"
disk = [ 'phy:/dev/sdc,xvda,w', ]
vif = [ 'mac=52:54:00:B9:5D:F2,bridge=xenbr0,model=e1000', ]
extra = "consoleblank=0 console=hvc0 console=tty0 console=ttyS0,9600 talos.platform=bare-metal talos.userdata=http://${IP}:8080/worker.yaml"
```

{{% note %}}`http://${IP}:8080/worker.yaml` should be reachable by the VM and contain a valid worker configuration file.{{% /note %}}

Now, create the VM:

```bash
xl create /etc/xen/worker.cfg
```

docs/content/talos.md (new file)
---
title: "Talos Documentation"
date: 2018-10-29T19:40:55-07:00
menu:
  docs:
    weight: 1
---

Talos is a modern Linux distribution designed to be secure, immutable, and minimal.