docs: use variables and templates in the docs

Only the 1.0 docs are updated; they will serve as the template for future
documentation versions.

Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Andrey Smirnov
2022-03-25 15:34:01 +03:00
parent 4c83847b90
commit c5da386092
43 changed files with 135 additions and 126 deletions

View File

@ -1,5 +1,6 @@
{
"default": true,
"MD013": false,
"MD033": false
"MD033": false,
"MD034": false
}

View File

@ -1082,7 +1082,7 @@ type InstallConfig struct {
// description: |
// Allows for supplying the image used to perform the installation.
// Image reference for each Talos release can be found on
// [GitHub releases page](https://github.com/talos-systems/talos/releases).
// [GitHub releases page](https://github.com/siderolabs/talos/releases).
// examples:
// - value: '"ghcr.io/siderolabs/installer:latest"'
InstallImage string `yaml:"image,omitempty"`

View File

@ -717,7 +717,7 @@ func init() {
InstallConfigDoc.Fields[3].Name = "image"
InstallConfigDoc.Fields[3].Type = "string"
InstallConfigDoc.Fields[3].Note = ""
InstallConfigDoc.Fields[3].Description = "Allows for supplying the image used to perform the installation.\nImage reference for each Talos release can be found on\n[GitHub releases page](https://github.com/talos-systems/talos/releases)."
InstallConfigDoc.Fields[3].Description = "Allows for supplying the image used to perform the installation.\nImage reference for each Talos release can be found on\n[GitHub releases page](https://github.com/siderolabs/talos/releases)."
InstallConfigDoc.Fields[3].Comments[encoder.LineComment] = "Allows for supplying the image used to perform the installation."
InstallConfigDoc.Fields[3].AddExample("", "ghcr.io/siderolabs/installer:latest")

View File

@ -5,6 +5,9 @@ linkTitle: "Documentation"
cascade:
type: docs
preRelease: true
lastRelease: v1.0.0-beta.3
kubernetesRelease: "1.23.5"
prevKubernetesRelease: "1.23.1"
---
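The shortcodes introduced at the end of this commit read these section parameters (presumably `release` → `lastRelease`, `k8s_release` → `kubernetesRelease`, `k8s_prev_release` → `prevKubernetesRelease`), so page content can reference versions without hard-coding them. A hypothetical snippet showing the intended usage:

```markdown
This page covers Talos {{< release >}} running Kubernetes v{{< k8s_release >}}
(upgraded from Kubernetes v{{< k8s_prev_release >}}).
```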
## Welcome
@ -20,12 +23,12 @@ If you are just getting familiar with Talos, we recommend starting here:
### Community
- GitHub: [repo](https://github.com/talos-systems/talos)
- GitHub: [repo](https://github.com/siderolabs/talos)
- Slack: Join our [slack channel](https://slack.dev.talos-systems.io)
- Matrix: Join our Matrix channels:
- Community: [#talos:matrix.org](https://matrix.to/#/#talos:matrix.org)
- Support: [#talos-support:matrix.org](https://matrix.to/#/#talos-support:matrix.org)
- Support: Questions, bugs, feature requests [GitHub Discussions](https://github.com/talos-systems/talos/discussions)
- Support: Questions, bugs, feature requests [GitHub Discussions](https://github.com/siderolabs/talos/discussions)
- Forum: [community](https://groups.google.com/a/siderolabs.com/forum/#!forum/community)
- Twitter: [@SideroLabs](https://twitter.com/talossystems)
- Email: [info@SideroLabs.com](mailto:info@SideroLabs.com)

View File

@ -33,7 +33,7 @@ created talosconfig
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
> This isn't covered in detail because we assume some load-balancing knowledge beforehand.
> If you think this should be added to the docs, please [create an issue](https://github.com/talos-systems/talos/issues).
> If you think this should be added to the docs, please [create an issue](https://github.com/siderolabs/talos/issues).
At this point, you can modify the generated configs to your liking.
Optionally, you can specify `--config-patch` with RFC6902 jsonpatch which will be applied during the config generation.
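A minimal sketch of that option (the cluster name, endpoint, and target disk are placeholders; the patch format mirrors the `talosctl patch` examples elsewhere in these docs):

```bash
# Illustrative only: point the installer at a different disk while generating configs.
talosctl gen config my-cluster https://<load balancer IP>:6443 \
  --config-patch '[{"op": "replace", "path": "/machine/install/disk", "value": "/dev/vda"}]'
```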
@ -62,7 +62,7 @@ Replacing `<file>` with controlplane or worker.
### Download the boot files
Download a recent version of `boot.tar.gz` from [github.](https://github.com/talos-systems/talos/releases/)
Download a recent version of `boot.tar.gz` from [github.](https://github.com/siderolabs/talos/releases/)
Upload to DRB:
@ -74,7 +74,7 @@ $ drpcli isos upload boot.tar.gz as talos.tar.gz
}
```
We have some Digital Rebar [example files](https://github.com/talos-systems/talos/tree/master/hack/test/digitalrebar/) in the Git repo you can use to provision Digital Rebar with drpcli.
We have some Digital Rebar [example files](https://github.com/siderolabs/talos/tree/master/hack/test/digitalrebar/) in the Git repo you can use to provision Digital Rebar with drpcli.
To apply these configs you need to create them, and then apply them as follows:

View File

@ -12,7 +12,7 @@ This guide assumes the user has a working API token, the [Equinix Metal CLI](htt
To install Talos to a server a working TFTP and iPXE server are needed.
How this is done varies and is left as an exercise for the user.
In general this requires a Talos kernel vmlinuz and initramfs.
These assets can be downloaded from a given [release](https://github.com/talos-systems/talos/releases).
These assets can be downloaded from a given [release](https://github.com/siderolabs/talos/releases).
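For example, a sketch using the release-asset URL pattern used elsewhere in these docs (output paths are arbitrary):

```bash
# Fetch the kernel and initramfs for the current release.
curl -L https://github.com/siderolabs/talos/releases/download/{{< release >}}/vmlinuz-amd64 -o vmlinuz-amd64
curl -L https://github.com/siderolabs/talos/releases/download/{{< release >}}/initramfs-amd64.xz -o initramfs-amd64.xz
```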
## Special Considerations

View File

@ -46,7 +46,7 @@ This directory is automatically served by Matchbox.
### Create the Matchbox Configuration Files
The profiles we will create will reference `vmlinuz`, and `initramfs.xz`.
Download these files from the [release](https://github.com/talos-systems/talos/releases) of your choice, and place them in `/var/lib/matchbox/assets`.
Download these files from the [release](https://github.com/siderolabs/talos/releases) of your choice, and place them in `/var/lib/matchbox/assets`.
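A sketch of that step (assuming `amd64` assets and the asset names expected by the profiles below):

```bash
# Place the kernel and initramfs where Matchbox serves assets from.
curl -L https://github.com/siderolabs/talos/releases/download/{{< release >}}/vmlinuz-amd64 \
  -o /var/lib/matchbox/assets/vmlinuz
curl -L https://github.com/siderolabs/talos/releases/download/{{< release >}}/initramfs-amd64.xz \
  -o /var/lib/matchbox/assets/initramfs.xz
```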
#### Profiles

View File

@ -8,7 +8,7 @@ description: "Creating a cluster via the AWS CLI."
Official AMI image ID can be found in the `cloud-images.json` file attached to the Talos release:
```bash
curl -sL https://github.com/siderolabs/talos/releases/download/v1.0.0/cloud-images.json | \
curl -sL https://github.com/siderolabs/talos/releases/download/{{< release >}}/cloud-images.json | \
jq -r '.[] | select(.region == "us-east-1") | select (.arch == "amd64") | .id'
```
@ -53,7 +53,7 @@ Note that the role should be associated with the S3 bucket we created above.
First, download the AWS image from a Talos release:
```bash
curl -L https://github.com/talos-systems/talos/releases/latest/download/aws-amd64.tar.gz | tar -xv
curl -L https://github.com/siderolabs/talos/releases/{{< release >}}/download/aws-amd64.tar.gz | tar -xv
```
Copy the RAW disk to S3 and import it as a snapshot:
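A sketch of those two steps with the AWS CLI (bucket name, key, and region are placeholders; the untarred file is assumed to be `disk.raw`):

```bash
# Upload the raw disk image to the S3 bucket created earlier.
aws s3 cp disk.raw s3://<your-bucket>/talos-aws-tutorial.raw

# Import the uploaded object as an EBS snapshot.
aws ec2 import-snapshot \
  --region us-east-1 \
  --description "talos" \
  --disk-container "Format=raw,UserBucket={S3Bucket=<your-bucket>,S3Key=talos-aws-tutorial.raw}"
```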

View File

@ -36,7 +36,7 @@ export CONNECTION=$(az storage account show-connection-string \
### Create the Image
First, download the Azure image from a [Talos release](https://github.com/talos-systems/talos/releases).
First, download the Azure image from a [Talos release](https://github.com/siderolabs/talos/releases).
Once downloaded, untar with `tar -xvf /path/to/azure-amd64.tar.gz`
#### Upload the VHD
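A sketch of the upload with the Azure CLI (container and blob names are placeholders; the untarred file is assumed to be `disk.vhd`, and `$CONNECTION` is the connection string exported above):

```bash
az storage blob upload \
  --connection-string $CONNECTION \
  --container-name talos \
  --file disk.vhd \
  --name talos-azure.vhd
```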

View File

@ -27,7 +27,7 @@ export REGION="us-central1"
### Create the Image
First, download the Google Cloud image from a Talos [release](https://github.com/talos-systems/talos/releases).
First, download the Google Cloud image from a Talos [release](https://github.com/siderolabs/talos/releases).
These images are called `gcp-$ARCH.tar.gz`.
#### Upload the Image
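A sketch of the upload and image creation with the Google Cloud CLI (bucket and image names are placeholders):

```bash
# Upload the tarball to a GCS bucket, then register it as a compute image.
gsutil cp gcp-amd64.tar.gz gs://<your-bucket>/
gcloud compute images create talos \
  --source-uri=gs://<your-bucket>/gcp-amd64.tar.gz \
  --guest-os-features=VIRTIO_SCSI_MULTIQUEUE
```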
@ -253,10 +253,10 @@ cd talos-gcp-deployment
We need to download two deployment manifests from the Talos GitHub repository.
```bash
curl -fsSLO "https://raw.githubusercontent.com/talos-systems/talos/master/website/content/docs/v0.14/Cloud%20Platforms/gcp/config.yaml"
curl -fsSLO "https://raw.githubusercontent.com/talos-systems/talos/master/website/content/docs/v0.14/Cloud%20Platforms/gcp/talos-ha.jinja"
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/{{< version >}}/cloud-platforms/gcp/config.yaml"
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/{{< version >}}/cloud-platforms/gcp/talos-ha.jinja"
# if using ccm
curl -fsSLO "https://raw.githubusercontent.com/talos-systems/talos/master/website/content/docs/v0.14/Cloud%20Platforms/gcp/gcp-ccm.yaml"
curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/{{< version >}}/cloud-platforms/gcp/gcp-ccm.yaml"
```
### Updating the config
@ -288,7 +288,7 @@ outputs:
#### Enabling external cloud provider
Note: The `externalCloudProvider` property is set to `false` by default.
The [manifest](https://raw.githubusercontent.com/talos-systems/talos/master/website/content/docs/v0.14/Cloud%20Platforms/gcp/gcp-ccm.yaml#L256) used for deploying the ccm (cloud controller manager) is currently using the GCP ccm provided by openshift since there are no public images for the [ccm](https://github.com/kubernetes/cloud-provider-gcp) yet.
The [manifest](https://raw.githubusercontent.com/siderolabs/talos/master/website/content/{{< version >}}/cloud-platforms/gcp/gcp-ccm.yaml#L256) used for deploying the ccm (cloud controller manager) is currently using the GCP ccm provided by openshift since there are no public images for the [ccm](https://github.com/kubernetes/cloud-provider-gcp) yet.
> Since the routes controller is disabled while deploying the CCM, the CNI pods need to be restarted after the CCM deployment is complete to remove the `node.kubernetes.io/network-unavailable` taint.
See [Nodes network-unavailable taint not removed after installing ccm](https://github.com/kubernetes/cloud-provider-gcp/issues/291) for more information.

View File

@ -6,7 +6,7 @@ description: "Creating a cluster via the CLI (hcloud) on Hetzner."
## Upload image
Hetzner Cloud does not support uploading custom images.
You can email their support to get a Talos ISO uploaded by following [issues:3599](https://github.com/talos-systems/talos/issues/3599#issuecomment-841172018), or you can prepare an image snapshot yourself.
You can email their support to get a Talos ISO uploaded by following [issues:3599](https://github.com/siderolabs/talos/issues/3599#issuecomment-841172018), or you can prepare an image snapshot yourself.
There are two options to upload your own.

View File

@ -17,7 +17,7 @@ See [here](https://docs.openstack.org/newton/user-guide/common/cli-set-environme
### Create the Image
First, download the Openstack image from a Talos [release](https://github.com/talos-systems/talos/releases).
First, download the Openstack image from a Talos [release](https://github.com/siderolabs/talos/releases).
These images are called `openstack-$ARCH.tar.gz`.
Untar this file with `tar -xvf openstack-$ARCH.tar.gz`.
The resulting file will be called `disk.raw`.

View File

@ -99,7 +99,7 @@ The only required flag for this guide is `--registry-mirror '*'=http://10.5.0.1:
The endpoint being used is `10.5.0.1`, as this is the default bridge interface address which will be routable from the QEMU VMs (`127.0.0.1` IP will be pointing to the VM itself).
```bash
$ sudo -E talosctl cluster create --provisioner=qemu --registry-mirror '*'=http://10.5.0.1:6000 --install-image=ghcr.io/siderolabs/installer:v1.0.0
$ sudo -E talosctl cluster create --provisioner=qemu --registry-mirror '*'=http://10.5.0.1:6000 --install-image=ghcr.io/siderolabs/installer:{{< release >}}
validating CIDR and reserving IPs
generating PKI and tokens
creating state directory in "/home/smira/.talos/clusters/talos-default"

View File

@ -42,7 +42,7 @@ This is not always possible, however, so this page lays out the minimal network
</table>
> Ports marked with a `*` are not currently configurable, but that may change in the future.
> [Follow along here](https://github.com/talos-systems/talos/issues/1836).
> [Follow along here](https://github.com/siderolabs/talos/issues/1836).
### Worker node(s)
@ -68,4 +68,4 @@ This is not always possible, however, so this page lays out the minimal network
</table>
> Ports marked with a `*` are not currently configurable, but that may change in the future.
> [Follow along here](https://github.com/talos-systems/talos/issues/1836).
> [Follow along here](https://github.com/siderolabs/talos/issues/1836).

View File

@ -180,13 +180,13 @@ As the inline manifest is processed from top to bottom make sure to manually put
## Known issues
- Currently there is an interaction between a KubeSpan-enabled Talos cluster and Cilium that results in the cluster going down during bootstrap after applying the Cilium manifests.
For more details: [KubeSpan and Cilium compatibility: etcd is failing](https://github.com/talos-systems/talos/issues/4836)
For more details: [KubeSpan and Cilium compatibility: etcd is failing](https://github.com/siderolabs/talos/issues/4836)
- There are some gotchas when using Talos and Cilium on the Google cloud platform when using internal load balancers.
For more details: [GCP ILB support / support scope local routes to be configured](https://github.com/talos-systems/talos/issues/4109)
For more details: [GCP ILB support / support scope local routes to be configured](https://github.com/siderolabs/talos/issues/4109)
- Some kernel values changed by kube-proxy are not set to good defaults when running Cilium's kube-proxy replacement.
For more details: [Kernel default values (sysctl)](https://github.com/talos-systems/talos/issues/4654)
For more details: [Kernel default values (sysctl)](https://github.com/siderolabs/talos/issues/4654)
## Other things to know

View File

@ -27,7 +27,7 @@ cluster:
Disabling all registries effectively disables member discovery altogether.
> As of v0.14, Talos supports the `kubernetes` and `service` registries.
> Talos supports the `kubernetes` and `service` registries.
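For reference, a minimal sketch of the corresponding machine configuration (field names taken from the machine configuration reference, not from this diff), including how a single registry can be disabled:

```yaml
cluster:
  discovery:
    enabled: true
    registries:
      kubernetes:
        disabled: true # turn off the Kubernetes registry, keep the service registry
      service: {}
```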
`Kubernetes` registry uses Kubernetes `Node` resource data and additional Talos annotations:
@ -43,7 +43,7 @@ Annotations: cluster.talos.dev/node-id: Utoh3O0ZneV0kT2IUBrh7TgdouRcUW2yz
## Resource Definitions
Talos v0.14 introduces seven new resources that can be used to introspect the new discovery and KubeSpan features.
Talos provides seven resources that can be used to introspect the new discovery and KubeSpan features.
### Discovery
@ -107,9 +107,9 @@ The members of the cluster can be obtained with:
```sh
$ talosctl get members
ID VERSION HOSTNAME MACHINE TYPE OS ADDRESSES
talos-default-master-1 2 talos-default-master-1 controlplane Talos (v1.0.0) ["172.20.0.2","fd83:b1f7:fcb5:2802:8c13:71ff:feaf:7c94"]
talos-default-master-2 1 talos-default-master-2 controlplane Talos (v1.0.0) ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
talos-default-master-3 1 talos-default-master-3 controlplane Talos (v1.0.0) ["172.20.0.4","fd83:b1f7:fcb5:2802:248f:1fff:fe5c:c3f"]
talos-default-worker-1 1 talos-default-worker-1 worker Talos (v1.0.0) ["172.20.0.5","fd83:b1f7:fcb5:2802:cc80:3dff:fece:d89d"]
talos-default-worker-2 1 talos-default-worker-2 worker Talos (v1.0.0) ["172.20.0.6","fd83:b1f7:fcb5:2802:2805:fbff:fe80:5ed2"]
talos-default-master-1 2 talos-default-master-1 controlplane Talos ({{< release >}}) ["172.20.0.2","fd83:b1f7:fcb5:2802:8c13:71ff:feaf:7c94"]
talos-default-master-2 1 talos-default-master-2 controlplane Talos ({{< release >}}) ["172.20.0.3","fd83:b1f7:fcb5:2802:986b:7eff:fec5:889d"]
talos-default-master-3 1 talos-default-master-3 controlplane Talos ({{< release >}}) ["172.20.0.4","fd83:b1f7:fcb5:2802:248f:1fff:fe5c:c3f"]
talos-default-worker-1 1 talos-default-worker-1 worker Talos ({{< release >}}) ["172.20.0.5","fd83:b1f7:fcb5:2802:cc80:3dff:fece:d89d"]
talos-default-worker-2 1 talos-default-worker-2 worker Talos ({{< release >}}) ["172.20.0.6","fd83:b1f7:fcb5:2802:2805:fbff:fe80:5ed2"]
```

View File

@ -26,7 +26,7 @@ Each of these commands can operate in one of four modes:
> Note: applying a change on next reboot (`--mode=staged`) doesn't modify the current node configuration, so the next call to
> `talosctl edit machineconfig --mode=staged` will not see the staged changes.
The list of config changes allowed to be applied immediately in Talos v1.0:
The list of config changes allowed to be applied immediately in Talos {{< release >}}:
* `.debug`
* `.cluster`
@ -107,14 +107,14 @@ Command `talosctl patch` works similar to `talosctl edit` command - it loads cur
For example, updating the kubelet version (in auto mode):
```bash
$ talosctl -n <IP> patch machineconfig -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v1.20.5"}]'
$ talosctl -n <IP> patch machineconfig -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v{{< k8s_release >}}"}]'
patched mc at the node <IP>
```
Updating kube-apiserver version in immediate mode (without a reboot):
```bash
$ talosctl -n <IP> patch machineconfig --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "k8s.gcr.io/kube-apiserver:v1.20.5"}]'
$ talosctl -n <IP> patch machineconfig --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "k8s.gcr.io/kube-apiserver:v{{< k8s_release >}}"}]'
patched mc at the node <IP>
```
@ -137,7 +137,7 @@ Talos can detect file format automatically:
# kubelet-patch.yaml
- op: replace
path: /machine/kubelet/image
value: ghcr.io/siderolabs/kubelet:v1.23.3
value: ghcr.io/siderolabs/kubelet:v{{< k8s_release >}}
```
```bash

View File

@ -54,7 +54,7 @@ cluster:
### Upgrading an Existing Cluster
In order to enable KubeSpan for an existing cluster, upgrade to the latest v0.14.
In order to enable KubeSpan for an existing cluster, upgrade to the latest version of Talos ({{< release >}}).
Once your cluster is upgraded, the configuration of each node must contain the globally unique identifier, the shared secret for the cluster, and have KubeSpan and discovery enabled.
> Note: Discovery can be used without KubeSpan, but KubeSpan requires at least one discovery registry.
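A sketch of the pieces that sentence refers to (values elided; field names taken from the machine configuration reference):

```yaml
machine:
  network:
    kubespan:
      enabled: true
cluster:
  id: <base64 encoded cluster id>
  secret: <base64 encoded cluster secret>
  discovery:
    enabled: true
```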
@ -88,7 +88,7 @@ cluster:
> Note: This can be applied in immediate mode (no reboot required).
#### Talos v0.12
#### Talos v0.12 or Later
Enable `kubespan` and `discovery`.

View File

@ -45,7 +45,7 @@ NODE NAMESPACE ID IMAG
172.20.1.2 k8s.io └─ kube-system/kube-flannel-dk6d5:install-config quay.io/coreos/flannel:v0.13.0 0 CONTAINER_EXITED
172.20.1.2 k8s.io └─ kube-system/kube-flannel-dk6d5:kube-flannel quay.io/coreos/flannel:v0.13.0 1610 CONTAINER_RUNNING
172.20.1.2 k8s.io kube-system/kube-proxy-gfkqj k8s.gcr.io/pause:3.5 1311 SANDBOX_READY
172.20.1.2 k8s.io └─ kube-system/kube-proxy-gfkqj:kube-proxy k8s.gcr.io/kube-proxy:v1.23.0 1379 CONTAINER_RUNNING
172.20.1.2 k8s.io └─ kube-system/kube-proxy-gfkqj:kube-proxy k8s.gcr.io/kube-proxy:v{{< k8s_release >}} 1379 CONTAINER_RUNNING
$ talosctl -n 172.20.1.2 logs -k kube-system/kube-proxy-gfkqj:kube-proxy
172.20.1.2: 2021-11-30T19:13:20.567825192Z stderr F I1130 19:13:20.567737 1 server_others.go:138] "Detected node IP" address="172.20.0.3"

View File

@ -215,7 +215,7 @@ In any case, the status of the control plane components on each control plane no
$ talosctl -n <IP> containers --kubernetes
NODE NAMESPACE ID IMAGE PID STATUS
172.20.0.2 k8s.io kube-system/kube-apiserver-talos-default-master-1 k8s.gcr.io/pause:3.2 2539 SANDBOX_READY
172.20.0.2 k8s.io └─ kube-system/kube-apiserver-talos-default-master-1:kube-apiserver k8s.gcr.io/kube-apiserver:v1.20.4 2572 CONTAINER_RUNNING
172.20.0.2 k8s.io └─ kube-system/kube-apiserver-talos-default-master-1:kube-apiserver k8s.gcr.io/kube-apiserver:v{{< k8s_release >}} 2572 CONTAINER_RUNNING
```
If `kube-apiserver` shows as `CONTAINER_EXITED`, it might have exited due to configuration error.
@ -316,9 +316,9 @@ $ talosctl -n <IP> c -k
NODE NAMESPACE ID IMAGE PID STATUS
...
172.20.0.2 k8s.io kube-system/kube-controller-manager-talos-default-master-1 k8s.gcr.io/pause:3.2 2547 SANDBOX_READY
172.20.0.2 k8s.io └─ kube-system/kube-controller-manager-talos-default-master-1:kube-controller-manager k8s.gcr.io/kube-controller-manager:v1.20.4 2580 CONTAINER_RUNNING
172.20.0.2 k8s.io └─ kube-system/kube-controller-manager-talos-default-master-1:kube-controller-manager k8s.gcr.io/kube-controller-manager:v{{< k8s_release >}} 2580 CONTAINER_RUNNING
172.20.0.2 k8s.io kube-system/kube-scheduler-talos-default-master-1 k8s.gcr.io/pause:3.2 2638 SANDBOX_READY
172.20.0.2 k8s.io └─ kube-system/kube-scheduler-talos-default-master-1:kube-scheduler k8s.gcr.io/kube-scheduler:v1.20.4 2670 CONTAINER_RUNNING
172.20.0.2 k8s.io └─ kube-system/kube-scheduler-talos-default-master-1:kube-scheduler k8s.gcr.io/kube-scheduler:v{{< k8s_release >}} 2670 CONTAINER_RUNNING
...
```

View File

@ -19,38 +19,38 @@ Upgrading Kubernetes is non-disruptive to the cluster workloads.
To trigger a Kubernetes upgrade, issue a command specifying the version of Kubernetes to upgrade to, such as:
`talosctl --nodes <master node> upgrade-k8s --to 1.23.0`
`talosctl --nodes <master node> upgrade-k8s --to {{< k8s_release >}}`
Note that the `--nodes` parameter specifies the control plane node to send the API call to, but all members of the cluster will be upgraded.
To check what will be upgraded you can run `talosctl upgrade-k8s` with the `--dry-run` flag:
```bash
$ talosctl --nodes <master node> upgrade-k8s --to 1.23.0 --dry-run
WARNING: found resources which are going to be deprecated/migrated in the version 1.22.0
$ talosctl --nodes <master node> upgrade-k8s --to {{< k8s_release >}} --dry-run
WARNING: found resources which are going to be deprecated/migrated in the version {{< k8s_release >}}
RESOURCE COUNT
validatingwebhookconfigurations.v1beta1.admissionregistration.k8s.io 4
mutatingwebhookconfigurations.v1beta1.admissionregistration.k8s.io 3
customresourcedefinitions.v1beta1.apiextensions.k8s.io 25
apiservices.v1beta1.apiregistration.k8s.io 54
leases.v1beta1.coordination.k8s.io 4
automatically detected the lowest Kubernetes version 1.22.4
checking for resource APIs to be deprecated in version 1.23.0
automatically detected the lowest Kubernetes version {{< k8s_prev_release >}}
checking for resource APIs to be deprecated in version {{< k8s_release >}}
discovered master nodes ["172.20.0.2" "172.20.0.3" "172.20.0.4"]
discovered worker nodes ["172.20.0.5" "172.20.0.6"]
updating "kube-apiserver" to version "1.23.0"
updating "kube-apiserver" to version "{{< k8s_release >}}"
> "172.20.0.2": starting update
> update kube-apiserver: v1.22.4 -> 1.23.0
> update kube-apiserver: v{{< k8s_prev_release >}} -> {{< k8s_release >}}
> skipped in dry-run
> "172.20.0.3": starting update
> update kube-apiserver: v1.22.4 -> 1.23.0
> update kube-apiserver: v{{< k8s_prev_release >}} -> {{< k8s_release >}}
> skipped in dry-run
> "172.20.0.4": starting update
> update kube-apiserver: v1.22.4 -> 1.23.0
> update kube-apiserver: v{{< k8s_prev_release >}} -> {{< k8s_release >}}
> skipped in dry-run
updating "kube-controller-manager" to version "1.23.0"
updating "kube-controller-manager" to version "{{< k8s_release >}}"
> "172.20.0.2": starting update
> update kube-controller-manager: v1.22.4 -> 1.23.0
> update kube-controller-manager: v{{< k8s_prev_release >}} -> {{< k8s_release >}}
> skipped in dry-run
> "172.20.0.3": starting update
@ -64,22 +64,22 @@ updating manifests
<snip>
```
To upgrade Kubernetes from v1.22.4 to v1.23.0 run:
To upgrade Kubernetes from v{{< k8s_prev_release >}} to v{{< k8s_release >}} run:
```bash
$ talosctl --nodes <master node> upgrade-k8s --to 1.24.0
automatically detected the lowest Kubernetes version 1.22.4
checking for resource APIs to be deprecated in version 1.23.0
$ talosctl --nodes <master node> upgrade-k8s --to {{< k8s_release >}}
automatically detected the lowest Kubernetes version {{< k8s_prev_release >}}
checking for resource APIs to be deprecated in version {{< k8s_release >}}
discovered master nodes ["172.20.0.2" "172.20.0.3" "172.20.0.4"]
discovered worker nodes ["172.20.0.5" "172.20.0.6"]
updating "kube-apiserver" to version "1.23.0"
updating "kube-apiserver" to version "{{< k8s_release >}}"
> "172.20.0.2": starting update
> update kube-apiserver: v1.22.4 -> 1.23.0
> update kube-apiserver: v{{< k8s_prev_release >}} -> {{< k8s_release >}}
> "172.20.0.2": machine configuration patched
> "172.20.0.2": waiting for API server state pod update
< "172.20.0.2": successfully updated
> "172.20.0.3": starting update
> update kube-apiserver: v1.22.4 -> 1.23.0
> update kube-apiserver: v{{< k8s_prev_release >}} -> {{< k8s_release >}}
<snip>
```
@ -117,7 +117,7 @@ talosctl --nodes <master node> kubeconfig
Patch machine configuration using `talosctl patch` command:
```bash
$ talosctl -n <CONTROL_PLANE_IP_1> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "k8s.gcr.io/kube-apiserver:v1.20.4"}]'
$ talosctl -n <CONTROL_PLANE_IP_1> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/apiServer/image", "value": "k8s.gcr.io/kube-apiserver:v{{< k8s_release >}}"}]'
patched mc at the node 172.20.0.2
```
@ -137,7 +137,7 @@ metadata:
version: 5
phase: running
spec:
image: k8s.gcr.io/kube-apiserver:v1.20.4
image: k8s.gcr.io/kube-apiserver:v{{< k8s_release >}}
cloudProvider: ""
controlPlaneEndpoint: https://172.20.0.1:6443
etcdServers:
@ -171,7 +171,7 @@ Repeat this process for every control plane node, verifying that state got propa
Patch machine configuration using `talosctl patch` command:
```bash
$ talosctl -n <CONTROL_PLANE_IP_1> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/controllerManager/image", "value": "k8s.gcr.io/kube-controller-manager:v1.20.4"}]'
$ talosctl -n <CONTROL_PLANE_IP_1> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/controllerManager/image", "value": "k8s.gcr.io/kube-controller-manager:v{{< k8s_release >}}"}]'
patched mc at the node 172.20.0.2
```
@ -189,7 +189,7 @@ metadata:
version: 3
phase: running
spec:
image: k8s.gcr.io/kube-controller-manager:v1.20.4
image: k8s.gcr.io/kube-controller-manager:v{{< k8s_release >}}
cloudProvider: ""
podCIDR: 10.244.0.0/16
serviceCIDR: 10.96.0.0/12
@ -220,7 +220,7 @@ Repeat this process for every control plane node, verifying that state propagate
Patch machine configuration using `talosctl patch` command:
```bash
$ talosctl -n <CONTROL_PLANE_IP_1> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/scheduler/image", "value": "k8s.gcr.io/kube-scheduler:v1.20.4"}]'
$ talosctl -n <CONTROL_PLANE_IP_1> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/cluster/scheduler/image", "value": "k8s.gcr.io/kube-scheduler:v{{< k8s_release >}}"}]'
patched mc at the node 172.20.0.2
```
@ -238,7 +238,7 @@ metadata:
version: 3
phase: running
spec:
image: k8s.gcr.io/kube-scheduler:v1.20.4
image: k8s.gcr.io/kube-scheduler:v{{< k8s_release >}}
extraArgs: {}
extraVolumes: []
```
@ -275,7 +275,7 @@ spec:
spec:
containers:
- name: kube-proxy
image: k8s.gcr.io/kube-proxy:v1.20.1
image: k8s.gcr.io/kube-proxy:v{{< k8s_release >}}
tolerations:
- ...
```
@ -292,7 +292,7 @@ spec:
spec:
containers:
- name: kube-proxy
image: k8s.gcr.io/kube-proxy:v1.20.4
image: k8s.gcr.io/kube-proxy:v{{< k8s_release >}}
tolerations:
- ...
- key: node-role.kubernetes.io/control-plane
@ -333,7 +333,7 @@ kubectl apply -f manifests.yaml
For every node, patch machine configuration with new kubelet version, wait for the kubelet to restart with new version:
```bash
$ talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v1.23.0"}]'
$ talosctl -n <IP> patch mc --mode=no-reboot -p '[{"op": "replace", "path": "/machine/kubelet/image", "value": "ghcr.io/siderolabs/kubelet:v{{< k8s_release >}}"}]'
patched mc at the node 172.20.0.2
```
@ -342,5 +342,5 @@ Once `kubelet` restarts with the new configuration, confirm upgrade with `kubect
```bash
$ kubectl get nodes talos-default-master-1
NAME STATUS ROLES AGE VERSION
talos-default-master-1 Ready control-plane,master 123m v1.23.0
talos-default-master-1 Ready control-plane,master 123m v{{< k8s_release >}}
```

View File

@ -26,7 +26,7 @@ To see a live demo of an upgrade of Talos Linux, see the video below:
<iframe width="560" height="315" src="https://www.youtube.com/embed/AAF6WhX0USo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### After Upgrade to 0.15
### After Upgrade to {{< release >}}
TBD
@ -41,7 +41,7 @@ as:
```sh
$ talosctl upgrade --nodes 10.20.30.40 \
--image ghcr.io/siderolabs/installer:v1.0.0
--image ghcr.io/siderolabs/installer:{{< release >}}
```
There is an option to this command: `--preserve`, which will explicitly tell Talos to keep ephemeral data intact.
@ -68,7 +68,7 @@ It also applies an upgrade flow which allows you to classify some machines as
early adopters and others as getting only stable, tested versions.
To find out more about the controller manager and to get it installed and
configured, take a look at the [GitHub page](https://github.com/talos-systems/talos-controller-manager).
configured, take a look at the [GitHub page](https://github.com/siderolabs/talos-controller-manager).
Please note that the controller manager is still in fairly early development.
More advanced features, such as time slot scheduling, will be coming in the
future.

View File

@ -29,7 +29,7 @@ You should install `talosctl` before continuing:
#### `amd64`
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -38,7 +38,7 @@ chmod +x /usr/local/bin/talosctl
For `linux` and `darwin` operating systems `talosctl` is also available for the `arm64` processor architecture.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-arm64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-arm64
chmod +x /usr/local/bin/talosctl
```
@ -46,10 +46,10 @@ chmod +x /usr/local/bin/talosctl
The easiest way to install Talos is to use the ISO image.
The latest ISO image can be found on the Github [Releases](https://github.com/talos-systems/talos/releases) page:
The latest ISO image can be found on the Github [Releases](https://github.com/siderolabs/talos/releases) page:
- X86: [https://github.com/siderolabs/talos/releases/download/v1.0.0/talos-amd64.iso](https://github.com/siderolabs/talos/releases/download/v1.0.0/talos-amd64.iso)
- ARM64: [https://github.com/siderolabs/talos/releases/download/v1.0.0/talos-arm64.iso](https://github.com/siderolabs/talos/releases/download/v1.0.0/talos-arm64.iso)
- X86: [https://github.com/siderolabs/talos/releases/download/{{< release >}}/talos-amd64.iso](https://github.com/siderolabs/talos/releases/download/{{< release >}}/talos-amd64.iso)
- ARM64: [https://github.com/siderolabs/talos/releases/download/{{< release >}}/talos-arm64.iso](https://github.com/siderolabs/talos/releases/download/{{< release >}}/talos-arm64.iso)
When booted from the ISO, Talos will run in RAM, and it will not install itself
until it is provided a configuration.
@ -59,8 +59,8 @@ Thus, it is safe to boot the ISO onto any machine.
For network booting and self-built media, you can use the published kernel and initramfs images:
- X86: [vmlinuz-amd64](https://github.com/siderolabs/talos/releases/download/v1.0.0/vmlinuz-amd64) [initramfs-amd64.xz](https://github.com/siderolabs/talos/releases/download/v1.0.0/initramfs-amd64.xz)
- ARM64: [vmlinuz-arm64](https://github.com/siderolabs/talos/releases/download/v1.0.0/vmlinuz-arm64) [initramfs-arm64.xz](https://github.com/siderolabs/talos/releases/download/v1.0.0/initramfs-arm64.xz)
- X86: [vmlinuz-amd64](https://github.com/siderolabs/talos/releases/download/{{< release >}}/vmlinuz-amd64) [initramfs-amd64.xz](https://github.com/siderolabs/talos/releases/download/{{< release >}}/initramfs-amd64.xz)
- ARM64: [vmlinuz-arm64](https://github.com/siderolabs/talos/releases/download/{{< release >}}/vmlinuz-arm64) [initramfs-arm64.xz](https://github.com/siderolabs/talos/releases/download/{{< release >}}/initramfs-arm64.xz)
Note that to use alternate booting, there are a number of required kernel parameters.
Please see the [kernel](../../reference/kernel/) docs for more information.

View File

@ -26,7 +26,7 @@ Download `talosctl`:
##### `amd64`
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -35,7 +35,7 @@ chmod +x /usr/local/bin/talosctl
For `linux` and `darwin` operating systems `talosctl` is also available for the `arm64` processor architecture.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-arm64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-arm64
chmod +x /usr/local/bin/talosctl
```
@ -56,8 +56,8 @@ Verify that you can reach Kubernetes:
```bash
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
talos-default-master-1 Ready master 115s v1.20.2 10.5.0.2 <none> Talos (v1.0.0) <host kernel> containerd://1.5.5
talos-default-worker-1 Ready <none> 115s v1.20.2 10.5.0.3 <none> Talos (v1.0.0) <host kernel> containerd://1.5.5
talos-default-master-1 Ready master 115s v{{< k8s_release >}} 10.5.0.2 <none> Talos ({{< release >}}) <host kernel> containerd://1.5.5
talos-default-worker-1 Ready <none> 115s v{{< k8s_release >}} 10.5.0.3 <none> Talos ({{< release >}}) <host kernel> containerd://1.5.5
```
### Destroy the Cluster

View File

@ -9,7 +9,7 @@ Some steps might work under Mac OS X, but using Linux is highly advised.
## Prepare
Check out the [Talos repository](https://github.com/talos-systems/talos).
Check out the [Talos repository](https://github.com/siderolabs/talos).
Try running `make help` to see available `make` commands.
You would need Docker and `buildx` installed on the host.
@ -85,7 +85,7 @@ sudo -E _out/talosctl-linux-amd64 cluster create \
> Note: as the boot loader is not used, it's not necessary to rebuild the `installer` each time (an old image is fine), but sometimes it is needed (when configuration changes are made and the old installer doesn't validate the config).
>
> `talosctl cluster create` derives Talos machine configuration version from the install image tag, so sometimes early in the development cycle (when new minor tag is not released yet), machine config version can be overridden with `--talos-version=v0.14`.
> `talosctl cluster create` derives Talos machine configuration version from the install image tag, so sometimes early in the development cycle (when new minor tag is not released yet), machine config version can be overridden with `--talos-version={{< version >}}`.
If the `--with-bootloader=false` flag is not enabled, the Talos cluster requires an upgrade (i.e. a new `installer` must be built) to pick up new code changes (in `initramfs`).
With the `--with-bootloader=false` flag, Talos always boots from the `initramfs` in the `_out/` directory, so a simple reboot is enough to pick up new code changes.

View File

@ -13,7 +13,7 @@ Furthermore, if you are running Talos in production, it provides an excellent wa
The following are requirements for running Talos in Docker:
- Docker 18.03 or greater
- a recent version of [`talosctl`](https://github.com/talos-systems/talos/releases)
- a recent version of [`talosctl`](https://github.com/siderolabs/talos/releases)
## Caveats

View File

@ -40,16 +40,16 @@ apt install qemu-system-x86 qemu-kvm
### Install talosctl
You can download `talosctl` and all required binaries via
[github.com/talos-systems/talos/releases](https://github.com/talos-systems/talos/releases)
[github.com/siderolabs/talos/releases](https://github.com/siderolabs/talos/releases)
```bash
curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl
```
For example version `v1.0.0` for `linux` platform:
For example version `{{< release >}}` for `linux` platform:
```bash
curl https://github.com/talos-systems/talos/releases/latest/download/talosctl-linux-amd64 -L -o talosctl
curl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl
```
@ -65,11 +65,11 @@ curl https://github.com/siderolabs/talos/releases/download/<version>/vmlinuz-<ar
curl https://github.com/siderolabs/talos/releases/download/<version>/initramfs-<arch>.xz -L -o _out/initramfs-<arch>.xz
```
For example version `v1.0.0`:
For example version `{{< release >}}`:
```bash
curl https://github.com/siderolabs/talos/releases/download/v1.0.0/vmlinuz-amd64 -L -o _out/vmlinuz-amd64
curl https://github.com/siderolabs/talos/releases/download/v1.0.0/initramfs-amd64.xz -L -o _out/initramfs-amd64.xz
curl https://github.com/siderolabs/talos/releases/download/{{< release >}}/vmlinuz-amd64 -L -o _out/vmlinuz-amd64
curl https://github.com/siderolabs/talos/releases/download/{{< release >}}/initramfs-amd64.xz -L -o _out/initramfs-amd64.xz
```
## Create the Cluster

View File

@ -25,16 +25,16 @@ apt install virtualbox
### Install talosctl
You can download `talosctl` via
[github.com/talos-systems/talos/releases](https://github.com/talos-systems/talos/releases)
[github.com/siderolabs/talos/releases](https://github.com/siderolabs/talos/releases)
```bash
curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl
```
For example version `v1.0.0` for `linux` platform:
For example version `{{< release >}}` for `linux` platform:
```bash
curl https://github.com/talos-systems/talos/releases/latest/download/talosctl-linux-amd64 -L -o talosctl
curl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl
```
@ -43,18 +43,18 @@ sudo chmod +x /usr/local/bin/talosctl
In order to install Talos in VirtualBox, you will need the ISO image from the Talos release page.
You can download `talos-amd64.iso` via
[github.com/talos-systems/talos/releases](https://github.com/talos-systems/talos/releases)
[github.com/siderolabs/talos/releases](https://github.com/siderolabs/talos/releases)
```bash
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/talos-<arch>.iso -L -o _out/talos-<arch>.iso
```
For example version `v1.0.0` for `linux` platform:
For example version `{{< release >}}` for `linux` platform:
```bash
mkdir -p _out/
curl https://github.com/talos-systems/talos/releases/latest/download/talos-amd64.iso -L -o _out/talos-amd64.iso
curl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talos-amd64.iso -L -o _out/talos-amd64.iso
```
## Create VMs

View File

@ -2279,7 +2279,7 @@ extraKernelArgs:
Allows for supplying the image used to perform the installation.
Image reference for each Talos release can be found on
[GitHub releases page](https://github.com/talos-systems/talos/releases).
[GitHub releases page](https://github.com/siderolabs/talos/releases).

View File

@ -13,7 +13,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -22,7 +22,7 @@ chmod +x /usr/local/bin/talosctl
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-bananapi_m64-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-bananapi_m64-arm64.img.xz
xz -d metal-bananapi_m64-arm64.img.xz
```

View File

@ -14,7 +14,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -83,7 +83,7 @@ Once the flashing is done you can disconnect the USB cable and power off the Jet
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-jetson_nano-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-jetson_nano-arm64.img.xz
xz -d metal-jetson_nano-arm64.img.xz
```

View File

@ -13,7 +13,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -22,7 +22,7 @@ chmod +x /usr/local/bin/talosctl
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-libretech_all_h3_cc_h5-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-libretech_all_h3_cc_h5-arm64.img.xz
xz -d metal-libretech_all_h3_cc_h5-arm64.img.xz
```

View File

@ -13,7 +13,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -22,7 +22,7 @@ chmod +x /usr/local/bin/talosctl
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-pine64-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-pine64-arm64.img.xz
xz -d metal-pine64-arm64.img.xz
```

View File

@ -13,7 +13,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -22,7 +22,7 @@ chmod +x /usr/local/bin/talosctl
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-rock64-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-rock64-arm64.img.xz
xz -d metal-rock64-arm64.img.xz
```

View File

@ -13,7 +13,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -22,7 +22,7 @@ chmod +x /usr/local/bin/talosctl
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-rockpi_4-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-rockpi_4-arm64.img.xz
xz -d metal-rockpi_4-arm64.img.xz
```
@ -81,7 +81,7 @@ sudo dd if=rkspi_loader-v20.11.2-trunk-v2.img of=/dev/mtdblock0 bs=4K
- Optionally, you can also write Talos image to the SSD drive right from your Rock PI board:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-rockpi_4-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-rockpi_4-arm64.img.xz
xz -d metal-rockpi_4-arm64.img.xz
sudo dd if=metal-rockpi_4-arm64.img.xz of=/dev/nvme0n1
```

View File

@ -18,7 +18,7 @@ You will need
Download the latest `talosctl`.
```bash
curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
curl -Lo /usr/local/bin/talosctl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
```
@ -53,7 +53,7 @@ Power off the Raspberry Pi and remove the SD card from it.
Download the image and decompress it:
```bash
curl -LO https://github.com/talos-systems/talos/releases/latest/download/metal-rpi_4-arm64.img.xz
curl -LO https://github.com/siderolabs/talos/releases/{{< release >}}/download/metal-rpi_4-arm64.img.xz
xz -d metal-rpi_4-arm64.img.xz
```

View File

@ -5,7 +5,7 @@ description: "Creating a Talos Kubernetes cluster using Hyper-V."
## Prerequisites
1. Download the latest `talos-amd64.iso` ISO from github [releases page](https://github.com/talos-systems/talos/releases)
1. Download the latest `talos-amd64.iso` ISO from github [releases page](https://github.com/siderolabs/talos/releases)
2. Create a New-TalosVM folder in any of your PS Module Path folders `$env:PSModulePath -split ';'` and save the [New-TalosVM.psm1](https://github.com/nebula-it/New-TalosVM/blob/main/New-TalosVM.psm1) there
## Plan Overview

View File

@ -21,16 +21,16 @@ Visit the [Proxmox](https://www.proxmox.com/en/downloads) downloads page if nece
### Install talosctl
You can download `talosctl` via
[github.com/talos-systems/talos/releases](https://github.com/talos-systems/talos/releases)
[github.com/siderolabs/talos/releases](https://github.com/siderolabs/talos/releases)
```bash
curl https://github.com/siderolabs/talos/releases/download/<version>/talosctl-<platform>-<arch> -L -o talosctl
```
For example version `v1.0.0` for `linux` platform:
For example version `{{< release >}}` for `linux` platform:
```bash
curl https://github.com/talos-systems/talos/releases/latest/download/talosctl-linux-amd64 -L -o talosctl
curl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talosctl-linux-amd64 -L -o talosctl
sudo cp talosctl /usr/local/bin
sudo chmod +x /usr/local/bin/talosctl
```
@ -39,18 +39,18 @@ sudo chmod +x /usr/local/bin/talosctl
In order to install Talos in Proxmox, you will need the ISO image from the Talos release page.
You can download `talos-amd64.iso` via
[github.com/talos-systems/talos/releases](https://github.com/talos-systems/talos/releases)
[github.com/siderolabs/talos/releases](https://github.com/siderolabs/talos/releases)
```bash
mkdir -p _out/
curl https://github.com/siderolabs/talos/releases/download/<version>/talos-<arch>.iso -L -o _out/talos-<arch>.iso
```
For example version `v1.0.0` for `linux` platform:
For example version `{{< release >}}` for `linux` platform:
```bash
mkdir -p _out/
curl https://github.com/talos-systems/talos/releases/latest/download/talos-amd64.iso -L -o _out/talos-amd64.iso
curl https://github.com/siderolabs/talos/releases/{{< release >}}/download/talos-amd64.iso -L -o _out/talos-amd64.iso
```
## Upload ISO

View File

@ -22,7 +22,7 @@ This can be done with the `talosctl gen config ...` command.
Take note that we will also use an RFC 6902 JSON patch when creating the configs so that the control plane nodes get some special information about the VIP we chose earlier, as well as a daemonset to install VMware Tools on Talos nodes.
First, download the `cp.patch` file to your local machine and edit the VIP to match your chosen IP.
You can do this by issuing `curl -fsSLO "https://raw.githubusercontent.com/talos-systems/talos/master/website/content/docs/v0.14/Virtualized%20Platforms/vmware/cp.patch"`.
You can do this by issuing `curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/{{< version >}}/virtualized-platforms/vmware/cp.patch"`.
Its contents should look like the following:
```yaml
@ -91,7 +91,7 @@ If you wish to carry out the manual approach, simply skip ahead to the "Manual A
### Scripted Install
Download the `vmware.sh` script to your local machine.
You can do this by issuing `curl -fsSLO "https://raw.githubusercontent.com/talos-systems/talos/master/website/content/docs/v0.14/Virtualized%20Platforms/vmware/vmware.sh"`.
You can do this by issuing `curl -fsSLO "https://raw.githubusercontent.com/siderolabs/talos/master/website/content/{{< version >}}/virtualized-platforms/vmware/vmware.sh"`.
This script has default variables for things like Talos version and cluster name that may be interesting to tweak before deploying.
#### Import OVA
@ -118,7 +118,7 @@ You may now skip past the "Manual Approach" section down to "Bootstrap Cluster".
#### Import the OVA into vCenter
A `talos.ova` asset is published with each [release](https://github.com/talos-systems/talos/releases).
A `talos.ova` asset is published with each [release](https://github.com/siderolabs/talos/releases).
We will refer to the version of the release as `$TALOS_VERSION` below.
It can be easily exported with `export TALOS_VERSION="v0.3.0-alpha.10"` or similar.

View File

@ -0,0 +1 @@
{{ .Page.FirstSection.Params.prevkubernetesrelease -}}

View File

@ -0,0 +1 @@
{{ .Page.FirstSection.Params.kubernetesrelease -}}

View File

@ -0,0 +1 @@
{{ .Page.FirstSection.Params.lastrelease -}}

View File

@ -0,0 +1,2 @@
{{ $major_minor := split .Page.FirstSection.Params.lastRelease "." | first 2 -}}
{{- delimit $major_minor "." -}}
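Tracing this template with the `lastRelease` value added in the front matter above (`v1.0.0-beta.3`) shows what the shortcode (presumably `version`) expands to:

```
split "v1.0.0-beta.3" "."  -> ["v1" "0" "0-beta" "3"]
first 2                    -> ["v1" "0"]
delimit ... "."            -> "v1.0"
```

The rendered value `v1.0` matches the new `website/content/{{< version >}}/...` paths used elsewhere in this diff.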