docs: fix spelling mistakes

Resolve spelling with `misspell -w .`

Signed-off-by: Caleb Woodbine <calebwoodbine.public@gmail.com>
Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
Caleb Woodbine 2022-03-15 20:37:13 +13:00 committed by Andrey Smirnov
parent 5fdedae208
commit d256b5c5e4
No known key found for this signature in database
GPG Key ID: 7B26396447AB6DFD
75 changed files with 111 additions and 111 deletions


@@ -24,7 +24,7 @@ with a single `--mode` flag that can take the following values:
Command `talosctl gen config` now defaults to Kubernetes version pinning in the generate machine configuration.
Previously default was to omit explicit Kubernetes version, so Talos picked up the default version it was built against.
-Old behavior can be achieved by specifiying empty flag value: `--kubernetes-version=`.
+Old behavior can be achieved by specifying empty flag value: `--kubernetes-version=`.
### Machine Configuration
@@ -2010,7 +2010,7 @@ cluster:
-### Windows Suport
+### Windows Support
CLI tool talosctl is now built for Windows and published as part of the release.
@@ -2325,7 +2325,7 @@ cluster:
-### Windows Suport
+### Windows Support
CLI tool talosctl is now built for Windows and published as part of the release.
@@ -2642,7 +2642,7 @@ cluster:
-### Windows Suport
+### Windows Support
CLI tool talosctl is now built for Windows and published as part of the release.
@@ -2879,7 +2879,7 @@ This release of Talos provides some initial support for cluster membership disco
These new features are not enabled by default.
-### Windows Suport
+### Windows Support
CLI tool talosctl is now built for Windows and published as part of the release.


@@ -134,7 +134,7 @@ for applications using `img` tool.
All artifacts will be output to ./$(ARTIFACTS). Images will be tagged with the
registry "$(IMAGE_REGISTRY)", username "$(USERNAME)", and a dynamic tag (e.g. $(REGISTRY_AND_USERNAME)/image:$(IMAGE_TAG)).
-The registry and username can be overriden by exporting REGISTRY, and USERNAME
+The registry and username can be overridden by exporting REGISTRY, and USERNAME
respectively.
## Race Detector


@@ -9,7 +9,7 @@ import "google/protobuf/empty.proto";
// The inspect service definition.
//
-// InspectService provides auxilary API to inspect OS internals.
+// InspectService provides auxiliary API to inspect OS internals.
service InspectService {
rpc ControllerRuntimeDependencies(google.protobuf.Empty) returns (ControllerRuntimeDependenciesResponse);
}


@@ -67,7 +67,7 @@ This provides a clean slate for upgrades.
Once we enter the `rootfs` we have three high level tasks:
-- retreive the machineconfig
+- retrieve the machineconfig
- create, format, and mount partitions per the builtin specifications
- start system and k8s.io services


@@ -11,7 +11,7 @@ This proposal will outline how we'll handle the passing of machine configuration
I think the easiest way to background this is to take a look at the init node machine config that we currently have a template for, since it is our most verbose template with the most options.
When looking it, it's somewhat self-explanatory on what is available to tweak, but it also gives a good starting point to view what is similar between the three types of Talos nodes: init (the first master), control plane (any other masters), and workers.
I've also appended some additional fields that we use for certain platforms like Packet.
-Additionally, as some background around naming, we'll be referring to our configs only as "machine configs", since using terms like "userdata" interchangably led users to believe we supported cloud-init, which is not true.
+Additionally, as some background around naming, we'll be referring to our configs only as "machine configs", since using terms like "userdata" interchangeably led users to believe we supported cloud-init, which is not true.
### Init


@@ -138,7 +138,7 @@ Static pod definitions can be updated without a node reboot.
description="""\
Command `talosctl gen config` now defaults to Kubernetes version pinning in the generate machine configuration.
Previously default was to omit explicit Kubernetes version, so Talos picked up the default version it was built against.
-Old behavior can be achieved by specifiying empty flag value: `--kubernetes-version=`.
+Old behavior can be achieved by specifying empty flag value: `--kubernetes-version=`.
"""
[notes.admission]


@@ -33,7 +33,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create a issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -42,7 +42,7 @@ join.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `init.yaml`, `controlplane.yaml`, and `join.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -174,7 +174,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its' configuration file, and bootstrap itself.
### Retrieve the `kubeconfig`


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply trough DNS.
For example:


@@ -244,7 +244,7 @@ kube-flannel-jknt9 1/1 Running 0 23
Full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Commonly, the control plane endpoint points to a different cluster, as the client certificate


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -585,7 +585,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -32,7 +32,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create a issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -39,7 +39,7 @@ join.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `controlplane.yaml`, and `join.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -147,7 +147,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its' configuration file, and bootstrap itself.
### Bootstrap Etcd


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply trough DNS.
For example:


@@ -47,7 +47,7 @@ With the help of [Rook](https://rook.io), the vast majority of the complexity of
So if Ceph is so great, why not use it for everything?
Ceph can be rather slow for small clusters.
-It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimised for you.
+It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimized for you.
Also, if your cluster is small, just running Ceph may eat up a significant amount of the resources you have available.
Troubleshooting Ceph can be difficult if you do not understand its architecture.
@@ -56,7 +56,7 @@ There are very good tools for inspection and debugging, but this is still freque
### Mayastor
-[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilising the modern NVMEoF system.
+[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilizing the modern NVMEoF system.
(Despite the name, Mayastor does _not_ require you to have NVME drives.)
It is fast and lean but still cluster-oriented and cloud native.
Unlike most of the other OpenEBS project, it is _not_ built on the ancient iSCSI system.


@@ -244,7 +244,7 @@ kube-flannel-jknt9 1/1 Running 0 23
Full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Commonly, the control plane endpoint points to a different cluster, as the client certificate


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -440,7 +440,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -115,7 +115,7 @@ If you wish to export this IP as a bash variable, simply issue a command like `e
### Without DHCP server
To apply the machine configurations in maintenance mode, VM has to have IP on the network.
-So you can set it on boot time manualy.
+So you can set it on boot time manually.
<img src="/images/proxmox-guide/maintenance-mode-grub-menu.png" width="600px">


@@ -32,7 +32,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create a issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -39,7 +39,7 @@ worker.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `controlplane.yaml`, and `worker.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -147,7 +147,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its' configuration file, and bootstrap itself.
### Bootstrap Etcd


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply trough DNS.
For example:


@@ -47,7 +47,7 @@ With the help of [Rook](https://rook.io), the vast majority of the complexity of
So if Ceph is so great, why not use it for everything?
Ceph can be rather slow for small clusters.
-It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimised for you.
+It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimized for you.
Also, if your cluster is small, just running Ceph may eat up a significant amount of the resources you have available.
Troubleshooting Ceph can be difficult if you do not understand its architecture.
@@ -56,7 +56,7 @@ There are very good tools for inspection and debugging, but this is still freque
### Mayastor
-[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilising the modern NVMEoF system.
+[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilizing the modern NVMEoF system.
(Despite the name, Mayastor does _not_ require you to have NVME drives.)
It is fast and lean but still cluster-oriented and cloud native.
Unlike most of the other OpenEBS project, it is _not_ built on the ancient iSCSI system.


@@ -234,7 +234,7 @@ kube-flannel-jknt9 1/1 Running 0 23
Full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Commonly, the control plane endpoint points to a different cluster, as the client certificate


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -438,7 +438,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -115,7 +115,7 @@ If you wish to export this IP as a bash variable, simply issue a command like `e
### Without DHCP server
To apply the machine configurations in maintenance mode, VM has to have IP on the network.
-So you can set it on boot time manualy.
+So you can set it on boot time manually.
<img src="/images/proxmox-guide/maintenance-mode-grub-menu.png" width="600px">


@@ -32,7 +32,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create a issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -39,7 +39,7 @@ worker.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `controlplane.yaml`, and `worker.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -147,7 +147,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its' configuration file, and bootstrap itself.
### Bootstrap Etcd


@@ -10,7 +10,7 @@ You can email their support to get a Talos ISO uploaded by following [issues:359
There are two options to upload your own.
-1. Run an instance in rescue mode and replase the system OS with the Talos image
+1. Run an instance in rescue mode and replaces the system OS with the Talos image
2. Use [Hashicorp packer](https://www.packer.io/docs/builders/hetzner-cloud) to prepare an image
### Rescue mode


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply trough DNS.
For example:


@@ -47,7 +47,7 @@ With the help of [Rook](https://rook.io), the vast majority of the complexity of
So if Ceph is so great, why not use it for everything?
Ceph can be rather slow for small clusters.
-It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimised for you.
+It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimized for you.
Also, if your cluster is small, just running Ceph may eat up a significant amount of the resources you have available.
Troubleshooting Ceph can be difficult if you do not understand its architecture.
@@ -56,7 +56,7 @@ There are very good tools for inspection and debugging, but this is still freque
### Mayastor
-[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilising the modern NVMEoF system.
+[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilizing the modern NVMEoF system.
(Despite the name, Mayastor does _not_ require you to have NVME drives.)
It is fast and lean but still cluster-oriented and cloud native.
Unlike most of the other OpenEBS project, it is _not_ built on the ancient iSCSI system.


@@ -234,7 +234,7 @@ kube-flannel-jknt9 1/1 Running 0 23
Full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Commonly, the control plane endpoint points to a different cluster, as the client certificate


@@ -8,7 +8,7 @@ weight: 5
This release of Talos includes two new closely related
features: [cluster membership discovery](../../guides/discovery/) and [KubeSpan](../../guides/kubespan/).
-KubeSpan is a feature of Talos that automates the setup and maintainance of a full mesh [WireGuard](https://www.wireguard.com) network for your cluster, giving you the ablility to operate hybrid Kubernetes clusters that can span the edge, datacenter, and cloud.
+KubeSpan is a feature of Talos that automates the setup and maintenance of a full mesh [WireGuard](https://www.wireguard.com) network for your cluster, giving you the ablility to operate hybrid Kubernetes clusters that can span the edge, datacenter, and cloud.
Management of keys and discovery of peers can be completely automated for a zero-touch experience that makes it simple and easy to create hybrid clusters.
These new features are not enabled by default, to enable them please make following changes to the machine configuration:
@@ -55,7 +55,7 @@ The address advertised by etcd can now be controlled with [new machine configura
The addresses picked by kubelet can now be controlled with [new machine configuration option](../../reference/configuration/#kubeletconfig) `machine.kubelet.nodeIP.validSubnets`.
-### Windows Suport
+### Windows Support
CLI tool talosctl is now built for Windows and published as part of the [release](https://github.com/talos-systems/talos/releases/tag/v0.13.0).


@@ -30,7 +30,7 @@ For this discussion, we will point out two of these tiers:
See [discovery service](../discovery) to learn more about the external service.
-The Kubernetes-based system utilises annotations on Kubernetes Nodes which describe each node's public key and local addresses.
+The Kubernetes-based system utilizes annotations on Kubernetes Nodes which describe each node's public key and local addresses.
On top of this, we also route Pod subnets.
This is often (maybe even usually) taken care of by the CNI, but there are many situations where the CNI fails to be able to do this itself, across networks.
@@ -70,9 +70,9 @@ However, there is a big problem with IPTables.
It is a common namespace in which any number of other pieces of software may dump things.
We have no surety that what we add will not be wiped out by something else (from Kubernetes itself, to the CNI, to some workload application), be rendered unusable by higher-priority rules, or just generally cause trouble and conflicts.
-Instead, we use a three-pronged system which is both more foundational and less centralised.
+Instead, we use a three-pronged system which is both more foundational and less centralized.
-NFTables offers a separately namespaced, decentralised way of marking packets for later processing based on IP sets.
+NFTables offers a separately namespaced, decentralized way of marking packets for later processing based on IP sets.
Instead of a common set of well-known tables, NFTables uses hooks into the kernel's netfilter system, which are less vulnerable to being usurped, bypassed, or a source of interference than IPTables, but which are rendered down by the kernel to the same underlying XTables system.
Our NFTables system is where we store the IP sets.


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -438,7 +438,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -39,7 +39,7 @@ worker.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `controlplane.yaml`, and `worker.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -147,7 +147,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its' configuration file, and bootstrap itself.
### Bootstrap Etcd


@@ -55,7 +55,7 @@ helm install cilium cilium/cilium \
--set ipam.mode=kubernetes
```
-Or if you want to deploy Cilium in strict mode without kube-proxy, also set some extra paramaters:
+Or if you want to deploy Cilium in strict mode without kube-proxy, also set some extra parameters:
```bash
export KUBERNETES_API_SERVER_ADDRESS=<>
@@ -180,7 +180,7 @@ As the inline manifest is processed from top to bottom make sure to manually put
## Known issues
- Currently there is an interaction between a Kubespan enabled Talos cluster and Cilium that results in the cluster going down during bootstrap after applying the Cilium manifests.
-For more details: [Kubespan and Cilium compatiblity: etcd is failing](https://github.com/talos-systems/talos/issues/4836)
+For more details: [Kubespan and Cilium compatibility: etcd is failing](https://github.com/talos-systems/talos/issues/4836)
- When running Cilium with a kube-proxy eBPF replacement (strict mode) there is a conflicting kernel module that results in locked tx queues.
This can be fixed by blacklisting `aoe_init` with extraKernelArgs.


@@ -49,7 +49,7 @@ With the help of [Rook](https://rook.io), the vast majority of the complexity of
So if Ceph is so great, why not use it for everything?
Ceph can be rather slow for small clusters.
-It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimised for you.
+It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimized for you.
Also, if your cluster is small, just running Ceph may eat up a significant amount of the resources you have available.
Troubleshooting Ceph can be difficult if you do not understand its architecture.
@@ -58,7 +58,7 @@ There are very good tools for inspection and debugging, but this is still freque
### Mayastor
-[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilising the modern NVMEoF system.
+[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilizing the modern NVMEoF system.
(Despite the name, Mayastor does _not_ require you to have NVME drives.)
It is fast and lean but still cluster-oriented and cloud native.
Unlike most of the other OpenEBS project, it is _not_ built on the ancient iSCSI system.


@@ -234,7 +234,7 @@ kube-flannel-jknt9   1/1     Running   0     23
Full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Commonly, the control plane endpoint points to a different cluster, as the client certificate


@@ -168,7 +168,7 @@ _out/integration-test-linux-amd64 \
Whole test suite can be run removing `-test.short` flag.
-Specfic tests can be run with `-test.run=TestIntegration/api.ResetSuite`.
+Specific tests can be run with `-test.run=TestIntegration/api.ResetSuite`.
## Build Flavors


@@ -30,7 +30,7 @@ For this discussion, we will point out two of these tiers:
See [discovery service](../discovery) to learn more about the external service.
-The Kubernetes-based system utilises annotations on Kubernetes Nodes which describe each node's public key and local addresses.
+The Kubernetes-based system utilizes annotations on Kubernetes Nodes which describe each node's public key and local addresses.
On top of this, we also route Pod subnets.
This is often (maybe even usually) taken care of by the CNI, but there are many situations where the CNI fails to be able to do this itself, across networks.
@@ -70,9 +70,9 @@ However, there is a big problem with IPTables.
It is a common namespace in which any number of other pieces of software may dump things.
We have no surety that what we add will not be wiped out by something else (from Kubernetes itself, to the CNI, to some workload application), be rendered unusable by higher-priority rules, or just generally cause trouble and conflicts.
-Instead, we use a three-pronged system which is both more foundational and less centralised.
+Instead, we use a three-pronged system which is both more foundational and less centralized.
-NFTables offers a separately namespaced, decentralised way of marking packets for later processing based on IP sets.
+NFTables offers a separately namespaced, decentralized way of marking packets for later processing based on IP sets.
Instead of a common set of well-known tables, NFTables uses hooks into the kernel's netfilter system, which are less vulnerable to being usurped, bypassed, or a source of interference than IPTables, but which are rendered down by the kernel to the same underlying XTables system.
Our NFTables system is where we store the IP sets.


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -440,7 +440,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -32,7 +32,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create an issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -40,7 +40,7 @@ join.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `init.yaml`, `controlplane.yaml`, and `join.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -175,7 +175,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its configuration file, and bootstrap itself.
### Retrieve the `kubeconfig`


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply through DNS.
For example:


@@ -32,7 +32,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create an issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -40,7 +40,7 @@ join.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `init.yaml`, `controlplane.yaml`, and `join.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -175,7 +175,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its configuration file, and bootstrap itself.
### Retrieve the `kubeconfig`


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply through DNS.
For example:


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -33,7 +33,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create an issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -41,7 +41,7 @@ join.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `init.yaml`, `controlplane.yaml`, and `join.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -176,7 +176,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its configuration file, and bootstrap itself.
### Retrieve the `kubeconfig`


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply through DNS.
For example:


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -33,7 +33,7 @@ created talosconfig
```
> The loadbalancer is used to distribute the load across multiple controlplane nodes.
-> This isn't covered in detail, because we asume some loadbalancing knowledge before hand.
+> This isn't covered in detail, because we assume some loadbalancing knowledge before hand.
> If you think this should be added to the docs, please [create an issue](https://github.com/talos-systems/talos/issues).
At this point, you can modify the generated configs to your liking.


@@ -41,7 +41,7 @@ join.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `init.yaml`, `controlplane.yaml`, and `join.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -173,7 +173,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its configuration file, and bootstrap itself.
### Retrieve the `kubeconfig`


@@ -25,7 +25,7 @@ This is not a requirement, but rather a document to explain some key settings.
#### Endpoint
To configure the `talosctl` endpoint, it is recommended you use a resolvable DNS name.
-This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip adres to the hostname configuration.
+This way, if you decide to upgrade to a multi-controlplane cluster you only have to add the ip address to the hostname configuration.
The configuration can either be done on a Loadbalancer, or simply through DNS.
For example:


@@ -244,7 +244,7 @@ kube-flannel-jknt9   1/1     Running   0     23
Full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Commonly, the control plane endpoint points to a different cluster, as the client certificate


@@ -98,7 +98,7 @@ Can I break my cluster by upgrading everything?
**A.** No.
Nothing prevents the user from sending any number of near-simultaneous upgrades to each node of the cluster.
-While most people would not attempt to do this, it may be the desired behaviour in certain situations.
+While most people would not attempt to do this, it may be the desired behavior in certain situations.
If, however, multiple control plane nodes are asked to upgrade at the same time, Talos will protect itself by making sure only one control plane node upgrades at any time, through its checking of etcd quorum.
A lease is taken out by the winning control plane node, and no other control plane node is allowed to execute the upgrade until the lease is released and the etcd cluster is healthy and _will_ be healthy when the next node performs its upgrade.


@@ -578,7 +578,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -39,7 +39,7 @@ worker.yaml is valid for metal mode
#### Publishing the Machine Configuration Files
In bare-metal setups it is up to the user to provide the configuration files over HTTP(S).
-A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retreive its' configuration file.
+A special kernel parameter (`talos.config`) must be used to inform Talos about _where_ it should retrieve its' configuration file.
To keep things simple we will place `controlplane.yaml`, and `worker.yaml` into Matchbox's `assets` directory.
This directory is automatically served by Matchbox.
@@ -147,7 +147,7 @@ Now, create the following groups, and ensure that the `selector`s are accurate f
### Boot the Machines
-Now that we have our configuraton files in place, boot all the machines.
+Now that we have our configuration files in place, boot all the machines.
Talos will come up on each machine, grab its configuration file, and bootstrap itself.
### Bootstrap Etcd


@@ -55,7 +55,7 @@ helm install cilium cilium/cilium \
--set ipam.mode=kubernetes
```
-Or if you want to deploy Cilium in strict mode without kube-proxy, also set some extra paramaters:
+Or if you want to deploy Cilium in strict mode without kube-proxy, also set some extra parameters:
```bash
export KUBERNETES_API_SERVER_ADDRESS=<>
@@ -180,7 +180,7 @@ As the inline manifest is processed from top to bottom make sure to manually put
## Known issues
- Currently there is an interaction between a Kubespan enabled Talos cluster and Cilium that results in the cluster going down during bootstrap after applying the Cilium manifests.
-For more details: [Kubespan and Cilium compatiblity: etcd is failing](https://github.com/talos-systems/talos/issues/4836)
+For more details: [Kubespan and Cilium compatibility: etcd is failing](https://github.com/talos-systems/talos/issues/4836)
- There are some gotchas when using Talos and Cilium on the Google cloud platform when using internal load balancers.
For more details: [GCP ILB support / support scope local routes to be configured](https://github.com/talos-systems/talos/issues/4109)


@@ -49,7 +49,7 @@ With the help of [Rook](https://rook.io), the vast majority of the complexity of
So if Ceph is so great, why not use it for everything?
Ceph can be rather slow for small clusters.
-It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimised for you.
+It relies heavily on CPUs and massive parallelisation to provide good cluster performance, so if you don't have much of those dedicated to Ceph, it is not going to be well-optimized for you.
Also, if your cluster is small, just running Ceph may eat up a significant amount of the resources you have available.
Troubleshooting Ceph can be difficult if you do not understand its architecture.
@@ -58,7 +58,7 @@ There are very good tools for inspection and debugging, but this is still freque
### Mayastor
-[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilising the modern NVMEoF system.
+[Mayastor](https://github.com/openebs/Mayastor) is an OpenEBS project built in Rust utilizing the modern NVMEoF system.
(Despite the name, Mayastor does _not_ require you to have NVME drives.)
It is fast and lean but still cluster-oriented and cloud native.
Unlike most of the other OpenEBS projects, it is _not_ built on the ancient iSCSI system.


@@ -282,7 +282,7 @@ kube-flannel-jknt9   1/1     Running   0     23
The full error might look like:
```bash
-x509: certificate signed by unknown authority (possiby because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
+x509: certificate signed by unknown authority (possibly because of crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"
```
Usually, this occurs because the control plane endpoint points to a different


@@ -17,7 +17,7 @@ The recommended method to upgrade Kubernetes is to use the `talosctl upgrade-k8s
This will automatically update the components needed to upgrade Kubernetes safely.
Upgrading Kubernetes is non-disruptive to the cluster workloads.
-To trigger a Kubernetes upgrade, issue a command specifiying the version of Kubernetes to ugprade to, such as:
+To trigger a Kubernetes upgrade, issue a command specifying the version of Kubernetes to upgrade to, such as:
`talosctl --nodes <master node> upgrade-k8s --to 1.23.0`


@@ -111,7 +111,7 @@ In this case, Talos will fail to download the upgraded image and will abort the
Sometimes, Talos is unable to successfully kill off all of the disk access points, in which case it cannot safely unmount all filesystems to effect the upgrade.
In this case, it will abort the upgrade and reboot.
-(`upgrade --stage` can ensure that upgrades can occur even when the filesytems cannot be unmounted.)
+(`upgrade --stage` can ensure that upgrades can occur even when the filesystem cannot be unmounted.)
It is possible (especially with test builds) that the upgraded Talos system will fail to start.
In this case, the node will be rebooted, and the bootloader will automatically use the previous Talos kernel and image, thus effectively rolling back the upgrade.


@@ -19,7 +19,7 @@ CentOS is RHEL, but made license-free.
Talos Linux _isn't_ based on any other distribution, so there's no help here.
We often think of ourselves as being the second-generation of
-container-optimised operating systems, where things like CoreOS, Flatcar, and Rancher represent the first generation, but that implies heredity where there is none.
+container-optimized operating systems, where things like CoreOS, Flatcar, and Rancher represent the first generation, but that implies heredity where there is none.
It does, though, allow a conceptual handle to the concept.
Talos Linux is actually a ground-up rewrite of the userspace, from PID 1.
@@ -108,7 +108,7 @@ Luckily, the Talos API makes this easy.
In the old days, Talos Linux had the idea of an `init` node.
The `init` node was a "special" controlplane node which was designated as the
founder of the cluster.
-It was the first, was guaranteed to be the elector, and was authorised to create
+It was the first, was guaranteed to be the elector, and was authorized to create
a cluster...
even if one already existed.
This made the formation of a cluster really easy, but it had a lot of


@@ -168,7 +168,7 @@ _out/integration-test-linux-amd64 \
Whole test suite can be run removing `-test.short` flag.
-Specfic tests can be run with `-test.run=TestIntegration/api.ResetSuite`.
+Specific tests can be run with `-test.run=TestIntegration/api.ResetSuite`.
## Build Flavors


@@ -30,7 +30,7 @@ For this discussion, we will point out two of these tiers:
See [discovery service](../discovery) to learn more about the external service.
-The Kubernetes-based system utilises annotations on Kubernetes Nodes which describe each node's public key and local addresses.
+The Kubernetes-based system utilizes annotations on Kubernetes Nodes which describe each node's public key and local addresses.
On top of this, we also route Pod subnets.
This is often (maybe even usually) taken care of by the CNI, but there are many situations where the CNI fails to be able to do this itself, across networks.
@@ -70,9 +70,9 @@ However, there is a big problem with IPTables.
It is a common namespace in which any number of other pieces of software may dump things.
We have no surety that what we add will not be wiped out by something else (from Kubernetes itself, to the CNI, to some workload application), be rendered unusable by higher-priority rules, or just generally cause trouble and conflicts.
-Instead, we use a three-pronged system which is both more foundational and less centralised.
+Instead, we use a three-pronged system which is both more foundational and less centralized.
-NFTables offers a separately namespaced, decentralised way of marking packets for later processing based on IP sets.
+NFTables offers a separately namespaced, decentralized way of marking packets for later processing based on IP sets.
Instead of a common set of well-known tables, NFTables uses hooks into the kernel's netfilter system, which are less vulnerable to being usurped, bypassed, or a source of interference than IPTables, but which are rendered down by the kernel to the same underlying XTables system.
Our NFTables system is where we store the IP sets.


@@ -442,7 +442,7 @@ The ControllerRuntimeDependency message contains the graph of controller-resourc
### InspectService
The inspect service definition.
-InspectService provides auxilary API to inspect OS internals.
+InspectService provides auxiliary API to inspect OS internals.
| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ------------|


@@ -69,7 +69,7 @@ Apply the config to both nodes.
Now that our nodes are ready, we are ready to bootstrap the Kubernetes cluster.
```powershell
-# Use following command to set node and endpoint permanantly in config so you dont have to type it everytime
+# Use following command to set node and endpoint permanently in config so you dont have to type it everytime
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP


@@ -118,7 +118,7 @@
"dependencies": {
"@babel/helper-function-name": "^7.12.13",
"@babel/helper-member-expression-to-functions": "^7.13.0",
-"@babel/helper-optimise-call-expression": "^7.12.13",
+"@babel/helper-optimize-call-expression": "^7.12.13",
"@babel/helper-replace-supers": "^7.13.0",
"@babel/helper-split-export-declaration": "^7.12.13"
},
@@ -222,7 +222,7 @@
"@babel/types": "^7.13.14"
}
},
-"node_modules/@babel/helper-optimise-call-expression": {
+"node_modules/@babel/helper-optimize-call-expression": {
"version": "7.12.13",
"resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.12.13.tgz",
"integrity": "sha512-BdWQhoVJkp6nVjB7nkFWcn43dkprYauqtk++Py2eaf/GRDFm5BxRqEIZCiHlZUGAVmtwKcsVL1dC68WmzeFmiA==",
@@ -251,7 +251,7 @@
"integrity": "sha512-Gz1eiX+4yDO8mT+heB94aLVNCL+rbuT2xy4YfyNqu8F+OI6vMvJK891qGBTqL9Uc8wxEvRW92Id6G7sDen3fFw==",
"dependencies": {
"@babel/helper-member-expression-to-functions": "^7.13.12",
-"@babel/helper-optimise-call-expression": "^7.12.13",
+"@babel/helper-optimize-call-expression": "^7.12.13",
"@babel/traverse": "^7.13.0",
"@babel/types": "^7.13.12"
}
@@ -729,7 +729,7 @@
"dependencies": {
"@babel/helper-annotate-as-pure": "^7.12.13",
"@babel/helper-function-name": "^7.12.13",
-"@babel/helper-optimise-call-expression": "^7.12.13",
+"@babel/helper-optimize-call-expression": "^7.12.13",
"@babel/helper-plugin-utils": "^7.13.0",
"@babel/helper-replace-supers": "^7.13.0",
"@babel/helper-split-export-declaration": "^7.12.13",
@@ -17307,7 +17307,7 @@
"requires": {
"@babel/helper-function-name": "^7.12.13",
"@babel/helper-member-expression-to-functions": "^7.13.0",
-"@babel/helper-optimise-call-expression": "^7.12.13",
+"@babel/helper-optimize-call-expression": "^7.12.13",
"@babel/helper-replace-supers": "^7.13.0",
"@babel/helper-split-export-declaration": "^7.12.13"
}
@@ -17402,7 +17402,7 @@
"@babel/types": "^7.13.14"
}
},
-"@babel/helper-optimise-call-expression": {
+"@babel/helper-optimize-call-expression": {
"version": "7.12.13",
"resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.12.13.tgz",
"integrity": "sha512-BdWQhoVJkp6nVjB7nkFWcn43dkprYauqtk++Py2eaf/GRDFm5BxRqEIZCiHlZUGAVmtwKcsVL1dC68WmzeFmiA==",
@@ -17431,7 +17431,7 @@
"integrity": "sha512-Gz1eiX+4yDO8mT+heB94aLVNCL+rbuT2xy4YfyNqu8F+OI6vMvJK891qGBTqL9Uc8wxEvRW92Id6G7sDen3fFw==",
"requires": {
"@babel/helper-member-expression-to-functions": "^7.13.12",
-"@babel/helper-optimise-call-expression": "^7.12.13",
+"@babel/helper-optimize-call-expression": "^7.12.13",
"@babel/traverse": "^7.13.0",
"@babel/types": "^7.13.12"
}
@@ -17801,7 +17801,7 @@
"requires": {
"@babel/helper-annotate-as-pure": "^7.12.13",
"@babel/helper-function-name": "^7.12.13",
-"@babel/helper-optimise-call-expression": "^7.12.13",
+"@babel/helper-optimize-call-expression": "^7.12.13",
"@babel/helper-plugin-utils": "^7.13.0",
"@babel/helper-replace-supers": "^7.13.0",
"@babel/helper-split-export-declaration": "^7.12.13",