[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]
Network configuration can be done either via the GUI, or by manually
editing the file `/etc/network/interfaces`, which contains the
whole network configuration. The `interfaces(5)` manual page contains the
complete format description. All {pve} tools try hard to preserve such
direct user modifications, but using the GUI is still preferable, because
it protects you from errors.
Once the network is configured, you can use the traditional Debian tools `ifup`
and `ifdown` to bring interfaces up and down.
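For example, to re-apply a changed configuration to the first on-board NIC
(assuming the `eno1` device name used throughout this chapter), you could run:

----
# take the interface down and bring it back up with the new settings
ifdown eno1
ifup eno1
----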
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
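If such a pending file exists, you can review the staged changes before
rebooting, for example:

----
# compare the active configuration with the staged one
diff -u /etc/network/interfaces /etc/network/interfaces.new
----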
Naming Conventions
~~~~~~~~~~~~~~~~~~
We currently use the following naming conventions for device names:
* Ethernet devices: en*, systemd network interface names. This naming scheme is
used for new {pve} installations since version 5.0.
* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...) This naming
scheme is used for {pve} hosts which were installed before the 5.0
release. When upgrading to 5.0, the names are kept as-is.
* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)
* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`); see the example below.

This makes it easier to debug network problems, because the device
name implies the device type.
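As an illustration, a VLAN interface for VLAN 50 on top of the first on-board
NIC could be declared like this in `/etc/network/interfaces` (a minimal sketch,
assuming the Debian `vlan` package is installed; the address is just an example):

----
auto eno1.50
iface eno1.50 inet static
        address 192.168.50.2
        netmask 255.255.255.0
----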
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Systemd uses the two character prefix 'en' for Ethernet network
devices. The next characters depend on the device driver and on which
schema matches first.
* o<index>[n<phys_port_name>|d<dev_port>] — devices on board
* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id
* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id
* x<MAC> — device by MAC address
The most common patterns are:
* eno1 — the first on-board NIC
* enp3s0f1 — the NIC on PCI bus 3, slot 0, using NIC function 1.
For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
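To see which names were assigned on a particular host, you can simply list the
network devices, for example:

----
# list all network devices and their current state
ip link
----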
Choosing a network configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Depending on your current network organization and your resources you can
choose either a bridged, routed, or masquerading networking setup.
{pve} server in a private LAN, using an external gateway to reach the internet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *Bridged* model makes the most sense in this case, and this is also
the default mode on new {pve} installations.
Each of your guest systems will have a virtual interface attached to the
{pve} bridge. This is similar in effect to having the guest network card
directly connected to a new switch on your LAN, with the {pve} host playing
the role of the switch.
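As an illustration, a virtual NIC of a hypothetical VM 100 could be attached to
the default bridge like this (the VM ID and the NIC model are just examples):

----
# attach the first virtual NIC of VM 100 to vmbr0 using the virtio model
qm set 100 --net0 virtio,bridge=vmbr0
----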
{pve} server at hosting provider, with public IP ranges for Guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For this setup, you can use either a *Bridged* or *Routed* model, depending on
what your provider allows.
{pve} server at hosting provider, with a single public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In that case the only way to get outgoing network access for your guest
systems is to use *Masquerading*. For incoming network access to your guests,
you will need to configure *Port Forwarding* (see the sketch after the
masquerading example below).
For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bridges are like physical network switches implemented in software.
All VMs can share a single bridge, or you can create multiple bridges to
separate network domains. Each host can have up to 4094 bridges.
The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card. The corresponding
configuration in `/etc/network/interfaces` might look like this:
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----
Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
Routed Configuration
~~~~~~~~~~~~~~~~~~~~
Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
A common scenario is that you have a public IP (assume `198.51.100.5`
for this example), and an additional IP block for your VMs
(`203.0.113.16/29`). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 198.51.100.5
        netmask 255.255.255.0
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17
        netmask 255.255.255.248
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
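In this routed setup, a guest using an address from the `203.0.113.16/29` block
uses the bridge address as its gateway. A minimal sketch of such a guest's
configuration (Debian-style, as an example; the exact syntax depends on the
guest OS):

----
# inside the guest: /etc/network/interfaces
auto eth0
iface eth0 inet static
        address 203.0.113.18
        netmask 255.255.255.248
        gateway 203.0.113.17
----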
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Masquerading allows guests having only a private IP address to access the
network by using the host IP address for outgoing traffic. Each outgoing
packet is rewritten by `iptables` to appear as originating from the host,
and responses are rewritten accordingly to be routed to the original sender.
----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5
        netmask 255.255.255.0
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
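For incoming connections to a masqueraded guest you additionally need port
forwarding (DNAT). As an illustration only, forwarding TCP port 2222 on the host
to SSH on a hypothetical guest at `10.10.10.2` could be done with rules like the
following, added to the `vmbr0` stanza in the same way as the masquerading rules:

----
post-up   iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.2:22
post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.2:22
----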
Linux Bond
~~~~~~~~~~
Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used
to achieve different goals, such as making the network fault-tolerant,
increasing performance, or both.
High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to the
remaining cable in case of network trouble.
Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE cluster nodes.
There are 7 modes for bonding:
* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.
* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port), to avoid confusing the
network switch. This mode provides fault tolerance.
* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.
* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.
* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.
* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.
* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using
the corresponding bonding mode (802.3ad). Otherwise you should generally use the
active-backup mode. +
// http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
If you intend to run your cluster network on the bonding interfaces, then you
have to use the active-backup mode on the bonding interfaces; other modes are
unsupported.
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.
.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

# a third NIC, used below as the bridge port for the guest network
iface eno3 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
----
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
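Once a bond is up, you can inspect its current state (active slaves, link
status, LACP information) via the kernel's bonding status file, for example:

----
# show the current status of bond0 and its slaves
cat /proc/net/bonding/bond0
----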
////
TODO: explain IPv6 support?
TODO: explain OVS
////