[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]
{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.
For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.
Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.
NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.
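If you do edit the file by hand, the traditional Debian tools can
apply your changes to a single interface without a reboot. A minimal
sketch (`vmbr0` is purely illustrative):

----
# take the interface down and bring it up again to apply the new settings
ifdown vmbr0
ifup vmbr0
----

Be careful when doing this remotely: taking down the interface your
session runs over will cut the connection.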
Naming Conventions
~~~~~~~~~~~~~~~~~~
We currently use the following naming conventions for device names:
* New Ethernet devices: en*, systemd network interface names.
* Legacy Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)
These names remain in use on hosts that were upgraded from an earlier Proxmox VE version.
* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)
* VLANs: Simply add the VLAN number to the device name,
separated by a period (`eno1.50`, `bond1.30`)
This makes it easier to debug network problems, because the device
name implies the device type.
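As an example of the VLAN convention, the following sketch defines
VLAN 50 on top of `eno1` (the VLAN ID is purely illustrative; the
`vlan` package must be installed for `ifupdown` to handle such names):

----
# VLAN 50 on the physical NIC eno1, named eno1.50 per the conventions above
auto eno1.50
iface eno1.50 inet manual
----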
Systemd Network Interface Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Systemd uses the two-character prefix 'en' for Ethernet network
devices. The following characters depend on the device driver and on
which schema matches first.
* o<index>[n<phys_port_name>|d<dev_port>] — devices on board
* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id
* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id
* x<MAC> — device by MAC address
The most common patterns are:
* eno1 — the first on-board NIC
* enp3s0f1 — the NIC on PCI bus 3, slot 0, using NIC function 1
For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
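To see which names were actually assigned on a given host, you can
simply list the network links:

----
# print all network interfaces known to the kernel
ip link
----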
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The installation program creates a single bridge named `vmbr0`, which
is connected to the first Ethernet card `eno1`. The corresponding
configuration in `/etc/network/interfaces` looks like this:
----
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----
Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
Routed Configuration
~~~~~~~~~~~~~~~~~~~~
Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.
TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.
You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.
A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:
----
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
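After bringing the interfaces up, a quick sanity check (not part of
the configuration itself) is to confirm that forwarding and proxy ARP
are enabled:

----
# both commands should print 1
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/conf/eno1/proxy_arp
----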
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:
----
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----
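Once `vmbr0` is up, you can verify that the masquerading rule was
installed:

----
# the MASQUERADE rule for 10.10.10.0/24 should be listed
iptables -t nat -S POSTROUTING
----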
Linux Bond
~~~~~~~~~~
Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It makes it
possible to achieve different goals, like making the network
fault-tolerant, increasing the performance, or both together.
High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches and the bonded connection will failover to one
cable or the other in case of network trouble.
Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE Cluster nodes.
There are 7 modes for bonding:
* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.
* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.
* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.
* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.
* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.
* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.
* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.
For most setups, active-backup is the best choice. If your switch
supports LACP (IEEE 802.3ad), that mode should be preferred.
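Once a bond is up, the kernel exposes its current state, including the
active mode and the status of each slave, under `/proc/net/bonding`:

----
# show bonding mode, MII status and slave details for bond0
cat /proc/net/bonding/bond0
----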
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network is fault-tolerant.
.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        slaves eno1 eno2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
----
Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.
.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----
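Following the naming conventions above, a VLAN can also be layered on
top of a bond. A minimal sketch (VLAN 30 is purely illustrative):

----
# VLAN 30 on top of the bond, named bond0.30 per the conventions above
auto bond0.30
iface bond0.30 inet manual
----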
////
TODO: explain IPv6 support?
TODO: explain OVS
////