<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>LXC container driver</h1>
<ul id="toc"></ul>
<p>
The libvirt LXC driver manages "Linux Containers". At their simplest, containers
can just be thought of as a collection of processes, separated from the main
host processes via a set of resource namespaces and constrained via control
groups resource tunables. The libvirt LXC driver has no dependency on the LXC
userspace tools hosted on sourceforge.net. It directly utilizes the relevant
kernel features to build the container environment. This allows for sharing
of many libvirt technologies across both the QEMU/KVM and LXC drivers. In
particular, this brings sVirt for mandatory access control, auditing of
operations, integration with control groups, and many other features.
</p>
<h2><a name="cgroups">Control groups Requirements</a></h2>
<p>
In order to control the resource usage of processes inside containers, the
libvirt LXC driver requires that certain cgroups controllers are mounted on
the host OS. The minimum required controllers are 'cpuacct', 'memory' and
'devices', while recommended extra controllers are 'cpu', 'freezer' and
'blkio'. Libvirt will not mount the cgroups filesystem itself, leaving
this up to the init system to take care of. Systemd will do the right thing
in this respect, while for other init systems the <code>cgconfig</code>
init service will be required. For further information, consult the general
libvirt <a href="cgroups.html">cgroups documentation</a>.
</p>
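<p>
A quick way to check which controllers the host kernel knows about, and
where the cgroups filesystem is mounted, is for example (output varies
by distribution and init system):
</p>
<pre>
# cat /proc/cgroups
# grep cgroup /proc/self/mountinfo
</pre>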
<h2><a name="namespaces">Namespace requirements</a></h2>
<p>
In order to separate processes inside a container from those in the
primary "host" OS environment, the libvirt LXC driver requires that
certain kernel namespaces are compiled in. Libvirt currently requires
the 'mount', 'ipc', 'pid', and 'uts' namespaces to be available. If
separate network interfaces are desired, then the 'net' namespace is
required. If the guest configuration declares a
<a href="formatdomain.html#elementsOSContainer">UID or GID mapping</a>,
the 'user' namespace will be enabled to apply these. <strong>A suitably
configured UID/GID mapping is a pre-requisite to making containers
secure, in the absence of sVirt confinement.</strong>
</p>
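<p>
The namespaces supported by the running kernel can be checked by listing
the namespace entries of any process (the exact set shown varies with
kernel version):
</p>
<pre>
# ls /proc/self/ns
ipc  mnt  net  pid  user  uts
</pre>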
<h2><a name="init">Default container setup</a></h2>
<h3><a name="cliargs">Command line arguments</a></h3>
<p>
When the container "init" process is started, it will typically
not be given any command line arguments (eg the equivalent of
the bootloader args visible in <code>/proc/cmdline</code>). If
any arguments are desired, then must be explicitly set in the
container XML configuration via one or more <code>initarg</code>
elements. For example, to run <code>systemd --unit emergency.service</code>
would use the following XML
</p>
<pre>
&lt;os&gt;
  &lt;type arch='x86_64'&gt;exe&lt;/type&gt;
  &lt;init&gt;/bin/systemd&lt;/init&gt;
  &lt;initarg&gt;--unit&lt;/initarg&gt;
  &lt;initarg&gt;emergency.service&lt;/initarg&gt;
&lt;/os&gt;
</pre>
<h3><a name="envvars">Environment variables</a></h3>
<p>
When the container "init" process is started, it will be given several useful
environment variables. The following standard environment variables are mandated
by the <a href="http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface">systemd container interface</a>
to be provided by all container technologies on Linux.
</p>
<dl>
<dt>container</dt>
<dd>The fixed string <code>libvirt-lxc</code> to identify libvirt as the creator</dd>
<dt>container_uuid</dt>
<dd>The UUID assigned to the container by libvirt</dd>
<dt>PATH</dt>
<dd>The fixed string <code>/bin:/usr/bin</code></dd>
<dt>TERM</dt>
<dd>The fixed string <code>linux</code></dd>
</dl>
<p>
In addition to the standard variables, the following libvirt specific
environment variables are also provided
</p>
<dl>
<dt>LIBVIRT_LXC_NAME</dt>
<dd>The name assigned to the container by libvirt</dd>
<dt>LIBVIRT_LXC_UUID</dt>
<dd>The UUID assigned to the container by libvirt</dd>
<dt>LIBVIRT_LXC_CMDLINE</dt>
<dd>The unparsed command line arguments specified in the container configuration.
Use of this is discouraged, in favour of passing arguments directly to the
container init process via the <code>initarg</code> config element.</dd>
</dl>
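<p>
As an illustration, a container init script could report these variables
as follows (a minimal sketch, not part of the standard setup):
</p>
<pre>
#!/bin/sh
# Print the identification data passed in by libvirt
echo "container technology: $container"
echo "container UUID: $container_uuid"
echo "libvirt domain name: $LIBVIRT_LXC_NAME"
</pre>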
<h3><a name="fsmounts">Filesystem mounts</a></h3>
<p>
In the absence of any explicit configuration, the container will
inherit the host OS filesystem mounts. A number of mount points will
be made read only, or re-mounted with new instances to provide
container specific data. The following special mounts are set up
by libvirt
</p>
<ul>
<li><code>/dev</code> a new "tmpfs" pre-populated with authorized device nodes</li>
<li><code>/dev/pts</code> a new private "devpts" instance for console devices</li>
<li><code>/sys</code> the host "sysfs" instance remounted read-only</li>
<li><code>/proc</code> a new instance of the "proc" filesystem</li>
<li><code>/proc/sys</code> the host "/proc/sys" bind-mounted read-only</li>
<li><code>/sys/fs/selinux</code> the host "selinux" instance remounted read-only</li>
<li><code>/sys/fs/cgroup/NNNN</code> the host cgroups controllers bind-mounted to
only expose the sub-tree associated with the container</li>
<li><code>/proc/meminfo</code> a FUSE backed file reflecting memory limits of the container</li>
</ul>
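<p>
These special mounts can be inspected from inside a running container
with standard tools, for example:
</p>
<pre>
# cat /proc/mounts
# findmnt /proc/meminfo
</pre>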
<h3><a name="devnodes">Device nodes</a></h3>
<p>
The container init process will be started with <code>CAP_MKNOD</code>
capability removed and blocked from re-acquiring it. As such it will
not be able to create any device nodes in <code>/dev</code> or anywhere
else in its filesystems. Libvirt itself will take care of pre-populating
the <code>/dev</code> filesystem with any devices that the container
is authorized to use. The current devices that will be made available
to all containers are
</p>
<ul>
<li><code>/dev/zero</code></li>
<li><code>/dev/null</code></li>
<li><code>/dev/full</code></li>
<li><code>/dev/random</code></li>
<li><code>/dev/urandom</code></li>
<li><code>/dev/stdin</code> symlinked to <code>/proc/self/fd/0</code></li>
<li><code>/dev/stdout</code> symlinked to <code>/proc/self/fd/1</code></li>
<li><code>/dev/stderr</code> symlinked to <code>/proc/self/fd/2</code></li>
<li><code>/dev/fd</code> symlinked to <code>/proc/self/fd</code></li>
<li><code>/dev/ptmx</code> symlinked to <code>/dev/pts/ptmx</code></li>
<li><code>/dev/console</code> symlinked to <code>/dev/pts/0</code></li>
</ul>
<p>
In addition, for every console defined in the guest configuration,
a symlink will be created from <code>/dev/ttyN</code> to
the corresponding <code>/dev/pts/M</code> pseudo TTY device. The
first console will be <code>/dev/tty1</code>, with further consoles
numbered incrementally from there.
</p>
<p>
Since <code>/dev/ttyN</code> and <code>/dev/console</code> are symlinks
to pts devices, the controlling TTY of the login program is a pts
device. The <code>pam_securetty</code> module may therefore prevent the
root user from logging in to the container. To allow root logins, add
the relevant pts devices to the container's <code>/etc/securetty</code> file.
</p>
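<p>
For example, to permit root logins on the console, run the following
inside the container (the exact pts number to add depends on which
console the login runs on):
</p>
<pre>
# echo "pts/0" &gt;&gt; /etc/securetty
</pre>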
<p>
Further block or character devices will be made available to containers
depending on their configuration.
</p>
<h2><a name="security">Security considerations</a></h2>
<p>
The libvirt LXC driver is fairly flexible in how it can be configured,
and as such does not enforce a requirement for strict security
separation between a container and the host. This allows it to be used
in scenarios where only resource control capabilities are important,
and resource sharing is desired. Applications wishing to ensure secure
isolation between a container and the host must ensure that they are
writing a suitable configuration.
</p>
<h3><a name="securenetworking">Network isolation</a></h3>
<p>
If the guest configuration does not list any network interfaces,
the <code>network</code> namespace will not be activated, and thus
the container will see all the host's network interfaces. This will
allow apps in the container to bind to/connect from TCP/UDP addresses
and ports from the host OS. It also allows applications to access
UNIX domain sockets associated with the host OS, which are in the
abstract namespace. If access to UNIX domain sockets in the abstract
namespace is not wanted, then applications should set the
<code>&lt;privnet/&gt;</code> flag in the
<code>&lt;features&gt;....&lt;/features&gt;</code> element.
</p>
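<p>
The relevant fragment of the domain XML would look like this:
</p>
<pre>
&lt;features&gt;
  &lt;privnet/&gt;
&lt;/features&gt;
</pre>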
<h3><a name="securefs">Filesystem isolation</a></h3>
<p>
If the guest configuration does not list any filesystems, then
the container will be set up with a root filesystem that matches
the host's root filesystem. As noted earlier, only a few locations
such as <code>/dev</code>, <code>/proc</code> and <code>/sys</code>
will be altered. This means that, in the absence of restrictions
from sVirt, a process running as user/group N:M inside the container
will be able to access almost exactly the same files as a process
running as user/group N:M in the host.
</p>
<p>
There are multiple options for restricting this. It is possible to
simply map the existing root filesystem through to the container in
read-only mode. Alternatively a completely separate root filesystem
can be configured for the guest. In both cases, further sub-mounts
can be applied to customize the content that is made visible. Note
that in the absence of sVirt controls, it is still possible for the
root user in a container to unmount any sub-mounts applied. The user
namespace feature can also be used to restrict access to files based
on the UID/GID mappings.
</p>
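<p>
As a sketch of the first option, the host root filesystem can be mapped
through to the container read-only with a filesystem device along these
lines:
</p>
<pre>
&lt;filesystem type='mount'&gt;
  &lt;source dir='/'/&gt;
  &lt;target dir='/'/&gt;
  &lt;readonly/&gt;
&lt;/filesystem&gt;
</pre>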
<p>
Sharing the host filesystem tree also allows applications to access
UNIX domain sockets associated with the host OS, which live in the
filesystem namespace. It should be noted that a number of init
systems, including at least <code>systemd</code> and <code>upstart</code>,
have UNIX domain sockets which are used to control their operation.
Thus, if the directory/filesystem holding their UNIX domain socket is
exposed to the container, it will be possible for a user in the container
to invoke operations on the init service in the same way it could from
outside the container. This also applies to other applications in the
host which use UNIX domain sockets in the filesystem, such as DBus,
Libvirtd, and many more. If this is not desired, then applications
should either specify the UID/GID mapping in the configuration to
enable user namespaces and thus block access to the UNIX domain socket
based on permissions, or should ensure the relevant directories have
a bind mount to hide them. This is particularly important for the
<code>/run</code> or <code>/var/run</code> directories.
</p>
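<p>
One way to hide such directories is to mount a fresh "tmpfs" instance
over them, for example (the size chosen here is arbitrary):
</p>
<pre>
&lt;filesystem type='ram'&gt;
  &lt;source usage='1024' units='KiB'/&gt;
  &lt;target dir='/run'/&gt;
&lt;/filesystem&gt;
</pre>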
<h3><a name="secureusers">User and group isolation</a></h3>
<p>
If the guest configuration does not list any ID mapping, then the
user and group IDs used inside the container will match those used
outside the container. In addition, the capabilities associated with
a process in the container will confer the same privileges they would
for a process in the host. This has obvious implications for security,
since a root user inside the container will be able to access any
file owned by root that is visible to the container, and perform more
or less any privileged kernel operation. In the absence of additional
protection from sVirt, this means that the root user inside a container
is effectively as powerful as the root user in the host. There is no
security isolation of the root user.
</p>
<p>
The ID mapping facility was introduced to allow for stricter control
over the privileges of users inside the container. It allows apps to
define rules such as "user ID 0 in the container maps to user ID 1000
in the host". In addition the privileges associated with capabilities
are somewhat reduced so that they cannot be used to escape from the
container environment. A full description of user namespaces is outside
the scope of this document, however LWN has
<a href="https://lwn.net/Articles/532593/">a good write-up on the topic</a>.
From the libvirt point of view, the key thing to remember is that defining
an ID mapping for users and groups in the container XML configuration
causes libvirt to activate the user namespace feature.
</p>
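<p>
For illustration, the rule quoted above ("user ID 0 in the container
maps to user ID 1000 in the host") would be expressed in the domain XML
as follows, where the <code>count</code> value chosen here is arbitrary:
</p>
<pre>
&lt;idmap&gt;
  &lt;uid start='0' target='1000' count='10'/&gt;
  &lt;gid start='0' target='1000' count='10'/&gt;
&lt;/idmap&gt;
</pre>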
<h2><a name="activation">Systemd Socket Activation Integration</a></h2>
<p>
The libvirt LXC driver provides the ability to pass across pre-opened file
descriptors when starting LXC guests. This allows for libvirt LXC to support
systemd's <a href="http://0pointer.de/blog/projects/socket-activated-containers.html">socket
activation capability</a>, where an incoming client connection
in the host OS will trigger the startup of a container, which runs another
copy of systemd which gets passed the server socket, and then activates the
actual service handler in the container.
</p>
<p>
Let us assume that you already have an LXC guest created, running
a systemd instance as PID 1 inside the container, which has an
SSHD service configured. The goal is to automatically activate
the container when the first SSH connection is made. The first
step is to create a couple of unit files for the host OS systemd
instance. The <code>/etc/systemd/system/mycontainer.service</code>
unit file specifies how systemd will start the libvirt LXC container
</p>
<pre>
[Unit]
Description=My little container
[Service]
ExecStart=/usr/bin/virsh -c lxc:/// start --pass-fds 3 mycontainer
ExecStop=/usr/bin/virsh -c lxc:/// destroy mycontainer
Type=oneshot
RemainAfterExit=yes
KillMode=none
</pre>
<p>
The <code>--pass-fds 3</code> argument specifies that the file
descriptor number 3 that <code>virsh</code> inherits from systemd
is to be passed into the container. Since <code>virsh</code> will
exit immediately after starting the container, the <code>RemainAfterExit</code>
and <code>KillMode</code> settings must be altered from their defaults.
</p>
<p>
Next, the <code>/etc/systemd/system/mycontainer.socket</code> unit
file is created to get the host systemd to listen on port 23 for
TCP connections. When this unit file is activated by the first
incoming connection, it will cause the <code>mycontainer.service</code>
unit to be activated with the FD corresponding to the listening TCP
socket passed in as FD 3.
</p>
<pre>
[Unit]
Description=The SSH socket of my little container
[Socket]
ListenStream=23
</pre>
<p>
Port 23 was picked here so that the container doesn't conflict
with the host's SSH which is on the normal port 22. That's it
in terms of host side configuration.
</p>
<p>
Inside the container, the <code>/etc/systemd/system/sshd.socket</code>
unit file must be created
</p>
<pre>
[Unit]
Description=SSH Socket for Per-Connection Servers
[Socket]
ListenStream=23
Accept=yes
</pre>
<p>
The <code>ListenStream</code> value listed in this unit file must
match the value used in the host file. When systemd in the container
receives the pre-opened FD from libvirt during container startup, it
looks at the <code>ListenStream</code> values to figure out which
FD to give to which service. The actual service to start is defined
by a correspondingly named <code>/etc/systemd/system/sshd@.service</code>
</p>
<pre>
[Unit]
Description=SSH Per-Connection Server for %I
[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
</pre>
<p>
Finally, make sure this SSH service is set to start on boot of the container,
by running the following command inside the container:
</p>
<pre>
# mkdir -p /etc/systemd/system/sockets.target.wants/
# ln -s /etc/systemd/system/sshd.socket /etc/systemd/system/sockets.target.wants/
</pre>
<p>
This example shows how to activate the container based on an incoming
SSH connection. If the container was also configured to have an httpd
service, it may be desirable to activate it upon either an httpd or an
sshd connection attempt. In this case, the <code>mycontainer.socket</code>
file in the host would simply list multiple socket ports. Inside the
container a separate <code>xxxxx.socket</code> file would need to be
created for each service, with a corresponding <code>ListenStream</code>
value set.
</p>
<!--
<h2>Container configuration</h2>
<h3>Init process</h3>
<h3>Console devices</h3>
<h3>Filesystem devices</h3>
<h3>Disk devices</h3>
<h3>Block devices</h3>
<h3>USB devices</h3>
<h3>Character devices</h3>
<h3>Network devices</h3>
-->
<h2>Container security</h2>
<h3>sVirt SELinux</h3>
<p>
In the absence of the "user" namespace being used, containers cannot
be considered secure against exploits of the host OS. The sVirt SELinux
driver provides a way to secure containers even when the "user" namespace
is not used. The cost is that writing a policy to allow execution of
an arbitrary OS is not practical. The SELinux sVirt policy is typically
tailored to work with a simpler application confinement use case,
as provided by the "libvirt-sandbox" project.
</p>
<h3>Auditing</h3>
<p>
The LXC driver is integrated with libvirt's auditing subsystem, which
causes audit messages to be logged whenever an operation performed
against a container has an impact on host resources. For example,
start/stop and device hotplug operations will all log audit messages
providing details about the action that occurred and any resources
associated with it. There are 3 types of audit messages
</p>
<ul>
<li><code>VIRT_MACHINE_ID</code> - details of the SELinux process and
image security labels assigned to the container.</li>
<li><code>VIRT_CONTROL</code> - details of an action / operation
performed against a container. There are the following types of
operation
<ul>
<li><code>op=start</code> - a container has been started. Provides
the machine name, uuid and PID of the <code>libvirt_lxc</code>
controller process</li>
<li><code>op=init</code> - the init PID of the container has been
started. Provides the machine name, uuid and PID of the
<code>libvirt_lxc</code> controller process and PID of the
init process (in the host PID namespace)</li>
<li><code>op=stop</code> - a container has been stopped. Provides
the machine name, uuid</li>
</ul>
</li>
<li><code>VIRT_RESOURCE</code> - details of a host resource
associated with a container action.</li>
</ul>
<h3>Device access</h3>
<p>
All containers are launched with the CAP_MKNOD capability cleared
and removed from the bounding set. Libvirt will ensure that the
/dev filesystem is pre-populated with all devices that a container
is allowed to use. In addition, the cgroup "devices" controller is
configured to block read/write/mknod for all devices except those
that a container is authorized to use.
</p>
<h2><a name="exconfig">Example configurations</a></h2>
<h3>Example config version 1</h3>
<p></p>
<pre>
&lt;domain type='lxc'&gt;
  &lt;name&gt;vm1&lt;/name&gt;
  &lt;memory&gt;500000&lt;/memory&gt;
  &lt;os&gt;
    &lt;type&gt;exe&lt;/type&gt;
    &lt;init&gt;/bin/sh&lt;/init&gt;
  &lt;/os&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;clock offset='utc'/&gt;
  &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
  &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
  &lt;on_crash&gt;destroy&lt;/on_crash&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/libexec/libvirt_lxc&lt;/emulator&gt;
    &lt;interface type='network'&gt;
      &lt;source network='default'/&gt;
    &lt;/interface&gt;
    &lt;console type='pty'/&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
</pre>
<p>
In the &lt;emulator&gt; element, be sure you specify the correct path
to libvirt_lxc, if it does not live in /usr/libexec on your system.
</p>
<p>
The next example assumes there is a private root filesystem
(perhaps hand-crafted using busybox, or installed from media,
debootstrap, whatever) under /opt/vm-1-root:
</p>
<p></p>
<pre>
&lt;domain type='lxc'&gt;
  &lt;name&gt;vm1&lt;/name&gt;
  &lt;memory&gt;32768&lt;/memory&gt;
  &lt;os&gt;
    &lt;type&gt;exe&lt;/type&gt;
    &lt;init&gt;/init&lt;/init&gt;
  &lt;/os&gt;
  &lt;vcpu&gt;1&lt;/vcpu&gt;
  &lt;clock offset='utc'/&gt;
  &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
  &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
  &lt;on_crash&gt;destroy&lt;/on_crash&gt;
  &lt;devices&gt;
    &lt;emulator&gt;/usr/libexec/libvirt_lxc&lt;/emulator&gt;
    &lt;filesystem type='mount'&gt;
      &lt;source dir='/opt/vm-1-root'/&gt;
      &lt;target dir='/'/&gt;
    &lt;/filesystem&gt;
    &lt;interface type='network'&gt;
      &lt;source network='default'/&gt;
    &lt;/interface&gt;
    &lt;console type='pty'/&gt;
  &lt;/devices&gt;
&lt;/domain&gt;
</pre>
<h2><a name="usage">Container usage / management</a></h2>
<p>
As with any libvirt virtualization driver, LXC containers can be
managed via a wide variety of libvirt based tools. At the lowest
level the <code>virsh</code> command can be used to perform many
tasks, by passing the <code>-c lxc:///</code> argument. As an
alternative to repeating the URI with every command, the <code>LIBVIRT_DEFAULT_URI</code>
environment variable can be set to <code>lxc:///</code>. The
examples that follow outline some common operations with virsh
and LXC. For further details about usage of virsh consult its
manual page.
</p>
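<p>
For example, to avoid repeating the URI with every invocation:
</p>
<pre>
# export LIBVIRT_DEFAULT_URI=lxc:///
# virsh list --all
</pre>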
<h3><a name="usageSave">Defining (saving) container configuration</a></h3>
<p>
The <code>virsh define</code> command takes an XML configuration
document and loads it into libvirt, saving the configuration on disk
</p>
<pre>
# virsh -c lxc:/// define myguest.xml
</pre>
<h3><a name="usageView">Viewing container configuration</a></h3>
<p>
The <code>virsh dumpxml</code> command can be used to view the
current XML configuration of a container. By default the XML
output reflects the current state of the container. If the
container is running, it is possible to explicitly request the
persistent configuration, instead of the current live configuration
using the <code>--inactive</code> flag
</p>
<pre>
# virsh -c lxc:/// dumpxml myguest
</pre>
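<p>
To view the persistent configuration of a running container instead:
</p>
<pre>
# virsh -c lxc:/// dumpxml --inactive myguest
</pre>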
<h3><a name="usageStart">Starting containers</a></h3>
<p>
The <code>virsh start</code> command can be used to start a
container from a previously defined persistent configuration
</p>
<pre>
# virsh -c lxc:/// start myguest
</pre>
<p>
It is also possible to start so called "transient" containers,
which do not require a persistent configuration to be saved
by libvirt, using the <code>virsh create</code> command.
</p>
<pre>
# virsh -c lxc:/// create myguest.xml
</pre>
<h3><a name="usageStop">Stopping containers</a></h3>
<p>
The <code>virsh shutdown</code> command can be used
to request a graceful shutdown of the container. By default
this command will first attempt to send a message to the
init process via the <code>/dev/initctl</code> device node.
If no such device node exists, then it will send SIGTERM
to PID 1 inside the container.
</p>
<pre>
# virsh -c lxc:/// shutdown myguest
</pre>
<p>
If the container does not respond to the graceful shutdown
request, it can be forcibly stopped using the <code>virsh destroy</code> command
</p>
<pre>
# virsh -c lxc:/// destroy myguest
</pre>
<h3><a name="usageReboot">Rebooting a container</a></h3>
<p>
The <code>virsh reboot</code> command can be used
to request a graceful reboot of the container. By default
this command will first attempt to send a message to the
init process via the <code>/dev/initctl</code> device node.
If no such device node exists, then it will send SIGHUP
to PID 1 inside the container.
</p>
<pre>
# virsh -c lxc:/// reboot myguest
</pre>
<h3><a name="usageDelete">Undefining (deleting) a container configuration</a></h3>
<p>
The <code>virsh undefine</code> command can be used to delete the
persistent configuration of a container. If the guest is currently
running, this will turn it into a "transient" guest.
</p>
<pre>
# virsh -c lxc:/// undefine myguest
</pre>
<h3><a name="usageConnect">Connecting to a container console</a></h3>
<p>
The <code>virsh console</code> command can be used to connect
to the text console associated with a container.
</p>
<pre>
# virsh -c lxc:/// console myguest
</pre>
<p>
If the container has been configured with multiple console devices,
then the <code>--devname</code> argument can be used to choose the
console to connect to.
In LXC, multiple consoles will be named
as 'console0', 'console1', 'console2', etc.
</p>
<pre>
# virsh -c lxc:/// console myguest --devname console1
</pre>
<h3><a name="usageEnter">Running commands in a container</a></h3>
<p>
The <code>virsh lxc-enter-namespace</code> command can be used
to enter the namespaces and security context of a container
and then execute an arbitrary command.
</p>
<pre>
# virsh -c lxc:/// lxc-enter-namespace myguest -- /bin/ls -al /dev
</pre>
<h3><a name="usageTop">Monitoring container utilization</a></h3>
<p>
The <code>virt-top</code> command can be used to monitor the
activity and resource utilization of all containers on a
host
</p>
<pre>
# virt-top -c lxc:///
</pre>
<h3><a name="usageConvert">Converting LXC container configuration</a></h3>
<p>
The <code>virsh domxml-from-native</code> command can be used to convert
most of the LXC container configuration into a domain XML fragment
</p>
<pre>
# virsh -c lxc:/// domxml-from-native lxc-tools /var/lib/lxc/myguest/config
</pre>
<p>
This conversion has some limitations due to the fact that the
domxml-from-native command output has to be independent of the host. Here
are a few things to take care of before converting:
</p>
<ul>
<li>
Replace the fstab file referenced by <tt>lxc.mount</tt> with the corresponding
<tt>lxc.mount.entry</tt> lines.
</li>
<li>
Replace all relative sizes of tmpfs mount entries with absolute sizes, and
make sure that every tmpfs entry has a size option (the default is 50%), as
shown in the example after this list.
</li>
<li>
Define <tt>lxc.cgroup.memory.limit_in_bytes</tt> to properly limit the memory
available to the container. The conversion will use 64MiB as the default.
</li>
</ul>
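<p>
For example, an LXC configuration prepared for conversion might contain
entries such as the following (the values shown are purely illustrative):
</p>
<pre>
lxc.mount.entry = tmpfs run tmpfs size=64m 0 0
lxc.cgroup.memory.limit_in_bytes = 268435456
</pre>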
</body>
</html>