<?xml version='1.0'?> <!--*-nxml-*-->
<!DOCTYPE refentry PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
  "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<!-- SPDX-License-Identifier: LGPL-2.1-or-later -->
<refentry id="bootup">
  <refentryinfo>
    <title>bootup</title>
    <productname>systemd</productname>
  </refentryinfo>

  <refmeta>
    <refentrytitle>bootup</refentrytitle>
    <manvolnum>7</manvolnum>
  </refmeta>

  <refnamediv>
    <refname>bootup</refname>
    <refpurpose>System bootup process</refpurpose>
  </refnamediv>

  <refsect1>
    <title>Description</title>
    <para>A number of different components are involved in the boot of a Linux system. Immediately after
    power-up, the system firmware will do minimal hardware initialization, and hand control over to a boot
    loader (e.g.
    <citerefentry><refentrytitle>systemd-boot</refentrytitle><manvolnum>7</manvolnum></citerefentry> or
    <ulink url="https://www.gnu.org/software/grub/">GRUB</ulink>) stored on a persistent storage device. This
    boot loader will then invoke an OS kernel from disk (or the network). On systems using EFI or other types
    of firmware, this firmware may also load the kernel directly.</para>
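
    <para>For example, on systems that use systemd-boot, the boot loader and firmware currently in use, as
    well as the configured boot entries, can be inspected with <command>bootctl</command> (shown here merely
    as an illustration):</para>

    <programlisting>bootctl status   # show the firmware type and the installed boot loader
bootctl list     # list the boot entries known to the boot loader</programlisting>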
    <para>The kernel (optionally) mounts an in-memory file system, often generated by <citerefentry
    project='man-pages'><refentrytitle>dracut</refentrytitle><manvolnum>8</manvolnum></citerefentry>, which
    looks for the root file system. Nowadays this is implemented as an "initramfs" — a compressed CPIO
    archive that the kernel extracts into a tmpfs. In the past, normal file systems using an in-memory block
    device (ramdisk) were used, and the name "initrd" is still used to describe both concepts. It's the boot
    loader or the firmware that loads both the kernel and initrd/initramfs images into memory, but it is the
    kernel that interprets the image as a file system.
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry> may be used
    to manage services in the initrd, similarly to the real system.</para>
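
    <para>For example, on systems where the initrd image is generated by dracut, its contents can be listed
    with <command>lsinitrd</command> (the image path below is only an illustration and differs between
    distributions):</para>

    <programlisting>lsinitrd /boot/initramfs-$(uname -r).img</programlisting>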
    <para>After the root file system is found and mounted, the initrd hands over control to the host's system
    manager (such as
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>) stored in
    the root file system, which is then responsible for probing all remaining hardware, mounting all
    necessary file systems and spawning all configured services.</para>
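
    <para>The time spent in each of these phases (firmware, boot loader, kernel, initrd, userspace) can be
    inspected after boot, for example with:</para>

    <programlisting>systemd-analyze time
systemd-analyze critical-chain</programlisting>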
    <para>On shutdown, the system manager stops all services, unmounts
    all file systems (detaching the storage technologies backing
    them), and then (optionally) jumps back into the initrd code which
    unmounts/detaches the root file system and the storage it resides
    on. As a last step, the system is powered down.</para>

    <para>Additional information about the system boot process may be
    found in
    <citerefentry project='man-pages'><refentrytitle>boot</refentrytitle><manvolnum>7</manvolnum></citerefentry>.</para>
  </refsect1>

  <refsect1>
    <title>System Manager Bootup</title>

    <para>At boot, the system manager on the OS image is responsible
    for initializing the required file systems, services and drivers
    that are necessary for operation of the system. On
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>
    systems, this process is split up into various discrete steps which
    are exposed as target units. (See
    <citerefentry><refentrytitle>systemd.target</refentrytitle><manvolnum>5</manvolnum></citerefentry>
    for detailed information about target units.) The boot-up process
    is highly parallelized so that the order in which specific target
    units are reached is not deterministic, but still adheres to a
    limited amount of ordering structure.</para>
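
    <para>Which of these target units have been reached on a running system can be listed with
    <command>systemctl</command>, for example:</para>

    <programlisting>systemctl list-units --type=target</programlisting>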
    <para>When systemd starts up the system, it will activate all
    units that are dependencies of <filename>default.target</filename>
    (as well as recursively all dependencies of these dependencies).
    Usually, <filename>default.target</filename> is simply an alias of
    <filename>graphical.target</filename> or
    <filename>multi-user.target</filename>, depending on whether the
    system is configured for a graphical UI or only for a text
    console. To enforce minimal ordering between the units pulled in,
    a number of well-known target units are available, as listed on
    <citerefentry><refentrytitle>systemd.special</refentrytitle><manvolnum>7</manvolnum></citerefentry>.</para>
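
    <para>The target that <filename>default.target</filename> currently resolves to, and the units it pulls
    in, can be inspected and changed with <command>systemctl</command>, for example:</para>

    <programlisting>systemctl get-default
systemctl set-default multi-user.target
systemctl list-dependencies default.target</programlisting>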
    <para>The following chart is a structural overview of these
    well-known units and their position in the boot-up logic. The
    arrows describe which units are pulled in and ordered before which
    other units. Units near the top are started before units nearer to
    the bottom of the chart.</para>
    <!-- note: do not use unicode ellipsis here, because docbook will replace that
         with three dots anyway, messing up alignment -->
<programlisting>                                              cryptsetup-pre.target veritysetup-pre.target
                                                                    |
(various low-level                                                  v
 API VFS mounts:                               (various cryptsetup/veritysetup devices...)
 mqueue, configfs,                                                |         |
 debugfs, ...)                                                    v         |
      |                                                   cryptsetup.target |
      |  (various swap                                            |         | remote-fs-pre.target
      |   devices...)                                             |         |     |          |
      |       |                                                   |         |     |          v
      |       v                               local-fs-pre.target |         |     | (network file systems)
      |  swap.target                                   |          |         v     v           |
      |       |                                        v          | remote-cryptsetup.target  |
      |       |     (various low-level (various mounts and        | remote-veritysetup.target |
      |       |      services: udevd,   fsck services...)         |            |              |
      |       |      tmpfiles, random               |             |            |      remote-fs.target
      |       |      seed, sysctl, ...)             v             |            |              |
      |       |             |                local-fs.target      |            | ____________/
      |       |             |                       |             |            |/
      \_______|_____________|_________________   ___|____________/             |
                                              \ /                              |
                                               v                               |
                                        sysinit.target                         |
                                               |                               |
                        ______________________/|\_____________________         |
                       /              |        |        |             \        |
                       |              |        |        |             |        |
                       v              v        |        v             |        |
                   (various       (various     |    (various          |        |
                    timers...)     paths...)   |     sockets...)      |        |
                       |              |        |        |             |        |
                       v              v        |        v             |        |
                 timers.target  paths.target   | sockets.target       |        |
                       |              |        |        |             v        |
                       v              \_______ | ______/       rescue.service  |
                                              \|/                     |        |
                                               v                      v        |
                                         basic.target           <emphasis>rescue.target</emphasis>  |
                                               |                               |
                                       ________v____________________           |
                                      /        |                    \          |
                                      |        |                    |          |
                                      v        v                    v          |
                                  display- (various system   (various system   |
                               manager.service  services        services)      |
                                      |    required for             |          |
                                      |    graphical UIs)           v          v
                                      |        |                 <emphasis>multi-user.target</emphasis>
emergency.service                     |        |                         |
        |                             \_______ | _______________________/
        v                                    \|/
<emphasis>emergency.target</emphasis>                               v
                                       <emphasis>graphical.target</emphasis></programlisting>
    <para>Target units that are commonly used as boot targets are
    <emphasis>emphasized</emphasis>. These units are good choices as
    goal targets, for example by passing them to the
    <varname>systemd.unit=</varname> kernel command line option (see
    <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>)
    or by symlinking <filename>default.target</filename> to them.
    </para>
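
    <para>For example, rescue mode may be selected for a single boot from the boot loader's kernel command
    line, or the default goal target may be changed persistently by updating the
    <filename>default.target</filename> symlink (which is what <command>systemctl set-default</command>
    does):</para>

    <programlisting># On the kernel command line, for a single boot:
#     systemd.unit=rescue.target

# Persistently, by replacing the default.target symlink:
ln -sf /usr/lib/systemd/system/multi-user.target /etc/systemd/system/default.target</programlisting>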
    <para><filename>timers.target</filename> is pulled in by
    <filename>basic.target</filename> asynchronously. This allows
    timer units to depend on services which only become available
    later in boot.</para>
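
    <para>How <filename>timers.target</filename> is pulled into the boot transaction on a particular system
    can be checked with <command>systemctl</command>, for example:</para>

    <programlisting>systemctl show -p WantedBy timers.target
systemctl list-dependencies --reverse timers.target</programlisting>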
</refsect1>
  <refsect1>
    <title>User manager startup</title>

    <para>The system manager starts the <filename>user@<replaceable>uid</replaceable>.service</filename> unit
    for each user, which launches a separate unprivileged instance of <command>systemd</command> for each
    user — the user manager. Similarly to the system manager, the user manager starts units which are pulled
    in by <filename>default.target</filename>. The following chart is a structural overview of the well-known
    user units. For non-graphical sessions, <filename>default.target</filename> is used. Whenever the user
    logs into a graphical session, the login manager will start the
    <filename>graphical-session.target</filename> target that is used to pull in units required for the
    graphical session. A number of targets (shown on the right side) are started when specific hardware is
    available to the user.</para>
    <programlisting>
   (various           (various          (various
    timers...)         paths...)        sockets...)    (sound devices)
        |                  |                 |                 |
        v                  v                 v                 v
  timers.target      paths.target     sockets.target     sound.target
        |                  |                 |
        \______________   _|_________________/            (bluetooth devices)
                       \ /                                       |
                        V                                        v
                  basic.target                             bluetooth.target
                        |
             __________/ \_______                         (smartcard devices)
            /                    \                               |
            |                    |                               v
            |                    v                         smartcard.target
            |      graphical-session-pre.target
 (various user services)         |                            (printers)
            |                    v                               |
            |  (services for the graphical session)              v
            |                    |                          printer.target
            v                    v
     <emphasis>default.target</emphasis>  graphical-session.target</programlisting>
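
    <para>The targets active in a running user instance can be examined with the <option>--user</option>
    switch of <command>systemctl</command>, for example:</para>

    <programlisting>systemctl --user list-units --type=target
systemctl --user status graphical-session.target</programlisting>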
</refsect1>
  <refsect1>
    <title>Bootup in the initrd</title>
    <para>systemd can be used in the initrd as well. It detects the initrd environment by checking for the
    <filename>/etc/initrd-release</filename> file. The default target in the initrd is
    <filename>initrd.target</filename>. The bootup process is identical to the system manager bootup until
    the target <filename>basic.target</filename> is reached. After that, systemd executes the special target
    <filename>initrd.target</filename>.
    Before any file systems are mounted, the manager will determine whether the system shall resume from
    hibernation or proceed with normal boot. This is accomplished by
    <filename>systemd-hibernate-resume@.service</filename> which must be finished before
    <filename>local-fs-pre.target</filename>, so no file systems can be mounted before the check is complete.
    When the root device becomes available,
    <filename>initrd-root-device.target</filename> is reached.
    If the root device can be mounted at
    <filename>/sysroot</filename>, the
    <filename>sysroot.mount</filename> unit becomes active and
    <filename>initrd-root-fs.target</filename> is reached. The service
    <filename>initrd-parse-etc.service</filename> scans
    <filename>/sysroot/etc/fstab</filename> for a possible
    <filename>/usr/</filename> mount point and additional entries
    marked with the <emphasis>x-initrd.mount</emphasis> option. All
    entries found are mounted below <filename>/sysroot</filename>, and
    <filename>initrd-fs.target</filename> is reached. The service
    <filename>initrd-cleanup.service</filename> isolates to the
    <filename>initrd-switch-root.target</filename>, where cleanup
    services can run. As the very last step, the
    <filename>initrd-switch-root.service</filename> is activated,
    which will cause the system to switch its root to
    <filename>/sysroot</filename>.
    </para>
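
    <para>As an illustration, an <filename>/etc/fstab</filename> entry that shall already be mounted from
    within the initrd can be marked with the <option>x-initrd.mount</option> option (the device and mount
    point below are placeholders):</para>

    <programlisting># /etc/fstab
/dev/example/var  /var  ext4  defaults,x-initrd.mount  0  2</programlisting>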
<programlisting>                          : (beginning identical to above)
                          :
                          v
                    basic.target
                          |                          emergency.service
   ______________________/|                                  |
  /                       |                                  v
  |           initrd-root-device.target              <emphasis>emergency.target</emphasis>
  |                       |
  |                       v
  |                 sysroot.mount
  |                       |
  |                       v
  |             initrd-root-fs.target
  |                       |
  |                       v
  v           initrd-parse-etc.service
(custom initrd            |
 services...)             v
  |            (sysroot-usr.mount and
  |             various mounts marked
  |               with fstab option
  |              x-initrd.mount...)
  |                       |
  |                       v
  |               initrd-fs.target
  \______________________ |
                         \|
                          v
                    initrd.target
                          |
                          v
               initrd-cleanup.service
                     isolates to
              initrd-switch-root.target
                          |
                          v
   ______________________/|
  /                       v
  |       initrd-udevadm-cleanup-db.service
  v                       |
(custom initrd            |
 services...)             |
  \______________________ |
                         \|
                          v
              initrd-switch-root.target
                          |
                          v
             initrd-switch-root.service
                          |
                          v
                Transition to Host OS</programlisting>
</refsect1>
  <refsect1>
    <title>System Manager Shutdown</title>

    <para>System shutdown with systemd also consists of various target
    units with some minimal ordering structure applied:</para>
<programlisting>                                  (conflicts with  (conflicts with
                                    all system     all file system
                                     services)     mounts, swaps,
                                         |          cryptsetup/
                                         |          veritysetup
                                         |          devices, ...)
                                         |                |
                                         v                v
                                  shutdown.target   umount.target
                                         |                |
                                         \_______   ______/
                                                 \ /
                                                  v
                                         (various low-level
                                             services)
                                                  |
                                                  v
                                            final.target
                                                  |
                      ___________________________/ \___________________
                     /                    |               |            \
                     |                    |               |            |
                     v                    |               |            |
          systemd-reboot.service          |               |            |
                     |                    v               |            |
                     |        systemd-poweroff.service    |            |
                     v                    |               v            |
               <emphasis>reboot.target</emphasis>              |     systemd-halt.service   |
                                          v               |            v
                                   <emphasis>poweroff.target</emphasis>        |  systemd-kexec.service
                                                          v            |
                                                     <emphasis>halt.target</emphasis>       |
                                                                       v
                                                                 <emphasis>kexec.target</emphasis></programlisting>
    <para>Commonly used system shutdown targets are <emphasis>emphasized</emphasis>.</para>

    <para>Note that
    <citerefentry><refentrytitle>systemd-halt.service</refentrytitle><manvolnum>8</manvolnum></citerefentry>,
    <filename>systemd-reboot.service</filename>, <filename>systemd-poweroff.service</filename> and
    <filename>systemd-kexec.service</filename> will transition the system and service manager (PID 1) into the second
    phase of system shutdown (implemented in the <filename>systemd-shutdown</filename> binary), which will unmount any
    remaining file systems, kill any remaining processes and release any other remaining resources, in a simple and
    robust fashion, without taking any service or unit concept into account anymore. At that point, regular
    applications and resources are generally terminated and released already; the second phase hence operates only as
    a safety net for everything that couldn't be stopped or released for some reason during the primary, unit-based
    shutdown phase described above.</para>
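
    <para>The usual way to reach these shutdown targets is via the corresponding <command>systemctl</command>
    verbs, for example:</para>

    <programlisting>systemctl poweroff   # activates poweroff.target
systemctl reboot     # activates reboot.target
systemctl kexec      # activates kexec.target</programlisting>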
  </refsect1>

  <refsect1>
    <title>See Also</title>
    <para>
      <citerefentry><refentrytitle>systemd</refentrytitle><manvolnum>1</manvolnum></citerefentry>,
      <citerefentry project='man-pages'><refentrytitle>boot</refentrytitle><manvolnum>7</manvolnum></citerefentry>,
      <citerefentry><refentrytitle>systemd.special</refentrytitle><manvolnum>7</manvolnum></citerefentry>,
      <citerefentry><refentrytitle>systemd.target</refentrytitle><manvolnum>5</manvolnum></citerefentry>,
      <citerefentry><refentrytitle>systemd-halt.service</refentrytitle><manvolnum>8</manvolnum></citerefentry>,
      <citerefentry project='man-pages'><refentrytitle>dracut</refentrytitle><manvolnum>8</manvolnum></citerefentry>
    </para>
  </refsect1>
</refentry>