<article>
<h1>fedostree</h1>
<p>An instance
of <a href="https://github.com/cgwalters/rpm-ostree">rpm-ostree</a>
for Fedora. This project takes multiple RPM package sets from
Fedora, assembles them on the build server side, and stores these
trees in
an <a href="http://live.gnome.org/Projects/OSTree">OSTree</a>
repository. Client systems can then atomically upgrade and switch
between these trees.
</p>
<h3>Trying it out</h3>
<p>See <a href="#installation">installation</a>.</p>
<h3>Background</h3>
<p>Fedora today is an extremely flexible system. One can find
Fedora builds running on everything from hobbyist ARM devices,
to workstations, to testing servers.
</p>
<p>
This flexibility derives in large part from the fact that from a
technological point of view, Fedora is a collection of packages.
While pre-assembled "deliverables" such as the Live CDs are
distributed by the project, they represent only a temporary state.
</p>
<p>
For example, as soon as such a deliverable is installed, upgrading
involves a package manager that dynamically reassembles the system
from newer parts in the Fedora package collection. One cannot
file a bug against the "default offering" as a whole - a specific
package must be chosen.
</p>
<p>
Nearly every aspect of the Fedora infrastructure (and
documentation) is structured in terms of packages, from
user-facing tools such as Bugzilla and Bodhi, to developer tools
such as Koji. The announced security updates are based on package
names.
</p>
<p>
In contrast, ChromeOS for example is delivered and updated as a
pre-assembled atomic unit, targeted at specific hardware.
ChromeOS is (compared to Fedora) completely inflexible, but it
fulfills its targeted role well.
</p>
<h3>How OSTree allows a middle ground</h3>
<p>
Fundamentally, packages are partial filesystem trees with metadata
- they are traditionally assembled by a package manager on every
client machine into a complete bootable tree. It's important to
emphasize that it is only these <i>complete</i>
trees that users run.
</p>
<p>
OSTree allows an OS distributor to
ship <i>multiple</i> pre-built complete bootable
filesystem trees, and furthermore, client machines can atomically
switch between them, or even track multiple trees independently.
</p>
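<p>
Concretely, a client that has already replicated two trees can
switch between them with something like the following sketch (ref
names are illustrative, and exact flags depend on the OSTree
version installed):
</p>
<pre>
# Show every deployed tree; the currently booted one is marked
ostree admin status

# Stage a different tree for the next boot; the running system
# is left untouched until reboot
ostree admin deploy fedostree/20/x86_64/workstation/kde/core
</pre>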
<p>
This allows a middle ground between the two extremes of a
combinatorial explosion of packages, and a singular OS.
</p>
<p>For example, these are some of the trees the current prototype generates:
<ul>
<li><tt>fedostree/20/x86_64/server/docker-io</tt> - This tree contains <tt>@standard</tt> plus <tt>docker-io</tt>.</li>
<li><tt>fedostree/20/x86_64/server/jbossas</tt> - This tree contains <tt>@standard</tt> plus JBoss.</li>
<li><tt>fedostree/20/x86_64/workstation/gnome/core</tt> - The GNOME workstation with no applications.</li>
<li><tt>fedostree/20/x86_64/workstation/gnome/development-virt</tt>
- The above, with development tools, and virtualization client
and server side.</li>
<li><tt>fedostree/20/x86_64/workstation/kde/core</tt> - The KDE workstation with no applications.</li>
</ul>
</p>
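<p>
As a sketch, replicating one of these trees onto a client machine
would look roughly like this (the repository URL is a
placeholder, not a real endpoint):
</p>
<pre>
# Register the build server's repository; prototype trees are
# assumed unsigned here, hence gpg-verify=false
ostree remote add --set=gpg-verify=false fedostree http://example.com/repo

# Download the docker-io server tree into the local repository
ostree pull fedostree fedostree/20/x86_64/server/docker-io
</pre>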
<h3>Initial goals</h3>
<p>
The first goal of this project is to be an <i>additional</i>
deployment option built within the Fedora infrastructure; possibly
only for Fedora rawhide. Developers and testers can use OSTree to
atomically replicate, upgrade to newer versions of, and transition
between the pre-assembled trees produced by this build server.
</p>
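<p>
Upgrading a replicated tree is then a single step; a sketch,
assuming a tree pulled and deployed as above:
</p>
<pre>
# Fetch the latest commit of the currently deployed tree and
# stage it for the next boot
ostree admin upgrade
</pre>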
<p>
Notably, in this phase no common mechanism for additional software
installation is provided. That said, individual trees can do so;
for example, the <tt>server/docker-io</tt> tree can use Docker to
install and run server container applications independent of
OSTree.
</p>
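<p>
For instance, on the <tt>server/docker-io</tt> tree one could run
services entirely via Docker, never touching the OSTree-managed
host tree (the image name is a placeholder):
</p>
<pre>
# Fetch and start a containerized service; nothing under the
# host's /usr is modified
docker pull example/webapp
docker run -d -p 8080:8080 example/webapp
</pre>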
<p>
This phase does include basic integration testing on the build
server side, which will be a major benefit to the Fedora project
and its downstreams.
</p>
<h3>Required changes in Fedora/RPM for initial deployment</h3>
<h5>/usr/lib/passwd and nss-altfiles</h5>
<p>
A change to
include <a href="https://sourceware.org/bugzilla/show_bug.cgi?id=16142">/usr/lib/passwd</a>
is required; this is implemented currently by
the <tt>nss-altfiles</tt> package. See
also <a href="http://fedorapeople.org/~walters/Use-usr-lib-passwd-for-system-users-if-it-exists.patch">this
patch</a> for shadow-utils.
</p>
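<p>
With <tt>nss-altfiles</tt> installed, glibc's NSS can consult the
read-only databases shipped in /usr/lib; a sketch of the relevant
<tt>/etc/nsswitch.conf</tt> lines:
</p>
<pre>
# Look up users and groups in /etc first, then fall back to the
# OS-shipped /usr/lib/passwd and /usr/lib/group via nss-altfiles
passwd: files altfiles
group:  files altfiles
</pre>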
<h5>/var should be dynamically populated</h5>
<p>
All RPMs should stop shipping files and directories
in <tt>/var</tt>.
See <a href="https://people.gnome.org/~walters/ostree/doc/layout.html">this
section</a> of the OSTree documentation.
</p>
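<p>
The systemd <tt>tmpfiles.d</tt> mechanism is one way to recreate
that state at boot; an illustrative snippet replacing RPM-shipped
directories in /var:
</p>
<pre>
# /usr/lib/tmpfiles.d/var.conf (illustrative)
# type path       mode uid  gid  age argument
d      /var/cache 0755 root root -
d      /var/log   0755 root root -
L      /var/run   -    -    -    -   ../run
</pre>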
<p>
RPM should cope with its database living
in <tt>/usr/share/rpm</tt> and being immutable.
</p>
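<p>
Queries should keep working against the read-only location; for
example, <tt>rpm</tt> already accepts an explicit database path:
</p>
<pre>
# Query the immutable package database shipped inside the tree
rpm --dbpath /usr/share/rpm -qa
</pre>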
<h5>SELinux support</h5>
<p>OSTree was designed from the very beginning to work with SELinux; it just
needs some bugfixes and integration on the rpm-ostree side.</p>
<h5>Anaconda support</h5>
<p>Anaconda should have an OSTree backend in addition to RPM. A basic UI
that provides a listview of shipped trees and allows picking them would
be quite sufficient initially.</p>
<h5>Dracut</h5>
<p>OSTree, when replicating content from a build server, effectively reverts
the <a href="https://fedoraproject.org/wiki/Features/DracutHostOnly">Dracut
host-only mode</a>. Furthermore, at the moment we hardcode
/etc/machine-id, which is a definite bug that needs to be fixed.
Possibly systemd should support reading the machine ID from the
kernel commandline, as it's the only host-writable area available
in early boot.</p>
<h3>Development area: OSTree Layering</h3>
<p>
This phase would allow "layering" of trees. For example,
if one installs the <tt>base/minimal</tt> tree, one could imagine
taking the <tt>strace</tt> package, and computing a new
filesystem tree which is the union of the two.
</p>
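<p>
A minimal sketch of such a union using existing OSTree commands;
the <tt>local/strace-layer</tt> ref is hypothetical, assuming the
strace package content was already committed as its own tree:
</p>
<pre>
# Check out the base tree, overlay the strace layer into the same
# working directory, then commit the union as a new tree
ostree --repo=/ostree/repo checkout base/minimal wc
ostree --repo=/ostree/repo checkout --union local/strace-layer wc
ostree --repo=/ostree/repo commit --branch=local/minimal-strace \
    --subject="minimal + strace" --tree=dir=wc
</pre>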
<p>
While plain standalone ELF executables would work with no
modification, a generalization of this kind of dynamic layering
implies a higher level above OSTree that is aware of things
like <tt>ldconfig</tt> and <tt>gtk-update-icon-cache</tt> and how
to trigger them when layers are combined.
</p>
<p>
Conceptually, this is a step back towards combinatorics. For example,
if libvirt is a layer that could be applied on top of the base server
layer as well as the workstation layer, then there would need to be
some notion of dependencies.
</p>
<h3>Development area: Local package assembly</h3>
<p>
There is absolutely no reason one could not just use the package
manager on the client side to download and assemble packages -
but rather than operating live on your current root, OSTree
allows setting up the chosen tree for the next boot
atomically.
</p>
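<p>
A rough sketch of such client-side assembly (paths and ref names
are illustrative):
</p>
<pre>
# Assemble a tree from packages into a scratch root, commit it,
# and stage it for the next boot
yum --installroot=/srv/assemble --releasever=20 install @standard strace
ostree --repo=/ostree/repo commit --branch=local/custom \
    --tree=dir=/srv/assemble
ostree admin deploy local/custom
</pre>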
<p>
The problem is making this sort of thing efficient and scalable;
it would require careful integration of the local OSTree
repository and the package manager caching to operate at a speed
comparable to traditional package management.</p>
<p>There are two primary issues. First, OSTree mandates that the
running tree be immutable, and requires construction of a new
tree. This is just a deep mismatch with how package managers are
built. Second, current package managers are totally unaware of
the existence of the OSTree repository at /ostree/repo which holds
binaries. A naive implementation would just redownload the packages,
which would be quite slow.
</p>
<h3>Development area: Live updates</h3>
<p>
If one is using OSTree in a model without a separate application
mechanism (as is the case for rpm-ostree), it is potentially
painful to reboot just to upgrade applications.
</p>
<p>
It would be quite easy to do a bind mount of the new /usr over
top of the old. This would avoid some of the problems dpkg/rpm
have in creating a <i>partial</i> view. But even
this model would still break many real-world desktop applications
such as Firefox. Few applications cope with their files being
replaced by an arbitrary newer version while they are running.
</p>
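<p>
A sketch of that bind-mount approach (the deployment path is
illustrative):
</p>
<pre>
# Atomically switch the running system's /usr to the new tree's
# content; already-running processes keep their open files
mount --bind /ostree/deploy/fedostree/deploy/$NEW_CHECKSUM.0/usr /usr
</pre>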
<p>
Starting from a <i>safe</i> basis, it should be
possible to engineer userspace so that many classes of upgrades
can be applied both live and safely, without race conditions.
</p>
<h3>OSTree example: Bisecting Mesa</h3>
<p>
OSTree allows not just dual booting - one can just as easily have
50 or more trees locally. Suppose that you're tracking Fedora
rawhide, and an upgrade breaks Mesa+GNOME (or sound, or something
else). Not only can you easily revert to a last known good tree,
you can also use OSTree to download intermediate builds from the build
server and <i>bisect</i> across them. Given the ability to do
local builds from git, automating bisection across source code is
entirely possible as well.
</p>
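<p>
A sketch of bisecting by hand against the build server's history
(the ref name is illustrative):
</p>
<pre>
# Walk the commit history of the tracked tree
ostree log fedostree/20/x86_64/workstation/gnome/core

# Deploy a specific intermediate commit for the next boot, then
# reboot and test; repeat, halving the range each time
ostree admin deploy $COMMIT_CHECKSUM
</pre>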
<h3>OSTree example: Server side fast CI/CD</h3>
<p>
OSTree <b>mandates</b> that operating system updates work entirely
offline. Work that has been prototyped already in
the <a href="https://wiki.gnome.org/Projects/GnomeContinuous">gnome-continuous</a>
codebase then takes updated content from an OSTree repository, and
incrementally updates a cached VM disk image from the host side.
This could allow <i>booting</i> updated systemd RPMs within
minutes after Koji builds. This is fundamentally different from
RPM <tt>%check</tt> as we are booting the tree as it will be
shipped to the user - whereas RPM checks may run under e.g. a
RHEL6 kernel.
</p>
<h3>OSTree example: Parallel installing Red Hat Enterprise Linux and Fedora</h3>
<p>
Many contributors to Fedora are also Red Hat engineers working on
Red Hat Enterprise Linux. One way to use OSTree is to have
EL7 installed in the physical /, and install Fedora in
/ostree/deploy/fedora. One can choose whether or not to share
/home.
</p>
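<p>
A sketch of that setup, run as root on the EL7 host (the ref name
is illustrative):
</p>
<pre>
# Create a second OS root under /ostree/deploy/fedora
ostree admin os-init fedora

# Replicate a Fedora tree and deploy it into that root
ostree pull fedostree fedostree/20/x86_64/workstation/gnome/core
ostree admin deploy --os=fedora \
    fedostree/20/x86_64/workstation/gnome/core
</pre>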
<h3>OSTree example: Trying rawhide safely</h3>
<p>
This is an obvious use case - you can run a stable release, and
periodically try the development release on bare metal with a
great deal of safety, particularly if you choose not to share
/home. In this model, the only major risk is that the newer kernel
contains filesystem-corrupting bugs.
</p>
<h3>OSTree example: "Recovery partition"</h3>
<p>
You can keep your OS installed in /, manage it with a traditional
package manager, but have a small (e.g. base/minimal) tree
installed via OSTree to use as a "recovery partition" when
something goes wrong. Unlike traditional recovery partitions,
though, you can easily delete it if you want the space back,
upgrade it offline, etc.
</p>
<h3>OSTree example: Reliable safe upgrades of a server cluster</h3>
<p>
OSTree allows taking a "cloud" like approach to a cluster of
traditional servers. Every upgrade is atomic and (relatively)
efficient, and can be served by any plain HTTP server.
</p>
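<p>
Because the repository is just static files, publishing it is
trivial; for example (path illustrative, any static server
works):
</p>
<pre>
# Expose the build server's repository over plain HTTP
cd /srv/build/repo
python -m http.server 8080
</pre>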
</article>