Rewrite manual in mkdocs

I don't much like Docbook (and am considering converting the man pages
too), but let's start with the manual.

I looked at various documentation generators (there are a lot), and
I had a few requirements:

 - Markdown
 - Packaged in Fedora
 - Suitable for upload to a static webserver

`mkdocs` seems to fit the bill.
Colin Walters 2016-01-27 16:56:16 -05:00
parent 32c360b5a0
commit 64ebe2b82a
17 changed files with 692 additions and 1023 deletions


@ -1,121 +0,0 @@
Submitting patches
------------------
You can:
1. Send mail to ostree-list@gnome.org, with the patch attached
1. Submit a pull request against https://github.com/GNOME/ostree
1. Attach them to https://bugzilla.gnome.org/
Please look at "git log" and match the commit log style.
Running the test suite
----------------------
Currently, ostree uses https://wiki.gnome.org/GnomeGoals/InstalledTests
To run just ostree's tests:
./configure ... --enable-installed-tests
gnome-desktop-testing-runner -p 0 ostree/
Also, there is a regular:
make check
That runs a different set of tests.
Coding style
------------
Indentation is GNU. Files should start with the appropriate mode lines.
Use GCC `__attribute__((cleanup))` wherever possible. If interacting
with a third party library, try defining local cleanup macros.
Use GError and GCancellable where appropriate.
Prefer returning `gboolean` to signal success/failure, and have output
values as parameters.
Prefer linear control flow inside functions (aside from standard
loops). In other words, avoid "early exits" or use of `goto` besides
`goto out;`.
This is an example of an "early exit":
static gboolean
myfunc (...)
{
gboolean ret = FALSE;
/* some code */
/* some more code */
if (condition)
return FALSE;
/* some more code */
ret = TRUE;
out:
return ret;
}
If you must shortcut, use:
if (condition)
{
ret = TRUE;
goto out;
}
A consequence of this restriction is that you are encouraged to avoid
deep nesting of loops or conditionals. Create internal static helper
functions, particularly inside loops. For example, rather than:
while (condition)
{
/* some code */
if (condition)
{
for (i = 0; i < somevalue; i++)
{
if (condition)
{
/* deeply nested code */
}
/* more nested code */
}
}
}
Instead do this:
static gboolean
helperfunc (..., GError **error)
{
if (condition)
{
/* deeply nested code */
}
/* more nested code */
return ret;
}
while (condition)
{
/* some code */
if (!condition)
continue;
for (i = 0; i < somevalue; i++)
{
if (!helperfunc (..., i, error))
goto out;
}
}

CONTRIBUTING.md (symbolic link, 1 line)

@ -0,0 +1 @@
docs/CONTRIBUTING.md


@ -46,8 +46,8 @@ versions support extended validation using
However, in order to build from a git clone, you must update the
submodules. If you're packaging OSTree and want a tarball, I
recommend using a "recursive git archive" script. There are several
-available online; [this
-code](https://git.gnome.org/browse/ostree/tree/packaging/Makefile.dist-packaging#n11)
+available online;
+[this code](https://git.gnome.org/browse/ostree/tree/packaging/Makefile.dist-packaging#n11)
in OSTree is an example.
Once you have a git clone or recursive archive, building is the


@ -95,11 +95,6 @@ HTML_IMAGES=
# Extra SGML files that are included by $(DOC_MAIN_SGML_FILE).
# e.g. content_files=running.sgml building.sgml changes-2.0.sgml
content_files= \
-overview.xml \
-repo.xml \
-deployment.xml \
-atomic-upgrades.xml \
-adapting-existing.xml \
$(NULL)
# SGML files where gtk-doc abbrevations (#GtkWidget) are expanded


@ -1,267 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" [
<!ENTITY version SYSTEM "../version.xml">
]>
<part id="adapting-existing">
<title>Adapting existing mainstream distributions</title>
<chapter id="layout">
<title>System layout</title>
<para>
First, OSTree encourages systems to implement <ulink
url="http://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/">UsrMove</ulink>.
This is simply to avoid the need for more bind mounts. By
default OSTree's dracut hook creates a read-only bind mount over
<filename class='directory'>/usr</filename>; you can of course
generate individual bind-mounts for <filename
class='directory'>/bin</filename>, all the <filename
class='directory'>/lib</filename> variants, etc. So it is not
intended to be a hard requirement.
</para>
<para>
Remember, because by default the system is booted into a
<literal>chroot</literal> equivalent, there has to be some way
to refer to the actual physical root filesystem. Therefore,
your operating system tree should contain an empty <filename
class='directory'>/sysroot</filename> directory; at boot time,
OSTree will make this a bind mount to the physical / root
directory. There is precedent for this name in the initramfs
context. You should furthermore make a toplevel symbolic link
<filename class='directory'>/ostree</filename> which points to
<filename class='directory'>/sysroot/ostree</filename>, so that
the OSTree tool at runtime can consistently find the system data
regardless of whether it's operating on a physical root or
inside a deployment.
</para>
<para>
Because OSTree only preserves <filename
class='directory'>/var</filename> across upgrades (each
deployment's chroot directory will be garbage collected
eventually), you will need to choose how to handle other
toplevel writable directories specified by the <ulink
url="http://www.pathname.com/fhs/">Filesystem Hierarchy
Standard</ulink>. Your operating system may of course choose
not to support some of these such as <filename
class='directory'>/usr/local</filename>, but following is the
recommended set:
<itemizedlist>
<listitem>
<para>
<filename class='directory'>/home</filename> to <filename class='directory'>/var/home</filename>
</para>
</listitem>
<listitem>
<para>
<filename class='directory'>/opt</filename> to <filename class='directory'>/var/opt</filename>
</para>
</listitem>
<listitem>
<para>
<filename class='directory'>/srv</filename> to <filename class='directory'>/var/srv</filename>
</para>
</listitem>
<listitem>
<para>
<filename class='directory'>/root</filename> to <filename class='directory'>/var/roothome</filename>
</para>
</listitem>
<listitem>
<para>
<filename class='directory'>/usr/local</filename> to <filename class='directory'>/var/local</filename>
</para>
</listitem>
<listitem>
<para>
<filename class='directory'>/mnt</filename> to <filename class='directory'>/var/mnt</filename>
</para>
</listitem>
<listitem>
<para>
<filename class='directory'>/tmp</filename> to <filename class='directory'>/sysroot/tmp</filename>
</para>
</listitem>
</itemizedlist>
</para>
<para>
Furthermore, since <filename class='directory'>/var</filename>
is empty by default, your operating system will need to
dynamically create the <emphasis>targets</emphasis> of these at
boot. A good way to do this is using
<command>systemd-tmpfiles</command>, if your OS uses systemd.
For example:
</para>
<programlisting>
<![CDATA[
d /var/log/journal 0755 root root -
L /var/home - - - - ../sysroot/home
d /var/opt 0755 root root -
d /var/srv 0755 root root -
d /var/roothome 0700 root root -
d /var/usrlocal 0755 root root -
d /var/usrlocal/bin 0755 root root -
d /var/usrlocal/etc 0755 root root -
d /var/usrlocal/games 0755 root root -
d /var/usrlocal/include 0755 root root -
d /var/usrlocal/lib 0755 root root -
d /var/usrlocal/man 0755 root root -
d /var/usrlocal/sbin 0755 root root -
d /var/usrlocal/share 0755 root root -
d /var/usrlocal/src 0755 root root -
d /var/mnt 0755 root root -
d /run/media 0755 root root -
]]>
</programlisting>
<para>
Particularly note here the double indirection of <filename
class='directory'>/home</filename>. By default, each
deployment will share the global toplevel <filename
class='directory'>/home</filename> directory on the physical
root filesystem. It is then up to higher levels of management
tools to keep <filename>/etc/passwd</filename> or equivalent
synchronized between operating systems.
</para>
<para>
Each deployment can easily be reconfigured to have its own home
directory set simply by making <filename
class='directory'>/var/home</filename> a real directory.
</para>
</chapter>
<chapter id="booting">
<title>Booting and initramfs technology</title>
<para>
OSTree comes with optional dracut+systemd integration code that
parses the <literal>ostree=</literal> kernel command line
argument in the initramfs, and then sets up the read-only bind
mount on <filename class='directory'>/usr</filename>, a bind
mount on the deployment's <filename
class='directory'>/sysroot</filename> to the physical <filename
class='directory'>/</filename>, and then finally uses
<literal>mount(MS_MOVE)</literal> to make the deployment root appear to be the
root filesystem before telling systemd to switch root.
</para>
<para>
If you are not using dracut or systemd, using OSTree should still
be possible, but you will have to write the integration code. Patches
to support other initramfs technologies and init systems, if sufficiently
clean, will likely be accepted upstream.
</para>
<para>
A further specific note regarding <command>sysvinit</command>:
OSTree used to support recording device files such as the
<filename>/dev/initctl</filename> FIFO, but no longer does.
It's recommended to just patch your initramfs to create this at
boot.
</para>
</chapter>
<chapter id="lib-passwd">
<title>/usr/lib/passwd</title>
<para>
Unlike traditional package systems, OSTree trees contain
<emphasis>numeric</emphasis> uid and gids. Furthermore, it does
not have a <literal>%post</literal> type mechanism where
<filename>useradd</filename> could be invoked. In order to ship
an OS that contains both system users and users dynamically
created on client machines, you will need to choose a solution
for <filename>/etc/passwd</filename>. The core problem is that
if you add a user to the system for a daemon, the OSTree upgrade
process for <filename class='directory'>/etc</filename> will
simply notice that because <filename>/etc/passwd</filename>
differs from the previous default, it will keep the modified
config file, and your new OS user will not be visible.
</para>
<para>
The solution chosen for the <ulink
url="https://live.gnome.org/Projects/GnomeContinuous">gnome-continuous</ulink>
operating system is to create
<filename>/usr/lib/passwd</filename>, and to include a NSS
module <ulink
url="https://github.com/aperezdc/nss-altfiles">nss-altfiles</ulink>
which instructs glibc to read from it. Then, the build system
places all system users there, freeing up
<filename>/etc/passwd</filename> to be purely a database of
local users. See also a more recent effort from <ulink
url="http://0pointer.de/blog/projects/stateless.html">Systemd
stateless</ulink>.
</para>
</chapter>
<chapter id="adapting-package-manager">
<title>Adapting existing package managers</title>
<para>
The largest endeavor is likely to be redesigning your
distribution's package manager to be on top of OSTree,
particularly if you want to keep compatibility with the "old
way" of installing into the physical <filename
class='directory'>/</filename>. This section will use examples
from both <command>dpkg</command> and <command>rpm</command> as
the author has familiarity with both; but the abstract concepts
should apply to most traditional package managers.
</para>
<para>
There are many levels of possible integration; initially, we
will describe the most naive implementation which is the
simplest but also the least efficient. We will assume here that
the admin is booted into an OSTree-enabled system, and wants to
add a set of packages.
</para>
<para>
Many package managers store their state in <filename
class='directory'>/var</filename>; but since in the OSTree model
that directory is shared between independent versions, the
package database must first be found in the per-deployment
<filename class='directory'>/usr</filename> directory. It
becomes read-only; remember, all upgrades involve constructing a
new filesystem tree, so your package manager will also need to
create a copy of its database. Most likely, if you want to
continue supporting non-OSTree deployments, simply have your
package manager fall back to the legacy <filename
class='directory'>/var</filename> location if the one in
<filename class='directory'>/usr</filename> is not found.
</para>
<para>
To install a set of new packages (without removing any existing
ones), enumerate the set of packages in the currently booted
deployment, and perform dependency resolution to compute the
complete set of new packages. Download and unpack these new
packages to a temporary directory.
</para>
<para>
Now, because we are merely installing new packages and not
removing anything, we can make the major optimization of reusing
our existing filesystem tree, and merely
<emphasis>layering</emphasis> the composed filesystem tree of
these new packages on top. A command like this: <command>ostree
commit -b osname/releasename/description
--tree=ref=<replaceable>osname/releasename/description</replaceable>
--tree=dir=/var/tmp/newpackages.13A8D0/</command> will create a
new commit in the
<replaceable>osname/releasename/description</replaceable>
branch. The OSTree SHA256 checksum of all the files in
/var/tmp/newpackages.13A8D0/ will be computed, but we will not
re-checksum the present existing tree. In this layering model,
earlier directories will take precedence, but files in later
layers will silently override earlier layers.
</para>
<para>
Then to actually deploy this tree for the next boot:
<command>ostree admin deploy
<replaceable>osname/releasename/description</replaceable></command>
</para>
</chapter>
</part>


@ -1,181 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" [
<!ENTITY version SYSTEM "../version.xml">
]>
<part id="atomic-upgrades">
<title>Atomic Upgrades</title>
<chapter id="upgrades-intro">
<title>You can turn off the power anytime you want...</title>
<para>
OSTree is designed to implement fully atomic and safe upgrades;
more generally, atomic transitions between lists of bootable
deployments. If the system crashes or you pull the power, you
will have either the old system, or the new one.
</para>
</chapter>
<chapter id="simple-http">
<title>Simple upgrades via HTTP</title>
<para>
First, the most basic model OSTree supports is one where it
replicates pre-generated filesystem trees from a server over
HTTP, tracking exactly one ref, which is stored in the <filename
class='extension'>.origin</filename> file for the deployment.
The command <command>ostree admin upgrade</command> implements
this.
</para>
<para>
To begin a simple upgrade, OSTree fetches the contents of the
ref from the remote server. Suppose we're tracking a ref named
<literal>exampleos/buildmaster/x86_64-runtime</literal>.
OSTree fetches the URL
<literal>http://<replaceable>example.com</replaceable>/repo/refs/exampleos/buildmaster/x86_64-runtime</literal>,
which contains a SHA256 checksum. This determines the tree to
deploy, and <filename class='directory'>/etc</filename> will be
merged from currently booted tree.
</para>
<para>
If we do not have this commit, then we perform a pull
process. At present (without static deltas), this involves
quite simply just fetching each individual object that we do not
have, asynchronously. Put in other words, we only download
changed files (zlib-compressed). Each object has its checksum
validated and is stored in <filename
class='directory'>/ostree/repo/objects/</filename>.
</para>
<para>
Once the pull is complete, we have all the objects locally
we need to perform a deployment.
</para>
</chapter>
<chapter id="package-manager">
<title>Upgrades via external tools (e.g. package managers)</title>
<para>
As mentioned in the introduction, OSTree is also designed to
allow a model where filesystem trees are computed on the client.
It is completely agnostic as to how those trees are generated;
they could be computed with traditional packages, packages with
post-deployment scripts on top, or built by developers directly
from revision control locally, etc.
</para>
<para>
At a practical level, most package managers today
(<command>dpkg</command> and <command>rpm</command>) operate
"live" on the currently booted filesystem. The way they could
work with OSTree is instead to take the list of installed
packages in the currently booted tree, and compute a new
filesystem from that. A later chapter describes in more details
how this could work: <xref linkend="adapting-existing"/>.
</para>
<para>
For the purposes of this section, let's assume that we have a
newly generated filesystem tree stored in the repo (which shares
storage with the existing booted tree). We can then move on to
checking it back out of the repo into a deployment.
</para>
</chapter>
<chapter id="deployment-dir">
<title>Assembling a new deployment directory</title>
<para>
Given a commit to deploy, OSTree first allocates a directory for
it. This is of the form <filename
class='directory'>/boot/loader/entries/ostree-<replaceable>osname</replaceable>-<replaceable>checksum</replaceable>.<replaceable>serial</replaceable>.conf</filename>.
The <replaceable>serial</replaceable> is normally 0, but if a
given commit is deployed more than once, it will be incremented.
This is supported because the previous deployment may have
configuration in <filename class='directory'>/etc</filename>
that we do not want to use or overwrite.
</para>
<para>
Now that we have a deployment directory, a 3-way merge is
performed between the (by default) currently booted deployment's
<filename class='directory'>/etc</filename>, its default
configuration, and the new deployment (based on its <filename
class='directory'>/usr/etc</filename>).
</para>
</chapter>
<chapter id="swapping-boot">
<title>Atomically swapping boot configuration</title>
<para>
At this point, a new deployment directory has been created as a
hardlink farm; the running system is untouched, and the
bootloader configuration is untouched. We want to add this deployment
to the "deployment list".
</para>
<para>
To support a more general case, OSTree supports atomic
transitioning between arbitrary sets of deployments, with the
restriction that the currently booted deployment must always be
in the new set. In the normal case, we have exactly one
deployment, which is the booted one, and we want to add the new
deployment to the list. A more complex command might allow
creating 100 deployments as part of one atomic transaction, so
that one can set up an automated system to bisect across them.
</para>
<simplesect id="bootversion">
<title>The bootversion</title>
<para>
OSTree allows swapping between boot configurations by
implementing the "swapped directory pattern" in <filename
class='directory'>/boot</filename>. This means it is a
symbolic link to one of two directories <filename
class='directory'>/ostree/boot.<replaceable>[0|1]</replaceable></filename>.
To swap the contents atomically, if the current version is
<literal>0</literal>, we create <filename
class='directory'>/ostree/boot.1</filename>, populate it with
the new contents, then atomically swap the symbolic link. Finally,
the old contents can be garbage collected at any point.
</para>
</simplesect>
<simplesect id="ostree-bootversion">
<title>The /ostree/boot directory</title>
<para>
However, we want to optimize for the case where the set of
kernel/initramfs pairs is the same between both the old and
new deployment lists. This happens when doing an upgrade that
does not include the kernel; think of a simple translation
update. OSTree optimizes for this case because on some
systems <filename class='directory'>/boot</filename> may be on
a separate medium such as flash storage not optimized for
significant amounts of write traffic.
</para>
<para>
To implement this, OSTree also maintains the directory
<filename
class='directory'>/ostree/boot.<replaceable>bootversion</replaceable></filename>,
which is a set of symbolic links to the deployment
directories. The <replaceable>bootversion</replaceable> here
must match the version of <filename
class='directory'>/boot</filename>. However, in order to
allow atomic transitions of <emphasis>this</emphasis>
directory, this is also a swapped directory, so just like
<filename class='directory'>/boot</filename>, it has a version
of <literal>0</literal> or <literal>1</literal> appended.
</para>
<para>
Each bootloader entry has a special <literal>ostree=</literal>
argument which refers to one of these symbolic links. This is
parsed at runtime in the initramfs.
</para>
</simplesect>
</chapter>
</part>


@ -1,158 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" [
<!ENTITY version SYSTEM "../version.xml">
]>
<part id="deployment">
<title>Deployments</title>
<chapter id="deployment-intro">
<title>Overview</title>
<para>
Built on top of the OSTree versioning filesystem core is a layer
that knows how to deploy, parallel install, and manage Unix-like
operating systems (accessible via <command>ostree
admin</command>). The core content of these operating systems
are treated as read-only, but they transparently share storage.
</para>
<para>
A deployment is physically located at a path of the form
<filename
class='directory'>/ostree/deploy/<replaceable>osname</replaceable>/deploy/<replaceable>checksum</replaceable></filename>.
OSTree is designed to boot directly into exactly one deployment
at a time; each deployment is intended to be a target for
<literal>chroot()</literal> or equivalent.
</para>
</chapter>
<chapter id="deployment-osname">
<title>"osname": Group of deployments that share /var</title>
<para>
Each deployment is grouped in exactly one "osname". From
above, you can see that an osname is physically represented in
the <filename
class='directory'>/ostree/deploy/<replaceable>osname</replaceable></filename>
directory. For example, OSTree can allow parallel installing
Debian in <filename
class='directory'>/ostree/deploy/debian</filename> and Red Hat
Enterprise Linux in <filename
class='directory'>/ostree/deploy/rhel</filename> (subject to
operating system support, present released versions of these
operating systems may not support this).
</para>
<para>
Each osname has exactly one copy of the traditional Unix
<filename class='directory'>/var</filename>, stored physically
in <filename
class='directory'>/ostree/deploy/<replaceable>osname</replaceable>/var</filename>.
OSTree provides support tools for <command>systemd</command>
to create a Linux bind mount that ensures the booted
deployment sees the shared copy of <filename
class='directory'>/var</filename>.
</para>
<para>
OSTree does not touch the contents of <filename
class='directory'>/var</filename>. Operating system components
such as daemon services are required to create any directories
they require there at runtime (e.g. <filename
class='directory'>/var/cache/<replaceable>daemonname</replaceable></filename>),
and to manage upgrading data formats inside those directories.
</para>
</chapter>
<chapter id="deployment-contents">
<title>Contents of a deployment</title>
<para>
A deployment begins with a specific commit (represented as a
SHA256 hash) in the OSTree repository in <filename
class='directory'>/ostree/repo</filename>. This commit refers
to a filesystem tree that represents the underlying basis of a
deployment. For short, we will call this the "tree", to
distinguish it from the concept of a deployment.
</para>
<para>
First, the tree must include a kernel stored as <filename
class='directory'>/boot/vmlinuz-<replaceable>checksum</replaceable></filename>.
The checksum should be a SHA256 hash of the kernel contents;
it must be pre-computed before storing the kernel in the
repository. Optionally, the tree can contain an initramfs,
stored as <filename
class='directory'>/boot/initramfs-<replaceable>checksum</replaceable></filename>.
If this exists, the checksum must include both the kernel and
initramfs contents. OSTree will use this to determine which
kernels are shared. The rationale for this is to avoid
computing checksums on the client by default.
</para>
<para>
The deployment should not have a traditional UNIX <filename
class='directory'>/etc</filename>; instead, it should include
<filename class='directory'>/usr/etc</filename>. This is the
"default configuration". When OSTree creates a deployment, it
performs a 3-way merge using the <emphasis>old</emphasis>
default configuration, the active system's <filename
class='directory'>/etc</filename>, and the new default
configuration. In the final filesystem tree for a deployment
then, <filename class='directory'>/etc</filename> is a regular
writable directory.
</para>
<para>
Besides the exceptions of <filename
class='directory'>/var</filename> and <filename
class='directory'>/etc</filename> then, the rest of the
contents of the tree are checked out as hard links into the
repository. It's strongly recommended that operating systems
ship all of their content in <filename
class='directory'>/usr</filename>, but this is not a hard
requirement.
</para>
<para>
Finally, a deployment may have a <filename
class='extension'>.origin</filename> file, stored next to its
directory. This file tells <command>ostree admin
upgrade</command> how to upgrade it. At the moment, OSTree only
supports upgrading a single refspec. However, in the future
OSTree may support a syntax for composing layers of trees, for
example.
</para>
</chapter>
<chapter id="managing-boot">
<title>The system /boot</title>
<para>
While OSTree parallel installs deployments cleanly inside the
<filename class='directory'>/ostree</filename> directory,
ultimately it has to control the system's <filename
class='directory'>/boot</filename> directory. The way this
works is via the <ulink
url="http://www.freedesktop.org/wiki/Specifications/BootLoaderSpec/">boot
loader specification</ulink>, which is a standard for
bootloader-independent drop-in configuration files.
</para>
<para>
When a tree is deployed, it will have a configuration file
generated of the form <filename
class='directory'>/boot/loader/entries/ostree-<replaceable>osname</replaceable>-<replaceable>checksum</replaceable>.<replaceable>serial</replaceable>.conf</filename>.
This configuration file will include a special
<literal>ostree=</literal> kernel argument that allows the
initramfs to find (and <literal>chroot()</literal> into) the
specified deployment.
</para>
<para>
At present, not all bootloaders implement the BootLoaderSpec,
so OSTree contains code for some of these to regenerate native
config files (such as
<filename>/boot/syslinux/syslinux.conf</filename> based on the
entries.
</para>
</chapter>
</part>


@ -7,16 +7,10 @@
]>
<book id="index">
<bookinfo>
-<title>OSTree Manual</title>
+<title>OSTree API references</title>
<releaseinfo>for OSTree &version;</releaseinfo>
</bookinfo>
-<xi:include href="overview.xml"/>
-<xi:include href="repo.xml"/>
-<xi:include href="deployment.xml"/>
-<xi:include href="atomic-upgrades.xml"/>
-<xi:include href="adapting-existing.xml"/>
<chapter xml:id="reference">
<title>API Reference</title>
<xi:include href="xml/libostree-core.xml"/>


@ -1,155 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" [
<!ENTITY version SYSTEM "../version.xml">
]>
<part id="overview">
<title>OSTree Overview</title>
<chapter id="ostree-intro">
<title>Introduction</title>
<para>
OSTree is an upgrade system for Linux-based operating systems that
performs atomic upgrades of complete filesystem trees. It is
not a package system; rather, it is intended to complement them.
A primary model is composing packages on a server, and then
replicating them to clients.
</para>
<para>
The underlying architecture might be summarized as "git for
operating system binaries". It operates in userspace, and will
work on top of any Linux filesystem. At its core is a git-like
content-addressed object store, and layered on top of that is
bootloader configuration, management of
<filename>/etc</filename>, and other functions to perform an
upgrade beyond just replicating files.
</para>
<para>
You can use OSTree standalone in the pure replication model,
but another approach is to add a package manager on top,
thus creating a hybrid tree/package system.
</para>
</chapter>
<chapter id="ostree-package-comparison">
<title>Comparison with "package managers"</title>
<para>
Because OSTree is designed for deploying core operating
systems, a comparison with traditional "package managers" such
as dpkg and rpm is illustrative. Packages are traditionally
composed of partial filesystem trees with metadata and scripts
attached, and these are dynamically assembled on the client
machine, after a process of dependency resolution.
</para>
<para>
In contrast, OSTree only supports recording and deploying
<emphasis>complete</emphasis> (bootable) filesystem trees. It
has no built-in knowledge of how a given filesystem tree was
generated or the origin of individual files, or dependencies,
descriptions of individual components. Put another way, OSTree
only handles delivery and deployment; you will likely still want
to include inside each tree metadata about the individual
components that went into the tree. For example, a system
administrator may want to know what version of OpenSSL was
included in your tree, so you should support the equivalent of
<command>rpm -q</command> or <command>dpkg -L</command>.
</para>
<para>
The OSTree core emphasizes replicating read-only OS trees via
HTTP, and where the OS includes (if desired) an entirely
separate mechanism to install applications, stored in <filename
class='directory'>/var</filename> if they're system global, or
<filename class='directory'>/home</filename> for per-user
application installation. An example application mechanism is
<ulink url="http://docker.io/">Docker</ulink>.
</para>
<para>
However, it is entirely possible to use OSTree underneath a
package system, where the contents of <filename
class='directory'>/usr</filename> are computed on the client.
For example, when installing a package, rather than changing the
currently running filesystem, the package manager could assemble
a new filesystem tree that layers the new packages on top of a
base tree, record it in the local OSTree repository, and then
set it up for the next boot. To support this model, OSTree
provides an (introspectable) C shared library.
</para>
</chapter>
<chapter id="ostree-block-comparison">
<title>Comparison with block/image replication</title>
<para>
OSTree shares some similarity with "dumb" replication and
stateless deployments, such as the model common in "cloud"
deployments where nodes are booted from an (effectively)
readonly disk, and user data is kept on different volumes.
The advantage of "dumb" replication, shared by both OSTree and
the cloud model, is that it's <emphasis>reliable</emphasis>
and <emphasis>predictable</emphasis>.
</para>
<para>
But unlike many default image-based deployments, OSTree supports
exactly two persistent writable directories that are preserved
across upgrades: <literal>/etc</literal> and
<literal>/var</literal>.
</para>
<para>
Because OSTree operates at the Unix filesystem layer, it works
on top of any filesystem or block storage layout; it's possible
to replicate a given filesystem tree from an OSTree repository
into plain ext4, BTRFS, XFS, or in general any Unix-compatible
filesystem that supports hard links. Note: OSTree will
transparently take advantage of some BTRFS features if deployed
on it.
</para>
</chapter>
<chapter id="ostree-atomic-parallel-installation">
<title>Atomic transitions between parallel-installable read-only filesystem trees</title>
<para>
Another deeply fundamental difference between both package
managers and image-based replication is that OSTree is
designed to parallel-install <emphasis>multiple
versions</emphasis> of multiple
<emphasis>independent</emphasis> operating systems. OSTree
relies on a new toplevel <filename
class='directory'>ostree</filename> directory; it can in fact
parallel install inside an existing OS or distribution
occupying the physical <filename
class='directory'>/</filename> root.
</para>
<para>
On each client machine, there is an OSTree repository stored
in <filename class='directory'>/ostree/repo</filename>, and a
set of "deployments" stored in <filename
class='directory'>/ostree/deploy/<replaceable>OSNAME</replaceable>/<replaceable>CHECKSUM</replaceable></filename>.
Each deployment is primarily composed of a set of hardlinks
into the repository. This means each version is deduplicated;
an upgrade process only costs disk space proportional to the
new files, plus some constant overhead.
</para>
<para>
The model OSTree emphasizes is that the OS read-only content
is kept in the classic Unix <filename
class='directory'>/usr</filename>; it comes with code to
create a Linux read-only bind mount to prevent inadvertent
corruption. There is exactly one <filename
class='directory'>/var</filename> writable directory shared
between each deployment for a given OS. The OSTree core code
does not touch content in this directory; it is up to the code
in each operating system for how to manage and upgrade state.
</para>
<para>
Finally, each deployment has its own writable copy of the
configuration store <filename
class='directory'>/etc</filename>. On upgrade, OSTree will
perform a basic 3-way diff, and apply any local changes to the
new copy, while leaving the old untouched.
</para>
</chapter>
</part>


@ -1,127 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" [
<!ENTITY version SYSTEM "../version.xml">
]>
<part id="repository">
<title>Anatomy of an OSTree repository</title>
<chapter id="repository-intro">
<title>Core object types and data model</title>
<para>
OSTree is deeply inspired by git; the core layer is a userspace
content-addressed versioning filesystem. It is worth taking
some time to familiarize yourself with <ulink
url="http://git-scm.com/book/en/Git-Internals">Git
Internals</ulink>, as this section will assume some knowledge of
how git works.
</para>
<para>
Its object types are similar to git; it has commit objects and
content objects. Git has "tree" objects, whereas OSTree splits
them into "dirtree" and "dirmeta" objects. But unlike git,
OSTree's checksums are SHA256. And most crucially, its content
objects include uid, gid, and extended attributes (but still no
timestamps).
</para>
<simplesect id="commits">
<title>Commit objects</title>
<para>
A commit object contains metadata such as a timestamp, a log
message, and most importantly, a reference to a
dirtree/dirmeta pair of checksums which describe the root
directory of the filesystem.
</para>
<para>
Also like git, each commit in OSTree can have a parent. It is
designed to store a history of your binary builds, just like git
stores a history of source control. However, OSTree also makes
it easy to delete data, under the assumption that you can
regenerate it from source code.
</para>
</simplesect>
<simplesect id="dirtree">
<title>Dirtree objects</title>
<para>
A dirtree contains a sorted array of (filename, checksum)
pairs for content objects, and a second sorted array of
(filename, dirtree checksum, dirmeta checksum), which are
subdirectories.
</para>
</simplesect>
<simplesect id="dirmeta">
<title>Dirmeta objects</title>
<para>
In git, tree objects contain the metadata such as permissions
for their children. But OSTree splits this into a separate
object to avoid duplicating extended attribute listings.
</para>
</simplesect>
<simplesect id="content">
<title>Content objects</title>
<para>
Unlike the first three object types which are metadata,
designed to be <literal>mmap()ed</literal>, the content object
has a separate internal header and payload sections. The
header contains uid, gid, mode, and symbolic link target (for
symlinks), as well as extended attributes. After the header,
for regular files, the content follows.
</para>
</simplesect>
</chapter>
<chapter id="repository-types">
<title>Repository types and locations</title>
<para>
Also unlike git, an OSTree repository can be in one of two
separate modes: <literal>bare</literal> and
<literal>archive-z2</literal>. A bare repository is one where
content files are just stored as regular files; it's designed to
be the source of a "hardlink farm", where each operating system
checkout is merely links into it. If you want to store files
owned by e.g. root in this mode, you must run OSTree as root.
In contrast, the <literal>archive-z2</literal> mode is designed
for serving via plain HTTP. Like tar files, it can be
read/written by non-root users.
</para>
<para>
On an OSTree-deployed system, the "system repository" is
<filename class='directory'>/ostree/repo</filename>. It can be
read by any uid, but only written by root. Unless the
<literal>--repo</literal> argument is given to the
<command>ostree</command> command, it will operate on the system
repository.
</para>
</chapter>
<chapter id="refs">
<title>Refs</title>
<para>
Like git, OSTree uses "refs", which are text files that point
to particular commits (i.e. filesystem trees). For example, the
gnome-ostree operating system creates trees named like
<literal>gnome-ostree/buildmaster/x86_64-runtime</literal> and
<literal>gnome-ostree/buildmaster/x86_64-devel-debug</literal>.
These two refs point to two different generated filesystem
trees. In this example, the "runtime" tree contains just enough
to run a basic GNOME system, and "devel-debug" contains all of
the developer tools.
</para>
<para>
The <command>ostree</command> command supports a simple syntax using the
caret <literal>^</literal> to refer to the parent of a given
commit. For example,
<literal>gnome-ostree/buildmaster/x86_64-runtime^</literal>
refers to the previous build, and
<literal>gnome-ostree/buildmaster/x86_64-runtime^^</literal>
refers to the one before that.
</para>
</chapter>
</part>

docs/CONTRIBUTING.md (new file, 121 lines)

@ -0,0 +1,121 @@
Submitting patches
------------------
You can:
1. Send mail to ostree-list@gnome.org, with the patch attached
1. Submit a pull request against https://github.com/GNOME/ostree
1. Attach them to https://bugzilla.gnome.org/
Please look at "git log" and match the commit log style.
Running the test suite
----------------------
Currently, ostree uses https://wiki.gnome.org/GnomeGoals/InstalledTests
To run just ostree's tests:
./configure ... --enable-installed-tests
gnome-desktop-testing-runner -p 0 ostree/
Also, there is a regular:
make check
That runs a different set of tests.
Coding style
------------
Indentation is GNU. Files should start with the appropriate mode lines.
Use GCC `__attribute__((cleanup))` wherever possible. If interacting
with a third party library, try defining local cleanup macros.
Use GError and GCancellable where appropriate.
Prefer returning `gboolean` to signal success/failure, and have output
values as parameters.
Prefer linear control flow inside functions (aside from standard
loops). In other words, avoid "early exits" or use of `goto` besides
`goto out;`.
This is an example of an "early exit":
static gboolean
myfunc (...)
{
gboolean ret = FALSE;
/* some code */
/* some more code */
if (condition)
return FALSE;
/* some more code */
ret = TRUE;
out:
return ret;
}
If you must shortcut, use:
if (condition)
{
ret = TRUE;
goto out;
}
A consequence of this restriction is that you are encouraged to avoid
deep nesting of loops or conditionals. Create internal static helper
functions, particularly inside loops. For example, rather than:
while (condition)
{
/* some code */
if (condition)
{
for (i = 0; i < somevalue; i++)
{
if (condition)
{
/* deeply nested code */
}
/* more nested code */
}
}
}
Instead do this:
static gboolean
helperfunc (..., GError **error)
{
if (condition)
{
/* deeply nested code */
}
/* more nested code */
return ret;
}
while (condition)
{
/* some code */
if (!condition)
continue;
for (i = 0; i < somevalue; i++)
{
if (!helperfunc (..., i, error))
goto out;
}
}
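Putting the pieces above together, here is a small, self-contained sketch (not taken from the OSTree sources; the helper and its behavior are invented purely for illustration) of the preferred shape: `gboolean` return, output parameter, `GError`, `goto out;`, and a GLib `g_autofree` variable, which uses `__attribute__((cleanup))` under the hood:
```
#include <gio/gio.h>

/* Hypothetical helper: load a file and return its size via an output
 * parameter.  Success/failure is signalled by the gboolean return value
 * plus a GError, and cleanup happens automatically via g_autofree. */
static gboolean
example_get_file_size (GFile         *path,
                       guint64       *out_size,
                       GCancellable  *cancellable,
                       GError       **error)
{
  gboolean ret = FALSE;
  g_autofree char *contents = NULL;   /* freed via __attribute__((cleanup)) */
  gsize len = 0;

  if (!g_file_load_contents (path, cancellable, &contents, &len, NULL, error))
    goto out;

  /* more work could happen here; any failure jumps to "out" */

  ret = TRUE;
  *out_size = (guint64) len;
 out:
  return ret;
}
```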

docs/index.md (symbolic link, 1 line)

@ -0,0 +1 @@
../README.md


@ -0,0 +1,159 @@
# Adapting existing mainstream distributions
## System layout
First, OSTree encourages systems to implement
[UsrMove](http://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/).
This is simply to avoid the need for more bind mounts. By default
OSTree's dracut hook creates a read-only bind mount over `/usr`; you
can of course generate individual bind-mounts for `/bin`, all the
`/lib` variants, etc. So it is not intended to be a hard requirement.
Remember, because by default the system is booted into a `chroot`
equivalent, there has to be some way to refer to the actual physical
root filesystem. Therefore, your operating system tree should contain
an empty `/sysroot` directory; at boot time, OSTree will make this a
bind mount to the physical / root directory. There is precedent for
this name in the initramfs context. You should furthermore make a
toplevel symbolic link `/ostree` which points to `/sysroot/ostree`, so
that the OSTree tool at runtime can consistently find the system data
regardless of whether it's operating on a physical root or inside a
deployment.
Because OSTree only preserves `/var` across upgrades (each
deployment's chroot directory will be garbage collected
eventually), you will need to choose how to handle other
toplevel writable directories specified by the [Filesystem Hierarchy Standard](http://www.pathname.com/fhs/).
Your operating system may of course choose
not to support some of these such as `/usr/local`, but following is the
recommended set:
- `/home` to `/var/home`
- `/opt` to `/var/opt`
- `/srv` to `/var/srv`
- `/root` to `/var/roothome`
- `/usr/local` to `/var/local`
- `/mnt` to `/var/mnt`
- `/tmp` to `/sysroot/tmp`
Furthermore, since `/var` is empty by default, your operating system
will need to dynamically create the *targets* of
these at boot. A good way to do this is using `systemd-tmpfiles`, if
your OS uses systemd. For example:
```
d /var/log/journal 0755 root root -
L /var/home - - - - ../sysroot/home
d /var/opt 0755 root root -
d /var/srv 0755 root root -
d /var/roothome 0700 root root -
d /var/usrlocal 0755 root root -
d /var/usrlocal/bin 0755 root root -
d /var/usrlocal/etc 0755 root root -
d /var/usrlocal/games 0755 root root -
d /var/usrlocal/include 0755 root root -
d /var/usrlocal/lib 0755 root root -
d /var/usrlocal/man 0755 root root -
d /var/usrlocal/sbin 0755 root root -
d /var/usrlocal/share 0755 root root -
d /var/usrlocal/src 0755 root root -
d /var/mnt 0755 root root -
d /run/media 0755 root root -
```
Particularly note here the double indirection of `/home`. By default,
each deployment will share the global toplevel `/home` directory on
the physical root filesystem. It is then up to higher levels of
management tools to keep `/etc/passwd` or
equivalent synchronized between operating systems. Each deployment
can easily be reconfigured to have its own home directory set simply
by making `/var/home` a real directory.
## Booting and initramfs technology
OSTree comes with optional dracut+systemd integration code that parses
the `ostree=` kernel command line argument in the initramfs, and then
sets up the read-only bind mount on `/usr`, a bind mount on the
deployment's `/sysroot` to the physical `/`, and then finally uses
`mount(MS_MOVE)` to make the deployment root appear to be the root
filesystem before telling systemd to switch root.
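As a rough illustration of the first step (a sketch only, not the actual dracut module shipped with OSTree), finding the `ostree=` argument on the kernel command line might look like:
```
#include <stdio.h>
#include <string.h>

/* Sketch only: locate the ostree= kernel argument in the initramfs.
 * The real integration code additionally sets up the bind mounts and
 * the mount(MS_MOVE) described above before switching root. */
int
main (void)
{
  char cmdline[4096];
  FILE *f = fopen ("/proc/cmdline", "r");
  if (!f || !fgets (cmdline, sizeof cmdline, f))
    return 1;
  fclose (f);

  char *arg = strstr (cmdline, "ostree=");
  if (!arg)
    return 1;
  arg += strlen ("ostree=");
  arg[strcspn (arg, " \n")] = '\0';

  /* e.g. "/ostree/boot.1/exampleos/<checksum>/0" */
  printf ("deployment: %s\n", arg);
  return 0;
}
```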
If you are not using dracut or systemd, using OSTree should still be
possible, but you will have to write the integration code. Patches to
support other initramfs technologies and init systems, if sufficiently
clean, will likely be accepted upstream.
A further specific note regarding `sysvinit`: OSTree used to support
recording device files such as the `/dev/initctl` FIFO, but no longer
does. It's recommended to just patch your initramfs to create this at
boot.
## /usr/lib/passwd
Unlike traditional package systems, OSTree trees contain *numeric* uid
and gids. Furthermore, it does not have a `%post` type mechanism
where `useradd` could be invoked. In order to ship an OS that
contains both system users and users dynamically created on client
machines, you will need to choose a solution for `/etc/passwd`. The
core problem is that if you add a user to the system for a daemon, the
OSTree upgrade process for `/etc` will simply notice that because
`/etc/passwd` differs from the previous default, it will keep the
modified config file, and your new OS user will not be visible. The
solution chosen for the [Gnome Continuous](https://live.gnome.org/Projects/GnomeContinuous) operating
system is to create `/usr/lib/passwd`, and to include a NSS module
[nss-altfiles](https://github.com/aperezdc/nss-altfiles) which
instructs glibc to read from it. Then, the build system places all
system users there, freeing up `/etc/passwd` to be purely a database
of local users. See also a more recent effort from [Systemd Stateless](http://0pointer.de/blog/projects/stateless.html).
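To make the split concrete, here is a minimal sketch (illustrative only; real consumers go through glibc NSS with nss-altfiles configured, rather than opening the file directly) that enumerates the system users shipped in `/usr/lib/passwd` using glibc's `fgetpwent()`:
```
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <pwd.h>

int
main (void)
{
  FILE *f = fopen ("/usr/lib/passwd", "r");
  if (!f)
    return 1;

  /* Each entry uses the ordinary passwd(5) format, just stored in /usr. */
  struct passwd *pw;
  while ((pw = fgetpwent (f)) != NULL)
    printf ("%s uid=%u gid=%u\n", pw->pw_name,
            (unsigned) pw->pw_uid, (unsigned) pw->pw_gid);

  fclose (f);
  return 0;
}
```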
## Adapting existing package managers
The largest endeavor is likely to be redesigning your distribution's
package manager to be on top of OSTree, particularly if you want to
keep compatibility with the "old way" of installing into the physical
`/`. This section will use examples from both `dpkg` and `rpm` as the
author has familiarity with both; but the abstract concepts should
apply to most traditional package managers.
There are many levels of possible integration; initially, we will
describe the most naive implementation which is the simplest but also
the least efficient. We will assume here that the admin is booted
into an OSTree-enabled system, and wants to add a set of packages.
Many package managers store their state in `/var`; but since in the
OSTree model that directory is shared between independent versions,
the package database must first be found in the per-deployment `/usr`
directory. It becomes read-only; remember, all upgrades involve
constructing a new filesystem tree, so your package manager will also
need to create a copy of its database. Most likely, if you want to
continue supporting non-OSTree deployments, simply have your package
manager fall back to the legacy `/var` location if the one in `/usr`
is not found.
To install a set of new packages (without removing any existing ones),
enumerate the set of packages in the currently booted deployment, and
perform dependency resolution to compute the complete set of new
packages. Download and unpack these new packages to a temporary
directory.
Now, because we are merely installing new packages and not
removing anything, we can make the major optimization of reusing
our existing filesystem tree, and merely
*layering* the composed filesystem tree of
these new packages on top. A command like this:
```
ostree commit -b osname/releasename/description
--tree=ref=$osname/releasename/description
--tree=dir=/var/tmp/newpackages.13A8D0/
```
will create a new commit in the `$osname/releasename/description`
branch. The OSTree SHA256 checksum of all the files in
`/var/tmp/newpackages.13A8D0/` will be computed, but we will not
re-checksum the present existing tree. In this layering model,
earlier directories will take precedence, but files in later layers
will silently override earlier layers.
Then to actually deploy this tree for the next boot:
`ostree admin deploy osname/releasename/description`


@ -0,0 +1,116 @@
# Atomic Upgrades
## You can turn off the power anytime you want...
OSTree is designed to implement fully atomic and safe upgrades;
more generally, atomic transitions between lists of bootable
deployments. If the system crashes or you pull the power, you
will have either the old system, or the new one.
## Simple upgrades via HTTP
First, the most basic model OSTree supports is one where it replicates
pre-generated filesystem trees from a server over HTTP, tracking
exactly one ref, which is stored in the `.origin` file for the
deployment. The command `ostree admin upgrade`
implements this.
To begin a simple upgrade, OSTree fetches the contents of the ref from
the remote server. Suppose we're tracking a ref named
`exampleos/buildmaster/x86_64-runtime`. OSTree fetches the URL
`http://$example.com/repo/refs/exampleos/buildmaster/x86_64-runtime`,
which contains a SHA256 checksum. This determines the tree to deploy,
and `/etc` will be merged from currently booted tree.
If we do not have this commit, then we perform a pull process.
At present (without static deltas), this involves quite simply just
fetching each individual object that we do not have, asynchronously.
Put in other words, we only download changed files (zlib-compressed).
Each object has its checksum validated and is stored in `/ostree/repo/objects/`.
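For a sense of what "stored in `/ostree/repo/objects/`" looks like on disk, here is a small sketch; it assumes the usual loose-object layout in which the first two hex characters of the checksum name a subdirectory and the suffix encodes the object type (the checksum shown is made up and truncated):
```
#include <stdio.h>

/* Map (checksum, object type) to a loose-object path, e.g.
 * objects/ab/cdef....commit -- an assumption for illustration,
 * matching the layout described for the repository. */
static void
object_path (const char *checksum, const char *objtype,
             char *buf, size_t len)
{
  snprintf (buf, len, "/ostree/repo/objects/%.2s/%s.%s",
            checksum, checksum + 2, objtype);
}

int
main (void)
{
  char path[512];
  object_path ("8d3a1f09c6...", "commit", path, sizeof path); /* truncated */
  printf ("%s\n", path);
  return 0;
}
```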
Once the pull is complete, we have all the objects locally
we need to perform a deployment.
## Upgrades via external tools (e.g. package managers)
As mentioned in the introduction, OSTree is also designed to allow a
model where filesystem trees are computed on the client. It is
completely agnostic as to how those trees are generated; they could be
computed with traditional packages, packages with post-deployment
scripts on top, or built by developers directly from revision control
locally, etc.
At a practical level, most package managers today (`dpkg` and `rpm`)
operate "live" on the currently booted filesystem. The way they could
work with OSTree is instead to take the list of installed packages in
the currently booted tree, and compute a new filesystem from that. A
later chapter describes in more detail how this could work:
[Adapting Existing Systems](adapting-existing.md).
For the purposes of this section, let's assume that we have a
newly generated filesystem tree stored in the repo (which shares
storage with the existing booted tree). We can then move on to
checking it back out of the repo into a deployment.
## Assembling a new deployment directory
Given a commit to deploy, OSTree first allocates a directory for
it. This is of the form `/boot/loader/entries/ostree-$osname-$checksum.$serial.conf`.
The $serial is normally 0, but if a
given commit is deployed more than once, it will be incremented.
This is supported because the previous deployment may have
configuration in `/etc`
that we do not want to use or overwrite.
Now that we have a deployment directory, a 3-way merge is
performed between the (by default) currently booted deployment's
`/etc`, its default
configuration, and the new deployment (based on its `/usr/etc`).
## Atomically swapping boot configuration
At this point, a new deployment directory has been created as a
hardlink farm; the running system is untouched, and the bootloader
configuration is untouched. We want to add this deployment to the
"deployment list".
To support a more general case, OSTree supports atomic transitioning
between arbitrary sets of deployments, with the restriction that the
currently booted deployment must always be in the new set. In the
normal case, we have exactly one deployment, which is the booted one,
and we want to add the new deployment to the list. A more complex
command might allow creating 100 deployments as part of one atomic
transaction, so that one can set up an automated system to bisect
across them.
## The bootversion
OSTree allows swapping between boot configurations by implementing the
"swapped directory pattern" in `/boot`. This means it is a symbolic
link to one of two directories `/ostree/boot.[0|1]`. To swap the
contents atomically, if the current version is `0`, we create
`/ostree/boot.1`, populate it with the new contents, then atomically
swap the symbolic link. Finally, the old contents can be garbage
collected at any point.
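The atomicity comes from the fact that a symbolic link can be replaced in a single `rename()`. A minimal sketch of the swap (paths follow the description above; this is not the actual OSTree code, and error handling is kept minimal):
```
#include <stdio.h>
#include <unistd.h>

/* Swap a symlink to point at a newly populated directory.  rename(2) is
 * atomic, so readers see either the old target or the new one, never a
 * half-written state. */
static int
swap_symlink (const char *link_path, const char *new_target)
{
  char tmp[4096];
  snprintf (tmp, sizeof tmp, "%s.tmp", link_path);
  if (symlink (new_target, tmp) < 0)
    return -1;
  if (rename (tmp, link_path) < 0)
    {
      unlink (tmp);
      return -1;
    }
  return 0;
}

int
main (void)
{
  /* Following the example in the text: once /ostree/boot.1 is fully
   * populated, flip the link that previously pointed at boot.0. */
  return swap_symlink ("/boot", "/ostree/boot.1") == 0 ? 0 : 1;
}
```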
## The /ostree/boot directory
However, we want to optimize for the case where the set of
kernel/initramfs pairs is the same between both the old and new
deployment lists. This happens when doing an upgrade that does not
include the kernel; think of a simple translation update. OSTree
optimizes for this case because on some systems `/boot` may be on a
separate medium such as flash storage not optimized for significant
amounts of write traffic.
To implement this, OSTree also maintains the directory
`/ostree/boot.$bootversion`, which is a set
of symbolic links to the deployment directories. The
$bootversion here must match the version of
`/boot`. However, in order to allow atomic transitions of
*this* directory, this is also a swapped directory,
so just like `/boot`, it has a version of `0` or `1` appended.
Each bootloader entry has a special `ostree=` argument which refers to
one of these symbolic links. This is parsed at runtime in the
initramfs.

docs/manual/deployment.md (new file, 90 lines)

@ -0,0 +1,90 @@
# Deployments
## Overview
Built on top of the OSTree versioning filesystem core is a layer
that knows how to deploy, parallel install, and manage Unix-like
operating systems (accessible via `ostree admin`). The core content of these operating systems
is treated as read-only, but they transparently share storage.
A deployment is physically located at a path of the form
`/ostree/deploy/$osname/deploy/$checksum`.
OSTree is designed to boot directly into exactly one deployment
at a time; each deployment is intended to be a target for
`chroot()` or equivalent.
### "osname": Group of deployments that share /var</title>
Each deployment is grouped in exactly one "osname". From above, you
can see that an osname is physically represented in the
`/ostree/deploy/$osname` directory. For example, OSTree can allow
parallel installing Debian in `/ostree/deploy/debian` and Red Hat
Enterprise Linux in `/ostree/deploy/rhel` (subject to operating system
support; presently released versions of these operating systems may not
support this).
Each osname has exactly one copy of the traditional Unix `/var`,
stored physically in `/ostree/deploy/$osname/var`. OSTree provides
support tools for `systemd` to create a Linux bind mount that ensures
the booted deployment sees the shared copy of `/var`.
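Conceptually the bind mount amounts to something like the following sketch (in practice it is done by generated systemd units rather than hand-written code; the osname here is made up):
```
#include <stdio.h>
#include <sys/mount.h>

int
main (void)
{
  /* Bind the osname's single shared /var over the booted deployment's /var. */
  if (mount ("/ostree/deploy/exampleos/var", "/var", NULL, MS_BIND, NULL) < 0)
    {
      perror ("mount");
      return 1;
    }
  return 0;
}
```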
OSTree does not touch the contents of `/var`. Operating system
components such as daemon services are required to create any
directories they require there at runtime
(e.g. `/var/cache/$daemonname`), and to manage upgrading data formats
inside those directories.
### Contents of a deployment
A deployment begins with a specific commit (represented as a
SHA256 hash) in the OSTree repository in `/ostree/repo`. This commit refers
to a filesystem tree that represents the underlying basis of a
deployment. For short, we will call this the "tree", to
distinguish it from the concept of a deployment.
First, the tree must include a kernel stored as
`/boot/vmlinuz-$checksum`. The checksum should be a SHA256 hash of
the kernel contents; it must be pre-computed before storing the kernel
in the repository. Optionally, the tree can contain an initramfs,
stored as `/boot/initramfs-$checksum`. If this exists, the checksum
must include both the kernel and initramfs contents. OSTree will use
this to determine which kernels are shared. The rationale for this is
to avoid computing checksums on the client by default.
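A sketch of how such a checksum could be computed with GLib (the exact byte ordering here, kernel contents followed by initramfs contents, is an assumption made for illustration):
```
#include <glib.h>

static char *
compute_boot_checksum (const char *kernel_path,
                       const char *initramfs_path,  /* may be NULL */
                       GError    **error)
{
  char *ret = NULL;
  GChecksum *checksum = g_checksum_new (G_CHECKSUM_SHA256);
  g_autofree char *kernel = NULL;
  g_autofree char *initramfs = NULL;
  gsize len = 0;

  if (!g_file_get_contents (kernel_path, &kernel, &len, error))
    goto out;
  g_checksum_update (checksum, (const guchar *) kernel, len);

  if (initramfs_path != NULL)
    {
      if (!g_file_get_contents (initramfs_path, &initramfs, &len, error))
        goto out;
      g_checksum_update (checksum, (const guchar *) initramfs, len);
    }

  /* The hex string becomes the $checksum in /boot/vmlinuz-$checksum. */
  ret = g_strdup (g_checksum_get_string (checksum));
 out:
  g_checksum_free (checksum);
  return ret;
}
```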
The deployment should not have a traditional UNIX `/etc`; instead, it
should include `/usr/etc`. This is the "default configuration". When
OSTree creates a deployment, it performs a 3-way merge using the
*old* default configuration, the active system's
`/etc`, and the new default configuration. In the final filesystem
tree for a deployment then, `/etc` is a regular writable directory.
Besides the exceptions of `/var` and `/etc` then, the rest of the
contents of the tree are checked out as hard links into the
repository. It's strongly recommended that operating systems ship all
of their content in `/usr`, but this is not a hard requirement.
Finally, a deployment may have a `.origin` file, stored next to its
directory. This file tells `ostree admin upgrade` how to upgrade it.
At the moment, OSTree only supports upgrading a single refspec.
However, in the future OSTree may support a syntax for composing
layers of trees, for example.
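As a rough illustration (the exact keys are an assumption, and the refspec is hypothetical), an origin file tracking a single refspec might look like:

    [origin]
    refspec=exampleos:exampleos/buildmaster/x86_64-runtime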
### The system /boot
While OSTree parallel installs deployments cleanly inside the
`/ostree` directory, ultimately it has to control the system's `/boot`
directory. The way this works is via the
[Boot Loader Specification](http://www.freedesktop.org/wiki/Specifications/BootLoaderSpec),
which is a standard for bootloader-independent drop-in configuration
files.
When a tree is deployed, it will have a configuration file generated
of the form
`/boot/loader/entries/ostree-$osname-$checksum.$serial.conf`. This
configuration file will include a special `ostree=` kernel argument
that allows the initramfs to find (and `chroot()` into) the specified
deployment.
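For illustration only (all names and paths below are hypothetical), such a generated entry might look roughly like:

    title   ExampleOS (ostree)
    linux   /ostree/exampleos-$bootcsum/vmlinuz-$bootcsum
    initrd  /ostree/exampleos-$bootcsum/initramfs-$bootcsum
    options root=UUID=... rw ostree=/ostree/boot.1/exampleos/$bootcsum/0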
At present, not all bootloaders implement the Boot Loader Specification, so
OSTree contains code for some of these to regenerate native config
files (such as `/boot/syslinux/syslinux.conf`) based on the entries.

110
docs/manual/introduction.md Normal file
View File

@ -0,0 +1,110 @@
# OSTree Overview
## Introduction
OSTree is an upgrade system for Linux-based operating systems that
performs atomic upgrades of complete filesystem trees. It is
not a package system; rather, it is intended to complement them.
A primary model is composing packages on a server, and then
replicating them to clients.
The underlying architecture might be summarized as "git for
operating system binaries". It operates in userspace, and will
work on top of any Linux filesystem. At its core is a git-like
content-addressed object store, and layered on top of that is
bootloader configuration, management of
`/etc`, and other functions to perform an
upgrade beyond just replicating files.
You can use OSTree standalone in the pure replication model,
but another approach is to add a package manager on top,
thus creating a hybrid tree/package system.
## Comparison with "package managers"
Because OSTree is designed for deploying core operating
systems, a comparison with traditional "package managers" such
as dpkg and rpm is illustrative. Packages are traditionally
composed of partial filesystem trees with metadata and scripts
attached, and these are dynamically assembled on the client
machine, after a process of dependency resolution.
In contrast, OSTree only supports recording and deploying
*complete* (bootable) filesystem trees. It
has no built-in knowledge of how a given filesystem tree was
generated, the origin of individual files, dependencies, or
descriptions of individual components. Put another way, OSTree
only handles delivery and deployment; you will likely still want
to include inside each tree metadata about the individual
components that went into the tree. For example, a system
administrator may want to know what version of OpenSSL was
included in your tree, so you should support the equivalent of
`rpm -q` or `dpkg -L`.
The OSTree core emphasizes replicating read-only OS trees via
HTTP; the OS may include (if desired) an entirely separate
mechanism to install applications, stored in `/var` if they're system global, or
in `/home` for per-user application installation. An example
application mechanism is [Docker](http://docker.io/).
However, it is entirely possible to use OSTree underneath a
package system, where the contents of `/usr` are computed on the client.
For example, when installing a package, rather than changing the
currently running filesystem, the package manager could assemble
a new filesystem tree that layers the new packages on top of a
base tree, record it in the local OSTree repository, and then
set it up for the next boot. To support this model, OSTree
provides an (introspectable) C shared library.
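A hedged sketch of that flow using the command-line tools (branch name and paths are hypothetical; a real package manager would use the shared library instead):

    # Assemble a new tree that layers packages on top of a base, then commit it
    ostree --repo=/ostree/repo commit --branch=exampleos/local \
        --tree=dir=/path/to/assembled-tree
    # Set the new tree up as a deployment for the next boot
    ostree admin deploy exampleos/local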
## Comparison with block/image replication
OSTree shares some similarity with "dumb" replication and
stateless deployments, such as the model common in "cloud"
deployments where nodes are booted from an (effectively)
readonly disk, and user data is kept on separate volumes.
The advantage of "dumb" replication, shared by both OSTree and
the cloud model, is that it's *reliable*
and *predictable*.
But unlike many default image-based deployments, OSTree supports
exactly two persistent writable directories that are preserved across
upgrades: `/etc` and `/var`.
Because OSTree operates at the Unix filesystem layer, it works
on top of any filesystem or block storage layout; it's possible
to replicate a given filesystem tree from an OSTree repository
into plain ext4, BTRFS, XFS, or in general any Unix-compatible
filesystem that supports hard links. Note: OSTree will
transparently take advantage of some BTRFS features if deployed
on it.
## Atomic transitions between parallel-installable read-only filesystem trees
Another deeply fundamental difference from both package
managers and image-based replication is that OSTree is
designed to parallel-install *multiple versions* of multiple
*independent* operating systems. OSTree
relies on a new toplevel `ostree` directory; it can in fact
parallel install inside an existing OS or distribution
occupying the physical `/` root.
On each client machine, there is an OSTree repository stored
in `/ostree/repo`, and a set of "deployments" stored in `/ostree/deploy/$osname/deploy/$checksum`.
Each deployment is primarily composed of a set of hardlinks
into the repository. This means each version is deduplicated;
an upgrade process only costs disk space proportional to the
new files, plus some constant overhead.
The model OSTree emphasizes is that the OS read-only content
is kept in the classic Unix `/usr`; it comes with code to
create a Linux read-only bind mount to prevent inadvertent
corruption. There is exactly one `/var` writable directory shared
between each deployment for a given OS. The OSTree core code
does not touch content in this directory; it is up to the code
in each operating system to manage and upgrade state there.
Finally, each deployment has its own writable copy of the
configuration store `/etc`. On upgrade, OSTree will
perform a basic 3-way diff, and apply any local changes to the
new copy, while leaving the old untouched.
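For example, you can inspect how the booted deployment's `/etc` has drifted from its defaults with something like:

    # Show local modifications in /etc relative to the deployment's defaults
    ostree admin config-diff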

81
docs/manual/repo.md Normal file
View File

@ -0,0 +1,81 @@
# Anatomy of an OSTree repository
## Core object types and data model
OSTree is deeply inspired by git; the core layer is a userspace
content-addressed versioning filesystem. It is worth taking some time
to familiarize yourself with
[Git Internals](http://git-scm.com/book/en/Git-Internals), as this
section will assume some knowledge of how git works.
Its object types are similar to git; it has commit objects and content
objects. Git has "tree" objects, whereas OSTree splits them into
"dirtree" and "dirmeta" objects. But unlike git, OSTree's checksums
are SHA256. And most crucially, its content objects include uid, gid,
and extended attributes (but still no timestamps).
### Commit objects
A commit object contains metadata such as a timestamp, a log
message, and most importantly, a reference to a
dirtree/dirmeta pair of checksums which describe the root
directory of the filesystem.
Also like git, each commit in OSTree can have a parent. It is
designed to store a history of your binary builds, just like git
stores a history of source control. However, OSTree also makes
it easy to delete data, under the assumption that you can
regenerate it from source code.
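For example, history can be inspected and trimmed with commands along these lines (the ref name is hypothetical):

    # Walk the commit history of a ref
    ostree --repo=repo log exampleos/buildmaster/x86_64-runtime
    # Delete commits no longer referenced, keeping a small depth of history
    ostree --repo=repo prune --refs-only --depth=1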
### Dirtree objects
A dirtree contains a sorted array of (filename, checksum)
pairs for content objects, and a second sorted array of
(filename, dirtree checksum, dirmeta checksum), which are
subdirectories.
### Dirmeta objects
In git, tree objects contain the metadata such as permissions
for their children. But OSTree splits this into a separate
object to avoid duplicating extended attribute listings.
### Content objects
Unlike the first three object types, which are metadata designed to be
`mmap()`ed, the content object has separate internal header and
payload sections. The header contains uid, gid, mode, and symbolic
link target (for symlinks), as well as extended attributes. After the
header, for regular files, the content follows.
## Repository types and locations
Also unlike git, an OSTree repository can be in one of two separate
modes: `bare` and `archive-z2`. A bare repository is one where
content files are just stored as regular files; it's designed to be
the source of a "hardlink farm", where each operating system checkout
is merely a set of hardlinks into it. If you want to store files owned by
e.g. root in this mode, you must run OSTree as root. In contrast, the
`archive-z2` mode is designed for serving via plain HTTP. Like tar
files, it can be read/written by non-root users.
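For instance, a repository intended for HTTP export might be created like this (the path is arbitrary):

    # Create an archive-z2 repository suitable for static HTTP serving
    mkdir repo
    ostree --repo=repo init --mode=archive-z2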
On an OSTree-deployed system, the "system repository" is
`/ostree/repo`. It can be read by any uid, but only written by root.
Unless the `--repo` argument is given to the `ostree`
command, it will operate on the system repository.
## Refs
Like git, OSTree uses "refs" to which are text files that point to
particular commits (i.e. filesystem trees). For example, the
gnome-ostree operating system creates trees named like
`exampleos/buildmaster/x86_64-runtime` and
`exampleos/buildmaster/x86_64-devel-debug`. These two refs point to
two different generated filesystem trees. In this example, the
"runtime" tree contains just enough to run a basic system, and
"devel-debug" contains all of the developer tools and debuginfo.
The `ostree` command supports a simple syntax using the caret `^` to refer to
the parent of a given commit. For example,
`exampleos/buildmaster/x86_64-runtime^` refers to the previous build,
and `exampleos/buildmaster/x86_64-runtime^^` refers to the one before
that.
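For example (ref names as above), resolving such expressions to full commit checksums might look like:

    # Resolve a ref and its parent to commit checksums
    ostree --repo=repo rev-parse exampleos/buildmaster/x86_64-runtime
    ostree --repo=repo rev-parse exampleos/buildmaster/x86_64-runtime^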

10
mkdocs.yml Normal file
View File

@ -0,0 +1,10 @@
site_name: My Docs
pages:
- Home: 'index.md'
- Contributing: 'CONTRIBUTING.md'
- Manual:
- Introduction: 'manual/introduction.md'
- Repository: 'manual/repo.md'
- Deployments: 'manual/deployment.md'
- Atomic Upgrades: 'manual/atomic-upgrades.md'
- Adapting Existing Systems: 'manual/adapting-existing.md'
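With a configuration along these lines, the manual can be previewed and built locally with the stock mkdocs commands (assuming mkdocs is installed):

    # Serve the docs locally with live reload, or render static HTML into site/
    mkdocs serve
    mkdocs build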