Dan Nicholson 4e78ddd2da lib/repo: Add repo locking mechanism
Currently ostree has no method of guarding against concurrent pruning.
When there are multiple repo writers, it's possible to have a pull or
commit race against a prune and end up with missing objects.

This adds a file-based repo locking mechanism. The intention is to take
a shared lock when writing objects and an exclusive lock when deleting
them. In order to make use of the locking throughout the library in a
fine-grained fashion, the lock acts recursively with a stack of lock
states. If the lock becomes exclusive, it will stay in that state until
the stack is unwound past the initial exclusive push. The file locking
is similar to GLnxLockFile in that it uses open file descriptor locks
but falls back to flock when needed.

The lock also attempts to be thread-safe by storing the lock state in
thread-local storage with GPrivate. This means that each thread will
have an independent lock for each repository it opens. There are some
drawbacks to that, but it seemed impossible to manage the lock state
coherently in the face of multithreaded access.
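
A rough sketch of that per-thread state with GPrivate follows; the
struct layout and names are assumptions for illustration only:

#include <glib.h>

typedef struct {
  int fd;        /* fd of the lock file for this repo */
  GQueue stack;  /* stack of pushed lock states (shared/exclusive) */
} RepoThreadLockState;

static void
lock_state_free (gpointer data)
{
  RepoThreadLockState *state = data;
  g_queue_clear (&state->stack);
  g_free (state);
}

/* One slot per thread; GLib calls lock_state_free when a thread that
 * set a value exits. */
static GPrivate lock_state_key = G_PRIVATE_INIT (lock_state_free);

static RepoThreadLockState *
get_thread_lock_state (void)
{
  RepoThreadLockState *state = g_private_get (&lock_state_key);
  if (state == NULL)
    {
      state = g_new0 (RepoThreadLockState, 1);
      g_queue_init (&state->stack);
      g_private_set (&lock_state_key, state);
    }
  return state;
}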

The API is a push/pop interface in accordance with the recursive nature
of the locking. The push interface uses an enum that's translated to
LOCK_SH or LOCK_EX as needed. Both interfaces use an internal timeout
field to decide whether to manage the lock in a blocking or non-blocking
fashion. The intention is to allow ostree applications as well as
administrators to control this timeout. For now, the default is a 30
second timeout.
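
The general shape of that API could look like the sketch below; the
type and function names are hypothetical, and the real (internal)
declarations should be taken from the libostree source:

#include <sys/file.h>
#include <gio/gio.h>

/* Hypothetical lock types mirroring the description above. */
typedef enum {
  REPO_LOCK_SHARED,     /* taken while writing objects */
  REPO_LOCK_EXCLUSIVE,  /* taken while deleting objects, e.g. prune */
} RepoLockType;

/* Translate the enum to the flag handed to the file locking call. */
int
repo_lock_type_to_flock (RepoLockType type)
{
  return (type == REPO_LOCK_EXCLUSIVE) ? LOCK_EX : LOCK_SH;
}

/* Push a lock state onto the calling thread's stack, waiting up to the
 * repo's lock timeout (30 seconds by default) before failing. */
gboolean repo_lock_push (RepoLockType type, GCancellable *cancellable, GError **error);

/* Pop the most recent state; the underlying file lock only drops back
 * from exclusive once the stack unwinds past the first exclusive push. */
gboolean repo_lock_pop (GCancellable *cancellable, GError **error);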

Note that the timeout is handled synchronously in the calling thread,
since the lock is maintained in thread-local storage. I.e., the thread
that acquires the lock needs to be the same thread that runs the
operation. There may be a way to offer an asynchronous version, but
it's not clear exactly how that would work, since it would likely
involve a separate thread that invokes a callback when the locking
operation completes.

https://bugzilla.gnome.org/show_bug.cgi?id=759442

Closes: #1343
Approved by: cgwalters

libostree

New! See the docs online at Read The Docs (OSTree)


This project is now known as "libostree", though it is still appropriate to use the previous name: "OSTree" (or "ostree"). The focus is on projects which use libostree's shared library, rather than users directly invoking the command line tools (except for build systems). However, in most of the rest of the documentation, we will use the term "OSTree", since it's slightly shorter, and changing all documentation at once is impractical. We expect to transition to the new name over time.

As implied above, libostree is both a shared library and a suite of command line tools that combines a "git-like" model for committing and downloading bootable filesystem trees with a layer for deploying them and managing the bootloader configuration.

The core OSTree model is like git in that it checksums individual files and has a content-addressed-object store. It's unlike git in that it "checks out" the files via hardlinks, and they should thus be immutable. Therefore, another way to think of OSTree is that it's just a more polished version of Linux VServer hardlinks.

Features:

  • Transactional upgrades and rollback for the system
  • Replicating content incrementally over HTTP, with GPG signatures and "pinned TLS" support
  • Support for parallel installing more than just 2 bootable roots
  • Binary history on the server side (and client)
  • Introspectable shared library API for build and deployment systems
  • Flexible support for multiple branches and repositories, supporting projects like flatpak which use libostree for applications, rather than hosts.

Projects using OSTree

meta-updater is a layer available for OpenEmbedded systems.

QtOTA is Qt's over-the-air update framework which uses libostree.

rpm-ostree is a next-generation hybrid package/image system for Fedora and CentOS, used by the Atomic Host project. By default it uses libostree to atomically replicate a base OS (all dependency resolution is done on the server), but it supports "package layering", where additional RPMs can be layered on top of the base. This brings a "best of both worlds" model for image and package systems.

flatpak uses libostree for desktop application containers. Unlike most of the other systems here, flatpak does not use the "libostree host system" aspects (e.g. bootloader management), just the "git-like hardlink dedup". For example, flatpak supports a per-user OSTree repository.

Endless OS uses libostree for their host system as well as flatpak. See their eos-updater and deb-ostree-builder projects.

GNOME Continuous is where OSTree was born - as a high performance continuous delivery/testing system for GNOME.

The BuildStream build and integration tool uses libostree as a caching system to store and share built artifacts.

Building

Releases are available as GPG signed git tags, and most recent versions support extended validation using git-evtag.

However, in order to build from a git clone, you must update the submodules. If you're packaging OSTree and want a tarball, I recommend using a "recursive git archive" script. There are several available online; this code in OSTree is an example.

Once you have a git clone or recursive archive, building is the same as almost every autotools project:

git submodule update --init
env NOCONFIGURE=1 ./autogen.sh
./configure --prefix=...
make
make install DESTDIR=/path/to/dest

More documentation

New! See the docs online at Read The Docs (OSTree)

Contributing

See Contributing.

Description
Operating system and container binary deployment and upgrades
Readme 32 MiB
Languages
C 73.5%
Shell 15.4%
Rust 7.4%
Yacc 1%
M4 0.9%
Other 1.7%