I was seeing an `EPERM` here which was confusing.
It turned out the real error was `EEXIST`.
Since we refer to the original error but do a lot of
computation in the middle, we need to save errno.
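As a minimal sketch of the pattern (the function and names here are hypothetical, not the actual ostree code):
```
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Illustration only: capture errno immediately after the failing call,
 * since the work done before reporting can overwrite it. */
static int
make_symlink (int dfd, const char *target, const char *name)
{
  if (symlinkat (target, dfd, name) < 0)
    {
      int saved_errno = errno;   /* e.g. EEXIST, not a later EPERM */
      /* ... lots of computation that may itself set errno ... */
      fprintf (stderr, "symlinkat(%s): %s\n", name, strerror (saved_errno));
      errno = saved_errno;
      return -1;
    }
  return 0;
}
```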
This fixes some aspects of OstreeRepoAutoTransaction and re-aligns
it with the logic in flatpak. Specifically:
* link to the underlying repo through refcounting
* bridge internal errors to warning messages
* verify the input pointer type
This is a preparation step before exposing this logic as a public API.
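As a rough sketch of the refcounting and error-bridging changes (the struct and helpers below are hypothetical, not the actual OstreeRepoAutoTransaction layout):
```
#include <ostree.h>

/* Illustration only: the auto-transaction holds a strong reference to the
 * repo for its whole lifetime instead of borrowing the pointer. */
typedef struct
{
  OstreeRepo *repo;   /* owned: g_object_ref() on creation */
} AutoTransaction;

static AutoTransaction *
auto_transaction_new (OstreeRepo *repo)
{
  AutoTransaction *txn = g_new0 (AutoTransaction, 1);
  txn->repo = g_object_ref (repo);
  return txn;
}

static void
auto_transaction_free (AutoTransaction *txn)
{
  /* Bridge an internal failure to a warning instead of silently dropping it. */
  g_autoptr(GError) local_error = NULL;
  if (!ostree_repo_abort_transaction (txn->repo, NULL, &local_error))
    g_warning ("Failed to abort transaction: %s", local_error->message);
  g_object_unref (txn->repo);
  g_free (txn);
}
```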
As pointed out in the original review, `gpg-list-keys` fits better
alongside the existing `gpg-import`.
Changes were done with:
```
git grep -l list-gpg-keys | xargs sed -i 's/list-gpg-keys/gpg-list-keys/'
for src in $(git ls-files '*list-gpg-keys*'); do
  dst=${src/list-gpg-keys/gpg-list-keys}
  git mv "$src" "$dst"
done
```
Calculate the advanced and direct update URLs for the key discovery
portion[1] of the OpenPGP Web Key Directory specification, and include
the URLs in the key listing in ostree_repo_remote_get_gpg_keys(). These
URLs can be used to locate updated GPG keys for the remote.
1. https://datatracker.ietf.org/doc/html/draft-koch-openpgp-webkey-service#section-3.1
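As a hedged sketch of the advanced-method URL construction (the `zbase32_encode` prototype and the helper below are assumptions for illustration; the real code also computes the direct-method URL):
```
#include <glib.h>

/* Prototype of the helper added for this work; the exact signature is an
 * assumption. */
char *zbase32_encode (const guint8 *data, gsize len);

/* Illustration only: per section 3.1, SHA-1 the lowercased local part of
 * the key's email address, zbase32-encode the digest, and embed it in the
 * well-known path on the openpgpkey subdomain. */
static char *
wkd_advanced_url (const char *local_part, const char *domain)
{
  g_autofree char *lowered = g_ascii_strdown (local_part, -1);
  guint8 digest[20];
  gsize digest_len = sizeof digest;

  GChecksum *checksum = g_checksum_new (G_CHECKSUM_SHA1);
  g_checksum_update (checksum, (const guchar *) lowered, -1);
  g_checksum_get_digest (checksum, digest, &digest_len);
  g_checksum_free (checksum);

  g_autofree char *hashed = zbase32_encode (digest, digest_len);
  return g_strdup_printf ("https://openpgpkey.%s/.well-known/openpgpkey/%s/hu/%s?l=%s",
                          domain, domain, hashed, local_part);
}
```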
This will be used to implement the OpenPGP Web Key Directory (WKD) URL
generation. It is a slightly cleaned up version[1] of the zbase32
author's original implementation[2]. It provides a single
zbase32_encode API to convert a set of bytes to the zbase32 encoding.
I believe this should be acceptable for inclusion in ostree. The license
in the source files is BSD style while the original repo LICENSE file
claims the Creative Commons CC0 1.0 Universal license, which is public
domain.
1. https://github.com/dbnicholson/libbase32/tree/for-ostree
2. https://github.com/zooko/libbase32
This provides a wrapper for the `ostree_repo_remote_get_gpg_keys`
function to show the GPG keys associated with a remote. This is
particularly useful for validating that GPG key updates have been
applied. Tests are added, which check the
`ostree_repo_remote_get_gpg_keys` API by extension.
This function enumerates the trusted GPG keys for a remote and returns
an array of `GVariant`s describing them. This is useful to see which
keys are collected by ostree for a particular remote. The same
information can be gathered with `gpg`. However, since ostree allows
multiple keyring locations, that's only really useful if you have
knowledge of how ostree collects GPG keyrings.
The format of the variants is documented in
`OSTREE_GPG_KEY_GVARIANT_FORMAT`. This format is primarily a copy of
selected fields within `gpgme_key_t` and its subtypes. The fields are
placed within vardicts rather than using a more efficient tuple of
concrete types. This will allow flexibility if more components of
`gpgme_key_t` are desired in the future.
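A hedged usage sketch (the exact signature is an assumption based on this description):
```
#include <ostree.h>

/* Illustration only: list the trusted keys for the "origin" remote and
 * print each variant, whose layout follows OSTREE_GPG_KEY_GVARIANT_FORMAT. */
static gboolean
print_remote_keys (OstreeRepo *repo, GCancellable *cancellable, GError **error)
{
  g_autoptr(GPtrArray) keys = NULL;
  if (!ostree_repo_remote_get_gpg_keys (repo, "origin", NULL, &keys,
                                        cancellable, error))
    return FALSE;

  for (guint i = 0; i < keys->len; i++)
    {
      GVariant *key = keys->pdata[i];
      g_autofree char *dump = g_variant_print (key, TRUE);
      g_print ("%s\n", dump);
    }
  return TRUE;
}
```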
Currently the verifier decides whether to include the global keyrings
based on whether the specified remote has its own keyring or not. Allow
callers to exclude the global keyrings even when the remote has no keyring of its own.
This will be used in a subsequent commit in order to get the GPG keys
only associated with a remote.
In order to use the GPG verifier, it needs to be seeded with GPG keys
after instantiation. Currently this is only used for verifying data, but
it will also be used for getting a list of trusted GPG keys in a
subsequent commit.
In configure, the systemd unit path is optional, but the code assumes it
is defined. Add an `#ifdef` that throws an error when it's not defined,
like the handling of `HAVE_LIBMOUNT` below it.
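Something along these lines (the macro name and wording are assumptions, mirroring the `HAVE_LIBMOUNT` handling):
```
#include <libglnx.h>

static gboolean
require_systemd_unit_dir (GError **error)
{
#ifdef SYSTEMD_UNIT_DIR   /* hypothetical name of the configure-time define */
  return TRUE;
#else
  /* Fail with a clear error instead of assuming the path is defined. */
  return glnx_throw (error, "Not implemented: built without a systemd unit directory");
#endif
}
```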
We struggled for a long time with enablement of our "internal units",
trying to follow the philosophy that units should only be enabled
by explicit preset.
See https://bugzilla.redhat.com/show_bug.cgi?id=1451458
and https://github.com/coreos/rpm-ostree/pull/1482
etc.
And I just saw chat (RH internal on a proprietary system sadly) where
someone hit `ostree-remount.service` not being enabled in CentOS8.
Thinking about this more, I realized we've shipped a systemd generator
for a long time, and while its only role until now was to generate `var.mount`,
by using it to force on our internal units we don't require
people to deal with presets anymore.
Basically we're inverting things so that "if ostree= is on the kernel
cmdline, then enable our units" and not "enable our units, but have
them use ConditionKernelCmdline=ostree to skip".
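Conceptually the generator now does something like this (unit names, paths, and the helper below are illustrative, not the exact implementation):
```
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustration only: if ostree= is on the kernel command line, force our
 * internal units on by symlinking them into the generator output directory,
 * so no distribution preset is needed. */
static void
maybe_enable_units (const char *kernel_cmdline, const char *normal_dir)
{
  if (strstr (kernel_cmdline, "ostree=") == NULL)
    return;   /* not an ostree-managed system; generate nothing */

  char wants_dir[4096];
  snprintf (wants_dir, sizeof wants_dir, "%s/local-fs.target.wants", normal_dir);
  (void) mkdir (wants_dir, 0755);

  char link_path[4096];
  snprintf (link_path, sizeof link_path, "%s/ostree-remount.service", wants_dir);
  (void) symlink ("/usr/lib/systemd/system/ostree-remount.service", link_path);
}
```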
Drop the weird gyrations we were doing around `ostree-finalize-staged.path`
too; forking `systemctl start` is just asking for bugs.
So after this, hopefully we won't ever again have to think about
distribution presets and our units.
This will be ignored, so let's make it very clear
people are doing something wrong. Motivated by a bug
in a build pipeline that injected `/var/lib/rpm` into an ostree
commit, which ended up crashing rpm-ostree because it encountered an
empty db it wasn't expecting.
It *also* turns out rpm-ostree is incorrectly dumping content in the
deployment `/var` today, which is another bug.
Use `g_error` and `g_assert*` rather than `g_return*` when checking the
locking preconditions so that failures result in the program
terminating. Since this code is protecting filesystem data, we'd rather
crash than delete or corrupt data unexpectedly.
`g_error` is used when the error is due to the caller requesting an
invalid transition like attempting to pop a lock type that hasn't been
taken. It also provides a semi-useful message about what happened.
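A minimal sketch of the distinction (the state and names are hypothetical):
```
#include <glib.h>

/* Illustration only: invalid transitions abort via g_error() rather than
 * returning, since limping on could delete or corrupt repository data. */
typedef struct
{
  guint shared_count;
  guint exclusive_count;
} LockState;

static void
lock_state_pop (LockState *state, gboolean exclusive)
{
  if (exclusive && state->exclusive_count == 0)
    g_error ("Cannot pop exclusive lock: none held");   /* caller bug: aborts */
  if (!exclusive && state->shared_count == 0)
    g_error ("Cannot pop shared lock: none held");

  if (exclusive)
    state->exclusive_count--;
  else
    state->shared_count--;
}
```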
Previously each thread maintained its own lock file descriptor
regardless of whether the thread was using the same `OstreeRepo` as
another thread. This was very safe but it made certain multithreaded
procedures difficult. For example, if a main thread took an exclusive
lock and then spawned worker threads, it would deadlock if one of the
worker threads tried to acquire the lock.
This moves the file descriptor from thread local storage to the
`OstreeRepo` structure so that threads using the same `OstreeRepo` can
share the lock. A mutex guards against threads altering the lock state
concurrently.
Fixes: #2344
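A conceptual sketch of the new arrangement (field names are hypothetical):
```
#include <glib.h>

/* Illustration only: one lock file descriptor per OstreeRepo, shared by all
 * threads and guarded by a mutex, instead of one descriptor per thread in
 * thread-local storage. */
typedef struct
{
  GMutex mutex;       /* serializes lock-state changes across threads */
  int    fd;          /* the flock()ed descriptor shared by every thread */
  guint  shared;      /* outstanding shared pushes */
  guint  exclusive;   /* outstanding exclusive pushes */
} RepoLock;
```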
This simplifies the lock state management considerably since the
previously pushed type doesn't need to be tracked. Instead, 2 counters
are kept to track how many times each lock type has been pushed. When
the number of exclusive locks drops to 0, the lock transitions back to
shared.
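A sketch of that transition, reusing the hypothetical `RepoLock` fields from the snippet above:
```
#include <sys/file.h>

/* Illustration only: recompute the flock() state from the two counters.
 * When the last exclusive push is popped the lock drops back to shared,
 * and when nothing is held it is released. */
static int
repo_lock_update (RepoLock *lock)
{
  if (lock->exclusive > 0)
    return flock (lock->fd, LOCK_EX);
  else if (lock->shared > 0)
    return flock (lock->fd, LOCK_SH);
  else
    return flock (lock->fd, LOCK_UN);
}
```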
Previously we'd trip an assertion `abort()` deep in the curl code if e.g.
a user did `ostree remote add foo htttp://...` etc.
Motivated by considering supporting "external remotes" where code outside
ostree does a pull, but we want to reuse the signing verification infrastructure.
Several tests generate summaries and then expect to use the generated
summary immediately. However, this can cause intermittent test failures
when they inadvertently get a cached summary file. This typically
happens when the test is run on a filesystem that doesn't support user
extended attributes. In that case, the caching code can only use the
last modified time, which only has 1 second granularity. If tests don't
carefully manage the summary modification times or the repo cache then
they are likely subject to races in some test environments.
This introduces an environment variable `OSTREE_SKIP_CACHE` that
prevents the repo from using a cache directory. This is enabled by
default in tests and disabled for tests that are explicitly trying to
test the caching behavior.
Fixes: #2313
Fixes: #2351
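The check itself is conceptually simple (its exact placement in the repo code is an assumption):
```
#include <glib.h>

/* Illustration only: when OSTREE_SKIP_CACHE is set, behave as though no
 * cache directory is available so tests never read a stale summary. */
static gboolean
cache_is_disabled (void)
{
  return g_getenv ("OSTREE_SKIP_CACHE") != NULL;
}
```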
This API is push rather than pull, which makes it much more
suitable to use cases like parsing a tar file from external
code.
Now, we have a large mess in this area internally because
the original file writing code was pull based, but static
deltas hit the same problem of wanting a push API, so I added
this special `OstreeRepoBareContent` just for writing regular
files from a push API.
Eventually...I'd like to deprecate the pull based API,
and rework things so that for regular files the push API
is the default, and then `write_content_object()` would
be split up into archive/bare cases.
In this world the `ostree_repo_write_content()` API would
then need to hackily bridge pull to push and it'd be
less efficient.
Anyway, due to this bifurcation, this API only
works on non-archive repositories for now, but that's fine
because that's what I want for the `ostree-ext-container`
bits.
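To illustrate the push-versus-pull distinction (these prototypes are hypothetical, not the actual ostree API): in the pull style ostree drains a stream it is handed, while in the push style the caller feeds content as it produces it, e.g. while walking a tar archive.
```
#include <gio/gio.h>
#include <ostree.h>

typedef struct ContentWriter ContentWriter;   /* hypothetical opaque type */

/* Pull style (hypothetical prototype): ostree drives the stream and reads
 * the whole object itself. */
gboolean write_object_pull (OstreeRepo *repo, GInputStream *content,
                            guint64 length, GError **error);

/* Push style (hypothetical prototypes): the caller opens a writer and feeds
 * bytes as it parses them, so nothing has to be spooled or rewound. */
ContentWriter *writer_begin (OstreeRepo *repo, GError **error);
gboolean writer_write (ContentWriter *writer, GBytes *chunk, GError **error);
gboolean writer_finish (ContentWriter *writer, char **out_checksum, GError **error);
```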