Use `g_error` and `g_assert*` rather than `g_return*` when checking the
locking preconditions so that failures result in the program
terminating. Since this code is protecting filesystem data, we'd rather
crash than delete or corrupt data unexpectedly.
`g_error` is used when the error is due to the caller requesting an
invalid transition like attempting to pop a lock type that hasn't been
taken. It also provides a semi-useful message about what happened.
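For illustration, a minimal sketch of the pattern (hypothetical names, not the actual ostree lock code):

```c
#include <glib.h>

/* Hypothetical lock state, for illustration only. */
typedef struct {
  guint exclusive_count;
} LockState;

static void
lock_state_pop_exclusive (LockState *state)
{
  /* g_return_if_fail() would log a critical and let the caller keep
   * running; g_assert() (unless assertions are compiled out) and
   * g_error() terminate the program, which is what we want here. */
  g_assert (state != NULL);

  if (state->exclusive_count == 0)
    g_error ("Attempted to pop an exclusive lock that was never pushed");

  state->exclusive_count--;
}

int
main (void)
{
  LockState state = { 1 };
  lock_state_pop_exclusive (&state); /* ok */
  lock_state_pop_exclusive (&state); /* aborts with a useful message */
  return 0;
}
```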
Previously each thread maintained its own lock file descriptor
regardless of whether the thread was using the same `OstreeRepo` as
another thread. This was very safe but it made certain multithreaded
procedures difficult. For example, if a main thread took an exclusive
lock and then spawned worker threads, it would deadlock if one of the
worker threads tried to acquire the lock.
This moves the file descriptor from thread local storage to the
`OstreeRepo` structure so that threads using the same `OstreeRepo` can
share the lock. A mutex guards against threads altering the lock state
concurrently.
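A rough sketch of the resulting shape (field names are illustrative, not the actual `OstreeRepo` layout):

```c
#include <glib.h>

/* The lock fd lives on the shared repo object instead of in
 * thread-local storage; the mutex serializes lock-state changes
 * between threads sharing the repo. */
typedef struct {
  GMutex lock_mutex;     /* guards lock_fd and the counters */
  int    lock_fd;        /* one lock fd shared by all threads */
  guint  shared_count;
  guint  exclusive_count;
} RepoLockState;

static void
repo_lock_state_init (RepoLockState *state)
{
  g_mutex_init (&state->lock_mutex);
  state->lock_fd = -1;
  state->shared_count = 0;
  state->exclusive_count = 0;
}

int
main (void)
{
  RepoLockState state;
  repo_lock_state_init (&state);
  return 0;
}
```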
Fixes: #2344
This simplifies the lock state management considerably since the
previously pushed type doesn't need to be tracked. Instead, two counters
are kept to track how many times each lock type has been pushed. When
the number of exclusive locks drops to 0, the lock transitions back to
shared.
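For instance, popping a lock under this scheme might look like the following (a sketch assuming the lock is a `flock()` on a single fd; error handling omitted):

```c
#include <glib.h>
#include <sys/file.h>

typedef struct {
  int   lock_fd;
  guint shared_count;
  guint exclusive_count;
} RepoLockState;

static void
repo_lock_pop (RepoLockState *state, gboolean exclusive)
{
  if (exclusive)
    {
      g_assert (state->exclusive_count > 0);
      state->exclusive_count--;
      /* Last exclusive lock released but shared locks remain:
       * convert the flock back down to shared. */
      if (state->exclusive_count == 0 && state->shared_count > 0)
        (void) flock (state->lock_fd, LOCK_SH);
    }
  else
    {
      g_assert (state->shared_count > 0);
      state->shared_count--;
    }

  /* Nothing pushed anymore: drop the lock entirely. */
  if (state->shared_count == 0 && state->exclusive_count == 0)
    (void) flock (state->lock_fd, LOCK_UN);
}

int
main (void)
{
  /* Dummy fd; the flock calls fail harmlessly in this demo. */
  RepoLockState state = { -1, 1, 1 };
  repo_lock_pop (&state, TRUE);  /* downgrades: a shared lock remains */
  repo_lock_pop (&state, FALSE); /* unlocks entirely */
  return 0;
}
```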
In gnupg 2.3.0[1], if a primary key is expired and a subkey does not
have an expiration or its expiration is older than the primary key, the
subkey's expiration will be reported as the primary's. Previously a
subkey without an expiration would not report one regardless of the
primary key's expiration.
This caused a regression in a test setting an expiration on a primary
key. The test was checking that the subkey was not expired by asserting
that there was no `Key expired` line in the signature verification
output. With gnupg 2.3.0+, it will show as expired, causing the test to
fail.
Remove the assertion since it's not consistent across gnupg versions. In
practice we don't care whether the subkey is considered expired or not
as long as the signature verification fails when the primary key is
expired.
1. https://dev.gnupg.org/T3343
Fixes: #2359
The ostree repo has read permissions set for workflows, which prevents
the documentation job from pushing the built docs to the gh-pages
branch. Raise the job's permissions to write for repo contents to allow
that.
Make a copy of `apidoc/html` to `docs/reference` and then tell Jekyll to
include it verbatim. This will include the gtk-doc API docs on the
static site. A link is added to the main index.
A script is added to do the copy (a symlink won't do) and is set up to
run before Jekyll in the GitHub workflow. Ideally this would be a local
Jekyll plugin to make the process automatic, but the github-pages gem
doesn't allow that.
This uses the Jekyll Actions GitHub action to push the rendered docs to
the gh-pages branch rather than GitHub's automated docs flow. That will
allow greater control over how the docs are generated. Pushing to the
gh-pages branch only happens on pushes to main. For pull requests, the
docs are only built.
This mimics the GitHub Pages environment so that you can build and serve
the site locally for testing. It will also be required later for using
Jekyll Actions[1] instead of the automated GitHub Pages flow.
1. https://github.com/marketplace/actions/jekyll-actions
This returns a 404 since the site is already generated from the docs
directory. Furthermore, the `CONTRIBUTING.md` markdown file isn't in the
generated site, just the HTML.
Instead, use Jekyll's `link` tag to create the link. Unfortunately,
before Jekyll 4.0 (github-pages uses 3.9), you have to prepend the base
URL.
Previously we'd trip an assertion `abort()` deep in the curl code if
e.g. a user ran `ostree remote add foo htttp://...`.
Motivated by the idea of supporting "external remotes", where code
outside ostree does a pull but we want to reuse the signing
verification infrastructure.
Several tests generate summaries and then expect to use the generated
summary immediately. However, this can cause intermittent test failures
when they inadvertently get a cached summary file. This typically
happens when the test is run on a filesystem that doesn't support user
extended attributes. In that case, the caching code can only use the
last modified time, which only has 1 second granularity. If tests don't
carefully manage the summary modification times or the repo cache then
they are likely subject to races in some test environments.
This introduces an environment variable `OSTREE_SKIP_CACHE` that
prevents the repo from using a cache directory. This is enabled by
default in tests and disabled for tests that are explicitly trying to
test the caching behavior.
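A minimal sketch of how such a switch can be honored (the variable name comes from this change; the surrounding code is illustrative, not the actual ostree implementation):

```c
#include <glib.h>

/* Returns FALSE when OSTREE_SKIP_CACHE is set, so callers never open
 * a cache directory and therefore can't read a stale, mtime-raced
 * summary. */
static gboolean
repo_cache_enabled (void)
{
  return g_getenv ("OSTREE_SKIP_CACHE") == NULL;
}

int
main (void)
{
  g_print ("cache %s\n", repo_cache_enabled () ? "enabled" : "disabled");
  return 0;
}
```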
Fixes: #2313
Fixes: #2351
If we fail as a result of `set -e`, it's often not completely obvious
which command failed or how. Use a trap on ERR to show the command that
failed, and its exit status.
Signed-off-by: Simon McVittie <smcv@collabora.com>
Ideally in the future we'd change more of our unit tests to support
running installed; we've tried this in the past with
https://wiki.gnome.org/Initiatives/GnomeGoals/InstalledTests, and I'd
like to pick that back up again. This takes a step towards that by
having our Rust tests run as the installed tests.
To make this even easier, add a `tests/run-installed`
which runs the installed tests (uninstalled, confusingly
but conveniently for now).
rust-analyzer is happier with this because it understands
the project structure out of the box.
Note that we still aren't adding a dependency on Rust/cargo in the
core; this is only used to make `cargo build` work out of the box to
build the Rust test code.
This API is push rather than pull, which makes it much more
suitable to use cases like parsing a tar file from external
code.
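A self-contained sketch of the push shape (a hypothetical writer that just checksums the bytes; the real libostree entry points differ):

```c
#include <glib.h>
#include <string.h>

/* Hypothetical push-style content writer: the caller feeds bytes as
 * it parses them (e.g. out of a tar stream) instead of handing the
 * repo a GInputStream to drain. */
typedef struct {
  GChecksum *checksum;
  guint64 bytes_written;
} ContentWriter;

static ContentWriter *
content_writer_begin (void)
{
  ContentWriter *w = g_new0 (ContentWriter, 1);
  w->checksum = g_checksum_new (G_CHECKSUM_SHA256);
  return w;
}

static void
content_writer_write (ContentWriter *w, const guint8 *buf, gsize len)
{
  g_checksum_update (w->checksum, buf, len);
  w->bytes_written += len;
}

/* Finish returns the object digest; a real implementation would also
 * move the staged object file into place in the repo. */
static char *
content_writer_finish (ContentWriter *w)
{
  char *digest = g_strdup (g_checksum_get_string (w->checksum));
  g_checksum_free (w->checksum);
  g_free (w);
  return digest;
}

int
main (void)
{
  ContentWriter *w = content_writer_begin ();
  const char *chunk = "file contents arriving chunk by chunk\n";
  content_writer_write (w, (const guint8 *) chunk, strlen (chunk));
  g_autofree char *digest = content_writer_finish (w);
  g_print ("sha256=%s\n", digest);
  return 0;
}
```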
Now, we have a large mess in this area internally because
the original file writing code was pull based, but static
deltas hit the same problem of wanting a push API, so I added
this special `OstreeRepoBareContent` just for writing regular
files from a push API.
Eventually...I'd like to deprecate the pull based API,
and rework things so that for regular files the push API
is the default, and then `write_content_object()` would
be split up into archive/bare cases.
In this world the `ostree_repo_write_content()` API would
then need to hackily bridge pull to push and it'd be
less efficient.
Anyway, due to this bifurcation, this API currently only works on
non-archive repositories, but that's fine for now because that's what
I want for the `ostree-ext-container` bits.
Continuation of the addition of `ostree_repo_write_regfile_inline()`.
This will be helpful for ostree-rs-ext and importing from tar; it's
quite inefficient and awkward to end up creating a whole
`GInputStream`, `GFileInfo`, etc. just for small files.
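For contrast, a sketch of the boilerplate the pull path forces for even a tiny buffer; an inline variant like `ostree_repo_write_regfile_inline()` can take the metadata and buffer directly (its exact signature isn't shown here):

```c
#include <gio/gio.h>
#include <sys/stat.h>

int
main (void)
{
  static const char buf[] = "tiny file\n";

  /* Pull style: even a 10-byte file needs a GInputStream wrapper... */
  g_autoptr(GInputStream) in =
    g_memory_input_stream_new_from_data (buf, sizeof buf - 1, NULL);

  /* ...plus a GFileInfo carrying uid/gid/mode. */
  g_autoptr(GFileInfo) info = g_file_info_new ();
  g_file_info_set_attribute_uint32 (info, "unix::uid", 0);
  g_file_info_set_attribute_uint32 (info, "unix::gid", 0);
  g_file_info_set_attribute_uint32 (info, "unix::mode", S_IFREG | 0644);

  /* An inline push-style API can skip both allocations and accept
   * (uid, gid, mode, buf, len) directly. */
  g_assert (in != NULL && info != NULL);
  return 0;
}
```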
Although people are likely not deploying on 32-bit x86 anymore,
32-bit armv7 deployments are still common. Add back an i386 build on
Debian to try to catch 32-bit bugs in CI.