Flathub hit the 10MB limit in 2022, and we had to move less popular
CPU architectures from the main summary into subsummaries, effectively
cutting off users running Flatpak versions too old to support them.
Despite that, the main summary, containing only x86_64, is already at
7MB. Since this is eventually going to happen to the subsummaries as
well, preemptively bump the limit by a factor of 12.
It takes between 2 and 3 years for a change like this to roll out across
Linux distributions, so the best time to do this was yesterday.
Fixes #2715
The main motivation is prep for composefs in
https://github.com/ostreedev/ostree/pull/2640
To that end, we add a `bool using_composefs`, but
it's currently always `false`.
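Sketched out, the placeholder amounts to something like this (the
struct is a stand-in for illustration, not the actual libostree type):
```c
#include <stdbool.h>

/* Illustrative sketch only: the real patch threads the flag through
 * existing deployment code; here it lives in a stand-in struct. */
struct deploy_state
{
  bool using_composefs; /* always false until the composefs work lands */
};

static const struct deploy_state default_state = { .using_composefs = false };
```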
Co-authored-by: Alexander Larsson <alexl@redhat.com>
I've deprecated sh-inline; in the end I think it is better
to minimize the amount of bash code we have. xshell solves
the core convenience problem of taking local variables and mapping
them to command arguments.
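For illustration, a minimal sketch of that pattern with xshell (the
command being run here is just an example):
```rust
use xshell::{Shell, cmd};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let sh = Shell::new()?;
    // The local variable is spliced into the command as a single
    // argument, with no shell quoting or word-splitting pitfalls.
    let refname = "exampleos/x86_64/stable";
    cmd!(sh, "ostree log {refname}").run()?;
    Ok(())
}
```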
A full port would be nontrivial; this just starts the ball
rolling.
During the early design of FCOS and RHCOS, we chose a value of 384M
for the boot partition. This turned out to be too small: some arches
other than x86_64 have larger initrds, kernel binaries, or additional
artifacts (like device tree blobs). We'll likely bump the boot partition
size in the future, but we don't want to abandon all the nodes deployed
with the current size. [1]
Because stale entries in `/boot` are cleaned up only after new entries
are written, there is a window in the update process during which the
bootfs must temporarily host the `(kernel, initrd)` pairs for the union
of the current and new deployments.
This patch determines whether the bootfs is capable of holding all the
pairs. If it can't, but it could hold the pairs from just the new
deployments, then the outgoing deployments (e.g. rollbacks) are deleted
*before* new deployments are written. This is done by updating the
bootloader in two steps to maintain atomicity.
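The decision logic is roughly the following (an illustrative sketch;
the function name and byte accounting are hypothetical, not the actual
libostree code):
```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch only. Sizes are in bytes: size_union covers the
 * (kernel, initrd) pairs of the union of current and new deployments,
 * size_new covers the pairs for the new deployments alone. */
static bool
should_early_prune (uint64_t bootfs_free, uint64_t size_union, uint64_t size_new)
{
  if (size_union <= bootfs_free)
    return false;  /* everything fits; no pruning needed */
  if (size_new > bootfs_free)
    return false;  /* pruning would not help; fail as before */
  /* Deleting the outgoing deployments (e.g. rollbacks) *before*
   * writing the new boot entries is the only way to proceed. */
  return true;
}
```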
Since this is a lot of new logic in an important section of the
code, this feature is gated for now behind an environment variable
(`OSTREE_ENABLE_AUTO_EARLY_PRUNE`). Once we gain more experience with
it, we can consider turning it on by default.
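The gate itself is a simple environment check, along these lines (a
sketch, not the actual implementation, which may also inspect the
variable's value):
```c
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative sketch of the env-var gate. */
static bool
auto_early_prune_enabled (void)
{
  return getenv ("OSTREE_ENABLE_AUTO_EARLY_PRUNE") != NULL;
}
```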
This strategy increases the fallibility of the update system, since one
would no longer be able to roll back to the previous deployment if a bug
is present in the bootloader update logic after auto-pruning (see [2]
and following). This is mitigated, however, by the fact that the
heuristic is opportunistic: the rollback is pruned *only if* it's the
only way for the system to update.
[1]: https://github.com/coreos/fedora-coreos-tracker/issues/1247
[2]: https://github.com/ostreedev/ostree/issues/2670#issuecomment-1179341883

Closes: #2670
Having `-Werror` on by default only in CI has generally worked OK,
but I don't think it's worth scrambling to immediately port things
whenever a library deprecates an API.
Motivated in this case by
```
src/libostree/ostree-fetcher-curl.c: In function 'initiate_next_curl_request':
src/libostree/ostree-fetcher-curl.c:876:3: error: 'CURLOPT_PROTOCOLS' is deprecated: since 7.85.0. Use CURLOPT_PROTOCOLS_STR [-Werror=deprecated-declarations]
876 | rc = curl_easy_setopt (req->easy, CURLOPT_PROTOCOLS, (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS | CURLPROTO_FILE));
| ^~
```
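For reference, the eventual port is mechanical once curl 7.85.0 can be
assumed as a baseline; a sketch:
```c
#include <curl/curl.h>

/* Sketch of the eventual port: since curl 7.85.0 the allowed protocols
 * are passed as a comma-separated string instead of a CURLPROTO_* bitmask. */
static CURLcode
set_allowed_protocols (CURL *easy)
{
  return curl_easy_setopt (easy, CURLOPT_PROTOCOLS_STR, "http,https,file");
}
```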
When hacking and testing locally with `cosa build-fast` and `kola run`,
I prefer to leave testing framework stuff within the work directory
rather than installed in my pet container. Add a `localinstall` target
for this which puts the tests in `tests/kola`. Then a simple `kola run`
will pick it up.
AFAICT, neither `runkola.sh` nor the Makefile in `tests/kolainst` can
create files in `tests/kola`, since they're geared towards installing
under `/usr`.
In the unusual case where one is manually finalizing staged deployments,
as can happen in testing, we expect a successful finalization to remove
the failure stamp file.
Crawling through the bootfs and the deployment dirs was already mostly
separate. The only inefficiency here is that we now iterate over the
array of active deployments twice when building the hash tables. No
functional change otherwise.
Prep for a future patch.
soup3 works best when using only the async API from a single thread [1].
Rework the fetcher to stop using worker threads. To maximize session
reuse across requests, a single session is maintained for each main
context.
1. https://libsoup.org/libsoup-3.0/client-thread-safety.html
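A minimal sketch of that single-thread async pattern with libsoup 3
(the URL, error handling, and structure are placeholders, not the
fetcher's actual code):
```c
#include <libsoup/soup.h>

/* Completion callback: runs on the same main context that issued the
 * request; error handling is elided for brevity. */
static void
on_sent (GObject *source, GAsyncResult *result, gpointer user_data)
{
  GInputStream *stream =
    soup_session_send_finish (SOUP_SESSION (source), result, NULL);
  g_clear_object (&stream);
  g_main_loop_quit (user_data);
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  /* One session per main context; requests on that context reuse it. */
  SoupSession *session = soup_session_new ();
  SoupMessage *msg = soup_message_new ("GET", "https://example.com/summary");
  soup_session_send_async (session, msg, G_PRIORITY_DEFAULT,
                           NULL, on_sent, loop);
  g_main_loop_run (loop);
  g_object_unref (msg);
  g_object_unref (session);
  g_main_loop_unref (loop);
  return 0;
}
```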
Set a few environment variables during tests to ensure fake GIO backends
are used. This is particularly important with the soup fetcher backend,
where the real backends can cause strange test errors in containerized
test environments. Upstream soup has been setting these 3 environment
variables for its tests since 2015.