Notably this includes [std::optional](https://en.cppreference.com/w/cpp/utility/optional),
which I'd like to use.
The dnf-5-devel branch uses this too, and C++17 is now well
supported in current GCC, which is available in the RHEL 8 DTS too.
This demonstrates the strength of the cxx-rs approach well:
we can keep an API in C++ but add unit tests in Rust, which
works much more nicely.
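Roughly, that workflow looks like the sketch below (the bridge module, header,
and function names are illustrative, not the real rpm-ostree API): the C++
implementation stays where it is, and the test is ordinary Rust under `cargo test`.

```rust
// Sketch: declare a C++ helper in a cxx bridge, then unit-test it from Rust.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("rpmostree-util.h"); // hypothetical header
        // Hypothetical C++ helper; C++ exceptions surface as Result here.
        fn next_version(prefix: &str, last: &str) -> Result<String>;
    }
}

#[cfg(test)]
mod tests {
    #[test]
    fn test_next_version() {
        // The C++ object code is linked into the test binary, so this runs
        // as a plain Rust unit test.
        let v = super::ffi::next_version("41.", "41.1").expect("next_version");
        assert!(v.starts_with("41."));
    }
}
```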
Prep for https://github.com/coreos/rpm-ostree/pull/2502
which wants to drop the C++ unit tests.
The correct place for this would be...something like ostree-releng-tools
or coreos-assembler. Or perhaps in the future a Rust ostree-ext-tools repository.
Prep for "Rust-as-main", where I want to build libdnf statically.
And this really completes the "library thinout" story because
now we avoid dragging our *private* `libdnf.so` into the caller's
address space, which can cause potential conflicts if they're
also linking the system one. (Which could easily occur with
something like gnome-software)
All we were using libdnf for (indirectly via libsolv) is comparing
version strings, but librpm can already do that for us.
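For instance, a version comparison needs nothing beyond librpm's `rpmvercmp()`;
a minimal sketch (linking against librpm, e.g. via `cargo:rustc-link-lib=rpm`
in a build script, is elided):

```rust
use std::cmp::Ordering;
use std::ffi::CString;
use std::os::raw::{c_char, c_int};

extern "C" {
    // From librpm: compares two version segments, returning a negative,
    // zero, or positive value.
    fn rpmvercmp(a: *const c_char, b: *const c_char) -> c_int;
}

fn compare_versions(a: &str, b: &str) -> Ordering {
    let (a, b) = (CString::new(a).unwrap(), CString::new(b).unwrap());
    // SAFETY: both pointers are valid, NUL-terminated C strings.
    let r = unsafe { rpmvercmp(a.as_ptr(), b.as_ptr()) };
    r.cmp(&0)
}

fn main() {
    // RPM version semantics: 1.10 sorts after 1.2.
    assert_eq!(compare_versions("1.2", "1.10"), Ordering::Less);
}
```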
Registration through `RegisterClient` is not mandatory today; for
example, Zincati does not currently register itself.
Look up the systemd unit of the caller if it is not already registered.
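One way to do that lookup, sketched here assuming libsystemd's
`sd_pid_get_unit()` and that we already have the caller's PID from the bus
(e.g. via `GetConnectionUnixProcessID`):

```rust
use std::ffi::{c_void, CStr};
use std::os::raw::{c_char, c_int};

extern "C" {
    // libsystemd: resolves a PID to its systemd unit name.
    fn sd_pid_get_unit(pid: c_int, unit: *mut *mut c_char) -> c_int;
    // libc: sd_pid_get_unit() allocates the string; we must free it.
    fn free(p: *mut c_void);
}

fn unit_of_pid(pid: c_int) -> Option<String> {
    let mut unit: *mut c_char = std::ptr::null_mut();
    // SAFETY: `unit` is only set on success (return value >= 0).
    let r = unsafe { sd_pid_get_unit(pid, &mut unit) };
    if r < 0 || unit.is_null() {
        return None;
    }
    let s = unsafe { CStr::from_ptr(unit) }.to_string_lossy().into_owned();
    unsafe { free(unit as *mut c_void) };
    Some(s)
}
```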
The previous build was GC'd; unfortunately, it's very nontrivial
to make this test truly robust over time because FCOS changes:
we might sometimes have an outstanding update, other times not, etc.
Let's just sanity-check the commands; ultimately they're
thin wrappers around downloading packages, so we don't need
deep checks.
Let's include the final extensions file in JSON format as part of the
output directory. A key difference from the input file (apart from YAML
vs JSON) is that this is post-filtering, so any extensions which were
removed because the architecture does not match are not present.
This JSON file will be used by cosa and the MCO. See discussions in:
https://github.com/openshift/os/issues/409
Prep for reworking our binary entrypoint to be Rust and not C++.
We need to split up `main` into sub-pieces; but before we do
that, let's avoid `goto out` and rework into declare-and-initialize
style, which cleans things up here.
Until now we'd been using cxx-rs effectively as a better
cbindgen: exposing Rust code to C++ safely. This is
the first case of having Rust call back into C++ using cxx-rs.
This allows us to fully use cxx-rs with `extern "C++"`. We
do call back into the C/C++ code today, but only outside
of cargo/Rust's knowledge. Most notably, that means we can't
use our C code in `cargo test`, and that's a problem
for moving some C/C++ code to Rust, because we want to port
the unit tests too.
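A sketch of what that looks like in one bridge (names are illustrative): the
`extern "Rust"` section is the "better cbindgen" direction we already had, and
the `extern "C++"` section is the new one, which lets `cargo test` link and
call into the C/C++ code.

```rust
#[cxx::bridge(namespace = "rpmostreecxx")]
mod ffi {
    // Existing direction: Rust implementations exposed to C++.
    extern "Rust" {
        fn client_version() -> String;
    }

    // New direction: C++ implementations callable from Rust, and therefore
    // from `cargo test`, once the C++ objects are linked into the binary.
    unsafe extern "C++" {
        include!("rpmostree-core.h"); // hypothetical header
        fn core_sanity_check() -> Result<()>;
    }
}

// Rust side of the extern "Rust" declaration above.
fn client_version() -> String {
    env!("CARGO_PKG_VERSION").to_string()
}
```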
For now, re-declare our dependencies and part of the build
system inside the Cargo build. However, this is also
an important step towards using Cargo as our *sole* build
system.
We don't add build dependencies too often, so the short-term
duplication should be OK.
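Concretely, the duplication is roughly a `build.rs` along these lines
(assuming the `cc` and `pkg-config` crates; the library names and source path
are illustrative):

```rust
// build.rs sketch: re-declare the system libraries and compile the C/C++
// pieces so that `cargo build`/`cargo test` can link them.
fn main() {
    // pkg-config emits the cargo:rustc-link-lib/-search directives for us.
    for lib in ["rpm", "ostree-1", "json-glib-1.0"] {
        pkg_config::probe_library(lib).expect("pkg-config");
    }

    // Build the C++ side into a static archive for the Rust crate to link.
    cc::Build::new()
        .cpp(true)
        .flag("-std=c++17")
        .file("src/libpriv/rpmostree-util.cxx") // hypothetical path
        .compile("rpmostreeinternals");
}
```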
However, a major unfortunate side effect of this is that
we now need to serialize the build process; almost all the
C/C++ comes first (`librpmostreeinternals.la`) and then
the Rust build, then we finally generate the executable
with both.
The only way out of this really is to move more of the
C/C++ build into Cargo, and we probably want to refactor
into internal crates.
I think we did this at some point, but then stopped.
Prep for https://github.com/coreos/rpm-ostree/pull/2413
because we'll need a full build of the C++ side too in order
to `cargo test`.
It is sometimes useful to only register an update driver without
actually deploying anything. If the argument to `deploy` is an
empty string, only register the driver and then no-op.
We temporarily set the rpmdb path to an absolute path pointing under
the tmprootfs when writing the rpmdb. This throws off libsolv 0.7.17,
which learned to give the `_dbpath` macro precedence when determining
where the rpmdb is located:
04d4d036b2
So then the rpmdb sanity-check we do when exiting
`rpmostree_context_assemble()` breaks because it can't find the expected
packages.
Because RPM macros are in global state, there's no elegant way of
setting it just for the rpmdb write operation (short of forking), so
just fix this by setting `_dbpath` back to the correct value after we're
done writing the rpmdb.
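The shape of the fix, sketched here against librpm's macro API via raw FFI
(the real change is on the C/C++ side; `rpmPushMacro()`/`rpmPopMacro()` with a
NULL context for the global table are assumptions about how one would express
the same pattern):

```rust
use std::ffi::{c_void, CString};
use std::os::raw::{c_char, c_int};
use std::ptr;

extern "C" {
    // rpm/rpmmacro.h: push/pop a macro definition; NULL context = global table.
    fn rpmPushMacro(mc: *mut c_void, name: *const c_char, opts: *const c_char,
                    body: *const c_char, level: c_int) -> c_int;
    fn rpmPopMacro(mc: *mut c_void, name: *const c_char) -> c_int;
}

/// Run `f` with %_dbpath temporarily pointing under the tmprootfs, then
/// restore the previous value so the later rpmdb sanity check still passes.
fn with_tmp_dbpath<R>(tmp_dbpath: &str, f: impl FnOnce() -> R) -> R {
    let name = CString::new("_dbpath").unwrap();
    let body = CString::new(tmp_dbpath).unwrap();
    unsafe { rpmPushMacro(ptr::null_mut(), name.as_ptr(), ptr::null(), body.as_ptr(), 0) };
    let r = f();
    unsafe { rpmPopMacro(ptr::null_mut(), name.as_ptr()) };
    r
}
```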
Closes: https://github.com/coreos/fedora-coreos-tracker/issues/723
Mostly straightforward stuff. It taught me about the `matches!` macro,
which looks really useful.
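For reference, a tiny illustration of `matches!` (the enum here is made up,
not an rpm-ostree type):

```rust
enum RefKind {
    Checksum,
    Branch(String),
}

fn is_branch(r: &RefKind) -> bool {
    // Equivalent to: match r { RefKind::Branch(_) => true, _ => false }
    matches!(r, RefKind::Branch(_))
}

fn main() {
    assert!(is_branch(&RefKind::Branch("stable".into())));
    assert!(!is_branch(&RefKind::Checksum));
}
```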
Wanted to turn this on in CI, but there are still a bunch of clippy
warnings coming from the `cxx.rs` stuff and some of our unsafe blocks.
For example, it wants the `files` arg in `initramfs_overlay_generate` to
be `&[String]` instead of `&Vec<String>` but that would break cxx.rs (it
looks like cxx.rs does support slices, but it would require creating one
from the vector we have to create anyway).
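The lint in question is `clippy::ptr_arg`; stub signatures to illustrate the
difference (bodies and return types are placeholders):

```rust
// What clippy suggests: borrow a slice, so callers aren't tied to Vec.
fn initramfs_overlay_generate_idiomatic(files: &[String]) -> usize {
    files.len()
}

// What the cxx bridge requires today, since it passes a concrete Vec<String>.
#[allow(clippy::ptr_arg)]
fn initramfs_overlay_generate(files: &Vec<String>) -> usize {
    files.len()
}
```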
This fixes the `install-extra-builddeps.sh` helper by letting cargo
detect whether the target binary is already present in the
environment with the expected version.
This avoids mismatches in generated code when the library version
is bumped and stale binaries are present on the system.
Record the calling agent's systemd unit and serialize it into a
GVariant file at `/run/rpmostree/update-driver.gv`, along with the
human-readable name of the update driver provided as a string
argument.
Also add the companion `--register-driver` option to the `deploy`
CLI command.
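A sketch of the serialization step, assuming the `glib` crate (`ToVariant` and
`Variant::data()`); the actual on-disk key layout may differ:

```rust
use glib::ToVariant;

/// Pack the caller's systemd unit and the driver's human-readable name as a
/// "(ss)" GVariant and write its serialized bytes under /run.
fn save_driver_state(sd_unit: &str, driver_name: &str) -> std::io::Result<()> {
    let v = (sd_unit, driver_name).to_variant();
    std::fs::write("/run/rpmostree/update-driver.gv", v.data())
}
```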
Closes https://github.com/coreos/rpm-ostree/issues/1747.
This adds support for a new `rpm-ostree compose extensions` command,
which takes a treefile, a new extensions YAML file, and an OSTree repo
and ref. It performs a depsolve and downloads the extensions to a
provided output directory.
This is intended to replace cosa's `download-extensions`:
https://github.com/coreos/coreos-assembler/blob/master/src/download-extensions
The input YAML schema matches the one accepted by that script.
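As a sketch, that schema looks roughly like the following serde structs (field
names are taken from this description and may not exactly match the
authoritative schema in the rpm-ostree sources):

```rust
use serde::Deserialize;
use std::collections::BTreeMap;

/// Top-level extensions YAML: a map of extension name -> definition.
#[derive(Deserialize)]
struct ExtensionsFile {
    extensions: BTreeMap<String, Extension>,
}

#[derive(Deserialize)]
#[serde(rename_all = "kebab-case")]
struct Extension {
    /// Packages to depsolve and download for this extension.
    packages: Vec<String>,
    /// If set, only include the extension on these architectures.
    architectures: Option<Vec<String>>,
    /// If set, force the extension's EVR to match this base package
    /// (e.g. kernel-headers tracking kernel).
    match_base_evr: Option<String>,
}
```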
Some differences from the script:
- We have a guaranteed depsolve match and thus can avoid silly issues
we've hit in RHCOS (like downloading the wrong `libprotobuf` for
`usbguard` -- rhbz#1889694).
- We seamlessly re-use the same repos defined in the treefile, whereas
the cosa script uses `reposdir=$dir` which doesn't have the same
semantics (repo enablement is in that case purely based on the
`enabled` flag in those repos, which may be different than what the
rpm-ostree compose ran with).
- We perform more sanity-checks against the requested extensions, such
as whether the extension is already in the base.
- We support no-change detection via a state SHA512 file for better
integration in cosa and pipelines.
- We support a `match-base-evr` key, which forces the extension to have
the same EVR as the one from a base package: this is helpful in the
case of extensions which complement a base package, esp. those which
may not have strong enough reldeps to enforce matching EVRs by
depsolve alone (`kernel-headers` is an example of this).
- We don't try to organize the RPMs into separate directories by
extension because IMO it's not at the right level. Instead, we should
work towards higher-level metadata to represent extensions (see
https://github.com/openshift/os/issues/409 which is related to this).
Closes: #2055
This reverts commit 6a3e3d807d.
This isn't as useful for the implementation of `rpm-ostree compose
extensions` because it doesn't account for locally cached repos, where no
downloading happens.
Instead, just let libdnf download packages to the default location
if they're remote, and we'll copy them over to the output dir.