The inevitable followup to https://github.com/coreos/rpm-ostree/pull/2278
that I was too cowardly to do at the time. But it's time to admit
the 2 months or so of work on this was wasted. We have too much
tech debt and this is a large chunk of C/C++ code that touches everything
in the codebase in a nontrivial way.
Bigger picture, I'm going to work on
https://github.com/coreos/fedora-coreos-tracker/issues/828
which will strongly orient rpm-ostree towards the container world instead.
We'll still obviously keep the rpm package world around, but only
as a secondary layer. What rojig was trying to do by putting "images"
inside an RPM conflated those layers. It probably would have had a lot of
benefits if we'd truly pushed it through to completion, but that didn't
happen. Let's focus on containers instead.
There's still a lot more rojig code to delete, but this first patch removes
the bulk of it. Touching everything that references `RPMOSTREE_REFSPEC_TYPE_ROJIG`
and friends can come as a third phase.
As rpm-ostree is transitioning to a primarily-Rust project,
our dependency on running `make` to generate `.gitignore` is a
problem.
For example, just opening the project in an IDE that runs rust-analyzer
will kick off a build and generate files in the `target/` directory, but
because rust-analyzer doesn't know to run `make`, we won't have `.gitignore` yet.
Further, I think we should try changing the C/Automake side
to write its generated files into `target/` too.
Install a copy of rpm-ostree as rpm-ostree-unpriv to get a `bin_t`
labeled binary as a temporary workaround for:
https://bugzilla.redhat.com/show_bug.cgi?id=1937404
Also modify the rpm-ostree countme service to use that binary.
We have fully transitioned to cxx-rs! This drops a lot of now-dead
code; there's only one binding system to think about when generating
source code. A notable advantage of cxx-rs is that it doesn't scan
the whole source tree, so running `make` no longer spews errors from
cbindgen failing to understand bits of it.
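For reference, a cxx-rs bridge is an explicit module rather than something discovered by scanning; a minimal sketch (the names here are illustrative, not our actual bridge):

```rust
// Sketch of a cxx-rs bridge: only what is declared inside the
// #[cxx::bridge] module gets processed, so there is no whole-tree
// scanning.  The names here are illustrative, not our actual bridge.
#[cxx::bridge(namespace = "rpmostreecxx")]
mod ffi {
    extern "Rust" {
        // Rust implementation exported to C++; cxx generates the matching
        // header instead of cbindgen scanning every source file for it.
        fn countme_entrypoint(args: Vec<String>) -> Result<()>;
    }
}

// The implementation lives in the same crate; C++ callers get a generated
// declaration that turns the Err case into an exception.
fn countme_entrypoint(args: Vec<String>) -> Result<(), String> {
    println!("countme: {:?}", args);
    Ok(())
}
```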
While working on a PR to add a sub-crate, I hit the fact
that our `find` invocation wasn't fully accurate and spent some
time debugging why the code I got after `make`
wasn't up to date.
Since cargo is smart in general, let's stop trying to
second-guess its dependencies and just run `cargo build`
every time `make` is run.
(I'm not sure why we didn't do this from the start.)
This way we at least get unit test coverage (though our unit test
coverage doesn't amount to much, because our main code paths
require privileges or virt).
One main blocker here is that rustc doesn't yet expose
first-class support for sanitizers:
https://github.com/rust-lang/rust/issues/39699
At a practical level this works when building in release
mode but fails with `cargo test` for some reason; linker
arguments being pruned? Not sure.
I was able to use this when composing to find a bug,
but then for some other reason the client
side apparently hits an infinite loop inside libsolv.
So we're not enabling this yet for those reasons, but
let's land the build infrastructure now.
```
(lldb) thread backtrace
* thread #4, name = 'pool-/usr/bin/r'
* frame #0: 0x00007fd61b97200f libc.so.6`__memcpy_sse2_unaligned_erms + 623
frame #1: 0x00007fd61cbc88e6 libasan.so.6`__asan::asan_realloc(void*, unsigned long, __sanitizer::BufferedStackTrace*) + 214
frame #2: 0x00007fd61cc4b725 libasan.so.6`__interceptor_realloc + 245
frame #3: 0x00007fd61baec43e libsolv.so.1`solv_realloc + 30
frame #4: 0x00007fd61baf0414 libsolv.so.1`repodata_add_dirstr + 276
frame #5: 0x00007fd61bb6f755 libsolvext.so.1`end_element + 53
frame #6: 0x00007fd61b05855d libxml2.so.2`xmlParseEndTag1.constprop.0 + 317
frame #7: 0x00007fd61b063548 libxml2.so.2`xmlParseTryOrFinish.isra.0 + 888
frame #8: 0x00007fd61af7ed20 libxml2.so.2`xmlParseChunk + 560
frame #9: 0x00007fd61bb727e7 libsolvext.so.1`solv_xmlparser_parse + 183
frame #10: 0x00007fd61bb5ea0e libsolvext.so.1`repo_add_rpmmd + 254
frame #11: 0x000055a4fce7a5f5 rpm-ostree`::load_filelists_cb(repo=<unavailable>, fp=<unavailable>) at dnf-sack.cpp:444:23
frame #12: 0x000055a4fce7cad6 rpm-ostree`load_ext(_DnfSack*, libdnf::Repo*, _hy_repo_repodata, char const*, char const*, int (*)(s_Repo*, _IO_FILE*), _GError**) at dnf-sack.cpp:430:13
frame #13: 0x000055a4fce7df60 rpm-ostree`dnf_sack_load_repo at dnf-sack.cpp:1789:26
frame #14: 0x000055a4fce7eee9 rpm-ostree`dnf_sack_add_repo at dnf-sack.cpp:2217:28
frame #15: 0x000055a4fce7f0fb rpm-ostree`dnf_sack_add_repos at dnf-sack.cpp:2271:32
frame #16: 0x000055a4fce870ee rpm-ostree`dnf_context_setup_sack_with_flags at dnf-context.cpp:1796:29
frame #17: 0x000055a4fcdf757f rpm-ostree`rpmostree_context_download_metadata at rpmostree-core.cxx:1206:44
frame #18: 0x000055a4fcdf95c3 rpm-ostree`rpmostree_context_prepare at rpmostree-core.cxx:2001:48
frame #19: 0x000055a4fce54ab7 rpm-ostree`rpmostree_sysroot_upgrader_prep_layering at rpmostree-sysroot-upgrader.cxx:1018:38
frame #20: 0x000055a4fcdcb143 rpm-ostree`deploy_transaction_execute(_RpmostreedTransaction*, _GCancellable*, _GError**) at rpmostreed-transaction-types.cxx:1445:49
frame #21: 0x000055a4fcdba4cd rpm-ostree`transaction_execute_thread(_GTask*, void*, void*, _GCancellable*) at rpmostreed-transaction.cxx:340:34
frame #22: 0x00007fd61c58f7e2 libgio-2.0.so.0`g_task_thread_pool_thread + 114
frame #23: 0x00007fd61c3d7e54 libglib-2.0.so.0`g_thread_pool_thread_proxy.lto_priv.0 + 116
frame #24: 0x00007fd61c3d52b2 libglib-2.0.so.0`g_thread_proxy + 82
frame #25: 0x00007fd61b8af3f9 libpthread.so.0`start_thread + 233
frame #26: 0x00007fd61b9c9903 libc.so.6`__clone + 67
(lldb)
```
Having our binary depend on the shared library, which in
turn depends on the binary (at runtime), is messy.
Instead, statically compile the shlib code into our binary.
This duplicates the text a bit, but it's not a lot of code.
The goal is to make it easier in the future to e.g. move the
shared library out into a separate git repository entirely,
one that runs on a separate lifecycle - it could still build
using Automake, for example, while the main git repository
switches to purely cargo.
Another motivation is avoiding linker issues I hit with other
patches due to this semi-cyclical dependency.
This fixes the link dependencies and the build libraries path in order
to make the Rust tests work.
It also introduces an additional wildcard target to allow specifying
a test filter to cargo.
First, the public shared library only depends on a few
things (not the libdnf dependencies) so let's ensure we
only link it to those libraries.
And then, I realized we don't actually need the libdnf
dependencies here - I think I only added those back here
when trying vainly to keep the C unit tests working. But
we don't have those anymore! So we can delete the duplication
and fully rely on Cargo taking care of libdnf.
Conceptually, for a static library we don't "link" it against
anything in Automake; that happens at the final stage with
the Rust linker.
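On the Cargo side, that final link amounts to a handful of `build.rs` directives; a minimal sketch, with paths and library names that are illustrative rather than our exact Makefile output:

```rust
// build.rs sketch: let cargo/rustc do the final link against the
// Automake-built static library.  Paths and library names here are
// illustrative, not the exact ones our build produces.
fn main() {
    // Where Automake/libtool dropped librpmostreeinternals.a.
    println!("cargo:rustc-link-search=native=.libs");
    println!("cargo:rustc-link-lib=static=rpmostreeinternals");
    // The static library's own dependencies are resolved here, at the
    // final link, rather than being "linked" into the .a by Automake.
    println!("cargo:rustc-link-lib=dylib=ostree-1");
}
```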
This is a further migration towards Cargo/Rust, made possible
because we switched our main binary. We've had an internal
`libdnf-sys` crate for a while, but now it can take over
the build of the underlying library too (as many `-sys`
crates support).
This itself is just an incremental step towards also migrating
the main rpm-ostree build system to e.g. CMake (or
perhaps building directly with the `cc` crate, not sure yet) and
driving it via `cargo`.
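As a sketch of what "taking over the build" could look like in `libdnf-sys`, using the `cmake` crate and assuming libdnf is vendored as a submodule (the option names below are illustrative):

```rust
// libdnf-sys/build.rs sketch: build the vendored libdnf via the `cmake`
// crate and link it statically.  Option names are illustrative.
fn main() {
    let dst = cmake::Config::new("../libdnf")
        .define("BUILD_SHARED_LIBS", "OFF")
        .define("WITH_GTKDOC", "OFF")
        .build();
    // The cmake crate installs into $OUT_DIR; point cargo at the result.
    println!("cargo:rustc-link-search=native={}/lib", dst.display());
    println!("cargo:rustc-link-lib=static=dnf");
}
```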
We now have bidirectional calling between Rust and C++,
but we are generating two static libraries that we then
link together with a tiny C++ `main.cxx`.
Let's make another huge leap towards oxidation by
having Rust be the entrypoint. This way cargo natively
takes care of linking the internal Rust library, and
our C++ internals become the library.
In other words, we've now fully inverted from
"C app with internal Rust library"
to "Rust binary with internal C++ library".
In order to make this work, though, we have to finally
kill the C unit tests. But most of what was covered
there is either being converted to Rust or covered
elsewhere anyway.
Now as the doc comments in `main.rs` say...this is
a bit awkward because all the CLI code is still in C++.
Porting stuff to use e.g. `structopt` natively would
be a bit of a slog. For now, we basically rely on
the fact that the Rust-native CLIs are all hidden
commands.
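A rough sketch of the shape this gives `main.rs`; the function names are simplified stand-ins for the real bridged entrypoints, not our actual code:

```rust
// Stand-ins for the bridged C++ CLI entrypoint and one Rust-native hidden
// command; in the real tree these come from the cxx bridge and our modules.
fn cxx_main(_args: &[String]) -> i32 {
    todo!("forward to the C++ CLI via the cxx bridge")
}
fn countme_main(_args: &[String]) -> i32 {
    todo!("Rust-native hidden command")
}

fn main() {
    let args: Vec<String> = std::env::args().collect();
    // Dispatch the handful of hidden Rust-native commands ourselves;
    // everything else is forwarded to the C++ side, which still owns the CLI.
    let rc = match args.get(1).map(String::as_str) {
        Some("countme") => countme_main(&args),
        _ => cxx_main(&args),
    };
    std::process::exit(rc);
}
```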
Update submodule: libdnf
Prep for "Rust-as-main", where I want to build libdnf statically.
And this really completes the "library thinout" story, because
now we avoid dragging our *private* `libdnf.so` into the caller's
address space, which can cause conflicts if they're
also linking the system one. (Which could easily occur with
something like gnome-software.)
All we were using libdnf for (indirectly via libsolv) was comparing
version strings, but librpm can already do that for us.
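The change itself is on the C side of the shared library; purely as an illustration, the same comparison done via librpm's `rpmvercmp()` over FFI might look like this (a sketch, not our actual code):

```rust
use std::ffi::CString;
use std::os::raw::{c_char, c_int};

// librpm exports rpmvercmp(); declare it directly for this sketch.
#[link(name = "rpm")]
extern "C" {
    fn rpmvercmp(a: *const c_char, b: *const c_char) -> c_int;
}

/// Compare two RPM version strings; returns -1, 0 or 1 like strcmp.
fn compare_versions(a: &str, b: &str) -> i32 {
    let a = CString::new(a).expect("no NUL in version");
    let b = CString::new(b).expect("no NUL in version");
    unsafe { rpmvercmp(a.as_ptr(), b.as_ptr()) }
}

fn main() {
    println!("{}", compare_versions("1.0-2", "1.0-10"));
}
```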
This allows us to fully use cxx-rs with `extern "C++"`. We
do call back into C/C++ today, but only outside
of cargo/Rust's knowledge. Most notably, that means we can't
use our C code in `cargo test`, and that's a problem
for moving some C/C++ code to Rust, because we want to port
the unit tests too.
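The point of `extern "C++"` is that the C++ functions become callable in a way cargo knows how to build and link, so `cargo test` can exercise them too; a sketch, with illustrative header and function names:

```rust
#[cxx::bridge(namespace = "rpmostreecxx")]
mod ffi {
    unsafe extern "C++" {
        // Declared here and compiled via cxx-build in build.rs, so the C++
        // implementation gets linked into `cargo test` binaries as well.
        include!("rpmostree-core.h");
        fn nevra_to_cache_branch(nevra: &CxxString) -> Result<String>;
    }
}

#[cfg(test)]
mod tests {
    // With the C++ side built under cargo's knowledge, unit tests can
    // call it directly.
    #[test]
    fn test_cache_branch() {
        cxx::let_cxx_string!(nevra = "foo-1.0-1.x86_64");
        assert!(super::ffi::nevra_to_cache_branch(&nevra).is_ok());
    }
}
```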
For now, re-declare our dependencies and part of the build
system inside the Cargo build. However, this is also
an important step towards using Cargo as our *sole* build
system.
We don't add build dependencies too often, so the short
term duplication should be OK.
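Concretely, "re-declaring the dependencies" means the crate's `build.rs` compiles the bridged C++ and probes the same pkg-config modules configure.ac already checks for; a rough sketch, with file and module names abbreviated and illustrative:

```rust
// build.rs sketch: duplicate just enough of the Automake dependency list
// so cargo can build and link the bridged C++ on its own.
fn main() {
    let mut build = cxx_build::bridge("src/lib.rs");
    build
        .file("src/rpmostree-shim.cxx")
        .flag_if_supported("-std=c++17");

    // The same pkg-config modules configure.ac checks for; abbreviated here.
    for dep in ["ostree-1", "rpm", "json-glib-1.0"] {
        // probe_library() also emits the cargo link directives for us.
        let lib = pkg_config::probe_library(dep).expect("pkg-config");
        for inc in &lib.include_paths {
            build.include(inc);
        }
    }
    build.compile("rpmostreecxx");
}
```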
However, a major unfortunate side effect of this is that
we now need to serialize the build process; almost all the
C/C++ comes first (`librpmostreeinternals.la`) and then
the Rust build, then we finally generate the executable
with both.
The only way out of this really is to move more of the
C/C++ build into Cargo, and we probably want to refactor
into internal crates.
For some reason, when building with `-g -Og`, I get a linker error for
a missing `lio_listio`. Adding `-lrt` fixes it. (We already link against
this transitively, so it's not actually a net new `DT_NEEDED`.)
This way we're sure it will build in e.g. Koji. Right now
it's annoying to test that locally; strictly speaking, one needs to
explicitly create a no-network container to do so. But
cargo has a convenient `--offline` flag, and nothing else
in our build stack should touch the network.
cxx.rs (aka cxxbridge) and cbindgen both
generate source code. Since the last release
we've introduced the former, and we need to ensure
that the generated cxx.rs source ends up in release tarballs
the same way the cbindgen code does.
Rationalize and clean up the binding infrastructure.
Drop support for the vendored cbindgen which we
weren't actually using:
Closes: https://github.com/coreos/rpm-ostree/issues/2392
Move the cxx-rs and cbindgen bits into the same place,
and update our CoreOS CI build to use a separate `Makefile.bindings`
that just generates the code, so our CI still "works like"
a main Koji RPM build.
In the previous build system rework we disabled the unit tests because
of linking problems. Now I've realized that a simple solution is
to continue building one big object, just making it an internal
static library with a tiny "stub main" that delegates to an entrypoint.
That's basically what the C unit tests are - an alternative `main()`
with some extra code.