This way we at least get coverage from the unit tests (though our
unit test coverage doesn't do much on its own, because our main
code paths require privileges or virtualization).
The main blocker here is that rustc doesn't expose first-class
support for sanitizers yet:
https://github.com/rust-lang/rust/issues/39699
At a practical level this works when building in release mode, but
fails with `cargo test` for some reason; perhaps the linker
arguments are being pruned? Not sure.
I was able to use this when composing to find a bug, but then for
some other reason the client side apparently ends up in an infinite
loop inside libsolv. So for those reasons we're not enabling this
yet, but let's land the build infrastructure now.
```
(lldb) thread backtrace
* thread #4, name = 'pool-/usr/bin/r'
* frame #0: 0x00007fd61b97200f libc.so.6`__memcpy_sse2_unaligned_erms + 623
frame #1: 0x00007fd61cbc88e6 libasan.so.6`__asan::asan_realloc(void*, unsigned long, __sanitizer::BufferedStackTrace*) + 214
frame #2: 0x00007fd61cc4b725 libasan.so.6`__interceptor_realloc + 245
frame #3: 0x00007fd61baec43e libsolv.so.1`solv_realloc + 30
frame #4: 0x00007fd61baf0414 libsolv.so.1`repodata_add_dirstr + 276
frame #5: 0x00007fd61bb6f755 libsolvext.so.1`end_element + 53
frame #6: 0x00007fd61b05855d libxml2.so.2`xmlParseEndTag1.constprop.0 + 317
frame #7: 0x00007fd61b063548 libxml2.so.2`xmlParseTryOrFinish.isra.0 + 888
frame #8: 0x00007fd61af7ed20 libxml2.so.2`xmlParseChunk + 560
frame #9: 0x00007fd61bb727e7 libsolvext.so.1`solv_xmlparser_parse + 183
frame #10: 0x00007fd61bb5ea0e libsolvext.so.1`repo_add_rpmmd + 254
frame #11: 0x000055a4fce7a5f5 rpm-ostree`::load_filelists_cb(repo=<unavailable>, fp=<unavailable>) at dnf-sack.cpp:444:23
frame #12: 0x000055a4fce7cad6 rpm-ostree`load_ext(_DnfSack*, libdnf::Repo*, _hy_repo_repodata, char const*, char const*, int (*)(s_Repo*, _IO_FILE*), _GError**) at dnf-sack.cpp:430:13
frame #13: 0x000055a4fce7df60 rpm-ostree`dnf_sack_load_repo at dnf-sack.cpp:1789:26
frame #14: 0x000055a4fce7eee9 rpm-ostree`dnf_sack_add_repo at dnf-sack.cpp:2217:28
frame #15: 0x000055a4fce7f0fb rpm-ostree`dnf_sack_add_repos at dnf-sack.cpp:2271:32
frame #16: 0x000055a4fce870ee rpm-ostree`dnf_context_setup_sack_with_flags at dnf-context.cpp:1796:29
frame #17: 0x000055a4fcdf757f rpm-ostree`rpmostree_context_download_metadata at rpmostree-core.cxx:1206:44
frame #18: 0x000055a4fcdf95c3 rpm-ostree`rpmostree_context_prepare at rpmostree-core.cxx:2001:48
frame #19: 0x000055a4fce54ab7 rpm-ostree`rpmostree_sysroot_upgrader_prep_layering at rpmostree-sysroot-upgrader.cxx:1018:38
frame #20: 0x000055a4fcdcb143 rpm-ostree`deploy_transaction_execute(_RpmostreedTransaction*, _GCancellable*, _GError**) at rpmostreed-transaction-types.cxx:1445:49
frame #21: 0x000055a4fcdba4cd rpm-ostree`transaction_execute_thread(_GTask*, void*, void*, _GCancellable*) at rpmostreed-transaction.cxx:340:34
frame #22: 0x00007fd61c58f7e2 libgio-2.0.so.0`g_task_thread_pool_thread + 114
frame #23: 0x00007fd61c3d7e54 libglib-2.0.so.0`g_thread_pool_thread_proxy.lto_priv.0 + 116
frame #24: 0x00007fd61c3d52b2 libglib-2.0.so.0`g_thread_proxy + 82
frame #25: 0x00007fd61b8af3f9 libpthread.so.0`start_thread + 233
frame #26: 0x00007fd61b9c9903 libc.so.6`__clone + 67
(lldb)
```
Having our binary depend on the shared library, which in turn
depends on the binary (at runtime), is messy.
Instead, statically compile the shlib code into our binary.
This duplicates the text a bit, but it's not a lot of code.
The goal is to make it easier in the future to e.g. move the
shared library out into a separate git repository entirely, running
on a separate lifecycle - one that would still build using Automake,
for example, while the main git repository switches to purely Cargo.
Another motivation is avoiding linker issues I had with other
patches due to this semi-cyclical dependency.
Further migration towards Cargo/Rust is now possible because we
switched our main binary. We've had an internal `libdnf-sys` crate
for a while, but now it can take over the build of the underlying
library too (as many `-sys` crates do).
This itself is just an incremental step towards migrating the main
rpm-ostree build system to e.g. CMake (or perhaps building directly
with the `cc` crate, not sure yet) and driving that via `cargo` as
well.
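As a rough illustration, the kind of `build.rs` this enables could
look like the sketch below (the paths and library name here are
hypothetical, not the actual vendored layout; it assumes `cc = "1"`
in `[build-dependencies]`):
```
// build.rs sketch for a -sys style crate that builds the vendored
// C/C++ sources itself via the `cc` crate.
fn main() {
    // Rebuild if the vendored sources change.
    println!("cargo:rerun-if-changed=libdnf");

    cc::Build::new()
        .cpp(true)
        // Hypothetical source file; a real crate would list the full set.
        .file("libdnf/dnf-sack.cpp")
        .include("libdnf")
        .compile("dnf"); // produces libdnf.a and emits link directives
}
```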
This allows us to fully use cxx-rs with `extern "C++"`. We do call
back into the C/C++ today, but only outside of cargo/Rust's
knowledge. Most notably, that means we can't use our C code in
`cargo test`, which is a problem for moving some C/C++ code to Rust,
because we want to port the unit tests too.
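For illustration, here is the shape of bridge this makes possible;
the header and function are made up, not rpm-ostree's actual API:
```
// A minimal cxx-rs sketch: a C++ function declared in the bridge
// becomes directly callable from Rust, including from `cargo test`.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("rpmostree-example.h"); // hypothetical header

        fn example_version_string() -> String;
    }
}

#[cfg(test)]
mod tests {
    #[test]
    fn calls_into_cxx() {
        // Only works once the C++ side is compiled as part of the
        // Cargo build, which is exactly the point of this change.
        assert!(!super::ffi::example_version_string().is_empty());
    }
}
```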
For now, re-declare our dependencies and part of the build
system inside the Cargo build. However, this is also
an important step towards using Cargo as our *sole* build
system.
We don't add build dependencies very often, so the short-term
duplication should be OK.
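Concretely, the duplicated part is mostly re-probing the C-side
dependencies from `build.rs`, roughly like this (the module list is
illustrative and this assumes the `pkg_config` crate):
```
// Sketch: re-declare the pkg-config dependencies that the autoconf
// side already checks for.
fn main() {
    for module in ["gio-unix-2.0", "ostree-1", "rpm"] {
        pkg_config::Config::new()
            .probe(module)
            .unwrap_or_else(|e| panic!("failed to find {}: {}", module, e));
    }
}
```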
However, a major unfortunate side effect of this is that we now
need to serialize the build process: almost all of the C/C++ is
built first (`librpmostreeinternals.la`), then the Rust, and then we
finally link the executable from both.
The only real way out of this is to move more of the C/C++ build
into Cargo, and we probably want to refactor it into internal
crates.
This adds support for e.g.:
```
$ rpm-ostree override replace https://bodhi.fedoraproject.org/updates/FEDORA-2020-2908628031
```
This will find the Koji builds from the listed update, download all
the RPMs (excluding debuginfo), and pass them as overrides in the
same way we already support
`override replace http://somewebserver/foo.rpm`.
We also support directly linking a Koji build:
```
$ rpm-ostree override replace https://koji.fedoraproject.org/koji/buildinfo?buildID=1625029
```
Bodhi has a modern HTTP+JSON API, and the lack of a Koji equivalent
drove me to create https://github.com/cgwalters/koji-sane-json-api
and we currently depend on an instance set up in the OpenShift CI
cluster.
Hopefully it won't take long to deploy this in Fedora Infra, but I
don't want to block on it.
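For reference, resolving an update ID to its Koji build NVRs via
Bodhi's JSON API is roughly this (a sketch; it assumes the response
nests the builds under an `update` object, and the crate choices -
reqwest and serde_json - are illustrative):
```
// Sketch: fetch a Bodhi update and list its build NVRs.
// Assumes reqwest with the "blocking" and "json" features.
fn bodhi_update_builds(updateid: &str) -> Result<Vec<String>, Box<dyn std::error::Error>> {
    let url = format!("https://bodhi.fedoraproject.org/updates/{}", updateid);
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .get(&url)
        .header("Accept", "application/json") // Bodhi content-negotiates HTML vs JSON
        .send()?
        .error_for_status()?
        .json()?;
    let builds = resp["update"]["builds"]
        .as_array()
        .ok_or("unexpected Bodhi response")?
        .iter()
        .filter_map(|b| b["nvr"].as_str().map(String::from))
        .collect();
    Ok(builds)
}
```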
Also notably, this still downloads *all* of the RPMs from the
update, even ones that aren't installed. Handling that truly
correctly would require moving this logic into the daemon and core.
All of this functionality is keyed off a `cfg(feature = "fedora-integration")`
that is detected by a Rust `build.rs` which parses the build environment's
`/etc/os-release` for now.
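The detection itself is only a few lines of `build.rs`; roughly
(the exact wiring in the real build script may differ):
```
// build.rs sketch: enable the Fedora integration feature when the
// build environment's /etc/os-release says we're on Fedora.
use std::fs;

fn main() {
    println!("cargo:rerun-if-changed=/etc/os-release");
    let os_release = fs::read_to_string("/etc/os-release").unwrap_or_default();
    let is_fedora = os_release
        .lines()
        .any(|line| line == "ID=fedora" || line == "ID=\"fedora\"");
    if is_fedora {
        println!("cargo:rustc-cfg=feature=\"fedora-integration\"");
    }
}
```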