Where I stalled out before is that this file has `pkg-add foo`, but
now that we have the `foo` package pre-built, we can move all of
this into `misc.sh`.
I dropped the YAML parsing of `--version` because we don't
have python. This is related to
https://github.com/coreos/coreos-assembler/issues/1645
I don't like the use of `HY_GLOB` in the lockfile package matching. We
have all the information in the Rust object, so it's silly to condense
that to a single string in a hashmap.
Fix this by returning the `LockfileConfig` object itself and then adding
a function to fetch the list of locked packages. This allows the C++
side to see all the individual fields which makes filtering trivial.
The next step is moving all the code which needs the lockfile to Rust.
Then we can drop the shared `LockedPackage` type.
(I did start on converting `find_locked_packages`, though it requires
adding bindings for all the `HyQuery` stuff, which... isn't great (and
also runs into the fact that `hy_query_run` needs to return a
GPtrArray). I think instead of a 1:1 mapping, we'll probably want the
libdnf-sys API wrappers to provide some sugar for the common paths.)
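A minimal sketch of the intended shape; the field and function names here are assumptions for illustration, not the actual code:

```rust
// Illustrative sketch only: real field and function names may differ.
#[derive(Clone)]
pub struct LockedPackage {
    pub name: String,
    pub evr: String,
    pub arch: String,
}

pub struct LockfileConfig {
    packages: Vec<LockedPackage>,
}

impl LockfileConfig {
    /// Expose the individual locked packages so the C++ side can filter
    /// on whole fields instead of matching against a condensed glob string.
    pub fn get_locked_packages(&self) -> Vec<LockedPackage> {
        self.packages.clone()
    }
}
```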
In FCOS, we use "override" lockfiles to pin packages to certain
versions. Right now, we have separate overrides for each base arch we
(eventually want to) support. But that makes maintaining the overrides
cumbersome because of all the duplication.
Let's allow lockfiles to specify only the `evr` of a package, which is
just as good for FCOS, and means that we'll only have to maintain a
single override file for all the architectures.
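For illustration, an arch-independent override could look roughly like this (the file name and exact schema below are assumptions):

```yaml
# manifest-lock.overrides.yaml (illustrative): pinning by evr only means
# the same entry works for x86_64, aarch64, and any other base arch.
packages:
  kernel:
    evr: 5.8.15-301.fc33
```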
Same motivation as https://github.com/coreos/bootupd/pull/163
Effectively what we're doing here is creating a human-readable subset
of the stack trace. This is nicer than having the calling functions
add with_context(), which is more verbose (it gets duplicative at
each call site), easy to forget, etc.
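A minimal sketch of the duplication this avoids, assuming anyhow-style errors (the function names are hypothetical):

```rust
use anyhow::{Context, Result};

// Every caller has to remember its own with_context(), and the strings
// end up restating the call chain that we could render automatically.
fn read_config(path: &str) -> Result<String> {
    std::fs::read_to_string(path).with_context(|| format!("Reading {}", path))
}

fn load_state() -> Result<String> {
    read_config("/etc/example.conf").context("Loading state")
}
```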
This is a major downside of reworking and generating new CI
flows: it's super easy to stop testing what you intended to test.
Also, we clearly need to figure out a flow where this is shared
across repos, since I don't want to copy-paste this into e.g. ostree too.
That's https://github.com/coreos/fedora-coreos-tracker/issues/263
OK I think it's time. This exposes the `apply-live` functionality
as implicitly stable, but specific to the package install case.
I'd like to add more intelligence to `apply-live` around separating
pure "additions" (as in this case) versus package (file) changes.
The change here doesn't try to do that. The implementation is
incredibly simple: we just have the client chain together the two
distinct transactions.
There's a huge difference between live updates that change
existing things, versus simply adding new packages (files).
The latter is really quite safe, and live layering is one
of the most requested features.
Continuing the momentum to use kola ext tests.
One obvious benefit of this as the porting continues
is that we can share our built test RPMs across
different tests, e.g. we can have a `testdaemon` package
instead of a `test-livefs-service` package.
It's always nice when apps provide useful hints about other commands you
may be interested in.
For instance, if they've done `rpm-ostree override replace/remove`,
let's be helpful and tell users that they can use `rpm-ostree override
reset` to unpin packages.
In prep for adding more methods, require the caller to identify
themselves.
For now this is `CliClient` - one could imagine in the future
we actually do direct DBus, but there's a whole other world
of stuff there.
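A rough sketch of the idea; everything here beyond the `CliClient` name is hypothetical:

```rust
// Hypothetical illustration: callers pass an identity when connecting,
// so daemon-facing helpers know who is talking to them.
pub enum ClientKind {
    CliClient,
    // A direct DBus caller could be added here later.
}

pub struct ClientConnection {
    kind: ClientKind,
    // ...connection state...
}
```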
Add more metadata that zincati needs, like `base-commit-meta`
which includes the `fedora-coreos.stream` key and the cosa basearch,
etc.
Also `derive(Debug)` since it's used in a cache struct that also
derives Debug, and that's a friendly thing to do in general.
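Roughly the kind of change involved; the struct and field names here are assumptions, not the actual code:

```rust
use std::collections::HashMap;

// Hypothetical sketch: the status struct carries the extra metadata and
// derives Debug so a cache struct containing it can derive Debug too.
#[derive(Debug)]
pub struct DeploymentMeta {
    /// Commit metadata, e.g. the "fedora-coreos.stream" key.
    pub base_commit_meta: HashMap<String, String>,
    /// The coreos-assembler base architecture.
    pub basearch: String,
}
```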
This adds sufficient infrastructure to fully port the
`rpmostree-builtin-applylive.cxx` client code to Rust.
We just keep a stub entrypoint for now until we port
the rest of `rpmostree-builtin-ex.cxx`, at which point
a lot of C++ files go away.
The "finish" bits move from the daemon-oriented `live.rs`
into a new `rust/src/builtins` directory. I'd like
to try to more cleanly split up the Rust sources along
core(shared)/client/daemon directories in the future.
This stubs out sufficient infrastructure for us to register
as a client and call the Moo API.
A glaring problem here is the lack of extensive `glib::Variant`
bindings; that's covered in the next gtk-rs release.
My real goal was to try porting the `rpmostree-builtin-apply-live.cxx`
code entirely to Rust, but there's more to do to expose the
transaction helper APIs we have.
We have fully transitioned to cxx-rs! This drops a lot of now-dead
code; there's only one binding system to think about when generating
source code. For example, a notable advantage of cxx-rs is that it
doesn't scan the whole source code, so running `make` doesn't spew
errors from cbindgen not understanding bits.
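For contrast, a minimal cxx-rs bridge declares its exports explicitly in one module, so nothing has to scan the rest of the crate; the names below are just for illustration:

```rust
// Illustrative only: cxx generates bindings from this explicit listing
// rather than scanning the whole source tree like cbindgen did.
#[cxx::bridge(namespace = "rpmostreecxx")]
mod ffi {
    extern "Rust" {
        fn example_entrypoint(arg: &str) -> String;
    }
}

fn example_entrypoint(arg: &str) -> String {
    format!("hello {}", arg)
}
```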
cxx-rs only supports a few basic types in `Vec<T>`/`CxxVector<T>`
and we need to pass an array of GObjects in a few cases.
Add a wrapper class hack instead of using `u64` so we at least
have some basic safety here and have a convenient place to
grep for later when we want to improve this.
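A minimal sketch of the wrapper idea, with hypothetical names:

```rust
// Instead of a bare u64 in the bridge, wrap the pointer value in a
// dedicated shared struct: the same hack underneath, but type-distinct
// and easy to grep for when we replace it with real bindings.
#[cxx::bridge]
mod ffi {
    struct FFIGObjectWrapper {
        // Raw GObject pointer (e.g. a DnfPackage) smuggled as an integer.
        ptr: u64,
    }
}
```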
This moves `ror_lockfile_write` to cxx.rs, which brings us closer to
getting rid of cbindgen.
There's one massive hack this uses, which is that we pass an array of
pointers to `DnfPackage` and `DnfRepo` objects as u64. We'll want to
circle back and fix that up once cxx.rs either natively supports arrays
of pointers, or we just come up with our own wrapper type for it.
But for now at least, this unblocks the cbindgen transition and hacking
on the lockfile code.
I'd like to get to the point where we drop the `vmcheck.sh`/`libvm.sh` stuff.
Instead we use kola directly, and write our tests in a way that
defaults to running on the target rather than the host, because it's *much*
more natural to type e.g. `rpm-ostree upgrade` instead of `vm_rpmostree upgrade`.
We'd done a bit of porting, but a blocker was that a lot of our
tests dynamically generate RPMs and send them over. Instead,
let's generate the RPMs ahead of time in a "build" step, then
they all get passed at once via kola ext data. Add the concept
of multiple repo versions too.
Right now we only generate the one RPM needed for the `layering-local`
test and port it.
Came up on the `#fedora-iot` channel: some people are hitting
"No packages in transaction". I believe we have a bug,
but I didn't hit it with this simple test case at least.
It may be related to layering while doing this too; going to
test that next.
I spent longer than I'd care to admit being confused about why my
changes from `cosa build-fast` weren't being picked up.
We need to honor `.cosa` first, because the expected situation is
that you have both set when using `build-fast`.
Will look at fixing `kola spawn` to handle all this too; the
problem is we haven't taught kola/cosa about `COSA_DIR`.
Instead of passing the pid back up the stack and using a cleanup
function on it to invoke `kill()`, use `PR_SET_PDEATHSIG`, which
has the kernel take care of this for us.
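A minimal sketch of the mechanism, assuming the libc crate:

```rust
// Ask the kernel to send this process SIGTERM when its parent exits,
// instead of passing the pid back up and kill()ing it from a cleanup
// handler.
fn set_parent_death_signal() -> std::io::Result<()> {
    let r = unsafe { libc::prctl(libc::PR_SET_PDEATHSIG, libc::SIGTERM) };
    if r != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}
```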
(In practice we don't actually use this peer functionality anymore,
because all of the client/daemon code kind of requires being run under
systemd on a real system now.)
This shrinks the API surface and is much less repetitive in
the codebase.
Prep for moving more of the CLI code to Rust.