We saw a random ostree SEGV start popping up in our CI environment:
https://github.com/projectatomic/rpm-ostree/pull/641#issuecomment-281870424
Looking at this code more and comparing it to what util-linux does, I noticed we
had a write-after-free, since `mnt_unref_table()` will invoke
`mnt_unref_cache()` on its cache, and that function does:
```
if (cache) {
    cache->refcount--;
```
unconditionally.
Fix this by using `unref()`.
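A minimal sketch of the safe pattern with libmount's refcounted API (this is an
illustration, not the actual ostree diff; the parse call is just for context):
```
/* Illustrative only: once the cache has been handed to the table,
 * freeing it directly and then unreffing the table makes
 * mnt_unref_table() decrement a refcount inside already-freed memory.
 * Dropping our reference with mnt_unref_cache() avoids that. */
#include <libmount.h>

static struct libmnt_table *
load_mounts_table (void)
{
  struct libmnt_table *tb = mnt_new_table ();
  struct libmnt_cache *cache = mnt_new_cache ();

  mnt_table_set_cache (tb, cache);  /* the table takes its own reference */
  mnt_unref_cache (cache);          /* drop ours; the table's unref releases the last one */

  (void) mnt_table_parse_mtab (tb, NULL);
  return tb;                        /* a later mnt_unref_table() frees the cache too */
}
```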
Closes: #705
Approved by: jlebon
I learned today that `docker version` does this and I really like
the idea. While we have the patient open, also add the gitrev
with code taken from https://github.com/projectatomic/rpm-ostree/pull/584
Closes: #691
Approved by: giuseppe
I saw in a recent test log a ton of spam
```
libglnx/glnx-dirfd.c:253:3: runtime error: null pointer passed as argument 1, which is declared to never be null
```
which actually turned out to be libglnx getting reverted. But
let's make sure that we now actually bomb out quickly on UBSAN warnings
in general.
Closes: #693
Approved by: jlebon
Commit a1805d6101 reverted
this unintentionally.
We should have some CI check that requires the commit message to say
something like "libglnx bump"?
Closes: #693
Approved by: jlebon
Add meta-updater and QtOTA, and delete OpenEmbedded since it's implied by the
first two. Merge rpm-ostree + Atomic Host since they're close. Clarify
gnome-continuous a bit.
Closes: #700
Approved by: jlebon
I think I commented this out while debugging something, and forgot to re-enable
it. Reading the log files should be better again after this.
Closes: #699
Approved by: giuseppe
It's just simpler, and I'm not sure people are going to care
much about the difference by default.
We already folded the fallback sizes into the download totals, so folding in
the count makes things consistent; previously you could see e.g.
`3/3 parts, 100MB/150MB` and be confused.
Closes: #678
Approved by: giuseppe
I don't know why I added support for this; it makes no sense really. If we have
large metadata objects something has gone badly wrong.
The delta compiler has always only processed fallbacks for regular
content files.
Dropping support in the fetcher for this will simplify later handling of
fallback progress accounting.
Closes: #678
Approved by: giuseppe
There were a few bugs here.
- We need to keep track of the size of the delta parts we've already processed,
in order to make progress reliable at all in the face of interruptions. Add
a new `fetched-delta-part-size` async progress variable for this.
- Previously, the total disregarded what we'd already downloaded, which was confusing.
Now the progress percentage is simply `fetched/total` (see the sketch after this list).
- Correctly handle "unknown bytes/sec" in the progress display.
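A minimal sketch of the accounting this implies; the parameter names are
illustrative, not the actual pull-code fields:
```
#include <glib.h>

/* Illustrative only: progress fraction once the size of already-completed
 * delta parts (the fetched-delta-part-size variable above) is tracked
 * separately from the bytes of the parts currently in flight. */
static double
delta_progress_fraction (guint64 fetched_delta_part_size,
                         guint64 bytes_transferred,
                         guint64 total_delta_part_size)
{
  guint64 fetched = fetched_delta_part_size + bytes_transferred;
  if (total_delta_part_size == 0)
    return 0.0;
  return (double) fetched / (double) total_delta_part_size;
}
```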
However, to be fully correct we need to show the fallback objects too. That
would require tracking in the pull code when we fetch an object as a fallback
versus "normally". This would be simpler really if we could assume in a run we
were *only* processing a delta, but currently we don't do that.
Related: https://github.com/ostreedev/ostree/issues/475
Closes: #678
Approved by: giuseppe
Doing `g_variant_print (superblock)` is unreadable and not very useful,
since we show the checksums as byte arrays.
However, do show the checksums for fallback objects. This makes it easier to see
which objects are fallbacks (and inspect why).
Closes: #678
Approved by: giuseppe
In https://github.com/ostreedev/ostree/pull/634 we introduced
a subtle regression - the unreadable object was added to the *new*
reachable objects, when it shouldn't have been. Because it
was a *from* object, clients already had it.
This became more obvious now that I'm working on fixing delta
progress - I noticed my deltas were always starting out with 40MB
fetched, which turned out to be a non-world-readable initramfs object.
This code should simply *skip* the unreadable object, and the delta processing
below properly iterates over "new objects", so we'll pick it up from there.
Closes: #678
Approved by: giuseppe
rm -r the directory since we are keeping the Go bindings separately
under https://github.com/ostreedev/ostree-go
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #690
Approved by: cgwalters
We should get a release out to try to keep to at least a once-a-month cadence.
This one has some exciting stuff like libcurl and Rust, and various bugfixes.
Also importantly I want to cut this *before* we land some other bigger stuff, so
rpm-ostree can start using the reload_config API etc.
Closes: #685
Approved by: jlebon
These allow us to avoid copying a lot of data around
in userspace. Instead we splice the data directly from
the fd to the destination fd.
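As a rough sketch (not the actual libglnx/ostree code): between two
regular-file fds, splice() needs a pipe in the middle, but the payload still
never passes through a userspace buffer:
```
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Copy len bytes from src_fd to dest_fd via an intermediate pipe, so the
 * data is moved in-kernel rather than read()/write() through a buffer.
 * Error handling is abbreviated for brevity. */
static ssize_t
splice_copy (int src_fd, int dest_fd, size_t len)
{
  int pipefd[2];
  ssize_t copied = 0;

  if (pipe (pipefd) < 0)
    return -1;

  while ((size_t) copied < len)
    {
      ssize_t n = splice (src_fd, NULL, pipefd[1], NULL, len - copied, 0);
      if (n <= 0)
        break;  /* EOF or error */
      while (n > 0)
        {
          ssize_t w = splice (pipefd[0], NULL, dest_fd, NULL, n, 0);
          if (w <= 0)
            {
              copied = -1;
              goto out;
            }
          n -= w;
          copied += w;
        }
    }
out:
  close (pipefd[0]);
  close (pipefd[1]);
  return copied;
}
```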
Closes: #684
Approved by: cgwalters
I have no idea why I made the lib `.PHONY` originally; it's clearly wrong, and I
noticed because when I was doing `sudo make install`, we were doing a rebuild,
which in turn triggered other things to be built, and they'd be owned by root.
Closes: #682
Approved by: jlebon
Switching between local branches should be supported too.
Signed-off-by: Anton Gerasimov <anton@advancedtelematic.com>
Closes: #683
Approved by: cgwalters
Clarify the documentation for functions like
ostree_repo_get_remote_boolean_option(), stating what out_value will be
set to on error.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #676
Approved by: cgwalters
When fetching over a fast enough connection, we can be receiving files
faster than we write them. This can then lead to EMFILE when we have
enough files open. This was made very easy to notice with the upcoming
libcurl backend, which makes use of pipelining.
Closes: #675
Approved by: cgwalters
For rpm-ostree, we already link to libcurl indirectly via librepo, and
only having one HTTP library in process makes sense.
Further, libcurl is (I think) more popular in the embedded space. It
also supports HTTP/2.0 today, which is a *very* nice-to-have for OSTree.
This seems to be working fairly well for me in my local testing, but it's
obviously brand new nontrivial code, so it's going to need some soak time.
The ugliest part of this is having to vendor in the soup-url code. With
Oxidation we could follow the path of Firefox and use the
[Servo URL parser](https://github.com/servo/rust-url). Having to redo
cookie parsing also sucked, and that would also be a good oxidation target.
But that's for the future.
Closes: #641
Approved by: jlebon
The libcurl backend does all the work in the main thread/loop, which
seems to starve the idle scanning worker more. With the libcurl
backend, we're a lot more likely to have at least one outstanding
metadata request.
But it can more easily transiently happen with libcurl that all of our current
fetches are content. To be accurate here, just show Estimating if we're scanning
too.
Closes: #654
Approved by: jlebon
Now that we have queuing in the higher level pull logic, we don't
need to do this anymore.
It's tempting to keep it since the code diff is so small (without
completely rewriting things), but dropping it here will make
it easier to see when things go wrong at a higher level.
Note that I kept an assertion.
Closes: #654
Approved by: jlebon
Working on the libcurl backend, I didn't want to reimplement another queue. I
think the queue logic is really better done at the high level, since that's
where we know how we want to prioritize metadata over content, etc.
Adding another queue here is duplication, but things will look nicer when we can
actually delete the libsoup one in the next commit.
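For illustration, a minimal sketch of the kind of two-queue scheme this
describes; the type and field names here are hypothetical, not the actual pull
code:
```
#include <glib.h>

typedef struct _FetchRequest FetchRequest;  /* opaque, hypothetical */

/* Hypothetical high-level queue: metadata requests are always dispatched
 * before content requests, and the number in flight is capped. The queues
 * are assumed to have been set up with g_queue_init(). */
typedef struct {
  GQueue metadata_queue;   /* of FetchRequest* */
  GQueue content_queue;    /* of FetchRequest* */
  guint  outstanding;
  guint  max_outstanding;
} PullQueues;

static FetchRequest *
pull_queues_pop_next (PullQueues *q)
{
  if (q->outstanding >= q->max_outstanding)
    return NULL;  /* caller retries when an outstanding request completes */
  if (!g_queue_is_empty (&q->metadata_queue))
    return g_queue_pop_head (&q->metadata_queue);
  return g_queue_pop_head (&q->content_queue);
}
```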
Closes: #654
Approved by: jlebon
The gzip default is 6. When I was writing this code, I chose 9 under
the assumption that for long-term archival, the extra compression was
worth it.
Turns out level 9 is really, really not worth it. Here's a run at level 9
compressing the current Fedora Atomic Host into archive:
```
ostree --repo=repo pull-local repo-build fedora-atomic/25/x86_64/docker-host
real 2m38.115s
user 2m31.210s
sys 0m3.114s
617M repo
```
And here's the new default level of 6:
```
ostree --repo=repo pull-local repo-build fedora-atomic/25/x86_64/docker-host
real 0m53.712s
user 0m43.727s
sys 0m3.601s
619M repo
619M total
```
As you can see, we run almost *three times* faster, and we take up *less
than one percent* more space.
Conclusion: Using level 9 is dumb. And here's a run at compression level 1:
```
ostree --repo=repo pull-local repo-build fedora-atomic/25/x86_64/docker-host
real 0m24.073s
user 0m17.574s
sys 0m2.636s
643M repo
643M total
```
I would argue that many people would actually prefer even this for "devel" repos.
For production repos, you want static deltas anyways. (However, perhaps
we should support a model where generating a delta involves re-compressing
fallback objects with a bit stronger compression level).
Anyways, let's make everyone's life better and switch the default to 6.
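For reference, a minimal sketch of what the knob amounts to, assuming the
archive content objects are compressed with GLib's GZlibCompressor; the
function name is illustrative:
```
#include <gio/gio.h>

/* Sketch only: construct a raw-zlib compressor at the new default level 6
 * instead of the old 9 (passing -1 would mean zlib's own default). */
static GConverter *
new_archive_content_compressor (void)
{
  return G_CONVERTER (g_zlib_compressor_new (G_ZLIB_COMPRESSOR_FORMAT_RAW, 6));
}
```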
Closes: #671
Approved by: jlebon
What we do here basically is set things up in a `dist-hook` so that our Rust
sources are vendored at `dist` time. This gives us a single tarball still, and
ideally should be transparent to downstream builders, as long as they have the
`cargo/rust` toolchain.
Closes: #669
Approved by: jlebon
For a long time we've cached the remote configs in the repo, which
mostly makes sense for the `repo/config` file, but less sense
for `/etc/ostree/remotes.d`, because we want to support admins
interactively editing them.
One can delete the repo instance and create a new one, but that's a bit ugly.
Let's introduce an API for this so rpm-ostree can reload remotes after
admins/scripts edit them in `/etc`. We also might as well reload
any other entries in the config.
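A sketch of how a daemon might use this after a config edit, assuming the new
API takes the usual cancellable/error arguments; the wrapper function here is
hypothetical:
```
#include <ostree.h>

/* Hypothetical caller (e.g. a daemon reacting to an edit under
 * /etc/ostree/remotes.d): re-read the repo configuration without
 * tearing down and re-creating the OstreeRepo instance. */
static gboolean
on_remotes_changed (OstreeRepo *repo, GCancellable *cancellable, GError **error)
{
  return ostree_repo_reload_config (repo, cancellable, error);
}
```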
Structurally now, `ostree_repo_open()` deals with file descriptors, and then
calls `ostree_repo_reload_config()`. The exception is the uncompressed cache,
which is the only configurable thing that deals with FDs; but we want to
delete that anyways.
No tests, since...we don't have a daemon in this codebase, and I don't want to shave
that yak just today.
Closes: #662
Approved by: jlebon
This is obsolete for a few reasons. The spec file is out of date (mostly the
%files, but also the BRs), and further down the line we'll need to use `make
dist` so we pick up vendored Rust sources. So the "git archive" approach won't
work for much longer anyways.
This came up in https://mail.gnome.org/archives/ostree-list/2017-February/msg00005.html
Basically, anyone who wants to build packages should look at the upstream
dist-git - until such time as e.g. Fedora learns to support pulling spec files
from upstream.
Closes: #670
Approved by: giuseppe
With the package rename from ostree to libostree, the trusted.gpg.d/ dir
changed install location from /usr/share/ostree to /usr/share/libostree.
Let's keep the same dir to remain compatible with existing installations
that may already have keys there.
Closes: #668
Approved by: cgwalters
This is an initial drop of "oxidation", or adding implementation
of components in Rust. The bupsplit code is a good target - no
dependencies, just computation.
Translation into Rust had a few twists -
- The C code relies a lot on overflowing unsigned ints, and
also on the C promotion rules for e.g. `uint8_t -> int32_t` (see the small example below)
- There were some odd loops that I introduced bugs in while
translating...in particular, the function always returns `len`,
but I mistakenly translated to `len+1`, resulting in an OOB
read on the C side, which was hard to debug.
On the plus side, an off-by-one array index in the Rust code panicked nicely.
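To make the first point concrete, here's a small standalone C example
(illustrative only, not the bupsplit code) of the overflow and promotion
behavior a Rust translation has to reproduce explicitly, e.g. with
`wrapping_add()` and explicit casts:
```
#include <inttypes.h>
#include <stdio.h>

int
main (void)
{
  uint32_t sum = UINT32_MAX;
  sum += 2;                 /* well-defined unsigned wraparound: sum == 1 */

  uint8_t a = 10, b = 200;
  int32_t diff = a - b;     /* both promoted to int first: diff == -190 */
  uint8_t wrapped = a - b;  /* truncated back to 8 bits: 66 */

  printf ("%" PRIu32 " %" PRId32 " %u\n", sum, diff, (unsigned) wrapped);
  return 0;
}
```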
In practice, we'll need a lot more build infrastructure to make this work, such
as using `cargo vendor` when producing build artifacts for example. Also, Cargo
is yet another thing we need to cache.
Where do we go with this? Well, I think we should merge this; it's not a lot of
code. We can just have it be an alternative CI target. Should we do a lot more
right now? Probably not immediately, but I find the medium/long term prospects
pretty exciting!
Closes: #656
Approved by: jlebon
There are many motivating factors. The biggest is simply that at a practical
level, the command line is not sufficient to build a real system. The docs say
that it's a demo for the library. Let's make that more obvious, so people don't
try to use `ostree admin upgrade` for their real systems, and also don't use
e.g. `ostree commit` on the command line outside of test suites/quick hacking.
This change will also help clarify the role of rpm-ostree, which will likely
be renamed to "nts". Then use of the term "ostree" will become much clearer. And
similarly for other people writing upgraders, they can say they use libostree.
I didn't try to change all of the docs and code at once, because it's going to
lead to conflicts.
The next big steps are:
- Rename the github repo (github will inject a redirect)
- Look at supporting a build where we don't do `ostree admin`, or at least
it's only built for tests. We may want to split it off as a separate binary
or so? That way people with their own upgraders don't need to ship it.
Closes: #659
Approved by: jlebon
As OSTree has evolved over time, the tests grew with it. We
didn't start out with static deltas or a summary file, and the
tests reflect this.
What I really want to do is change more of the pull tests, from
corruption/proxying/mirroring etc. to use this more realistic
repo rather than the tiny one the other test creates.
We start by using some of the code from `test-pull-many.sh`, and change that
test to use `--disable-static-deltas` for pull, since the point of that test is
to *not* test deltas.
Still TODO: investigate changing other tests to use this.
Closes: #658
Approved by: jlebon
We weren't running it before. Also I switched it to use GLib. Preparation for
some oxidation work (having an implementation of bupsplit in Rust).
I exported another function to do the raw rollsum operation which is what this
test suite uses.
Closes: #655
Approved by: jlebon
I went through the Travis history a bit, and these seem to be
the flaky ones. The Ubuntu one is likely due to missing libsoup patches.
The other one @smcv has partially traced to a GPGME race condition
or something like that.
For the libsoup one: as I say in the comments, once we have libcurl, I'd like to
enable that mostly everywhere, which should (hopefully) be more reliable.
Closes: #664
Approved by: cgwalters
I was working on https://bugzilla.redhat.com/show_bug.cgi?id=1393545
and it was annoying that I couldn't know what the new (unsigned)
commit hash was until verification succeeded. I could pull it
manually without GPG, but then it'd be sitting in the repo.
Now:
```
Updating from: fedora-atomic:fedora-atomic/25/x86_64/docker-host
Receiving metadata objects: 0/(estimating) -/s 0 bytes
error: Commit 2fb89decd2cb5c3bd73983f0a7b35c7437f23e3aaa91698fab952bb224e46af5: GPG verification enabled, but no signatures found (use gpg-verify=false in remote config to disable)
```
Closes: #663
Approved by: giuseppe
They are built at "make" time and cleaned up by "make clean", so there
is no need to distribute them.
Signed-off-by: Simon McVittie <smcv@debian.org>
Closes: #665
Approved by: cgwalters
Add an arg description for -P, otherwise it's not immediately obvious
that it takes an argument.
Mention that - is supported for --log-file.
Closes: #657
Approved by: cgwalters
This would be more likely to tickle things like
https://github.com/ostreedev/ostree/issues/601
reliably.
Also, while working on the curl backend, I hit the fact that curl doesn't queue
by default (though you can enable queuing) and will happily create 20000+ concurrent TCP
connections if you try. Having this test would have made that more likely to
fail.
Closes: #650
Approved by: giuseppe
There are use cases for having a single repo with branches
with different lifecycles; a simple example of what I was
trying to do in CentOS Atomic Host work is have "stable"
and "devel" branches, were we want to prune devel, but
retain *all* of stable.
This patch is split into two parts - first we add a low level "delete all
objects not in this set" API, and change the current prune API
to use this.
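A hypothetical sketch of what the low-level "delete all objects not in this
set" operation boils down to; the types and the delete callback are
illustrative, not the actual libostree API:
```
#include <glib.h>

typedef void (*DeleteObjectFunc) (gpointer object_name, gpointer user_data);

/* Illustrative only: given the set of all objects in the repo and the set
 * reachable from the refs we want to keep, delete everything unreachable. */
static guint
prune_unreachable_objects (GHashTable *all_objects,  /* object name -> unused */
                           GHashTable *reachable,    /* object name -> unused */
                           DeleteObjectFunc delete_object,
                           gpointer user_data)
{
  GHashTableIter iter;
  gpointer key;
  guint n_pruned = 0;

  g_hash_table_iter_init (&iter, all_objects);
  while (g_hash_table_iter_next (&iter, &key, NULL))
    {
      if (!g_hash_table_contains (reachable, key))
        {
          delete_object (key, user_data);
          n_pruned++;
        }
    }
  return n_pruned;
}
```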
Next, we move more logic into the "ostree prune" command. This paves the way for
demonstrating how more sophisticated algorithms/logic could be developed outside
of the ostree core.
Also, the --keep-younger-than logic already lived in the commandline, so it
makes sense to keep extending it there.
Closes: https://github.com/ostreedev/ostree/issues/604
Closes: #646
Approved by: jlebon
Debian's Lintian packaging consistency check complains that it isn't
executable but has a #! line. In fact it's reasonable to run this
script directly, so make it executable, and put it in a _scripts
variable so it will be installed executable.
Closes: #652
Approved by: cgwalters
They are installed non-executable, which makes Debian's Lintian
packaging consistency check complain that #! is only useful
in executable scripts. But in fact they are not useful to execute
directly (they rely on setup being done in the script that sources
them), so just chmod them -x.
Closes: #652
Approved by: cgwalters