Now that the computed similar objects are all regular files,
get_unpacked_unlinked_content should never be called on any other
object type. Assert that this is true instead of silently succeeding.
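Roughly, the guard this describes could look like the following hypothetical sketch (the real helper's prototype in the delta code may differ):

```
#include <ostree.h>

/* Hypothetical shape of the helper; the real prototype may differ.
 * The point is the assertion: callers are expected to pass only
 * regular-file objects now that the similarity computation filters
 * out everything else, so any other object type is a programming
 * error. */
static gboolean
get_unpacked_unlinked_content (OstreeRepo       *repo,
                               const char       *checksum,
                               OstreeObjectType  objtype,
                               GInputStream    **out_content,
                               GCancellable     *cancellable,
                               GError          **error)
{
  /* Fail loudly instead of silently succeeding on non-file objects. */
  g_assert (objtype == OSTREE_OBJECT_TYPE_FILE);

  return ostree_repo_load_file (repo, checksum, out_content,
                                NULL, NULL, cancellable, error);
}
```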
_ostree_delta_compute_similar_objects should not output symlinks.
Previously, a symlink in the "from" commit could be matched to a
real file in the "to" commit, since nothing was filtering symlinks
on the "from" side. This led to failures running the bzdiff
algorithm.
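A hedged sketch of the kind of filtering this implies, using the public load API rather than the internal helpers the delta code actually uses (the predicate name is illustrative):

```
#include <ostree.h>

/* Returns TRUE only for regular-file content objects; symlinks (and
 * anything else) are skipped so they can never be paired with a real
 * file on the other side of the delta.  On failure, FALSE is returned
 * with *error set, so callers should distinguish "filtered out" from
 * "errored" by checking the error. */
static gboolean
is_regular_file_object (OstreeRepo   *repo,
                        const char   *checksum,
                        GCancellable *cancellable,
                        GError      **error)
{
  g_autoptr(GFileInfo) finfo = NULL;

  if (!ostree_repo_load_file (repo, checksum, NULL, &finfo, NULL,
                              cancellable, error))
    return FALSE;

  return g_file_info_get_file_type (finfo) == G_FILE_TYPE_REGULAR;
}
```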
test-lzma builds a copy of the compressor and decompressor directly, so
the compiler needs access to the LZMA headers and the linker needs to
link the program with liblzma.
On systems with slow disks, the recursive scanning of directories can
be expensive -- it takes upwards of 2 minutes on our systems. This can
block the main loop for such a long time that it allows the download to
time out...
As such, move all the scanning of objects to a queue, processed from
an idle, to make sure that we don't block the main loop when scanning.
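The general shape of the change, as a hedged GLib sketch (identifiers are illustrative, not the actual pull-code names): push the work onto a queue and drain a bounded chunk of it per idle dispatch, so timeouts and downloads keep getting serviced.

```
#include <gio/gio.h>

typedef struct {
  GQueue *scan_queue;   /* objects/directories still to be scanned */
  guint   idle_id;
} ScanState;

/* Process a few queued items per dispatch so the main loop can run
 * other sources (e.g. the fetcher) in between. */
static gboolean
idle_scan_worker (gpointer user_data)
{
  ScanState *state = user_data;

  for (guint i = 0; i < 20; i++)
    {
      char *item = g_queue_pop_head (state->scan_queue);
      if (item == NULL)
        {
          state->idle_id = 0;
          return G_SOURCE_REMOVE;   /* queue drained; stop the idle */
        }
      /* ... scan one object here ... */
      g_free (item);
    }

  return G_SOURCE_CONTINUE;         /* more work left; run again */
}

static void
enqueue_scan (ScanState *state, const char *path)
{
  g_queue_push_tail (state->scan_queue, g_strdup (path));
  if (state->idle_id == 0)
    state->idle_id = g_idle_add (idle_scan_worker, state);
}
```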
https://bugzilla.gnome.org/show_bug.cgi?id=753336
First of all, what we were doing with having GMainLoop in the internal
APIs is wrong. Synchronous APIs should always create their own main
context and not iterate the caller's. Doing the latter creates
potential for evil reentrancy issues. Sync API should block; async
API is for not blocking.
Now that's out of the way, fix the pull code to do the clean
```
while (termination_condition (state))
g_main_context_iteration (mainctx, TRUE);
```
model for looping. This is a lot easier to understand and ultimately
more reliable than having other code call `g_main_loop_quit()`, as the
loop condition is in exactly one place.
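For reference, a hedged sketch of that pattern as a whole, with the sync entry point owning its own main context (simplified; the real pull code tracks much more state than one boolean):

```
#include <gio/gio.h>

typedef struct {
  gboolean done;
} SyncCallState;

static gboolean
on_work_finished (gpointer user_data)
{
  SyncCallState *state = user_data;
  state->done = TRUE;               /* flip the loop condition */
  return G_SOURCE_REMOVE;
}

/* The synchronous entry point creates and iterates its own context
 * until its completion flag flips; no callback ever needs to call
 * g_main_loop_quit(). */
static gboolean
do_something_sync (void)
{
  g_autoptr(GMainContext) context = g_main_context_new ();
  g_autoptr(GSource) source = g_timeout_source_new (100);
  SyncCallState state = { FALSE };

  g_main_context_push_thread_default (context);

  /* Stand-in for the real async work (fetches, object scans, ...). */
  g_source_set_callback (source, on_work_finished, &state, NULL);
  g_source_attach (source, context);

  while (!state.done)
    g_main_context_iteration (context, TRUE);

  g_main_context_pop_thread_default (context);
  return TRUE;
}
```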
We can also remove the idle source which only fired once.
Note we have to add a hack here to discard the synchronous session and
create a new one which we only use async.
https://bugzilla.gnome.org/show_bug.cgi?id=753336
I'd like to migrate content from the GNOME wiki, as frankly the wiki
is crap. Markdown in git is better in every way.
Start by fleshing out the README.md to be more useful.
ostree_repo_prepare_transaction() should always be matched with a call
to either ostree_repo_commit_transaction() or
ostree_repo_abort_transaction().
Since ostree_repo_pull_with_options() does not call
ostree_repo_abort_transaction() on errors, the OstreeRepo instance will
hit an assertion when it's re-used later for another attempt, such as
when the update is driven by an external component through libostree and
the network temporarily goes down.
This commit simply always calls ostree_repo_abort_transaction() in the
exit path of ostree_repo_pull_with_options(), since the function is safe
to call even when we're not in a transaction, and that matches e.g. what
ostree-sysroot-cleanup.c does.
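A hedged sketch of the resulting pattern (error handling condensed; the real ostree_repo_pull_with_options() has many more steps between prepare and commit):

```
#include <ostree.h>

/* Illustrative pull-like function: whatever happens after
 * ostree_repo_prepare_transaction(), the exit path unconditionally
 * calls ostree_repo_abort_transaction(), which is harmless when the
 * transaction was already committed or never started. */
static gboolean
pull_like_operation (OstreeRepo   *repo,
                     GCancellable *cancellable,
                     GError      **error)
{
  gboolean ret = FALSE;

  if (!ostree_repo_prepare_transaction (repo, NULL, cancellable, error))
    goto out;

  /* ... fetch and write objects; any failure jumps to out ... */

  if (!ostree_repo_commit_transaction (repo, NULL, cancellable, error))
    goto out;

  ret = TRUE;
 out:
  /* Safe even when no transaction is active; keeps the OstreeRepo
   * reusable for a later attempt. */
  ostree_repo_abort_transaction (repo, cancellable, NULL);
  return ret;
}
```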
The new API makes it possible to query a remote repository's summary
file and retrieve the list of available refs.
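A hedged usage sketch, assuming the new entry point is ostree_repo_remote_list_refs() with roughly this shape:

```
#include <ostree.h>

/* List the refs advertised by a remote's summary file.  The hash
 * table is assumed to map refname -> checksum. */
static gboolean
print_remote_refs (OstreeRepo   *repo,
                   const char   *remote_name,
                   GCancellable *cancellable,
                   GError      **error)
{
  g_autoptr(GHashTable) refs = NULL;
  GHashTableIter iter;
  gpointer ref, checksum;

  if (!ostree_repo_remote_list_refs (repo, remote_name, &refs,
                                     cancellable, error))
    return FALSE;

  g_hash_table_iter_init (&iter, refs);
  while (g_hash_table_iter_next (&iter, &ref, &checksum))
    g_print ("%s %s\n", (const char *) checksum, (const char *) ref);

  return TRUE;
}
```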
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Eliminates the need for constantly passing --sysroot=sysroot, but
also makes ostree place remote configs for sysroot/ostree/repo in
sysroot/etc/ostree/remotes.d where they should have been all along.
Returns a GFile for the default system root, which is usually the root
directory unless overridden by the OSTREE_SYSROOT environment variable
(which is mainly intended for testing).
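A minimal sketch of how such a helper could look, assuming it only needs to honor OSTREE_SYSROOT and otherwise fall back to "/" (the actual libostree function may do more):

```
#include <gio/gio.h>

/* Return a GFile for the system root, honoring the OSTREE_SYSROOT
 * environment variable (used mainly by the test suite). */
static GFile *
get_default_sysroot (void)
{
  const char *envpath = g_getenv ("OSTREE_SYSROOT");

  if (envpath != NULL && *envpath != '\0')
    return g_file_new_for_path (envpath);

  return g_file_new_for_path ("/");
}
```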
libsoup will cache sessions, so it might be the case that we get a
reused session when pulling from the same repo multiple times in one
process.
In this case we were leaking signal connections, which caused
callbacks into freed memory with bad consequences.
Fix it by tying the signal connection to the object lifetime.
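In GLib terms that usually means g_signal_connect_object() rather than g_signal_connect(); a hedged sketch with illustrative names (the signal and objects here are stand-ins, not the exact ones in the pull code):

```
#include <gio/gio.h>

/* Connect a libsoup-style "request-queued" signal so the handler is
 * automatically disconnected when pull_state_object (a GObject owning
 * the pull state) is finalized.  With plain g_signal_connect(), a
 * cached/reused session would keep invoking a handler whose user data
 * has already been freed. */
static void
connect_session_signal (GObject   *session,
                        GObject   *pull_state_object,
                        GCallback  callback)
{
  g_signal_connect_object (session, "request-queued",
                           callback, pull_state_object,
                           0 /* no special connect flags */);
}
```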
I did a quick audit pass through the pull code. What I focused on
most is the case where `gpg-verify-summary=true`, and in particular
where `gpg-verify=false` too. This should be a valid and secure
configuration.
The primary change here is to error out very quickly if either
`summary` or `summary.sig` is a 404. Previously, we'd only error out
if we were processing deltas.
Expand the existing test case to cover this, plus invalid summary and
invalid sig. (The test case was failing with current git master too).
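The gist of the primary change, as a hedged sketch (identifiers are illustrative; the real pull code routes these fetches through its own fetcher abstraction):

```
#include <gio/gio.h>

/* When the remote is configured with gpg-verify-summary=true, a
 * missing summary or summary.sig must be a hard error up front, not
 * something noticed only on the delta path.  summary_bytes and
 * signature_bytes stand for whatever the fetcher returned, with NULL
 * meaning 404/not found. */
static gboolean
check_summary_requirements (gboolean  gpg_verify_summary,
                            GBytes   *summary_bytes,
                            GBytes   *signature_bytes,
                            GError  **error)
{
  if (!gpg_verify_summary)
    return TRUE;

  if (summary_bytes == NULL)
    {
      g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_FAILED,
                           "GPG verification enabled, but no summary found");
      return FALSE;
    }

  if (signature_bytes == NULL)
    {
      g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_FAILED,
                           "GPG verification enabled, but no summary.sig found");
      return FALSE;
    }

  return TRUE;
}
```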
It allows specifying whether GPG verification for the summary file is
enabled for a specific repository.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Works like "ostree refs" but fetches refs from a remote repo.
This depends on the remote repo having a summary file, but any repo
being served over HTTP *ought* to have one.
This may not be the best idea for general usage, but the only use case
for metalinks currently is fetching a summary file and those are pretty
small. Far more convenient to return the file content in a GBytes.
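A hedged sketch of the kind of interface this implies; the function name and parameters below are assumptions for illustration, not the real internal metalink API:

```
#include <gio/gio.h>

/* Hypothetical prototype: hand back the fetched payload as a GBytes
 * instead of writing it to a file or stream. */
gboolean fetch_summary_via_metalink (const char    *metalink_uri,
                                     GBytes       **out_summary,
                                     GCancellable  *cancellable,
                                     GError       **error);

/* Consuming the result is then trivial for a small file like the
 * summary. */
static gboolean
use_fetched_summary (const char   *metalink_uri,
                     GCancellable *cancellable,
                     GError      **error)
{
  g_autoptr(GBytes) summary = NULL;
  gsize len;

  if (!fetch_summary_via_metalink (metalink_uri, &summary,
                                   cancellable, error))
    return FALSE;

  const guint8 *data = g_bytes_get_data (summary, &len);
  g_print ("fetched %" G_GSIZE_FORMAT " bytes (first byte at %p)\n",
           len, (const void *) data);
  return TRUE;
}
```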