Mostly adding this for use in test cases; it allows us to add e.g.
integers, and we need to deal with byteswapping those.
Someone might also find it useful to add fully structured metadata, although most
of those users should be using a real language and not shell script.
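For illustration, a minimal sketch of the byteswapping concern in C (assuming the
integer ends up serialized big-endian like ostree's other commit fields; the
helper name is made up):

  #include <glib.h>

  static GVariant *
  big_endian_u64 (guint64 value)
  {
    g_autoptr(GVariant) v = g_variant_ref_sink (g_variant_new_uint64 (value));
    /* On a little-endian host, swap so the serialized bytes come out big-endian. */
    if (G_BYTE_ORDER == G_LITTLE_ENDIAN)
      return g_variant_byteswap (v);
    return g_variant_ref (v);
  }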
Closes: #1372
Approved by: jlebon
In the non-`CONSUME` path for regfiles (which currently happens for
`bare-user`), we go through a lot of contortions to make an "object stream",
only to immediately parse it again.
Fixing this will also enable the `G_IS_FILE_DESCRIPTOR_BASED()` fast path in
commit, since the input stream will actually reference the file descriptor and
not be an `_OstreeChainInputStream`.
There's a slight concern here in that we're no longer checksumming *literally*
the object stream passed in for the stream case, but as I mention in the comment,
the data should be the same, and if somehow it isn't, we're not adding risk,
since the checksum still covers the data we actually care about.
Prep for further changes to break up the `write_content_object()` path into
separate paths for archive, as well as regfile vs symlink in non-archive.
Closes: #1371
Approved by: jlebon
A while ago I did `truncate -s 0 /path/to/repo/00/123.commit`, and expected a
checksum error, but I actually got a validation error due to us loading the
commit into a variant and trying to parse out the parent checksum, etc.
I first started by changing the `load_and_fsck_one_object()` function to
checksum before loading, but the problem is that we do a traversal of all objects
first. Fixing this is going to require an `OSTREE_REPO_COMMIT_TRAVERSE_FLAG_FSCK`
or something.
In the meantime at least though, let's add a public API to fsck a single object
which *does* checksum cleanly before parsing the object, and change the `fsck`
command to use it.
We then change the fsck binary to do this while iterating over the refs
and finding the commit object. This way we'll at least get a checksum
first for commit objects, even if not dirtree/dirmeta.
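A minimal sketch of using the new API from C (assuming it looks like the
`ostree_repo_fsck_object()` described here; treat the exact signature as an
assumption):

  if (!ostree_repo_fsck_object (repo, OSTREE_OBJECT_TYPE_COMMIT, commit_checksum,
                                cancellable, error))
    return FALSE;   /* checksum mismatch or parse failure, reported via `error` */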
Closes: #1364
Approved by: jlebon
This commit fixes an infinite loop that happens if you try to list the
remotes of a repo that has a parent repo set. It also adds a unit test
to ensure the right behavior, which is that both the child remotes and
parent remotes are listed.
Closes: #1366
Approved by: cgwalters
One major thing we can do to speed up local commits is multithreading. In
preparation for that, split up the recursion function so that the subdirectory
case is separate from the content (regfile/symlink) case. Then for non-subdirs,
we can easily peel off worker threads and gather the final checksums and update
the mtree from the main thread.
The diff here looks large but it's pretty straightforward; amazingly this change
compiled the very first time I tried it!
Closes: #1365
Approved by: jlebon
This seems to work around
https://github.com/ostreedev/ostree/issues/1362
Though I'm not entirely sure why yet. But at least with this it'll be easier for
people to work around things locally.
Closes: #1368
Approved by: jlebon
Test that concurrent commits and prunes can succeed. Mostly this is a
check that the new locking works correctly and the concurrent processes
will properly wait until they've acquired the appropriate repository
lock.
Closes: #1343
Approved by: cgwalters
Add exclusive repository locking to all the pruning entry points. This
ensures that objects and deltas will not be removed while another
process is writing to the repository.
Closes: #1343
Approved by: cgwalters
Define an auto cleanup handler for use with repo locking. This is based
on the existing auto transaction cleanup. A wrapper for
ostree_repo_lock_push() is added with it. The intended usage is like so:
  g_autoptr(OstreeRepoAutoLock) lock = NULL;
  lock = ostree_repo_auto_lock_push (repo, lock_type, cancellable, error);
  if (!lock)
    return FALSE;
The functions and type are marked to be skipped by introspection since I
can't see them being usable from bindings.
Closes: #1343
Approved by: cgwalters
Currently ostree has no method of guarding against concurrent pruning.
When there are multiple repo writers, it's possible to have a pull or
commit race against a prune and end up with missing objects.
This adds a file based repo locking mechanism. The intention is to take
a shared lock when writing objects and an exclusive lock when deleting
them. In order to make use of the locking throughout the library in a
fine grained fashion, the lock acts recursively with a stack of lock
states. If the lock becomes exclusive, it will stay in that state until
the stack is unwound past the initial exclusive push. The file locking
is similar to GLnxLockFile in that it uses open file descriptor locks
but falls back to flock when needed.
The lock also attempts to be thread safe by storing the lock state in
thread local storage with GPrivate. This means that each thread will
have an independent lock for each repository it opens. There are some
drawbacks to that, but it seemed impossible to manage the lock state
coherently in the face of multithreaded access.
The API is a push/pop interface in accordance with the recursive nature
of the locking. The push interface uses an enum that's translated to
LOCK_SH or LOCK_EX as needed. Both interfaces use an internal timeout
field to decide whether to manage the lock in a blocking or non-blocking
fashion. The intention is to allow ostree applications as well as
administrators to control this timeout. For now, the default is a 30
second timeout.
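A sketch of the intended pairing (function and enum names follow this
description; exact signatures are assumptions):

  if (!ostree_repo_lock_push (repo, OSTREE_REPO_LOCK_SHARED, cancellable, error))
    return FALSE;
  /* ... write objects; a nested push of OSTREE_REPO_LOCK_EXCLUSIVE would
   * upgrade the lock until the stack unwinds past it ... */
  if (!ostree_repo_lock_pop (repo, cancellable, error))
    return FALSE;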
Note that the timeout is handled synchronously in thread since the lock
is maintained in thread local storage. I.e., the thread that acquires
the lock needs to be the same thread that runs the operation. There may
be a way to offer an asynchronous version, but it's not clear exactly
how that would work since it would likely involve a separate thread that
invokes a callback when the locking operation completes.
https://bugzilla.gnome.org/show_bug.cgi?id=759442
Closes: #1343
Approved by: cgwalters
I was getting a bare `error: Creating temp file: No such file or directory` when
debugging `test-concurrency.py`; with this I get
`error: Writing content object: Creating temp file: No such file or directory`
which helps me pin it down.
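The pattern is just GLib error prefixing; a minimal sketch (the helper and
prefix text are illustrative, not the actual patch):

  #include <glib.h>

  static gboolean
  write_one_object (const char *path, GError **error)
  {
    g_autofree char *contents = NULL;
    if (!g_file_get_contents (path, &contents, NULL, error))
      {
        /* Prepend the operation name so a bare ENOENT becomes actionable */
        g_prefix_error (error, "Writing content object: ");
        return FALSE;
      }
    return TRUE;
  }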
Closes: #1343
Approved by: cgwalters
For rpm-ostree I'd like to do importing in parallel with threads; the code is
*almost* ready for that except today it calls
`ostree_repo_transaction_set_ref()`.
Looking at the code, there's really a "transaction" struct here,
not just stats. Let's lift that struct out, and move the refs
into it under the existing lock.
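Roughly the shape I mean, as a hedged sketch (field names are illustrative):

  typedef struct {
    GMutex lock;
    guint64 bytes_written;      /* the existing stats */
    GHashTable *pending_refs;   /* refspec (char *) -> checksum (char *) */
  } OstreeRepoTxn;

  static void
  txn_set_ref (OstreeRepoTxn *txn, const char *refspec, const char *checksum)
  {
    g_mutex_lock (&txn->lock);
    g_hash_table_replace (txn->pending_refs, g_strdup (refspec), g_strdup (checksum));
    g_mutex_unlock (&txn->lock);
  }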
Clarify the documentation around multithreading for various functions.
Closes: #1358
Approved by: jlebon
Time to cut a new release, we've got the libcurl cleanup ordering patch which
several people have hit, along with safe early fixes for tmpdir cleanup. Let's
try to land the locking PR early next cycle.
Closes: #1359
Approved by: jlebon
I was seeing the `Writing OSTree commit...` phase of rpm-ostree
being very slow lately. This turns out to be more fallout from
https://github.com/ostreedev/ostree/pull/1170
AKA commit: 8fe4536
Loading the xattrs is slow on my system (F27AW, XFS+LVM, NVMe). I haven't fully
traced through why, but AIUI at least on XFS the xattrs are often stored outside
of the inode so it's a little bit like doing an `open()+read()`. Plus there's
the LSM overhead, etc.
The thing is that for rpm-ostree's package layering use case, we
basically always want to treat the on-disk state as canonical. (There's
a subtle case here if one does overrides for something that contains
policy but we'll fix that).
Anyways, so we're in a state now where we do the slow but correct thing by
default, which seems sane. But let's allow the app to opt-in to telling us
"really trust devino". The difference between a `stat()` + hash table lookup
versus the full xattr load on my test case of `rpm-ostree install
./tree-1.7.0-10.fc27.x86_64.rpm` is absolutely dramatic; consistently on the
order of 10s without this support, and <1s with (800ms).
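A hedged sketch of opting in from C (flag and function names as I recall them;
treat them as assumptions):

  OstreeRepoDevInoCache *cache = ostree_repo_devino_cache_new ();
  OstreeRepoCommitModifier *modifier =
    ostree_repo_commit_modifier_new (OSTREE_REPO_COMMIT_MODIFIER_FLAGS_DEVINO_CANONICAL,
                                     NULL, NULL, NULL);
  ostree_repo_commit_modifier_set_devino_cache (modifier, cache);
  /* checked-out files whose (dev, ino) is in the cache now skip the xattr reload */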
Closes: #1357
Approved by: jlebon
This squashes the last race condition I was actively hitting while running
`test-concurrency.py` in a loop. The race is when process A finds a tmpdir to
reuse, and goes to lock it. Meanwhile process B deletes it and unlocks the lock.
Process A then succeeds at grabbing a lock, but the tmpdir is deleted.
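The fix boils down to lock-then-revalidate; a minimal C sketch (not the actual
libglnx calls):

  #include <errno.h>
  #include <fcntl.h>
  #include <sys/file.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Returns a locked dirfd for a reusable tmpdir, or -1 to try another. */
  static int
  try_reuse_tmpdir (int tmp_dfd, const char *name)
  {
    int dfd = openat (tmp_dfd, name, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
    if (dfd < 0)
      return -1;                             /* already gone */
    if (flock (dfd, LOCK_EX | LOCK_NB) < 0)
      { close (dfd); return -1; }            /* in use */
    struct stat stbuf;
    if (fstatat (tmp_dfd, name, &stbuf, 0) < 0 && errno == ENOENT)
      { close (dfd); return -1; }            /* deleted while we raced to lock */
    return dfd;                              /* locked and still linked */
  }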
Closes: #1352
Approved by: dbnicholson
Previously we'd delete the tmpdir in `rename_pending_loose_objects()`
but do the unlock inside `ostree_repo_commit_transaction()`. Move
them into the same place in the latter function for consistency.
Doesn't fix anything, just a cleanup while reading the code and
working on `test-concurrency.py`.
Closes: #1352
Approved by: dbnicholson
Set the PYTHONUNBUFFERED environment variable during tests so that
python leaves stdout unbuffered. This is helpful when reading logs for
failures since the interleaved stdout and stderr will generally come out
in the right order. It's not perfect since tap-driver.sh does some
special redirection to the log file, but it's an improvement.
Closes: #1352
Approved by: dbnicholson
Prep for future work here; let's cleanly separate the path for cleaning up the
txn staging directories from the code that cleans up "other stuff". Currently
only the former case uses the `GLnxLockFile` etc.
Closes: #1352
Approved by: dbnicholson
This closes a race condition I was seeing with `test-concurrency.py`. If we
don't have `O_TMPFILE` (or for symlinks) we'll create temporary files;
previously these would be subject to the date-based pruning because we set the
timestamp to 0 for objects.
Having our temporary files also in the txn staging dir ensures that they're
covered by the locking we do for that directory, and it's also generally cleaner
since the lifecycle of all the temporary data for a txn is in one place.
Closes: #1352
Approved by: dbnicholson
I was running this recently to test the last delta write changes, and this
helps. We should add an option to repo-init to make this easier at some point.
Closes: #1356
Approved by: jlebon
This lowers into the commit core what the static delta code
was doing, and improves the API.
The bigger picture issue is that for writing large files, our current "pull" API
where the caller provides a `GInputStream` is very awkward in some scenarios.
For example, we have a whole "libarchive input stream" that is a ~200 line
GObject that boils down to wrapping `archive_read_data()`.
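To illustrate the shape of the problem: even a plain memory buffer has to be
dressed up as a stream, and a streaming source like `archive_read_data()` needs
a full custom GInputStream subclass (hence the ~200 lines). A hedged fragment:

  /* buffer-backed case: trivial, but still an extra wrapper object */
  g_autoptr(GInputStream) in =
    g_memory_input_stream_new_from_data (buf, len, NULL);
  /* streaming case: no stock wrapper exists, so we carry a hand-written
   * GInputStream whose read vfunc calls archive_read_data() */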
This came more to a head when I was working on rpm-ostree jigdo since I had to
copy that object.
One step we can take after this is to further split `write_content_object()`
into a "write symlink or archive object" versus "write bare content object"
(it already has a mess of conditionals) and teach the latter case to call
this.
The eventual goal here is to make this API public.
Closes: #1355
Approved by: jlebon
For situations where fsync is disabled, there's basically
no reason to do the whole "staging directory" dance. Just
write directly into the repo.
Today I use `fsync=false` for my build/cache repos.
I briefly considered not allocating a tmpdir at all
in this case, but we actually do want the txn tmpdir
for the non-`O_TMPFILE` case.
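For reference, a hedged sketch of how a build/cache repo opts out of fsync from
C (equivalent to `fsync=false` in the repo config; `repo_path_file` is a GFile
pointing at the repo):

  g_autoptr(OstreeRepo) repo = ostree_repo_new (repo_path_file);
  if (!ostree_repo_open (repo, cancellable, error))
    return FALSE;
  ostree_repo_set_disable_fsync (repo, TRUE);   /* takes the direct-write path above */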
Part of https://github.com/ostreedev/ostree/issues/1184
Closes: #1354
Approved by: giuseppe
When using dynamic remotes (LAN and USB), we cannot use their name with
the common remote-related ops (ostree_repo_remote_...) because ostree
doesn't keep this type of remote in its internal hash table.
Unfortunately this means that we cannot access the URL of those remotes
either (in order to e.g. set the right URL for those remotes in
Flatpak).
Since the URL is actually stored in a key file that belongs to the
OstreeRemote, we can simply give users access to it through a getter.
So this patch adds a method that returns the URL directly from the
OstreeRemote without having to go through the OstreeRepo.
The test-repo-finder-config is also updated by this patch to check that
the URL is correct.
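A hedged sketch of the getter in use (exact name and signature as I understand
them from this patch):

  g_autofree char *url = ostree_remote_get_url (remote);
  g_print ("%s -> %s\n", ostree_remote_get_name (remote), url);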
Closes: #1353
Approved by: cgwalters
Use utimens instead of utime, thus allowing nanosecond timestamps.
This also fixes a bug where we used to pass UTIME_OMIT in tv_nsec,
which made the entire operation a no-op.
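A minimal sketch of the corrected call, for clarity (UTIME_OMIT in tv_nsec tells
futimens() to leave that timestamp untouched, which is why the old code did
nothing):

  #include <sys/stat.h>

  static int
  set_file_times (int fd, const struct timespec *ts)
  {
    struct timespec times[2] = { *ts /* atime */, *ts /* mtime */ };
    return futimens (fd, times);
  }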
Closes: #1351
Approved by: cgwalters
They currently don't play nicely with HTTP/2, where we may
have lots of requests queued.
https://github.com/ostreedev/ostree/issues/878#issuecomment-347228854
In practice anyway, I think issues here are better solved at a higher level -
e.g. apps today can use an overall timeout on pulls and, if they exceed the
limit, set the cancellable.
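A hedged sketch of that higher-level approach using GLib (the timeout value is
arbitrary):

  static gboolean
  on_deadline (gpointer data)
  {
    g_cancellable_cancel (G_CANCELLABLE (data));
    return G_SOURCE_REMOVE;
  }

  /* ... */
  g_autoptr(GCancellable) cancellable = g_cancellable_new ();
  guint id = g_timeout_add_seconds (300, on_deadline, cancellable);
  /* run the pull with `cancellable`; g_source_remove (id) if it finishes in time */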
Closes: #1349
Approved by: jlebon
If a test fails, we immediately exit and thus never get a chance to
actually upload the test results. Add a trap so that they are always
uploaded, even on failure.
Closes: #1350
Approved by: cgwalters
If a newly allocated tmpdir can't be locked, set initialized to FALSE so
that glnx_tmpdir_cleanup doesn't delete it when new_tmpdir goes out of
scope.
Closes: #1346
Approved by: cgwalters
Another tmpdir user may have deleted an existing tmpdir between the time
the current user called readdir and tried to open it.
Closes: #1346
Approved by: cgwalters
By default, unless it’s const, an (out) GHashTable will be assumed to be
(transfer full). That means the binding needs to free all the items in
the hash table, plus the table itself.
However, all the GHashTables we use have free functions set already, so
freeing the hash table will free its items. This results in a
double-free.
Fix that by ensuring we annotate such (out) hash tables as (transfer
container). Also annotate some other hash tables as (transfer none)
where appropriate, for clarity.
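For clarity, the annotation shape in question looks roughly like this (function
and parameter names are just for the example):

  /**
   * example_list_refs:
   * @out_refs: (out) (transfer container) (element-type utf8 utf8): the caller
   *   frees only the #GHashTable itself; the keys and values are released by
   *   the table's own free functions
   */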
This fixes OSTree.Repo.list_collection_refs() in the Python bindings.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1341
Approved by: dbnicholson
This reverts commit 519b30b7e1. Now that
the experimental GIR is being built correctly and OstreeRemote is a real
boxed type, this can be exposed again.
Closes: #1337
Approved by: pwithnall
Now that g-ir-scanner is being told about ENABLE_EXPERIMENTAL_API, it
can include these types correctly. Drop the __GI_SCANNER__ guards in the
header files so that all the declarations are found.
After this, you can actually construct the types normally:
>>> OSTree.CollectionRef.new('com.example.Foo', 'bar')
<OSTree.CollectionRef object at 0x7f2bba4c7528 (OstreeCollectionRef at 0x55c033ff2f30)>
Closes: #1337
Approved by: pwithnall