I was hitting a bug in libguestfs/guestmount/FUSE where it blew up
with EINVAL on directories containing lots of files (more than
32000?). We really want to use prefixed subdirs just like the real
objects/ directory does.
This also lets us share more code between the two paths, and it's more
efficient.
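As a sketch of what "prefixed subdirs" means here (the helper name is
hypothetical), the layout keys on the first two hex characters of the
checksum, just as objects/ does:

    #include <glib.h>
    #include <string.h>

    /* Hypothetical helper: build an objects/-style fan-out path such as
     * "ab/cdef....commit".  Splitting on the first two hex characters keeps
     * any single directory far below the entry count where guestmount/FUSE
     * started returning EINVAL. */
    static char *
    loose_object_relpath (const char *checksum, const char *objtype_suffix)
    {
      g_return_val_if_fail (checksum != NULL && strlen (checksum) > 2, NULL);
      return g_strdup_printf ("%c%c/%s.%s",
                              checksum[0], checksum[1],
                              checksum + 2, objtype_suffix);
    }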
gnome-continuous uses the ostree_repo_scan_hardlinks() mode to
avoid re-checksumming everything. However, when I ported the commit
code to use openat() and friends, this optimization was lost.
Re-add it. The difference is about 15 seconds versus 5 minutes.
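A minimal sketch of the idea being restored, assuming a cache keyed on
(device, inode) that maps back to an object checksum; the names below are
illustrative, not the actual implementation:

    #include <glib.h>
    #include <sys/stat.h>

    /* Map (st_dev, st_ino) of loose objects to their checksum so that
     * committing a hardlinked checkout can skip re-checksumming. */
    typedef struct { dev_t dev; ino_t ino; } DevIno;

    static guint
    devino_hash (gconstpointer v)
    {
      const DevIno *d = v;
      return (guint) (d->dev ^ d->ino);
    }

    static gboolean
    devino_equal (gconstpointer a, gconstpointer b)
    {
      const DevIno *da = a, *db = b;
      return da->dev == db->dev && da->ino == db->ino;
    }

    /* Returns the cached checksum, or NULL if the file must be checksummed. */
    static const char *
    devino_cache_lookup (GHashTable *cache, const struct stat *stbuf)
    {
      DevIno key = { stbuf->st_dev, stbuf->st_ino };
      return g_hash_table_lookup (cache, &key);
    }

The cache would be created with g_hash_table_new (devino_hash, devino_equal)
while scanning objects/, then consulted per file during commit.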
This follows up from the previous commit; now that pull knows how to
do the efficient link() or copy for local files, we can just have
pull-local call into ostree_repo_pull().
As part of this:
- pull() can also accept a file:/// URI instead
of a remote name (since pull-local supports anonymous pulls)
- pull() gains an "override-remote-name" option, since pull-local
supported writing out a ref even when there wasn't a remote with
that name
It's always been suboptimal to have both pull and pull-local; as we go
beyond the raw object data into things like deltas and summary files,
the logic to perform e.g. mirroring should only be in one place.
This will be used by Pulp's OSTree content plugin at least to perform
promotions.
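A hedged sketch of how pull-local can now be expressed as a plain pull.
This assumes the GVariant-options entry point ostree_repo_pull_with_options()
and its "refs"/"override-remote-name" keys; treat it as illustrative rather
than the exact libostree code path:

    #include <ostree.h>

    static gboolean
    pull_local_via_pull (OstreeRepo   *repo,
                         const char   *src_repo_path,
                         const char   *ref,
                         const char   *remote_name_for_ref,
                         GCancellable *cancellable,
                         GError      **error)
    {
      g_autofree char *url = g_strdup_printf ("file://%s", src_repo_path);
      const char *refs[] = { ref, NULL };
      GVariantBuilder builder;

      g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
      g_variant_builder_add (&builder, "{s@v}", "refs",
                             g_variant_new_variant (g_variant_new_strv (refs, -1)));
      /* Write the ref under this name even though no configured remote exists. */
      g_variant_builder_add (&builder, "{s@v}", "override-remote-name",
                             g_variant_new_variant (g_variant_new_string (remote_name_for_ref)));

      return ostree_repo_pull_with_options (repo, url,
                                            g_variant_builder_end (&builder),
                                            NULL /* progress */,
                                            cancellable, error);
    }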
When doing a pull --mirror from an archive-z2 repository into another
archive-z2 repository, currently we gunzip/checksum/gzip each content
object. The re-gzip process in particular is fairly expensive.
This does assume that the upstream content is trusted and correct.
It'd be nice in the future to do at least a CRC check, if not the full
checksum. (Could we append CRC data to the end of filez objects?)
We could also choose to only do this optimization if fetching over
TLS.
Before: 1626 metadata, 20320 content objects fetched; 299634 KiB transferred in 62 seconds
After:  1626 metadata, 20320 content objects fetched; 299634 KiB transferred in 11 seconds
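A minimal sketch of the fast path, assuming we can hand the destination the
already-compressed filez stream (the function name is hypothetical):

    #include <gio/gio.h>

    /* When both repositories are archive-z2, copy the compressed .filez
     * object byte-for-byte instead of gunzip/checksum/gzip.  This trusts
     * that the upstream object data is correct. */
    static gboolean
    mirror_filez_object_raw (GInputStream  *compressed_object,
                             GOutputStream *dest,
                             GCancellable  *cancellable,
                             GError       **error)
    {
      gssize bytes = g_output_stream_splice (dest, compressed_object,
                                             G_OUTPUT_STREAM_SPLICE_CLOSE_SOURCE |
                                             G_OUTPUT_STREAM_SPLICE_CLOSE_TARGET,
                                             cancellable, error);
      return bytes >= 0;
    }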
For future delta work where we do more interesting things than just
"tar of new objects", this lays the groundwork for doing streaming
writes into content objects.
It's also more efficient, as we avoid many intermediate allocations
and virtual calls. Just a single `g_output_stream_write_all` call for
the splice case.
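For illustration, that splice case boils down to one GIO call along these
lines (the surrounding names are hypothetical):

    #include <gio/gio.h>

    /* Push the whole object payload with one write_all instead of going
     * through a stack of wrapper streams. */
    static gboolean
    write_object_payload (GOutputStream *dest,
                          const guint8  *payload,
                          gsize          len,
                          GCancellable  *cancellable,
                          GError       **error)
    {
      gsize bytes_written = 0;
      return g_output_stream_write_all (dest, payload, len,
                                        &bytes_written, cancellable, error);
    }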
Conflicts:
src/libostree/ostree-repo-private.h
src/libostree/ostree-repo-static-delta-processing.c
We could just make everything relative to the repo fd, but objects/ and
tmp/ are accessed very often, so I think it's worth holding individual
fds for them.
The repo fd can cover everything else: refs, deltas, etc.
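A sketch of that fd layout using plain POSIX calls; the helper and its
error handling are simplified:

    #include <fcntl.h>
    #include <unistd.h>

    /* Hold long-lived fds for the hot directories (objects/, tmp/); the
     * repo fd itself covers everything else (refs/, deltas/, ...). */
    static int
    open_repo_dfds (const char *repo_path,
                    int *out_repo_dfd, int *out_objects_dfd, int *out_tmp_dfd)
    {
      int repo_dfd = open (repo_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
      if (repo_dfd < 0)
        return -1;

      int objects_dfd = openat (repo_dfd, "objects", O_RDONLY | O_DIRECTORY | O_CLOEXEC);
      int tmp_dfd = openat (repo_dfd, "tmp", O_RDONLY | O_DIRECTORY | O_CLOEXEC);
      if (objects_dfd < 0 || tmp_dfd < 0)
        {
          if (objects_dfd >= 0) close (objects_dfd);
          if (tmp_dfd >= 0) close (tmp_dfd);
          close (repo_dfd);
          return -1;
        }

      *out_repo_dfd = repo_dfd;
      *out_objects_dfd = objects_dfd;
      *out_tmp_dfd = tmp_dfd;
      return 0;
    }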
prepare-root works with the mount that has been set up at /sysroot.
It creates a bind-mount within /sysroot (the deployment) and then moves
that mount to /sysroot.
Now we have two mounts stacked at /sysroot, and once we do switch_root,
we will never be able to unmount both of them. I'm not sure whether this
is ultimately a kernel bug, but either way, ostree could do a bit more
tidying up after itself.
http://thread.gmane.org/gmane.linux.file-systems/92411
Easy way to reproduce:
1. Boot with rd.break param
2. At initramfs shell, run: ostree-prepare-root /sysroot
3. Observe two /sysroot mounts in /proc/mounts
Fix this by setting up the mounts at /sysroot.tmp, and unmounting the
original /sysroot before our new mount is MS_MOVEd on top of it.
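A simplified sketch of that sequence, omitting the extra bind mounts
prepare-root sets up inside the deployment; the function name is
hypothetical and /sysroot.tmp is assumed to exist:

    #include <sys/mount.h>

    static int
    move_deployment_onto_sysroot (const char *deploy_path)
    {
      /* Stage the deployment at a temporary mount point... */
      if (mount (deploy_path, "/sysroot.tmp", NULL, MS_BIND, NULL) < 0)
        return -1;
      /* ...drop the original /sysroot mount while we still can... */
      if (umount ("/sysroot") < 0)
        return -1;
      /* ...then move the staged mount into place, leaving only one mount. */
      if (mount ("/sysroot.tmp", "/sysroot", NULL, MS_MOVE, NULL) < 0)
        return -1;
      return 0;
    }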
Do not write directly to objects/ but maintain pulled files under tmp/
with a "tmpobject-$CHECKSUM.$OBJTYPE" name until they are syncfs'ed to
disk.
Move them under objects/ at ostree_repo_commit_transaction cleanup
time.
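A hedged sketch of that final move, using directory fds as in the earlier
commits; the helper is hypothetical and assumes the fan-out subdirectory
under objects/ already exists:

    #include <limits.h>
    #include <stdio.h>

    static int
    publish_staged_object (int tmp_dfd, int objects_dfd,
                           const char *checksum, const char *objtype)
    {
      char staged[PATH_MAX];
      char final[PATH_MAX];

      /* tmp/tmpobject-$CHECKSUM.$OBJTYPE -> objects/xx/rest.$OBJTYPE */
      snprintf (staged, sizeof staged, "tmpobject-%s.%s", checksum, objtype);
      snprintf (final, sizeof final, "%c%c/%s.%s",
                checksum[0], checksum[1], checksum + 2, objtype);

      return renameat (tmp_dfd, staged, objects_dfd, final);
    }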
Before (test done on a local network):
$ LANG=C sudo time ./ostree --repo=repo pull origin master
0 metadata, 3 content objects fetched; 83820 KiB; 4 delta parts
fetched, transferred in 417 seconds
16.42user 6.73system 6:57.19elapsed 5%CPU (0avgtext+0avgdata
248428maxresident)k
24inputs+794472outputs (0major+233968minor)pagefaults 0swaps
After:
$ LANG=C sudo time ./ostree --repo=repo pull origin master
0 metadata, 3 content objects fetched; 83820 KiB; 4 delta parts
fetched, transferred in 9 seconds
14.70user 2.87system 0:09.99elapsed 175%CPU (0avgtext+0avgdata
256168maxresident)k
0inputs+794472outputs (0major+164333minor)pagefaults 0swaps
https://bugzilla.gnome.org/show_bug.cgi?id=728065
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
subscription-manager has a daemon that runs in a confined domain,
and it doesn't have permission to write usr_t, which is the default
label of /ostree/deploy/$osname/deploy.
A better long term fix is probably to move the origin file into the
deployment root as /etc/ostree/origin.conf or so.
In the meantime, let's ensure the .origin files are labeled as
configuration.
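As an illustration of "labeled as configuration", here is a hedged sketch
using libselinux directly (the real change presumably goes through ostree's
own policy handling): look up the label policy would give the suggested
/etc/ostree/origin.conf path and apply it to the origin file.

    #include <selinux/selinux.h>
    #include <selinux/label.h>
    #include <sys/stat.h>

    /* Hypothetical: give origin_path the label policy assigns to an /etc
     * configuration file, instead of inheriting usr_t. */
    static int
    relabel_origin_as_config (const char *origin_path)
    {
      struct selabel_handle *hnd = selabel_open (SELABEL_CTX_FILE, NULL, 0);
      char *con = NULL;
      int ret = -1;

      if (hnd == NULL)
        return -1;
      if (selabel_lookup (hnd, &con, "/etc/ostree/origin.conf", S_IFREG) == 0)
        {
          ret = lsetfilecon (origin_path, con);
          freecon (con);
        }
      selabel_close (hnd);
      return ret;
    }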
If an object already existed and we somehow tried to pull it, the
caller would still expect a returned checksum.
This appears to happen with static deltas for some reason; we might be
including duplicate metadata objects. Regardless, this is a bug that
should be fixed.
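In other words, the early-return path for an already-present object still
has to fill in the caller's out-parameter; a minimal illustration with
hypothetical names:

    #include <glib.h>

    static gboolean
    complete_object_write (gboolean     already_have_object,
                           const char  *expected_checksum,
                           char       **out_checksum)
    {
      if (already_have_object)
        {
          /* Previously this path returned TRUE without setting
           * *out_checksum, leaving the caller with NULL. */
          *out_checksum = g_strdup (expected_checksum);
          return TRUE;
        }
      /* (normal write + checksum path elided) */
      *out_checksum = g_strdup (expected_checksum);
      return TRUE;
    }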
We have a chain of checksums all the way from the root down to this
point. While checksumming each object individually would be a good
redundancy check for test cases and the like, there's no good reason
to burn cycles on SHA256 when doing a pull.
This caused deadlocks and/or EMFILE due to the interaction between
threads and fds. What we really want here is a better pull-based
model for parsing content objects.
Another idea would be to change static deltas so that content objects
have a special opcode that includes their metadata first, and then do
rollsums etc. only over actual content.
This will avoid too many open files at the same time that could cause
an EMFILE error.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
(cherry picked from commit bc092b06f0e34e93f7d6102957bf55fd7ffd1b9e)
See projectatomic/rpm-ostree#42 for the rationale. There are two
high-level use cases:
- If the OS ships unconfigured, this is a way to point it at a repository of your choice.
- It makes it easy to switch between repositories while keeping the same branch.
At some point, we might want to expose a uniform way to refer
to deployments by an index. At the moment undeploy is the only
command that does so.
I plan to introduce another command which optionally takes an index,
so prepare a helper function for this.
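A hypothetical sketch of the kind of helper meant here: parsing an optional
deployment index so that undeploy and the planned command can share the
same argument handling. The name and error domain are illustrative only.

    #include <gio/gio.h>
    #include <errno.h>
    #include <stdlib.h>

    /* Returns TRUE and sets *out_index if arg is a valid non-negative
     * deployment index. */
    static gboolean
    parse_deployment_index (const char *arg, int *out_index, GError **error)
    {
      char *end = NULL;
      errno = 0;
      long v = strtol (arg, &end, 10);

      if (errno != 0 || end == arg || *end != '\0' || v < 0 || v > G_MAXINT)
        {
          g_set_error (error, G_IO_ERROR, G_IO_ERROR_INVALID_ARGUMENT,
                       "Invalid deployment index: %s", arg);
          return FALSE;
        }
      *out_index = (int) v;
      return TRUE;
    }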