Fixes #10526.
Even if we waited for the root device to appear, the mount could still fail if
we didn't wait for udev to initialize the device. In particular, the
/dev/block/n:m path used to mount the device is created by udev, and nspawn
would sometimes win the race and the mount would fail with -ENOENT.
The same wait is done for partitions, since if we try to mount them, the same
considerations apply.
Note: I first implemented a version which just does a loop (with a short wait).
In that approach, udev takes on average ~800 µs to initialize the loopback
device. The approach where we set up a monitor and avoid the loop is a bit
nicer. There doesn't seem to be a significant difference in speed.
With 1000 invocations of 'systemd-nspawn -i image.squashfs echo':
loop (previous approach):
real 4m52.625s
user 0m37.094s
sys 2m14.705s
monitor (this patch):
real 4m50.791s
user 0m36.619s
sys 2m14.039s
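For illustration, the monitor-then-check approach can be sketched with the public
libudev API; the actual patch uses systemd's internal sd-device monitor, and the
hard-coded "block" subsystem, the syspath argument and the missing timeout are
simplifications:

/* Build with: cc wait-init.c $(pkg-config --cflags --libs libudev) */
#include <libudev.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>

/* Block until udev reports the device at 'syspath' as initialized. */
static int wait_until_initialized(struct udev *udev, const char *syspath) {
        struct udev_monitor *m;
        struct udev_device *d;
        struct pollfd pfd;
        const char *sp;
        int r = -1;

        m = udev_monitor_new_from_netlink(udev, "udev");
        if (!m)
                return -1;
        if (udev_monitor_filter_add_match_subsystem_devtype(m, "block", NULL) < 0 ||
            udev_monitor_enable_receiving(m) < 0)
                goto finish;

        /* Check only after subscribing, so that we cannot lose the race
         * against an event that arrives in between. */
        d = udev_device_new_from_syspath(udev, syspath);
        if (d && udev_device_get_is_initialized(d) > 0) {
                udev_device_unref(d);
                r = 0;
                goto finish;
        }
        if (d)
                udev_device_unref(d);

        pfd = (struct pollfd) { .fd = udev_monitor_get_fd(m), .events = POLLIN };
        for (;;) {
                if (poll(&pfd, 1, -1) <= 0)
                        continue;
                d = udev_monitor_receive_device(m);
                if (!d)
                        continue;
                sp = udev_device_get_syspath(d);
                if (sp && strcmp(sp, syspath) == 0 &&
                    udev_device_get_is_initialized(d) > 0) {
                        udev_device_unref(d);
                        r = 0;
                        goto finish;
                }
                udev_device_unref(d);
        }

finish:
        udev_monitor_unref(m);
        return r;
}

int main(int argc, char **argv) {
        struct udev *udev;
        int r;

        if (argc != 2) {
                fprintf(stderr, "Usage: %s /sys/devices/...\n", argv[0]);
                return 1;
        }
        udev = udev_new();
        if (!udev)
                return 1;
        r = wait_until_initialized(udev, argv[1]);
        udev_unref(udev);
        return r < 0;
}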
dissect-image would wait for the root device and partitions to appear. But if we
had an image with no partitions, we'd not wait at all. If the kernel or udev
were slow in creating device nodes or symlinks, a subsequent mount attempt might
fail if nspawn won the race.
Calling wait_for_partitions_to_appear() in the case of no partitions means that we
verify that the kernel agrees that there are no partitions. We verify that the
kernel sees the same number of partitions as blkid, so let's do that also in this
case.
This makes the failure in #10526 much less likely, but doesn't eliminate it
completely. Stay tuned.
The function interface is the same, except that the output pointer may be NULL.
The implementation is slightly simplified by taking advantage of changes in the
ancestor commit 'sd-device: attempt to read db again if it wasn't found', by
not creating a new sd_device object before re-checking the is_initialized
status.
v2:
- In v1, the old object was always used and the device received back from the
sd_device_monitor_start callback was ignored. I *think* the result will be
equivalent in both cases, because by the time the callback gets called,
the db entry in the filesystem will also exist, and any subsequent access to
properties of the object would trigger a read of the database from disk. But
I'm not certain, and anyway, using the device object received in the callback
seems cleaner.
Apparently people want more of these (as #11175 shows). Since this is
merely a safety limit for us, let's just bump all values substantially.
Fixes: #11175
Normally, we don't care too much about what pahole reports. But this structure
could potentially be allocated for every device on the system, i.e. in a large
number of copies. 5 vs 7 cache lines is nice.
before:
    /* size: 400, cachelines: 7, members: 53 */
    /* sum members: 330, holes: 12, sum holes: 70 */
    /* last cacheline: 16 bytes */
after:
    /* size: 320, cachelines: 5, members: 53 */
    /* bit holes: 1, sum bit holes: 6 bits */
    /* bit_padding: 5 bits */
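The kind of saving involved can be reproduced on a toy structure: grouping members
by size removes the padding holes that pahole reports. A self-contained
illustration (not the actual sd-device struct):

#include <stdint.h>
#include <stdio.h>

/* Poorly ordered: each 1-byte member is followed by a pointer, so the
 * compiler inserts 7 bytes of padding to keep the pointer 8-byte aligned. */
struct unordered {
        uint8_t  a;   /* 1 byte + 7-byte hole */
        void    *p1;  /* 8 bytes */
        uint8_t  b;   /* 1 byte + 7-byte hole */
        void    *p2;  /* 8 bytes */
};                    /* typically 32 bytes on x86-64 */

/* Same members, grouped by size: the holes collapse. */
struct ordered {
        void    *p1;
        void    *p2;
        uint8_t  a;
        uint8_t  b;   /* 2 bytes + 6 bytes of tail padding */
};                    /* typically 24 bytes on x86-64 */

int main(void) {
        printf("unordered: %zu, ordered: %zu\n",
               sizeof(struct unordered), sizeof(struct ordered));
        return 0;
}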
Ensure that mfree() on filename is called after the logging function
which uses the string pointed to by filename.
Signed-off-by: Khem Raj <raj.khem@gmail.com>
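Schematically, the fix reorders the free against the log call that still reads the
string. mfree() is systemd's free-and-return-NULL helper, reproduced below; the
rest of the snippet is illustrative rather than the actual call site:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* systemd-style helper: free and return NULL so the pointer can be reset. */
static inline void *mfree(void *p) {
        free(p);
        return NULL;
}

int main(void) {
        char *filename = strdup("/etc/fstab");

        /* The buggy order was:
         *     filename = mfree(filename);
         *     fprintf(stderr, "Failed to process %s\n", filename);
         * i.e. the string was freed (and the pointer reset) before being logged.
         * Correct order: log first, free afterwards. */
        fprintf(stderr, "Failed to process %s\n", filename);
        filename = mfree(filename);

        return 0;
}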
The value pointer here is always NULL, so any subsequent use of that pointer
with a %s format will always print NULL; printing p instead gives a
valid string.
Signed-off-by: Khem Raj <raj.khem@gmail.com>
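The shape of the fix, with hypothetical variable names following the commit text
rather than the actual code:

#include <stdio.h>

int main(void) {
        const char *p = "SOME_ASSIGNMENT";  /* the string being parsed */
        const char *value = NULL;           /* parsing failed, so value stays NULL */

        /* Buggy: passing a NULL pointer to %s is undefined behavior
         * (glibc happens to print "(null)", which is not useful anyway):
         *     fprintf(stderr, "Failed to parse %s\n", value);
         * Fixed: print the string we actually have. */
        fprintf(stderr, "Failed to parse %s\n", p);

        (void) value;
        return 0;
}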
We had two very similar functions: device_read_db_aux and device_read_db,
and a number of wrappers for them:
device_read_db_aux
← device_read_db (in sd-device.c)
← all functions in sd-device.c, including sd_device_is_initialized
← device_read_db_force
← event_execute_rules_on_remove (in udev-event.c)
device_read_db (in device-private.c)
← functions in device_private.c (but not device_read_db_force):
device_get_devnode_{mode,uid,gid}
device_get_devlink_priority
device_get_watch_handle
device_clone_with_db
← called from udevadm, udev-{node,event,watch}.c
Before 7141e4f62c (sd-device: don't retry loading
uevent/db files more than once), the two implementations were the same. In that
commit, device_read_db_aux was changed. Those changes were reverted in the parent
commit, so the two implementations are now again the same except for superficial
differences. This commit removes device_read_db (in sd-device.c), and renames
device_read_db_aux to device_read_db_internal and makes everyone use this one
implementation. There should be no functional change.
This mostly reverts "sd-device: don't retry loading uevent/db files more than
once", 7141e4f62c. We will retry if we couldn't
access the file, but not if parsing failed.
Not re-reading the database at all just doesn't seem like a good idea. We have
two implementations of device_read_db, and one does that, and the other retries
to read the db. Re-reading seems more useful, since we can create the object
and then access properties at some later time when we know that the device has
been initialized and we can get useful results. Otherwise, we force the user to
destroy this object and create a new one.
This changes device_read_uevent_file() and device_read_db_aux(). See next
commit for description of where those functions are used.
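Roughly, the resulting policy can be sketched as follows; the struct fields and
function names are hypothetical, not the actual sd-device internals:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct device {
        const char *db_path;
        bool db_loaded;        /* parsed successfully at least once */
        bool db_parse_failed;  /* parsing was attempted and failed */
};

/* Returns 0 on success, -EBADMSG on malformed content,
 * or -errno if the file could not be opened. */
static int parse_db_file(struct device *d) {
        FILE *f = fopen(d->db_path, "re");
        if (!f)
                return -errno;  /* e.g. -ENOENT: udev hasn't written the db yet */
        /* ... parse the contents here ... */
        fclose(f);
        return 0;
}

static int device_read_db(struct device *d) {
        int r;

        if (d->db_loaded)
                return 0;          /* already read, nothing to do */
        if (d->db_parse_failed)
                return -EBADMSG;   /* don't retry after a parse failure */

        r = parse_db_file(d);
        if (r == -EBADMSG) {
                d->db_parse_failed = true;  /* malformed: retrying won't help */
                return r;
        }
        if (r < 0)
                return r;          /* couldn't access the file: retry on a later call */

        d->db_loaded = true;
        return 0;
}

int main(void) {
        struct device d = { .db_path = "/nonexistent/db" };
        printf("first attempt: %d\n", device_read_db(&d));  /* -ENOENT, may be retried */
        return 0;
}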
If we create 2000 mounts (on a 1-CPU qemu VM) with
mkdir -p /MNT/{1..2000}
time for i in {1..2000}; do mount --bind /etc /MNT/$i ; done
it takes around 20 seconds to complete. Much of this time is taken up
by systemd repeatedly processing /proc/self/mountinfo.
If I disable the processing, the time drops to about 4 seconds.
I have reports that on a larger system with multiple active user sessions, each
with its own systemd, the impact can be higher.
One particular use-case where a large number of mounts can be expected in quick
succession is when the "clearcase" SCM starts up.
This patch modifies the handling of events from /proc/self/mountinfo so
that systemd backs off when a storm is detected. Specifically, the time taken to process
mountinfo is measured, and the processing will not be repeated until 10 times
that duration has passed. This ensures systemd won't spend more than about 10% of
real time processing mountinfo.
With this patch, my test above takes about 5 seconds.
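A rough sketch of the backoff described above, kept generic (the names are
illustrative; the actual patch ties this into systemd's mountinfo event handling):

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_usec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000u + (uint64_t) ts.tv_nsec / 1000u;
}

struct ratelimit {
        uint64_t last_finish;   /* when the previous run ended */
        uint64_t last_duration; /* how long the previous run took */
};

/* Allow processing only if at least 10x the duration of the previous
 * run has passed since it finished, i.e. cap the overhead at ~10%. */
static bool may_process(const struct ratelimit *rl) {
        return now_usec() >= rl->last_finish + 10 * rl->last_duration;
}

static void process_mountinfo(struct ratelimit *rl) {
        uint64_t start = now_usec();

        /* ... re-read and process /proc/self/mountinfo here ... */

        rl->last_finish = now_usec();
        rl->last_duration = rl->last_finish - start;
}

int main(void) {
        struct ratelimit rl = {0};

        if (may_process(&rl))
                process_mountinfo(&rl);
        return 0;
}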
This reverts commit dd102e4d0c.
That test case exposed a memory leak and breaks CI, so let's revert it until
the original issue is fixed, to prevent disruption of automated testing.
Let's make sure the first hashmap we destroy also frees all machines,
because otherwise when freeing the other hashmaps we'll try to
deregister the contained machines from the hashmaps already destroyed.
Apparently, people do use nscd, hence play somewhat nice with it, and
let's explicitly flush nscd caches whenever we register a new
user/group.
This patch only adds the actual refresh request invocation. Later
commits then issue this call at appropriate moments.
Note that the nscd protocol is not officially documented though very
simple. This code is written very defensively so that incompatibilities
don't affect us much.
Given that glibc really has a duty to maintain compat between
differently compiled programs and their system nscd, they can't break the API,
and thus it should be safe for us to implement an alternative,
minimalistic client.
Ideally this kind of explicit, global cache flushing would not be necessary.
However nscd currently has no cache coherency protocol, hence we can't
really implement this better. The only concept it knows is a TTL for
positive host lookups. However, for negative lookups or any of the other
tables nothing is available.
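A minimal sketch of such an INVALIDATE request, modelled on glibc's unofficial
nscd-client.h; the socket path, protocol version and request code are assumptions
about the protocol rather than a documented API, and real code would add timeouts
and iterate over the passwd/group/hosts tables:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <unistd.h>

#define NSCD_SOCKET     "/var/run/nscd/socket"  /* _PATH_NSCDSOCKET in glibc */
#define NSCD_VERSION    2
#define NSCD_INVALIDATE 10                      /* request_type INVALIDATE in glibc */

struct nscd_request {
        int32_t version;
        int32_t type;
        int32_t key_len;
};

static int nscd_flush_cache(const char *database) {  /* "passwd", "group" or "hosts" */
        struct sockaddr_un sa = { .sun_family = AF_UNIX, .sun_path = NSCD_SOCKET };
        struct nscd_request req = {
                .version = NSCD_VERSION,
                .type = NSCD_INVALIDATE,
                .key_len = (int32_t) strlen(database) + 1,
        };
        struct iovec iov[2] = {
                { .iov_base = &req,              .iov_len = sizeof(req)          },
                { .iov_base = (void *) database, .iov_len = (size_t) req.key_len },
        };
        int32_t resp;
        int fd, r = -1;

        fd = socket(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
                return -1;
        if (connect(fd, (struct sockaddr *) &sa, sizeof(sa)) < 0)
                goto finish;
        if (writev(fd, iov, 2) != (ssize_t) (sizeof(req) + req.key_len))
                goto finish;

        /* Be defensive: try to read the status, but don't rely on its exact shape. */
        if (read(fd, &resp, sizeof(resp)) == sizeof(resp) && resp == 0)
                r = 0;

finish:
        close(fd);
        return r;
}

int main(void) {
        return nscd_flush_cache("passwd") < 0;
}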
In Debian unstable package libqrencode-dev is version 4.0.2-1, and the
corresponding runtime library is provided by package libqrencode4.
This change fixes the following error when running journalctl:
root@image:~# journalctl
journalctl: error while loading shared libraries: libqrencode.so.4: cannot
open shared object file: No such file or directory
This change also fixes the following boot failures in
systemd-journal-flush.service and systemd-journal-catalog-update.service:
[FAILED] Failed to start Flush Journal to Persistent Storage.
[FAILED] Failed to start Rebuild Journal Catalog.
See also #4949
The ?: operator is very useful for chaining comparison functions
(strcmp, memcmp, CMP), since its behavior is to return the result
of the comparison function call if non-zero, or continue evaluating
the chain of comparison functions.
This simplifies the code: a temporary `r` variable to store the function
results is no longer necessary, and the checks for non-zero before returning
are no longer needed either, resulting in a typical three-fold reduction in
the number of lines of code.
Introduce a new memcmp_nn() to compare two memory buffers in
lexicographic order, taking length into consideration.
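A sketch of the pattern; CMP() and memcmp_nn_like() below are local stand-ins for
systemd's macro and the new helper, and the neighbor struct is purely illustrative:

#include <stdint.h>
#include <string.h>

#define CMP(a, b) ((a) < (b) ? -1 : (a) > (b) ? 1 : 0)

/* Lexicographic comparison of two buffers of possibly different lengths. */
static int memcmp_nn_like(const void *s1, size_t n1, const void *s2, size_t n2) {
        int r = memcmp(s1, s2, n1 < n2 ? n1 : n2);
        return r != 0 ? r : CMP(n1, n2);
}

struct neighbor {
        uint16_t priority;
        const char *name;
        const void *id;
        size_t id_size;
};

/* Before: temporary variable and explicit non-zero checks. */
static int neighbor_compare_old(const struct neighbor *a, const struct neighbor *b) {
        int r;

        r = CMP(a->priority, b->priority);
        if (r != 0)
                return r;

        r = strcmp(a->name, b->name);
        if (r != 0)
                return r;

        return memcmp_nn_like(a->id, a->id_size, b->id, b->id_size);
}

/* After: the GNU 'x ?: y' extension returns x if it is non-zero. */
static int neighbor_compare_new(const struct neighbor *a, const struct neighbor *b) {
        return CMP(a->priority, b->priority) ?:
                strcmp(a->name, b->name) ?:
                memcmp_nn_like(a->id, a->id_size, b->id, b->id_size);
}

int main(void) {
        struct neighbor a = { 1, "eth0", "\x01", 1 }, b = { 1, "eth0", "\x01\x02", 2 };
        return neighbor_compare_old(&a, &b) == neighbor_compare_new(&a, &b) ? 0 : 1;
}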
Tested: $ ninja -C build/ test
All test cases pass. In particular, test_multiple_neighbors_sorted()
in test-lldp would catch regressions introduced by this commit.
In particular, check that the order of the results is consistent.
This test coverage will be useful in order to refactor the compare_func
used while sorting the results.
Tested: ninja -C build/ test