Previously, setting this option by default was problematic due to
SELinux (as this would also prohibit the transition from PID1's label to
the service's label). However, this restriction has since been lifted,
hence let's start making use of this universally in our services.
On SELinux systems this change should be synchronized with a policy
update that ensures that NNP-ful transitions from init_t to service
labels are permitted.
Fixes: #1219
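For reference, the kernel mechanism behind this is the no_new_privs process
flag. A minimal standalone sketch of what NoNewPrivileges= ultimately sets
(illustrative only, not systemd's actual code):

#include <stdio.h>
#include <sys/prctl.h>

int main(void) {
        /* Once set, the flag cannot be cleared, and execve() can no
         * longer grant privileges, e.g. via setuid binaries or file
         * capabilities. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0) {
                perror("prctl(PR_SET_NO_NEW_PRIVS)");
                return 1;
        }
        printf("no_new_privs is set for this process and its children\n");
        return 0;
}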
This reverts commit 3ca9940cb9.
Let's split the commit in two: the sorting and the changes.
Because there's a requirement to update the SELinux policy, this change is
incompatible, strictly speaking. I expect that distributions might want to
revert this particular change. Let's make it easy.
_c is misleading because .h files should be included in those lists too
(this tells meson that the build outputs should be rebuilt if the header
files change).
Follow-up for 1437822638.
The point here is to compare the speed of hashmap_destroy with free and a
different freeing function, so that the implementation details of
hashmap_clear can be evaluated.
Results:
current code:
/* test_hashmap_free (slow, 1048576 entries) */
string_hash_ops test took 2.494499s
custom_free_hash_ops test took 2.640449s
string_hash_ops test took 2.287734s
custom_free_hash_ops test took 2.557632s
string_hash_ops test took 2.299791s
custom_free_hash_ops test took 2.586975s
string_hash_ops test took 2.314099s
custom_free_hash_ops test took 2.589327s
string_hash_ops test took 2.319137s
custom_free_hash_ops test took 2.584038s
code with a patch which restores the "fast path" using:
for (idx = skip_free_buckets(h, 0); idx != IDX_NIL; idx = skip_free_buckets(h, idx + 1))
in the case where both free_key and free_value are either free or NULL:
/* test_hashmap_free (slow, 1048576 entries) */
string_hash_ops test took 2.347013s
custom_free_hash_ops test took 2.585104s
string_hash_ops test took 2.311583s
custom_free_hash_ops test took 2.578388s
string_hash_ops test took 2.283658s
custom_free_hash_ops test took 2.621675s
string_hash_ops test took 2.334675s
custom_free_hash_ops test took 2.601568s
So the test is noisy, but there clearly is no significant difference with the
"fast path" restored. I'm surprised by this, but it shows that the current
"safe" implementation does not cause a performance loss.
When the code is compiled with optimization, those times are significantly
lower (e.g. 1.1s and 1.4s), but again, there is no difference with the "fast
path" restored.
The difference between string_hash_ops and custom_free_hash_ops is the
additional cost of the global variable modification and the extra function
call.
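To make the comparison concrete, here is a self-contained toy version of the
pattern the restored "fast path" uses (simplified stand-ins, not systemd's
actual hashmap internals):

#include <stdlib.h>

#define N_BUCKETS 1024
#define IDX_NIL ((size_t) -1)

struct bucket {
        char *key;    /* NULL key marks a free bucket */
        char *value;
};

static size_t skip_free_buckets(struct bucket *b, size_t idx) {
        for (; idx < N_BUCKETS; idx++)
                if (b[idx].key)
                        return idx;
        return IDX_NIL;
}

/* Fast path: when both free_key and free_value are plain free() or NULL,
 * walk the occupied buckets directly instead of removing entries one by
 * one through the generic code path. */
static void hashmap_clear_fast(struct bucket *b) {
        for (size_t idx = skip_free_buckets(b, 0); idx != IDX_NIL;
             idx = skip_free_buckets(b, idx + 1)) {
                free(b[idx].key);
                free(b[idx].value);
                b[idx].key = b[idx].value = NULL;
        }
}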
In particular, check that the order of the results is consistent.
This test coverage will be useful when refactoring the compare_func used
while sorting the results (see the sketch below).
When introduced, this test also uncovered a memory leak in sd_lldp_stop(),
which was then fixed by a separate commit using a specialized function
as the destructor of the LLDP Hashmap.
Tested:
$ ninja -C build/ test
$ valgrind --leak-check=full build/test-lldp
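For illustration, a deterministic compare_func of the kind this test pins
down looks like the following (field names are hypothetical, not sd-lldp's
actual structures): compare on a primary key, then break ties on a second
unique key, so that the sorted output is reproducible across runs:

#include <string.h>

struct neighbor {
        char chassis_id[64];
        char port_id[64];
};

static int neighbor_compare(const void *a, const void *b) {
        const struct neighbor *x = a, *y = b;
        int r;

        r = strcmp(x->chassis_id, y->chassis_id);
        if (r != 0)
                return r;

        /* Tie-breaker: keeps the order consistent when chassis ids match. */
        return strcmp(x->port_id, y->port_id);
}

A call like qsort(neighbors, n, sizeof(struct neighbor), neighbor_compare)
then always yields the same order for the same set of neighbors.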
Let's first remove an item from the hashmap and only then destroy it.
This makes sure that destructor functions can modify the hashtables in
their own code and we won't be confused by that.
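A minimal sketch of that pattern on a toy linked map (illustrative only, not
the actual hashmap code): unlink the entry first, then invoke the destructor,
so a callback that re-enters the table sees a consistent state:

#include <stdlib.h>

struct node {
        struct node *next;
        void *value;
};

static void map_clear(struct node **head, void (*destroy)(void *value)) {
        while (*head) {
                struct node *n = *head;

                *head = n->next;            /* remove the item first ... */
                n->next = NULL;
                if (destroy)
                        destroy(n->value);  /* ... then run the destructor */
                free(n);
        }
}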
There's no value in authenticating SOA RRs that are neither an answer to
our question nor a parent of our question (the latter being relevant so
that we have a TTL from the SOA field for negative caching of the actual
query).
By being too eager here and trying to authenticate too much, we run the
risk of creating cyclic dependencies between our transactions, which then
causes the overall authentication to fail.
Fixes: #9771
The KeyringMode option is useful for user services. Also, the documentation
for the option suggests that it applies to user services. However, setting
the option to any of its allowed values has no effect.
This commit fixes that and removes the EXEC_NEW_KEYRING flag. The flag is no
longer necessary: instead of checking if the flag is set, we can check if
keyring_mode is not equal to EXEC_KEYRING_INHERIT.
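In sketch form, the check described above reduces to the following (the enum
values mirror systemd's KeyringMode= settings; the helper name is
hypothetical):

#include <stdbool.h>

typedef enum ExecKeyringMode {
        EXEC_KEYRING_INHERIT,
        EXEC_KEYRING_PRIVATE,
        EXEC_KEYRING_SHARED,
} ExecKeyringMode;

/* Replaces the EXEC_NEW_KEYRING flag: any mode other than "inherit"
 * implies that a new session keyring should be set up. */
static bool needs_new_keyring(ExecKeyringMode mode) {
        return mode != EXEC_KEYRING_INHERIT;
}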
With cgroupsv1, a zombie process is migrated to the root cgroup in all
hierarchies. This was changed for the unified hierarchy, and
/proc/PID/cgroup reports the cgroup to which the process belonged before it
exited.
Be more suspicious about the cgroup path reported by the kernel, and use the
unit_id provided by the log client if the kernel reports that the process is
running in the root cgroup.
Users tend to care the most about the 'log->unit_id' mapping, so that
systemctl status can correctly report the last log lines. Also, we wouldn't
be able to infer anything useful from the "/" path anyway.
See: 2e91fa7f6d
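In sketch form (hypothetical helper, not the actual journald code), the
heuristic amounts to:

#include <stdbool.h>
#include <string.h>

/* A path of "/" carries no unit information: on cgroupsv1 zombies are
 * migrated to the root cgroup, so the reported path tells us nothing
 * about the unit the process belonged to. */
static bool cgroup_path_is_useful(const char *cgroup_path) {
        return cgroup_path && strcmp(cgroup_path, "/") != 0;
}

When this returns false, the unit_id claimed by the log client is used
instead.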
This appears to be necessary for client software to ensure the response data
is validated with DNSSEC. For example, `ssh -v -o VerifyHostKeyDNS=yes -o
StrictHostKeyChecking=yes redpilllinpro01.ring.nlnog.net` fails if EDNS0 is
not enabled. The debugging output reveals that the `SSHFP` records were
found in DNS, but were considered insecure.
Note that the patch intentionally does *not* enable EDNS0 in the
`/run/systemd/resolve/resolv.conf` file (the one that contains `nameserver`
entries for the upstream DNS servers), as it is impossible to know for
certain that all the upstream DNS servers handle EDNS0 correctly.
Fixes #10526.
Even if we waited for the root device to appear, the mount could still fail if
we didn't wait for udev to initialize the device. In particular, the
/dev/block/n:m path used to mount the device is created by udev, and nspawn
would sometimes win the race and the mount would fail with -ENOENT.
The same wait is done for partitions, since if we try to mount them, the same
considerations apply.
Note: I first implemented a version which just does a loop (with a short wait).
In that approach, udev takes on average ~800 µs to initialize the loopback
device. The approach where we set up a monitor and avoid the loop is a bit
nicer. There doesn't seem to be a significant difference in speed.
With 1000 invocations of 'systemd-nspawn -i image.squashfs echo':
loop (previous approach):
real 4m52.625s
user 0m37.094s
sys 2m14.705s
monitor (this patch):
real 4m50.791s
user 0m36.619s
sys 2m14.039s
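For reference, the loop variant amounts to something like this (a sketch
under assumed names; the is_initialized predicate stands in for the real
udev-db check):

#include <stdbool.h>
#include <unistd.h>

/* Poll until the predicate reports the device as initialized by udev, or
 * give up after 'tries' attempts. The monitor-based approach in this patch
 * replaces exactly this kind of loop. */
static int wait_for_initialization(bool (*is_initialized)(const char *devpath),
                                   const char *devpath, unsigned tries) {
        for (unsigned i = 0; i < tries; i++) {
                if (is_initialized(devpath))
                        return 0;
                usleep(100 * 1000);  /* 100 ms between polls */
        }
        return -1;  /* timed out */
}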
dissect-image would wait for the root device and partitions to appear. But
if we had an image with no partitions, we'd not wait at all. If the kernel
or udev were slow in creating device nodes or symlinks, a subsequent mount
attempt might fail if nspawn won the race.
Calling wait_for_partitions_to_appear() in the case of no partitions means
that we verify that the kernel agrees that there are no partitions. We
verify that the kernel sees the same number of partitions as blkid, so let's
do that also in this case.
This makes the failure in #10526 much less likely, but doesn't eliminate it
completely. Stay tuned.
The function interface is the same, except that the output pointer may be NULL.
The implementation is slightly simplified by taking advantage of changes in
ancestor commit 'sd-device: attempt to read db again if it wasn't found', by
not creating a new sd_device object before re-checking the is_initialized
status.
v2:
- In v1, the old object was always used and the device received back from the
sd_device_monitor_start callback was ignored. I *think* the result will be
equivalent in both cases, because by the time the callback gets called,
the db entry in the filesystem will also exist, and any subsequent access to
properties of the object would trigger a read of the database from disk. But
I'm not certain, and anyway, using the device object received in the callback
seems cleaner.
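As a sketch of the v2 shape (assuming the sd-device monitor API as exposed
by newer systemd; error handling trimmed), the callback uses the device
object it receives rather than the original one:

#include <systemd/sd-device.h>
#include <systemd/sd-event.h>

static int on_device(sd_device_monitor *m, sd_device *device, void *userdata) {
        sd_device **ret = userdata;

        /* Prefer the freshly received object: by now the udev db entry
         * exists, so its properties are read from disk on access. */
        *ret = sd_device_ref(device);
        return sd_event_exit(sd_device_monitor_get_event(m), 0);
}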
Apparently people want more of these (as #11175 shows). Since this is
merely a safety limit for us, let's just bump all values substantially.
Fixes: #11175
Normally, we don't care too much about what pahole reports. But this structure
could potentially be allocated for every device on the system, i.e. in a large
number of copies. 5 vs 7 cache lines is nice.
before:
/* size: 400, cachelines: 7, members: 53 */
/* sum members: 330, holes: 12, sum holes: 70 */
/* last cacheline: 16 bytes */
after:
/* size: 320, cachelines: 5, members: 53 */
/* bit holes: 1, sum bit holes: 6 bits */
/* bit_padding: 5 bits */
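To illustrate the kind of saving pahole is reporting, ordering members by
decreasing alignment removes padding holes (a toy example, not the actual
sd-device struct):

#include <stdio.h>

struct before {          /* on a typical 64-bit ABI: */
        char flag_a;     /* 1 byte + 7 bytes hole */
        void *ptr_a;     /* 8 bytes */
        char flag_b;     /* 1 byte + 7 bytes hole */
        void *ptr_b;     /* 8 bytes */
};                       /* usually 32 bytes */

struct after {
        void *ptr_a;     /* 8 bytes */
        void *ptr_b;     /* 8 bytes */
        char flag_a;     /* 1 byte */
        char flag_b;     /* 1 byte + 6 bytes tail padding */
};                       /* usually 24 bytes */

int main(void) {
        printf("before: %zu bytes, after: %zu bytes\n",
               sizeof(struct before), sizeof(struct after));
        return 0;
}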