Debian unstable is frozen for the Debian 9 release; current development
happens in experimental. After the release, this can be switched back to
master, and the branch can be set through the `$BRANCH` env variable in the
semaphore config.
Fix the following compile error:
src/basic/socket-util.h:187:30: error: implicit declaration of function 'strnlen'; did you mean 'strlen'? [-Werror=implicit-function-declaration]
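A minimal sketch of the kind of change this implies, assuming the error comes
from socket-util.h using strnlen() without including <string.h> itself; the
helper below is hypothetical and only mirrors the pattern at the reported line:

  /* strnlen() is POSIX.1-2008; make sure it is declared. */
  #define _POSIX_C_SOURCE 200809L

  #include <string.h>     /* strnlen() */
  #include <sys/un.h>     /* struct sockaddr_un */

  /* Hypothetical helper in the spirit of socket-util.h:187: measure the
   * path stored in an AF_UNIX address without reading past the end of
   * the fixed-size sun_path buffer. */
  static inline size_t sockaddr_un_path_length(const struct sockaddr_un *sa) {
          return strnlen(sa->sun_path, sizeof(sa->sun_path));
  }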
Fixes the following build failure with musl:
../git/src/udev/udev-event.c: In function 'spawn_wait':
../git/src/udev/udev-event.c:600:53: error: 'WEXITED' undeclared (first use in this function); did you mean 'WIFEXITED'?
r = sd_event_add_child(e, NULL, spawn->pid, WEXITED, on_spawn_sigchld, spawn);
^~~~~~~
This looks like a bug in udev-event.c that could also have broken
the compilation after some future glibc header reshuffle.
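A sketch of the shape of the fix, assuming the failure is simply a missing
include: WEXITED is defined in <sys/wait.h>, which glibc happens to pull in
indirectly while musl does not. The wrapper below is illustrative, not the
actual spawn_wait() code:

  #include <sys/wait.h>   /* WEXITED; don't rely on other headers pulling it in */

  #include "sd-event.h"   /* sd_event_add_child(), sd_event_child_handler_t */

  /* Illustrative wrapper around the failing call: watch the child for
   * termination (WEXITED) and let the event loop invoke the handler. */
  static int watch_child(sd_event *e, pid_t pid,
                         sd_event_child_handler_t handler, void *userdata) {
          return sd_event_add_child(e, NULL, pid, WEXITED, handler, userdata);
  }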
Follow-up for faae64fa3d, which increased the default number of udev workers
per CPU regardless of how big the system is. It's not really clear from that
commit message whether the new number of workers improved the overall boot
time or only reduced the number of times the max-number-of-children limit was
reached (in which case commit 5406c36844 might have been more appropriate in
the first place).
But systems with ~1000 CPUs are not rare these days, and with a CPU factor of
8 the worker numbers get quite large. Spawning more than 2000 workers can't be
healthy on any system, no matter how big.
Indeed the main mistake is the belief that udev is CPU-intensive, and that the
number of allowed workers therefore has to grow with the number of CPUs. It is
not, and probably never has been. It is I/O-bound, and sometimes bound by
resources such as locks.
This is an argument to:
- scale only weakly with the number of CPUs; hence the switch back to a scale
factor of C=2, but with a higher offset, which should affect only systems with
a small number of CPUs. With this patch applied the offset is increased from
O=8 to O=16.
- put an absolute maximum limit to make sure no more than 2048 workers are
spawned no matter how big the system is.
This still provides more workers for the laptop cases (where the number of CPUs
is limited), while avoiding sky-rocketing numbers for big systems.
Note that on most desktop systems, the memory limit will kick in. The
following table collects children-max numbers. For each scenario, the first
number is the "cpu_limit" value and the second is the minimum amount of RAM
(in GiB) needed for "cpu_limit" to be the limit that applies (with less RAM,
memory limits the number of children and "mem_limit" becomes the active limit
instead).
       |    > v240    |    < v240    |  this patch   |
  CPUs | C = 8, O = 8 | C = 2, O = 8 | C = 2, O = 16 |
-------+--------------+--------------+---------------+
     1 |    16      2 |    10    1.3 |    18       2 |
     2 |    24      3 |    12    1.5 |    20       2 |
     4 |    40      5 |    16      2 |    24       3 |
     8 |    72      9 |    24      3 |    32       4 |
    16 |   136     17 |    40      5 |    48       5 |
    64 |   520     65 |   136     17 |   144      18 |
  1024 |  8200   1025 |  2056    263 |  2048     256 |
  2048 | 16392   2049 |  4104    513 |  2048     256 |
This patch is mainly based on Martin Wilck's analysis and comments.
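A minimal sketch of the resulting limit computation, assuming the numbers
above: C = 2, O = 16, a hard cap of 2048 workers, and roughly one worker per
128 MiB of RAM (the ratio the memory column of the table implies). The
function name and the sysconf()-based memory probe are illustrative, not the
actual udevd code:

  #define _GNU_SOURCE
  #include <sched.h>              /* sched_getaffinity(), CPU_COUNT() */
  #include <unistd.h>             /* sysconf() */

  #define WORKER_NUM_MAX 2048UL   /* absolute cap, no matter how big the system */

  static unsigned long compute_children_max(void) {
          unsigned long cpu_count = 1, cpu_limit, mem_limit;
          cpu_set_t cpu_set;

          if (sched_getaffinity(0, sizeof(cpu_set), &cpu_set) >= 0)
                  cpu_count = (unsigned long) CPU_COUNT(&cpu_set);

          /* Scale only weakly with CPUs: C = 2, O = 16, capped at 2048. */
          cpu_limit = cpu_count * 2 + 16;
          if (cpu_limit > WORKER_NUM_MAX)
                  cpu_limit = WORKER_NUM_MAX;

          /* Allow roughly one worker per 128 MiB of RAM, so that on small
           * machines memory, not CPU count, is the limiting factor. */
          mem_limit = (unsigned long) sysconf(_SC_PHYS_PAGES) *
                      (unsigned long) sysconf(_SC_PAGESIZE) / (128UL * 1024 * 1024);

          /* The smaller of the two limits wins. */
          return cpu_limit < mem_limit ? cpu_limit : mem_limit;
  }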
When journalctl is compiled with PCRE2 support, let's return a non-zero
exit code when --grep is used and no match for the given pattern is found.
This should allow users to use journalctl --grep in scripts instead of
piping journalctl into grep.
Fixes #8152
This patch adds a new netdev kind, ipvtap, which is based on the IPVLAN
network interface. An ipvtap device can be created in the same way as an
ipvlan device, using 'kind ipvtap', and is then accessed through the tap
user-space interface.
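As an illustration, a systemd-networkd .netdev file for such a device might
look like the following (a sketch; the interface name is arbitrary and it
assumes the [IPVTAP] section accepts the same Mode= values as [IPVLAN]):

  [NetDev]
  Name=ipvtap0
  Kind=ipvtap

  [IPVTAP]
  Mode=L3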
Switching to K_UNICODE from a mode other than K_XLATE can make the keyboard
unusable and possibly leak keypresses from X.
BugLink: https://launchpad.net/bugs/1803993
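A hedged illustration of the guard this implies, using the KDGKBMODE and
KDSKBMODE ioctls from <linux/kd.h>; the function name is made up for the
example:

  #include <sys/ioctl.h>
  #include <linux/kd.h>   /* KDGKBMODE, KDSKBMODE, K_XLATE, K_UNICODE */

  /* Only switch the VT to K_UNICODE when it is currently in K_XLATE.
   * In K_RAW/K_MEDIUMRAW/K_OFF (e.g. X or Wayland owns the keyboard),
   * changing the mode can break input or leak keypresses to the console. */
  static int vt_enable_unicode(int vt_fd) {
          int mode;

          if (ioctl(vt_fd, KDGKBMODE, &mode) < 0)
                  return -1;

          if (mode != K_XLATE)
                  return 0;       /* leave other modes alone */

          return ioctl(vt_fd, KDSKBMODE, K_UNICODE);
  }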
When we read from the keyring, a temporary buffer is allocated in order to
determine the size needed for the entire data. However, when zeroing that
area, we use the data size returned by the read instead of the smaller size
allocated for the buffer.
That causes memory corruption, which makes systemd-cryptsetup crash either
when a single large password is used or when multiple passwords have already
been pushed to the keyring.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
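For illustration, a sketch of the read loop described above, written against
libkeyutils' keyctl_read() rather than the raw keyctl() call the actual code
uses; the function name is made up, and the comment marks the spot where
zeroing with the wrong size corrupts memory:

  #include <errno.h>
  #include <stdlib.h>
  #include <string.h>
  #include <keyutils.h>   /* keyctl_read(), key_serial_t */

  static int read_key(key_serial_t serial, void **ret, size_t *ret_size) {
          size_t m = 100;

          for (;;) {
                  void *p = malloc(m);
                  long n;

                  if (!p)
                          return -ENOMEM;

                  /* keyctl_read() returns the full payload size, which may be
                   * larger than the buffer we passed in. */
                  n = keyctl_read(serial, p, m);
                  if (n < 0) {
                          free(p);
                          return -errno;
                  }

                  if ((size_t) n <= m) {
                          *ret = p;
                          *ret_size = (size_t) n;
                          return 0;
                  }

                  /* Buffer too small: wipe it and retry with a larger one.
                   * Zero only the m bytes actually allocated; using n here
                   * (the payload size) writes past the end of the buffer,
                   * which is exactly the corruption described above.
                   * (The real code uses explicit_bzero() for the wipe.) */
                  memset(p, 0, m);
                  free(p);
                  m = (size_t) n;
          }
  }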