A follow-up for commit a8cb1dc3e0.
Commit a8cb1dc3e0 made sure that initrd-cleanup.service won't be stopped
when initrd-switch-root.target is isolated.
However, even with this change, it might happen that initrd-cleanup.service
survives the switch to the root fs (since it has no ordering constraints against
initrd-switch-root.target) and is stopped right afterwards, when default.target is
isolated. This led to initrd-cleanup.service entering the failed state, as
happens when oneshot services are stopped.
This patch, along with a8cb1dc3e0, should fix issue #4343.
Fixes: #4343
* kernel-install: fix initrd when called as installkernel
Running "make install" from the kernel tree runs e.g.:
installkernel 4.20.5 arch/x86/boot/bzImage System.map "/boot"
Since 0912c0b80e this would
call 90-loaderentry.install with these arguments:
add 4.20.5 /boot/... arch/x86/boot/bzImage System.map "/boot"
The last two arguments would then be handled as the initrd files.
As System.map exists in the current directory but not in /boot/...,
it would get copied there and used as the initrd instead of the initrd
which had been generated by dracut.
With this change, nothing changes when kernel-install is called
directly, but when it's called as installkernel, we now pass
these arguments to 90-loaderentry.install:
add 4.20.5 /boot/... arch/x86/boot/bzImage initrd
initrd is thus detected as the file to use for the initrd, and as it
exists, nothing is copied over and the generated initrd line is
consistent with what one would expect.
* kernel-install: fix dracut initrd detection when called directly
This brings back the systemd 240 behaviour when called directly too.
* kernel-install: unify initrd fallback
* kernel-install: move initrd fallback handling to 90-loaderentry.install
* kernel-install: move initrd fallback just before creating loader entry
Of course, this should never happen, but let's better be safe than
sorry, and abort rather than continue when a too large memory block is
allocated, simply as a safety precaution.
An early abort is better than continuing with likely memory corruption
later.
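As an illustration only (the helper name and the size limit below are
assumptions for the sketch, not the actual code), the kind of guard meant
here looks roughly like this:

    /* Hypothetical sketch: refuse suspiciously large allocations up front. */
    #include <assert.h>
    #include <stdlib.h>

    #define BUFFER_SIZE_MAX (64U*1024U*1024U)  /* illustrative limit, not the real one */

    static void *allocate_buffer(size_t size) {
            /* Aborting here is preferable to continuing and corrupting memory later. */
            assert(size <= BUFFER_SIZE_MAX);
            return malloc(size);
    }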
The problem was introduced in a37422045f:
we have a unit which has a fragment, and when we'd update it based on
/proc/self/mountinfo, we'd say that e.g. What=/dev/loop8 has origin-fragment.
This commit changes two things:
- origin-fragment is changed to origin-mountinfo-implicit
- when we stop a unit, mountinfo information is flushed and all deps based
on it are dropped.
The second step is important, because when we restart the unit, we want to
notice that we have "fresh" mountinfo information. We could keep the old info
around and solve this in a different way, but keeping stale information seems
inelegant.
Fixes #11342.
Currently, tmpfiles runs in two separate services at boot. /dev is
populated by systemd-tmpfiles-setup-dev.service and everything else by
systemd-tmpfiles-setup.service. The former was so far conditionalized by
CAP_SYS_MODULE. The reasoning was that the primary purpose of
populating /dev was to create device nodes based on the static device
node info exported in kernel modules through MODALIAS. And without the
privileges to load kernel modules, doing so is unnecessary. That thinking is
incomplete, however, as there might be reasons to create stuff in /dev
outside of the static modalias use case. Thus, let's drop the
conditionalization to ensure that tmpfiles.d rules are always executed
at least once under all conditions.
Fixes: #11544
In normal use, this allows us to drop dead entries from the cache and reduces
the cache size so that we don't evict entries unnecessarily. The time limit is
there mostly to serve as a guard against malicious logging from many different
PIDs.
This is far from perfect, but should give mostly reasonable values. My
assumption is that if somebody has a few hundred MB of memory, they are
unlikely to have thousands of processes logging. A hundred would already be a
lot. So let's scale the cache size proportionally to the total memory size,
with clamping on both ends.
The formula gives 64 cache entries for each GB of RAM.
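As a rough sketch of that rule (the clamp bounds below are illustrative
assumptions, not the values actually used): 64 entries per GB means one
entry per 16 MB of RAM.

    #include <stdint.h>

    /* Hypothetical illustration of "64 cache entries per GB of RAM, clamped
     * on both ends". */
    static uint64_t cache_max_entries(uint64_t ram_bytes) {
            uint64_t n = ram_bytes / (16ULL*1024*1024);  /* 1 GB / 64 = 16 MB per entry */

            if (n < 64)
                    n = 64;      /* assumed lower bound */
            if (n > 4096)
                    n = 4096;    /* assumed upper bound */
            return n;
    }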
First of all, let's always log where the errors happen, and not in an
upper stack frame, in all cases. Previously we'd do this sometimes one way
and sometimes another, which resulted in sometimes duplicate logging and
sometimes none.
When we cannot activate something due to a bad password, the kernel gives
us EPERM. Let's uniformly return this as EAGAIN, so that the next password
is tried. (Previously this was done in most cases, but not in all.)
When we get EPERM let's also explicitly indicate that this probably
means the password is simply wrong.
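A minimal sketch of the mapping described here (the function and message
are illustrative assumptions, not the actual cryptsetup code):

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical illustration: map EPERM from activation to EAGAIN so the
     * caller tries the next password, and log at the point of failure. */
    static int handle_activation_result(int r) {
            if (r == -EPERM) {
                    fprintf(stderr, "Activation failed, possibly because the password is wrong.\n");
                    return -EAGAIN;
            }
            return r;
    }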
Fixes: #11498
The badge is currently serving a broken image, since Coverity Scan is
having an outage. See issue #11185 for more details. We can restore the badge
by reverting this commit once their service is up again.
procfs_memory_get_current is renamed to procfs_memory_get_used, because
"current" can mean anything, including total memory, used memory, and free
memory, as long as the value is up to date.
No functional change.