
systemd: do not stop units bound to inactive units while coldplugging (#6316)

When running systemd-analyze verify I would get a random subset of warnings
(sometimes none, sometimes one or two):

dev-mapper-luks\x2d8db85dcf\x2d6230\x2d4e88\x2d940d\x2dba176d062b31.swap: Unit is bound to inactive unit dev-mapper-luks\x2d8db85dcf\x2d6230\x2d4e88\x2d940d\x2dba176d062b31.device. Stopping, too.
home.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device. Stopping, too.
boot.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-56c56bfd\x2d93f0\x2d48fb\x2dbc4b\x2d90aa67144ea5.device. Stopping, too.

When running with debug on, it's pretty obvious what is happening:

home.mount: Changed dead -> mounted
home.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device. Stopping, too.
home.mount: Trying to enqueue job home.mount/stop/fail
home.mount: Installed new job home.mount/stop as 27
home.mount: Enqueued job home.mount/stop as 27
...
dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device: Installed new job dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device/start as 47
dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device: Changed dead -> plugged
dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device: Job dev-disk-by\x2duuid-75751556\x2d6e31\x2d438b\x2d99c9\x2dd626330d9a1b.device/start finished, result=done
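
The mount unit is coldplugged into its mounted state and unit_check_binds_to() runs before the .device unit it is bound to via BindsTo= has itself been coldplugged, so the device still reports as inactive and a spurious stop job is enqueued. Here is a minimal sketch of the pre-fix check (names follow src/core/unit.c of this era, but the body is trimmed to the logic under discussion, not the exact upstream source):

static void unit_check_binds_to(Unit *u) {
        bool stop = false;
        Unit *other;
        Iterator i;

        SET_FOREACH(other, u->dependencies[UNIT_BINDS_TO], i) {
                if (other->job)
                        continue;

                /* Coldplug race: when u is e.g. home.mount, this check can
                 * run before the bound-to .device unit has been coldplugged.
                 * The device is still "dead" at that point, so the test
                 * below passes and a stop job is wrongly enqueued. */
                if (!UNIT_IS_INACTIVE_OR_FAILED(unit_active_state(other)))
                        continue;

                stop = true;
                break;
        }

        if (stop) {
                log_unit_info(u, "Unit is bound to inactive unit %s. Stopping, too.", other->id);
                /* ...a stop job for u is enqueued here via manager_add_job()... */
        }
}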

Fixes #2206, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=808151.
commit 13ddc3fc2b (parent ad1f3fe6a8)
Author: Zbigniew Jędrzejewski-Szmek, 2017-07-11 04:45:03 -04:00
Committer: Lennart Poettering

src/core/unit.c
@@ -1832,6 +1832,10 @@ static void unit_check_binds_to(Unit *u) {
                 if (other->job)
                         continue;
 
+                if (!other->coldplugged)
+                        /* We might yet create a job for the other unit… */
+                        continue;
+
                 if (!UNIT_IS_INACTIVE_OR_FAILED(unit_active_state(other)))
                         continue;
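
The added other->coldplugged guard makes unit_check_binds_to() skip the stop logic for units whose BindsTo= dependency has not been coldplugged yet: as the new comment notes, a start job for the other unit may still be created later during coldplug (as indeed happens for the .device units in the log above), and once coldplugging is complete the check behaves exactly as before.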