mirror of https://github.com/systemd/systemd-stable.git synced 2024-12-24 21:34:08 +03:00

core: split $MONITOR_METADATA and return it only if a single unit triggers OnFailure/OnSuccess

Remove the list logic, and simply skip passing the metadata if more than one
unit triggered an OnFailure=/OnSuccess= handler.
Instead of a single env var that handlers have to parse, provide each item
as its own variable.

Fixes https://github.com/systemd/systemd/issues/22370
Luca Boccassi 2022-02-09 11:48:30 +00:00
parent bd844eae23
commit 95c81c55b2
10 changed files with 163 additions and 317 deletions
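
The consumer-side effect, sketched roughly below: before this change a handler had to split the single $MONITOR_METADATA list itself; afterwards each field arrives as its own variable, and only when exactly one unit references the handler. The variable formats come from the documentation changes further down; the script itself is illustrative and not part of the commit.

#!/usr/bin/env bash
# Sketch of a handler's ExecStart= script, before and after this change.

# Before: one variable, a ';'-separated list of ','-separated KEY=VALUE fields,
# possibly describing several triggering units at once.
IFS=';' read -ra entries <<< "${MONITOR_METADATA:-}"
for entry in "${entries[@]}"; do
    echo "triggering unit metadata: $entry"
done

# After: one variable per field, set only when exactly one unit references
# this handler via OnFailure=/OnSuccess=.
echo "result=${MONITOR_SERVICE_RESULT:-} code=${MONITOR_EXIT_CODE:-} status=${MONITOR_EXIT_STATUS:-}"
echo "invocation=${MONITOR_INVOCATION_ID:-} unit=${MONITOR_UNIT:-}"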

View File: man/systemd.exec.xml

@@ -3404,7 +3404,7 @@ StandardInputData=SWNrIHNpdHplIGRhIHVuJyBlc3NlIEtsb3BzLAp1ZmYgZWVtYWwga2xvcHAncy
        <varlistentry>
          <term><varname>$SERVICE_RESULT</varname></term>

-         <listitem><para>Only defined for the service unit type, this environment variable is passed to all
+         <listitem><para>Only used for the service unit type. This environment variable is passed to all
          <varname>ExecStop=</varname> and <varname>ExecStopPost=</varname> processes, and encodes the service
          "result". Currently, the following values are defined:</para>
@@ -3472,7 +3472,7 @@ StandardInputData=SWNrIHNpdHplIGRhIHVuJyBlc3NlIEtsb3BzLAp1ZmYgZWVtYWwga2xvcHAncy
          <term><varname>$EXIT_CODE</varname></term>
          <term><varname>$EXIT_STATUS</varname></term>

-         <listitem><para>Only defined for the service unit type, these environment variables are passed to all
+         <listitem><para>Only defined for the service unit type. These environment variables are passed to all
          <varname>ExecStop=</varname>, <varname>ExecStopPost=</varname> processes and contain exit status/code
          information of the main process of the service. For the precise definition of the exit code and status, see
          <citerefentry><refentrytitle>wait</refentrytitle><manvolnum>2</manvolnum></citerefentry>. <varname>$EXIT_CODE</varname>
@@ -3585,38 +3585,28 @@ StandardInputData=SWNrIHNpdHplIGRhIHVuJyBlc3NlIEtsb3BzLAp1ZmYgZWVtYWwga2xvcHAncy
        </varlistentry>

        <varlistentry>
-         <term><varname>$MONITOR_METADATA</varname></term>
+         <term><varname>$MONITOR_SERVICE_RESULT</varname></term>
+         <term><varname>$MONITOR_EXIT_CODE</varname></term>
+         <term><varname>$MONITOR_EXIT_STATUS</varname></term>
+         <term><varname>$MONITOR_INVOCATION_ID</varname></term>
+         <term><varname>$MONITOR_UNIT</varname></term>

-         <listitem><para>Only defined for the service unit type, this environment variable is passed to all
-         <varname>ExecStart=</varname> and <varname>ExecStartPre=</varname> processes which run in services
-         triggered by <varname>OnFailure=</varname> or <varname>OnSuccess=</varname> dependencies.</para>
-
-         <para>The contents of this variable consists of a semi-colon separated list of metadata fields
-         associated with the triggering service. For each service which triggered the
-         <varname>OnFailure=</varname> or <varname>OnSuccess=</varname> dependency the following fields
-         will be set:</para>
-
-         <itemizedlist>
-           <listitem><para><constant>SERVICE_RESULT</constant></para></listitem>
-           <listitem><para><constant>EXIT_CODE</constant></para></listitem>
-           <listitem><para><constant>EXIT_STATUS</constant></para></listitem>
-           <listitem><para><constant>INVOCATION_ID</constant></para></listitem>
-           <listitem><para><constant>UNIT</constant></para></listitem>
-         </itemizedlist>
-
-         <para>The fields <constant>SERVICE_RESULT</constant>, <constant>EXIT_CODE</constant> and
-         <constant>EXIT_STATUS</constant> may take the same values that are allowed when set for
-         <varname>ExecStop=</varname> and <varname>ExecStopPost=</varname> processes. The fields
-         <constant>INVOCATION_ID</constant> and <constant>UNIT</constant> are the invocation id and unit
-         name of the service which triggered the dependency. Each field is comma separated, i.e.</para>
-
-         <programlisting>
-SERVICE_RESULT=result-string,EXIT_CODE=exit-code,EXIT_STATUS=exit-status,INVOCATION_ID=invocation-id,UNIT=triggering.service
-         </programlisting>
-         </listitem>
+         <listitem><para>Only defined for the service unit type. These environment variables are passed to
+         all <varname>ExecStart=</varname> and <varname>ExecStartPre=</varname> processes which run in
+         services triggered by <varname>OnFailure=</varname> or <varname>OnSuccess=</varname> dependencies.
+         </para>
+
+         <para>Variables <varname>$MONITOR_SERVICE_RESULT</varname>, <varname>$MONITOR_EXIT_CODE</varname>
+         and <varname>$MONITOR_EXIT_STATUS</varname> take the same values as for
+         <varname>ExecStop=</varname> and <varname>ExecStopPost=</varname> processes. Variables
+         <varname>$MONITOR_INVOCATION_ID</varname> and <varname>$MONITOR_UNIT</varname> are set to the
+         invocation id and unit name of the service which triggered the dependency.</para>
+
+         <para>Note that when multiple services trigger the same unit, these variables will
+         <emphasis>not</emphasis> be passed. Consider using a template handler unit for that case instead:
+         <literal>OnFailure=<replaceable>handler</replaceable>@%n.service</literal> for non-templated units,
+         or <literal>OnFailure=<replaceable>handler</replaceable>@%p-%i.service</literal> for templated
+         units.</para></listitem>

        </varlistentry>

        <varlistentry>
@@ -4060,7 +4050,7 @@ SERVICE_RESULT=result-string,EXIT_CODE=exit-code,EXIT_STATUS=exit-status,INVOCAT
    <title>Examples</title>

    <example>
-     <title><varname>$MONITOR_METADATA</varname> usage</title>
+     <title><varname>$MONITOR_<replaceable>*</replaceable></varname> usage</title>

      <para>A service <filename index="false">myfailer.service</filename> which can trigger an
      <varname>OnFailure=</varname> dependency.</para>
@@ -4094,33 +4084,31 @@ ExecStart=/bin/mysecondprogram
Description=Acts on service failing or succeeding

[Service]
-ExecStart=/bin/bash -c "echo $MONITOR_METADATA"
+ExecStart=/bin/bash -c "echo $MONITOR_SERVICE_RESULT $MONITOR_EXIT_CODE $MONITOR_EXIT_STATUS $MONITOR_INVOCATION_ID $MONITOR_UNIT"
</programlisting>

      <para>If <filename index="false">myfailer.service</filename> were to run and exit in failure,
      then <filename index="false">myhandler.service</filename> would be triggered and the
-     <varname>$MONITOR_METADATA</varname> variable would be set as follows:</para>
+     monitor variables would be set as follows:</para>

      <programlisting>
-MONITOR_METADATA=SERVICE_RESULT=result-string,EXIT_CODE=exit-code,EXIT_STATUS=exit-status,INVOCATION_ID=invocation-id,UNIT=myfailer.service
+MONITOR_SERVICE_RESULT=exit-code
+MONITOR_EXIT_CODE=exited
+MONITOR_EXIT_STATUS=1
+MONITOR_INVOCATION_ID=cc8fdc149b2b4ca698d4f259f4054236
+MONITOR_UNIT=myfailer.service
</programlisting>

      <para>If <filename index="false">mysuccess.service</filename> were to run and exit in success,
      then <filename index="false">myhandler.service</filename> would be triggered and the
-     <varname>$MONITOR_METADATA</varname> variable would be set as follows:</para>
+     monitor variables would be set as follows:</para>

      <programlisting>
-MONITOR_METADATA=SERVICE_RESULT=result-string,EXIT_CODE=exit-code,EXIT_STATUS=exit-status,INVOCATION_ID=invocation-id,UNIT=mysuccess.service
+MONITOR_SERVICE_RESULT=success
+MONITOR_EXIT_CODE=exited
+MONITOR_EXIT_STATUS=0
+MONITOR_INVOCATION_ID=6ab9af147b8c4a3ebe36e7a5f8611697
+MONITOR_UNIT=mysuccess.service
</programlisting>
-
-     <para>If <filename index="false">myfailer.service</filename> and <filename index="false">mysuccess.service</filename> were to run and exit,
-     there is a chance that the triggered dependency start job might be merged. Thus only a single invocation of
-     <filename index="false">myhandler.service</filename> would be triggered. In this case the <varname>$MONITOR_METADATA</varname> variable
-     would be a list containing exit metadata for both of <filename index="false">myfailer.service</filename>
-     and <filename index="false">mysuccess.service</filename>.</para>
-
-     <programlisting>
-MONITOR_METADATA=SERVICE_RESULT=result-string,EXIT_CODE=exit-code,EXIT_STATUS=exit-status,INVOCATION_ID=invocation-id,UNIT=myfailer.service;SERVICE_RESULT=result-string,EXIT_CODE=exit-code,EXIT_STATUS=exit-status,INVOCATION_ID=invocation-id,UNIT=mysuccess.service
-     </programlisting>

    </example>
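
The new note in the man page recommends a template handler when several units would otherwise point OnFailure= at the same unit, since that is exactly the case in which the MONITOR_* variables are withheld. A minimal sketch of that pattern, written in the same heredoc style as the test below; paths and unit names are illustrative and not part of this commit:

# Hypothetical template handler: one instance per triggering unit, so every
# instance has exactly one trigger source and the MONITOR_* variables stay set.
cat >/run/systemd/system/handler@.service <<'EOF'
[Unit]
Description=Failure handler for %i

[Service]
Type=oneshot
ExecStart=/bin/bash -c 'echo "%i: $MONITOR_SERVICE_RESULT $MONITOR_EXIT_STATUS $MONITOR_UNIT"'
EOF

# Each monitored unit references its own handler instance, e.g. via a drop-in.
mkdir -p /run/systemd/system/myfailer.service.d
cat >/run/systemd/system/myfailer.service.d/10-onfailure.conf <<'EOF'
[Unit]
OnFailure=handler@%n.service
EOF

systemctl daemon-reload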

View File: src/core/job.c

@@ -99,9 +99,6 @@ Job* job_free(Job *j) {
        assert(!j->subject_list);
        assert(!j->object_list);

-       while (!LIST_IS_EMPTY(j->triggered_by))
-               LIST_POP(triggered_by, j->triggered_by);
-
        job_unlink(j);

        sd_bus_track_unref(j->bus_track);
@@ -110,13 +107,6 @@ Job* job_free(Job *j) {
        return mfree(j);
}

-void job_add_triggering_unit(Job *j, Unit *u) {
-       assert(j);
-       assert(u);
-
-       LIST_APPEND(triggered_by, j->triggered_by, u);
-}
-
static void job_set_state(Job *j, JobState state) {
        assert(j);
        assert(state >= 0);
@@ -197,8 +187,6 @@ static void job_merge_into_installed(Job *j, Job *other) {
        j->irreversible = j->irreversible || other->irreversible;
        j->ignore_order = j->ignore_order || other->ignore_order;
-
-       if (other->triggered_by)
-               LIST_JOIN(triggered_by, j->triggered_by, other->triggered_by);
}

Job* job_install(Job *j) {

View File: src/core/job.h

@@ -124,8 +124,6 @@ struct Job {
        LIST_HEAD(JobDependency, subject_list);
        LIST_HEAD(JobDependency, object_list);

-       LIST_HEAD(Unit, triggered_by);
-
        /* Used for graph algs as a "I have been here" marker */
        Job* marker;
        unsigned generation;
@@ -246,5 +244,3 @@ JobResult job_result_from_string(const char *s) _pure_;
const char* job_type_to_access_method(JobType t);

int job_compare(Job *a, Job *b, UnitDependencyAtom assume_dep);
-
-void job_add_triggering_unit(Job *j, Unit *u);

View File: src/core/service.c

@@ -1447,81 +1447,37 @@ static bool service_exec_needs_notify_socket(Service *s, ExecFlags flags) {
        return s->notify_access != NOTIFY_NONE;
}

-static int service_create_monitor_md_env(Job *j, char **ret) {
-       _cleanup_free_ char *var = NULL;
-       const char *list_delim = ";";
-       bool first = true;
-       Unit *tu;
-
-       assert(j);
-       assert(ret);
-
-       /* Create an environment variable 'MONITOR_METADATA', if creation is successful
-        * a pointer to it is returned via ret.
-        *
-        * This variable contains a space separated set of fields which relate to
-        * the service(s) which triggered job 'j'. Job 'j' is the JOB_START job for
-        * an OnFailure= or OnSuccess= dependency. Format of the MONITOR_METADATA
-        * variable is as follows:
-        *
-        * MONITOR_METADATA="SERVICE_RESULT=<result-string0>,EXIT_CODE=<exit-code0>,EXIT_STATUS=<exit-status0>,
-        *                   INVOCATION_ID=<id>,UNIT=<triggering-unit0.service>;
-        *                   SERVICE_RESULT=<result-stringN>,EXIT_CODE=<exit-codeN>,EXIT_STATUS=<exit-statusN>,
-        *                   INVOCATION_ID=<id>,UNIT=<triggering-unitN.service>"
-        *
-        * Multiple results may be passed as in the above example if jobs are merged, i.e.
-        * some services a and b contain an OnFailure= or OnSuccess= dependency on the same
-        * service.
-        *
-        * For example:
-        *
-        * MONITOR_METADATA="SERVICE_RESULT=exit-code,EXIT_CODE=exited,EXIT_STATUS=1,INVOCATION_ID=02dd868af2f344b18edaf74b618b2f90,UNIT=failure.service;
-        *                   SERVICE_RESULT=exit-code,EXIT_CODE=exited,EXIT_STATUS=1,INVOCATION_ID=80cb228bd7344f77a090eda603a3cfe2,UNIT=failure2.service"
-        */
-
-       LIST_FOREACH(triggered_by, tu, j->triggered_by) {
-               Service *env_source = SERVICE(tu);
-               int r;
-
-               if (!env_source)
-                       continue;
-
-               /* Add the environment variable name first. */
-               if (first && !strextend(&var, "MONITOR_METADATA="))
-                       return -ENOMEM;
-
-               if (!strextend(&var, !first ? list_delim : "", "SERVICE_RESULT=", service_result_to_string(env_source->result)))
-                       return -ENOMEM;
-
-               first = false;
-
-               if (env_source->main_exec_status.pid > 0 &&
-                   dual_timestamp_is_set(&env_source->main_exec_status.exit_timestamp)) {
-                       if (!strextend(&var, ",EXIT_CODE=", sigchld_code_to_string(env_source->main_exec_status.code)))
-                               return -ENOMEM;
-
-                       if (env_source->main_exec_status.code == CLD_EXITED) {
-                               r = strextendf(&var, ",EXIT_STATUS=%i",
-                                              env_source->main_exec_status.status);
-                               if (r < 0)
-                                       return r;
-                       } else if (!strextend(&var, ",EXIT_STATUS=", signal_to_string(env_source->main_exec_status.status)))
-                               return -ENOMEM;
-               }
-
-               if (!sd_id128_is_null(UNIT(env_source)->invocation_id)) {
-                       r = strextendf(&var, ",INVOCATION_ID=" SD_ID128_FORMAT_STR,
-                                      SD_ID128_FORMAT_VAL(UNIT(env_source)->invocation_id));
-                       if (r < 0)
-                               return r;
-               }
-
-               if (!strextend(&var, ",UNIT=", UNIT(env_source)->id))
-                       return -ENOMEM;
-       }
-
-       *ret = TAKE_PTR(var);
-       return 0;
+static Service *service_get_triggering_service(Service *s) {
+       Unit *candidate = NULL, *other;
+
+       assert(s);
+
+       /* Return the service which triggered service 's', this means dependency
+        * types which include the UNIT_ATOM_ON_{FAILURE,SUCCESS}_OF atoms.
+        *
+        * N.B. if there are multiple services which could trigger 's' via OnFailure=
+        * or OnSuccess= then we return NULL. This is since we don't know from which
+        * one to propagate the exit status. */
+
+       UNIT_FOREACH_DEPENDENCY(other, UNIT(s), UNIT_ATOM_ON_FAILURE_OF) {
+               if (candidate) {
+                       log_unit_debug(UNIT(s), "multiple trigger source candidates for exit status propagation, skipping.");
+                       return NULL;
+               }
+               candidate = other;
+       }
+
+       UNIT_FOREACH_DEPENDENCY(other, UNIT(s), UNIT_ATOM_ON_SUCCESS_OF) {
+               if (candidate) {
+                       log_unit_debug(UNIT(s), "multiple trigger source candidates for exit status propagation, skipping.");
+                       return NULL;
+               }
+               candidate = other;
+       }
+
+       return SERVICE(candidate);
}

static int service_spawn(
@@ -1588,7 +1544,7 @@ static int service_spawn(
        if (r < 0)
                return r;

-       our_env = new0(char*, 10);
+       our_env = new0(char*, 12);
        if (!our_env)
                return -ENOMEM;

@@ -1645,30 +1601,44 @@ static int service_spawn(
                }
        }

+       Service *env_source = NULL;
+       const char *monitor_prefix;
        if (flags & EXEC_SETENV_RESULT) {
-               if (asprintf(our_env + n_env++, "SERVICE_RESULT=%s", service_result_to_string(s->result)) < 0)
+               env_source = s;
+               monitor_prefix = "";
+       } else if (flags & EXEC_SETENV_MONITOR_RESULT) {
+               env_source = service_get_triggering_service(s);
+               monitor_prefix = "MONITOR_";
+       }
+
+       if (env_source) {
+               if (asprintf(our_env + n_env++, "%sSERVICE_RESULT=%s", monitor_prefix, service_result_to_string(env_source->result)) < 0)
                        return -ENOMEM;

-               if (s->main_exec_status.pid > 0 &&
-                   dual_timestamp_is_set(&s->main_exec_status.exit_timestamp)) {
-                       if (asprintf(our_env + n_env++, "EXIT_CODE=%s", sigchld_code_to_string(s->main_exec_status.code)) < 0)
+               if (env_source->main_exec_status.pid > 0 &&
+                   dual_timestamp_is_set(&env_source->main_exec_status.exit_timestamp)) {
+                       if (asprintf(our_env + n_env++, "%sEXIT_CODE=%s", monitor_prefix, sigchld_code_to_string(env_source->main_exec_status.code)) < 0)
                                return -ENOMEM;

-                       if (s->main_exec_status.code == CLD_EXITED)
-                               r = asprintf(our_env + n_env++, "EXIT_STATUS=%i", s->main_exec_status.status);
+                       if (env_source->main_exec_status.code == CLD_EXITED)
+                               r = asprintf(our_env + n_env++, "%sEXIT_STATUS=%i", monitor_prefix, env_source->main_exec_status.status);
                        else
-                               r = asprintf(our_env + n_env++, "EXIT_STATUS=%s", signal_to_string(s->main_exec_status.status));
+                               r = asprintf(our_env + n_env++, "%sEXIT_STATUS=%s", monitor_prefix, signal_to_string(env_source->main_exec_status.status));
+
                        if (r < 0)
                                return -ENOMEM;
                }
-       } else if (flags & EXEC_SETENV_MONITOR_RESULT) {
-               Job *j = UNIT(s)->job;
-               if (j) {
-                       r = service_create_monitor_md_env(j, our_env + n_env++);
-                       if (r < 0)
-                               return r;
+
+               if (env_source != s) {
+                       if (!sd_id128_is_null(UNIT(env_source)->invocation_id)) {
+                               r = asprintf(our_env + n_env++, "%sINVOCATION_ID=" SD_ID128_FORMAT_STR,
+                                            monitor_prefix, SD_ID128_FORMAT_VAL(UNIT(env_source)->invocation_id));
+                               if (r < 0)
+                                       return -ENOMEM;
+                       }
+
+                       if (asprintf(our_env + n_env++, "%sUNIT=%s", monitor_prefix, UNIT(env_source)->id) < 0)
+                               return -ENOMEM;
                }
        }
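
One consequence of service_get_triggering_service() above is worth spelling out: it counts the handler's incoming OnFailure=/OnSuccess= references rather than checking which unit actually fired, so as soon as two units name the same handler the MONITOR_* variables are omitted entirely, matching the man page note. A rough way to observe that, with illustrative unit names and assuming a pre-existing shared-handler.service:

# Hypothetical check: two units share one OnFailure= handler, so the handler
# runs with no MONITOR_* variables in its environment at all.
systemd-run --unit=fail-a --property=OnFailure=shared-handler.service /bin/false
systemd-run --unit=fail-b --property=OnFailure=shared-handler.service /bin/false

# Inside shared-handler.service one would then expect, per the change above:
#   [ -z "${MONITOR_UNIT:-}" ] && [ -z "${MONITOR_SERVICE_RESULT:-}" ]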

View File: src/core/unit-dependency-atom.c

@@ -82,13 +82,11 @@ static const UnitDependencyAtom atom_map[_UNIT_DEPENDENCY_MAX] = {
        [UNIT_PROPAGATES_STOP_TO]      = UNIT_ATOM_RETROACTIVE_STOP_ON_STOP |
                                         UNIT_ATOM_PROPAGATE_STOP,
-       [UNIT_ON_FAILURE]              = UNIT_ATOM_ON_FAILURE |
-                                        UNIT_ATOM_BACK_REFERENCE_IMPLIED,
-       [UNIT_ON_SUCCESS]              = UNIT_ATOM_ON_SUCCESS |
-                                        UNIT_ATOM_BACK_REFERENCE_IMPLIED,

        /* These are simple dependency types: they consist of a single atom only */
+       [UNIT_ON_FAILURE]              = UNIT_ATOM_ON_FAILURE,
+       [UNIT_ON_SUCCESS]              = UNIT_ATOM_ON_SUCCESS,
+       [UNIT_ON_FAILURE_OF]           = UNIT_ATOM_ON_FAILURE_OF,
+       [UNIT_ON_SUCCESS_OF]           = UNIT_ATOM_ON_SUCCESS_OF,
        [UNIT_BEFORE]                  = UNIT_ATOM_BEFORE,
        [UNIT_AFTER]                   = UNIT_ATOM_AFTER,
        [UNIT_TRIGGERS]                = UNIT_ATOM_TRIGGERS,
@@ -104,8 +102,6 @@ static const UnitDependencyAtom atom_map[_UNIT_DEPENDENCY_MAX] = {
         * things discoverable/debuggable as they are the inverse dependencies to some of the above. As they
         * have no effect of their own, they all map to no atoms at all, i.e. the value 0. */
        [UNIT_RELOAD_PROPAGATED_FROM]  = 0,
-       [UNIT_ON_SUCCESS_OF]           = 0,
-       [UNIT_ON_FAILURE_OF]           = 0,
        [UNIT_STOP_PROPAGATED_FROM]    = 0,
};
@@ -200,17 +196,19 @@ UnitDependency unit_dependency_from_unique_atom(UnitDependencyAtom atom) {
        case UNIT_ATOM_PROPAGATE_STOP_FAILURE:
                return UNIT_CONFLICTED_BY;

-       case UNIT_ATOM_ON_FAILURE |
-            UNIT_ATOM_BACK_REFERENCE_IMPLIED:
+       /* And now, the simple ones */
+
        case UNIT_ATOM_ON_FAILURE:
                return UNIT_ON_FAILURE;

-       case UNIT_ATOM_ON_SUCCESS |
-            UNIT_ATOM_BACK_REFERENCE_IMPLIED:
        case UNIT_ATOM_ON_SUCCESS:
                return UNIT_ON_SUCCESS;

-       /* And now, the simple ones */
+       case UNIT_ATOM_ON_SUCCESS_OF:
+               return UNIT_ON_SUCCESS_OF;
+
+       case UNIT_ATOM_ON_FAILURE_OF:
+               return UNIT_ON_FAILURE_OF;

        case UNIT_ATOM_BEFORE:
                return UNIT_BEFORE;

View File: src/core/unit-dependency-atom.h

@@ -66,27 +66,22 @@ typedef enum UnitDependencyAtom {
        /* Recheck default target deps on other units (which are target units) */
        UNIT_ATOM_DEFAULT_TARGET_DEPENDENCIES = UINT64_C(1) << 21,

-       /* Dependencies which include this atom automatically get a reverse
-        * REFERENCES/REFERENCED_BY dependency. */
-       UNIT_ATOM_BACK_REFERENCE_IMPLIED      = UINT64_C(1) << 22,
-
-       /* Trigger a dependency on successful service exit. */
-       UNIT_ATOM_ON_SUCCESS                  = UINT64_C(1) << 23,
-
-       /* Trigger a dependency on unsuccessful service exit. */
-       UNIT_ATOM_ON_FAILURE                  = UINT64_C(1) << 24,
-
        /* The remaining atoms map 1:1 to the equally named high-level deps */
-       UNIT_ATOM_BEFORE                      = UINT64_C(1) << 25,
-       UNIT_ATOM_AFTER                       = UINT64_C(1) << 26,
-       UNIT_ATOM_TRIGGERS                    = UINT64_C(1) << 27,
-       UNIT_ATOM_TRIGGERED_BY                = UINT64_C(1) << 28,
-       UNIT_ATOM_PROPAGATES_RELOAD_TO        = UINT64_C(1) << 29,
-       UNIT_ATOM_JOINS_NAMESPACE_OF          = UINT64_C(1) << 30,
-       UNIT_ATOM_REFERENCES                  = UINT64_C(1) << 31,
-       UNIT_ATOM_REFERENCED_BY               = UINT64_C(1) << 32,
-       UNIT_ATOM_IN_SLICE                    = UINT64_C(1) << 33,
-       UNIT_ATOM_SLICE_OF                    = UINT64_C(1) << 34,
-       _UNIT_DEPENDENCY_ATOM_MAX             = (UINT64_C(1) << 35) - 1,
+       UNIT_ATOM_ON_FAILURE                  = UINT64_C(1) << 22,
+       UNIT_ATOM_ON_SUCCESS                  = UINT64_C(1) << 23,
+       UNIT_ATOM_ON_FAILURE_OF               = UINT64_C(1) << 24,
+       UNIT_ATOM_ON_SUCCESS_OF               = UINT64_C(1) << 25,
+       UNIT_ATOM_BEFORE                      = UINT64_C(1) << 26,
+       UNIT_ATOM_AFTER                       = UINT64_C(1) << 27,
+       UNIT_ATOM_TRIGGERS                    = UINT64_C(1) << 28,
+       UNIT_ATOM_TRIGGERED_BY                = UINT64_C(1) << 29,
+       UNIT_ATOM_PROPAGATES_RELOAD_TO        = UINT64_C(1) << 30,
+       UNIT_ATOM_JOINS_NAMESPACE_OF          = UINT64_C(1) << 31,
+       UNIT_ATOM_REFERENCES                  = UINT64_C(1) << 32,
+       UNIT_ATOM_REFERENCED_BY               = UINT64_C(1) << 33,
+       UNIT_ATOM_IN_SLICE                    = UINT64_C(1) << 34,
+       UNIT_ATOM_SLICE_OF                    = UINT64_C(1) << 35,
+       _UNIT_DEPENDENCY_ATOM_MAX             = (UINT64_C(1) << 36) - 1,
        _UNIT_DEPENDENCY_ATOM_INVALID         = -EINVAL,
} UnitDependencyAtom;

View File: src/core/unit.c

@@ -2222,24 +2222,17 @@ void unit_start_on_failure(
        UNIT_FOREACH_DEPENDENCY(other, u, atom) {
                _cleanup_(sd_bus_error_free) sd_bus_error error = SD_BUS_ERROR_NULL;
-               Job *job = NULL;

                if (!logged) {
                        log_unit_info(u, "Triggering %s dependencies.", dependency_name);
                        logged = true;
                }

-               r = manager_add_job(u->manager, JOB_START, other, job_mode, NULL, &error, &job);
+               r = manager_add_job(u->manager, JOB_START, other, job_mode, NULL, &error, NULL);
                if (r < 0)
                        log_unit_warning_errno(
                                        u, r, "Failed to enqueue %s job, ignoring: %s",
                                        dependency_name, bus_error_message(&error, r));
-               else if (job)
-                       /* u will be kept pinned since both UNIT_ON_FAILURE and UNIT_ON_SUCCESS includes
-                        * UNIT_ATOM_BACK_REFERENCE_IMPLIED. We save the triggering unit here since we
-                        * want to be able to reference it when we come to run the OnFailure= or OnSuccess=
-                        * dependency. */
-                       job_add_triggering_unit(job, u);
        }

        if (logged)
@@ -3121,20 +3114,6 @@ int unit_add_dependency(
                        noop = false;
        }

-       if (FLAGS_SET(a, UNIT_ATOM_BACK_REFERENCE_IMPLIED)) {
-               r = unit_add_dependency_hashmap(&other->dependencies, UNIT_REFERENCES, u, 0, mask);
-               if (r < 0)
-                       return r;
-               if (r)
-                       noop = false;
-
-               r = unit_add_dependency_hashmap(&u->dependencies, UNIT_REFERENCED_BY, other, 0, mask);
-               if (r < 0)
-                       return r;
-               if (r)
-                       noop = false;
-       }
-
        if (add_reference) {
                r = unit_add_dependency_hashmap(&u->dependencies, UNIT_REFERENCES, other, mask, 0);
                if (r < 0)

View File: src/core/unit.h

@@ -242,9 +242,6 @@ typedef struct Unit {
        /* Queue of units that have a BindTo= dependency on some other unit, and should possibly be shut down */
        LIST_FIELDS(Unit, stop_when_bound_queue);

-       /* Queue of units which have triggered an OnFailure= or OnSuccess= dependency job. */
-       LIST_FIELDS(Unit, triggered_by);
-
        /* PIDs we keep an eye on. Note that a unit might have many
         * more, but these are the ones we care enough about to
         * process SIGCHLD for */

View File: src/test/test-engine.c

@@ -223,6 +223,7 @@ int main(int argc, char *argv[]) {
        assert_se(unit_add_dependency_by_name(stub, UNIT_AFTER, SPECIAL_ROOT_SLICE, true, UNIT_DEPENDENCY_FILE) >= 0);
        assert_se(unit_add_dependency_by_name(stub, UNIT_REQUIRES, "non-existing.mount", true, UNIT_DEPENDENCY_FILE) >= 0);
        assert_se(unit_add_dependency_by_name(stub, UNIT_ON_FAILURE, "non-existing-on-failure.target", true, UNIT_DEPENDENCY_FILE) >= 0);
+       assert_se(unit_add_dependency_by_name(stub, UNIT_ON_SUCCESS, "non-existing-on-success.target", true, UNIT_DEPENDENCY_FILE) >= 0);

        log_info("/* Merging a+stub, dumps before */");
        unit_dump(a, stderr, NULL);
@@ -237,7 +238,13 @@ int main(int argc, char *argv[]) {
        assert_se(unit_has_dependency(a, UNIT_ATOM_PULL_IN_START, manager_get_unit(m, "non-existing.mount")));
        assert_se(unit_has_dependency(a, UNIT_ATOM_RETROACTIVE_START_REPLACE, manager_get_unit(m, "non-existing.mount")));
        assert_se(unit_has_dependency(a, UNIT_ATOM_ON_FAILURE, manager_get_unit(m, "non-existing-on-failure.target")));
+       assert_se(unit_has_dependency(manager_get_unit(m, "non-existing-on-failure.target"), UNIT_ATOM_ON_FAILURE_OF, a));
+       assert_se(unit_has_dependency(a, UNIT_ATOM_ON_SUCCESS, manager_get_unit(m, "non-existing-on-success.target")));
+       assert_se(unit_has_dependency(manager_get_unit(m, "non-existing-on-success.target"), UNIT_ATOM_ON_SUCCESS_OF, a));
        assert_se(!unit_has_dependency(a, UNIT_ATOM_ON_FAILURE, manager_get_unit(m, "basic.target")));
+       assert_se(!unit_has_dependency(a, UNIT_ATOM_ON_SUCCESS, manager_get_unit(m, "basic.target")));
+       assert_se(!unit_has_dependency(a, UNIT_ATOM_ON_FAILURE_OF, manager_get_unit(m, "basic.target")));
+       assert_se(!unit_has_dependency(a, UNIT_ATOM_ON_SUCCESS_OF, manager_get_unit(m, "basic.target")));
        assert_se(!unit_has_dependency(a, UNIT_ATOM_PROPAGATES_RELOAD_TO, manager_get_unit(m, "non-existing-on-failure.target")));

        assert_se(unit_has_name(a, "a.service"));

View File: test/units/testsuite-68.sh

@@ -36,16 +36,6 @@ OnFailure=testservice-failure-exit-handler-68.service
ExecStart=/bin/bash -c "exit 1"
EOF

-# Another service which triggers testservice-failure-exit-handler-68.service
-cat >/run/systemd/system/testservice-failure-68-additional.service <<EOF
-[Unit]
-Description=TEST-68-PROPAGATE-EXIT-STATUS Additional service with OnFailure= trigger
-OnFailure=testservice-failure-exit-handler-68.service
-
-[Service]
-ExecStart=/bin/bash -c "exit 1"
-EOF
-
# Trigger testservice-success-exit-handler-68.service
cat >/run/systemd/system/testservice-success-68.service <<EOF
[Unit]
@@ -56,67 +46,37 @@ OnSuccess=testservice-success-exit-handler-68.service
ExecStart=/bin/bash -c "exit 0"
EOF

-# Trigger testservice-success-exit-handler-68.service
-cat >/run/systemd/system/testservice-success-68-additional.service <<EOF
-[Unit]
-Description=TEST-68-PROPAGATE-EXIT-STATUS Additional service with OnSuccess= trigger
-OnSuccess=testservice-success-exit-handler-68.service
-
-[Service]
-ExecStart=/bin/bash -c "exit 0"
-EOF
-
# Script to check that when an OnSuccess= dependency fires, the correct
-# MONITOR* env variables are passed. This script handles the case where
-# multiple services triggered the unit that calls this script. In this
-# case we need to check the MONITOR_METADATA variable for >= 1 service
-# details since jobs may merge.
+# MONITOR* env variables are passed.
cat >/tmp/check_on_success.sh <<EOF
#!/usr/bin/env bash
set -ex

-echo "MONITOR_METADATA=\$MONITOR_METADATA"
+env

-IFS=';' read -ra ALL_SERVICE_MD <<< "\$MONITOR_METADATA"
-for SERVICE_MD in "\${ALL_SERVICE_MD[@]}"; do
-    IFS=',' read -ra METADATA <<< "\$SERVICE_MD"
-    IFS='=' read -ra SERVICE_RESULT <<< "\${METADATA[0]}"
-    SERVICE_RESULT=\${SERVICE_RESULT[1]}
-    IFS='=' read -ra EXIT_CODE <<< "\${METADATA[1]}"
-    EXIT_CODE=\${EXIT_CODE[1]}
-    IFS='=' read -ra EXIT_STATUS <<< "\${METADATA[2]}"
-    EXIT_STATUS=\${EXIT_STATUS[1]}
-    IFS='=' read -ra INVOCATION_ID <<< "\${METADATA[3]}"
-    INVOCATION_ID=\${INVOCATION_ID[1]}
-    IFS='=' read -ra UNIT <<< "\${METADATA[4]}"
-    UNIT=\${UNIT[1]}
-
-    if [ "\$SERVICE_RESULT" != "success" ]; then
-        echo 'SERVICE_RESULT was "\$SERVICE_RESULT", expected "success"';
-        exit 1;
-    fi
-
-    if [ "\$EXIT_CODE" != "exited" ]; then
-        echo 'EXIT_CODE was "\$EXIT_CODE", expected "exited"';
-        exit 1;
-    fi
-
-    if [ "\$EXIT_STATUS" != "0" ]; then
-        echo 'EXIT_STATUS was "\$EXIT_STATUS", expected "0"';
-        exit 1;
-    fi
-
-    if [ -z "\$INVOCATION_ID" ]; then
-        echo 'INVOCATION_ID unset';
-        exit 1;
-    fi
-
-    if [[ "\$UNIT" != "testservice-success-68.service" && "\$UNIT" != "testservice-success-68-additional.service" && "\$UNIT" != "testservice-transient-success-68.service" ]]; then
-        echo 'UNIT was "\$UNIT", expected "testservice-success-68{-additional,-transient}.service"';
-        exit 1;
-    fi
-done
+if [ "\$MONITOR_SERVICE_RESULT" != "success" ]; then
+    echo "MONITOR_SERVICE_RESULT was \"\$MONITOR_SERVICE_RESULT\", expected \"success\"";
+    exit 1;
+fi
+
+if [ "\$MONITOR_EXIT_CODE" != "exited" ]; then
+    echo "MONITOR_EXIT_CODE was \"\$MONITOR_EXIT_CODE\", expected \"exited\"";
+    exit 1;
+fi
+
+if [ "\$MONITOR_EXIT_STATUS" != "0" ]; then
+    echo "MONITOR_EXIT_STATUS was \"\$MONITOR_EXIT_STATUS\", expected \"0\"";
+    exit 1;
+fi
+
+if [ -z "\$MONITOR_INVOCATION_ID" ]; then
+    echo "MONITOR_INVOCATION_ID unset";
+    exit 1;
+fi
+
+if [[ "\$MONITOR_UNIT" != "testservice-success-68.service" && "\$MONITOR_UNIT" != "testservice-transient-success-68.service" ]]; then
+    echo "MONITOR_UNIT was \"\$MONITOR_UNIT\", expected \"testservice-success-68{-transient}.service\"";
+    exit 1;
+fi

exit 0;
EOF
@@ -133,56 +93,36 @@ ExecStart=/tmp/check_on_success.sh
EOF

# Script to check that when an OnFailure= dependency fires, the correct
-# MONITOR* env variables are passed. This script handles the case where
-# multiple services triggered the unit that calls this script. In this
-# case we need to check the MONITOR_METADATA variable for >=1 service
-# details since jobs may merge.
+# MONITOR* env variables are passed.
cat >/tmp/check_on_failure.sh <<EOF
#!/usr/bin/env bash
set -ex

-echo "MONITOR_METADATA=\$MONITOR_METADATA"
+env

-IFS=';' read -ra ALL_SERVICE_MD <<< "\$MONITOR_METADATA"
-for SERVICE_MD in "\${ALL_SERVICE_MD[@]}"; do
-    IFS=',' read -ra METADATA <<< "\$SERVICE_MD"
-    IFS='=' read -ra SERVICE_RESULT <<< "\${METADATA[0]}"
-    SERVICE_RESULT=\${SERVICE_RESULT[1]}
-    IFS='=' read -ra EXIT_CODE <<< "\${METADATA[1]}"
-    EXIT_CODE=\${EXIT_CODE[1]}
-    IFS='=' read -ra EXIT_STATUS <<< "\${METADATA[2]}"
-    EXIT_STATUS=\${EXIT_STATUS[1]}
-    IFS='=' read -ra INVOCATION_ID <<< "\${METADATA[3]}"
-    INVOCATION_ID=\${INVOCATION_ID[1]}
-    IFS='=' read -ra UNIT <<< "\${METADATA[4]}"
-    UNIT=\${UNIT[1]}
-
-    if [ "\$SERVICE_RESULT" != "exit-code" ]; then
-        echo 'SERVICE_RESULT was "\$SERVICE_RESULT", expected "exit-code"';
-        exit 1;
-    fi
-
-    if [ "\$EXIT_CODE" != "exited" ]; then
-        echo 'EXIT_CODE was "\$EXIT_CODE", expected "exited"';
-        exit 1;
-    fi
-
-    if [ "\$EXIT_STATUS" != "1" ]; then
-        echo 'EXIT_STATUS was "\$EXIT_STATUS", expected "1"';
-        exit 1;
-    fi
-
-    if [ -z "\$INVOCATION_ID" ]; then
-        echo 'INVOCATION_ID unset';
-        exit 1;
-    fi
-
-    if [[ "\$UNIT" != "testservice-failure-68.service" && "\$UNIT" != "testservice-failure-68-additional.service" && "\$UNIT" != "testservice-transient-failure-68.service" ]]; then
-        echo 'UNIT was "\$UNIT", expected "testservice-failure-68{-additional,-transient}.service"';
-        exit 1;
-    fi
-done
+if [ "\$MONITOR_SERVICE_RESULT" != "exit-code" ]; then
+    echo "MONITOR_SERVICE_RESULT was \"\$MONITOR_SERVICE_RESULT\", expected \"exit-code\"";
+    exit 1;
+fi
+
+if [ "\$MONITOR_EXIT_CODE" != "exited" ]; then
+    echo "MONITOR_EXIT_CODE was \"\$MONITOR_EXIT_CODE\", expected \"exited\"";
+    exit 1;
+fi
+
+if [ "\$MONITOR_EXIT_STATUS" != "1" ]; then
+    echo "MONITOR_EXIT_STATUS was \"\$MONITOR_EXIT_STATUS\", expected \"1\"";
+    exit 1;
+fi
+
+if [ -z "\$MONITOR_INVOCATION_ID" ]; then
+    echo "MONITOR_INVOCATION_ID unset";
+    exit 1;
+fi
+
+if [[ "\$MONITOR_UNIT" != "testservice-failure-68.service" && "\$MONITOR_UNIT" != "testservice-transient-failure-68.service" ]]; then
+    echo "MONITOR_UNIT was \"\$MONITOR_UNIT\", expected \"testservice-failure-68{-transient}.service\"";
+    exit 1;
+fi

exit 0;
EOF
@@ -201,16 +141,10 @@ EOF
systemctl daemon-reload

-# The running of the OnFailure= and OnSuccess= jobs for all of these services
-# may result in jobs being merged.
systemctl start testservice-failure-68.service
wait_on_state_or_fail "testservice-failure-exit-handler-68.service" "inactive" "10"
-systemctl start testservice-failure-68-additional.service
-wait_on_state_or_fail "testservice-failure-exit-handler-68.service" "inactive" "10"
systemctl start testservice-success-68.service
wait_on_state_or_fail "testservice-success-exit-handler-68.service" "inactive" "10"
-systemctl start testservice-success-68-additional.service
-wait_on_state_or_fail "testservice-success-exit-handler-68.service" "inactive" "10"

# Test some transient units since these exit very quickly.
systemd-run --unit=testservice-transient-success-68 --property=OnSuccess=testservice-success-exit-handler-68.service /bin/bash -c "exit 0;"
@@ -218,12 +152,6 @@ wait_on_state_or_fail "testservice-success-exit-handler-68.service" "inactive" "
systemd-run --unit=testservice-transient-failure-68 --property=OnFailure=testservice-failure-exit-handler-68.service /bin/bash -c "exit 1;"
wait_on_state_or_fail "testservice-failure-exit-handler-68.service" "inactive" "10"

-# These yield a higher chance of resulting in jobs merging.
-systemctl start testservice-failure-68.service testservice-failure-68-additional.service --no-block
-wait_on_state_or_fail "testservice-failure-exit-handler-68.service" "inactive" "10"
-systemctl start testservice-success-68.service testservice-success-68-additional.service --no-block
-wait_on_state_or_fail "testservice-success-exit-handler-68.service" "inactive" "10"
-
systemd-analyze log-level info

echo OK >/testok