If the system is finally shutting down, it makes no sense to write core
dumps as the last remaining processes are terminated/killed. This is
especially significant in the case of a "force reboot", where all
processes are hit concurrently with SIGTERM and no orderly shutdown of
processes takes place.
This also adds the ability to incorporate arrays into netlink messages
and to determine when a netlink message is too big, used by some generic
netlink protocols.
commit 7715629 (networkd: Fix race condition in [RoutingPolicyRule] handling (#7615))
does not actually fix the race: it is still there when a bridge goes
down and comes back up.
Calling route_configure(), then link_set_routing_policy_rule() and
link_check_ready() creates a race between routing_policy_rule_messages and route_messages.
When the bridge comes up and route_configure() is called again, it finds
itself in the callback function with the link already in LINK_STATE_CONFIGURED, and networkd dies.
Let's handle the routing policy rules first and only then call
route_configure(). This fixes the crash.
Closes #7797
On s390x and ppc64, the permissions of the /dev/kvm device are currently
not right as long as the kvm kernel module has not been loaded yet. The
kernel module is using MODULE_ALIAS("devname:kvm") there, so the module
will be loaded on the first access to /dev/kvm. In that case, udev needs
to apply the permission to the static node already (which was created via
devtmpfs), i.e. we have to specify the option "static_node=kvm" in the
udev rule.
Note that on x86, the kvm kernel modules are loaded early instead (via the
MODULE_DEVICE_TABLE(x86cpu, ...) feature checking), so that the right module
is loaded for the Intel or AMD hypervisor extensions right from the start.
Thus the "static_node=kvm" is not required on x86 - but it also should not
hurt here (and using it here even might be more future proof in case the
module loading is also done delayed there one day), so we just add the new
option to the rule here unconditionally.
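As a rough illustration, the resulting rule could look roughly like the
following; group and mode are placeholders here, not necessarily what is
actually shipped:

    # Apply ownership/permissions to the static devtmpfs node as well, so
    # they are already correct before the kvm module gets loaded on first
    # access to /dev/kvm.
    KERNEL=="kvm", GROUP="kvm", MODE="0660", OPTIONS+="static_node=kvm"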
When /sys is a symlink to the sysfs mount point, e.g. /path/to/sysfs,
device->syspath was set to something like /path/to/sysfs/devices/foo/baz.
This converts the path to /sys/devices/foo/baz.
Fixes #7676.
Currently, if there are two /proc/self/mountinfo entries with the same
mount point path, the mount setup flags computed for the second of
these two entries will overwrite the mount setup flags computed for
the first of these two entries. This is the root cause of issue #7798.
This patch changes mount_setup_existing_unit to prevent the
just_mounted mount setup flag from being overwritten if it is set to
true. This will allow all mount units created from /proc/self/mountinfo
entries to be initialized properly.
Fixes: #7798
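A minimal sketch of the idea, using a hypothetical simplified flags
structure rather than systemd's actual Mount unit:

    #include <stdbool.h>

    /* Hypothetical, simplified stand-in for the mount setup flags. */
    typedef struct MountSetupFlags {
            bool just_mounted;
    } MountSetupFlags;

    /* If the same mount point appears in two /proc/self/mountinfo entries,
     * keep just_mounted once any entry has set it, instead of letting the
     * second entry overwrite it. */
    static void mount_setup_flags_merge(MountSetupFlags *f, bool just_mounted) {
            f->just_mounted = f->just_mounted || just_mounted;
    }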
When loading .netdev files we parse them twice: first we do one parsing
iteration to figure out their "kind", and then we do it again to parse
out the kind's parameters. The first iteration is run with a "short"
NetDev structure that only covers the generic NetDev properties, which
should be enough, as we don't parse the per-kind properties. However,
before this patch we'd still try to destruct the per-kind properties,
which resulted in memory corruption. With this change we distinguish the
two iterations by the state field, so that the destruction only happens
when the state signals that we are running with a full NetDev structure.
Since this is not obvious, let's add a lot of comments.
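A rough sketch of the guard; the names and fields below are illustrative,
not networkd's actual types:

    #include <stdlib.h>

    /* The first, "short" parsing pass only fills generic fields; a full
     * structure is marked as such explicitly via the state. */
    typedef enum NetDevState {
            NETDEV_STATE_LOADING,   /* short structure, generic fields only */
            NETDEV_STATE_FULL       /* full structure, per-kind fields valid */
    } NetDevState;

    typedef struct NetDev {
            NetDevState state;
            char *ifname;           /* generic property */
            char *kind_specific;    /* only allocated in the full structure */
    } NetDev;

    static void netdev_free(NetDev *n) {
            if (!n)
                    return;
            free(n->ifname);
            /* Only destruct per-kind properties when the state signals that
             * this really is a full NetDev structure. */
            if (n->state == NETDEV_STATE_FULL)
                    free(n->kind_specific);
            free(n);
    }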
This is unused since kdbus is gone, hence remove this too. This permits
us to get rid of sd_bus_send_internal() and just implement sd_bus_send()
directly.
This way, clients can install the very same match on direct and broker
connections as in both cases the messages will originate from the o.f.s1
service.
This is useful on direct connections to generate messages with valid
sender fields.
This is particularly useful for services that are accessible both
through direct connections and the broker, as it allows clients to
install matches on the sender service name, and they work the same in
both cases.
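Assuming the call in question is sd_bus_set_sender(), a service exposing
a direct socket might do something like the following; the service name
is made up for illustration:

    #include <systemd/sd-bus.h>

    /* Sketch: make messages on a direct (peer-to-peer) connection carry the
     * same sender name as on the broker, so clients can install the same
     * sender match in both cases. */
    int setup_direct_connection(sd_bus *bus) {
            return sd_bus_set_sender(bus, "org.example.Service1");
    }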
This way sd_bus_call_method_async() (which is just a wrapper around
sd_bus_call_async()) can be used to put method calls together that
expect no reply.
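For example, a fire-and-forget call might look like this; the destination,
path and interface are made up for illustration:

    #include <systemd/sd-bus.h>

    /* Passing a NULL callback means no reply is expected; the call is merely
     * enqueued. */
    int ping_no_reply(sd_bus *bus) {
            return sd_bus_call_method_async(
                            bus,
                            NULL,                    /* no slot needed */
                            "org.example.Service",   /* hypothetical destination */
                            "/org/example/Service",
                            "org.example.Service",
                            "Ping",
                            NULL,                    /* NULL callback: no reply expected */
                            NULL,                    /* no userdata */
                            NULL);                   /* no arguments */
    }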
This changes both networkd and resolved to make use of the watch_bind
feature of sd-bus to connect to the system bus. This way, both daemons
can be started during early boot, and automatically and instantly
connect to the system bus as it becomes available.
This replaces prior code that used a time-based retry logic to connect
to the bus.
With this new API sd-bus can synthesize a local "Connected" signal when
the connection is fully established. It mirrors the local "Disconnected"
signal that is already generated when the connection is terminated. This
is useful to be notified when connection setup is done, in order to
start method calls then, in particular when using "slow" connection
methods (for example slow TCP, or most importantly the "watch_bind"
inotify logic).
Note that one could also hook into the initial NameAcquired signal
received from the bus broker, but that scheme works only if we actually
connect to a bus. The benefit of "Connected" OTOH is that it works with
any kind of connection.
Ideally, we'd just generate this message unconditionally, but in order
not to break clients that do not expect this message, it is opt-in.
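Hedging on the exact API, the opt-in presumably looks something like
this; the org.freedesktop.DBus.Local interface and path are the ones
already used for the local "Disconnected" signal:

    #include <systemd/sd-bus.h>

    static int on_connected(sd_bus_message *m, void *userdata, sd_bus_error *error) {
            /* The connection is now fully set up; safe to start method calls. */
            return 0;
    }

    int watch_connected(sd_bus *bus) {
            int r;

            /* Opt in to the synthetic local "Connected" signal. */
            r = sd_bus_set_connected_signal(bus, 1);
            if (r < 0)
                    return r;

            /* Match the locally generated signal, mirroring the existing
             * "Disconnected" signal. */
            return sd_bus_match_signal(bus, NULL,
                                       "org.freedesktop.DBus.Local",
                                       "/org/freedesktop/DBus/Local",
                                       "org.freedesktop.DBus.Local",
                                       "Connected",
                                       on_connected, NULL);
    }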
This new call is much like sd_bus_is_open(), but returns true only if
the connection is fully set up, i.e. after we finished with the
authentication and Hello() phase. This API is useful for clients in
particular when using the "watch_bind" feature, as that way it can be
determined in advance whether it makes sense to sync on some operation.
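Assuming the new call is sd_bus_is_ready(), a client could gate a
synchronous operation on it roughly like this:

    #include <systemd/sd-bus.h>

    int maybe_sync(sd_bus *bus) {
            /* Only returns > 0 once authentication and Hello() are done,
             * unlike sd_bus_is_open(). */
            if (sd_bus_is_ready(bus) > 0)
                    return sd_bus_flush(bus);   /* e.g. sync the outgoing queue */
            return 0;
    }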
Let's directly reference /run instead, so that we can work without /var
being around, or with /var/run being incorrectly set up.
Note that we keep the old socket path in place when referencing the
system bus of containers, as they might run foreign operating systems
that still haven't adopted /run, and where it makes sense to use the
standardized name instead. On local systems, however, we insist on /run
being set up properly, hence this limitation does not apply there.
Also, get rid of the UNIX_SYSTEM_BUS_ADDRESS and
UNIX_USER_BUS_ADDRESS_FMT defines. They had a purpose when we still did
kdbus, as we then had to support two different backends. But since
that's gone, we don't need this indirection anymore, hence settle on
one define only.
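For reference, the single remaining define then points directly at /run;
a sketch (the exact macro name in the tree may differ):

    /* One hard-coded default address for the system bus, under /run. */
    #define DEFAULT_SYSTEM_BUS_ADDRESS "unix:path=/run/dbus/system_bus_socket"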
Let's remove a number of synchronization points from our service
startups: let's drop synchronous match installation, and let's opt for
asynchronous instead.
Also, let's use sd_bus_match_signal() instead of sd_bus_add_match()
where we can.
These are convenience helpers that hide the match string logic (which we
probably should never have exposed), and instead just take regular C
arguments.
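For example, instead of hand-assembling a match string, a caller can pass
the components directly; the sender and interface below are illustrative:

    #include <systemd/sd-bus.h>

    int subscribe(sd_bus *bus, sd_bus_message_handler_t handler, void *userdata) {
            /* Equivalent to sd_bus_add_match() with a manually formatted
             * "type='signal',sender=...,interface=...,member=..." string. */
            return sd_bus_match_signal(bus, NULL,
                                       "org.example.Service",   /* hypothetical sender */
                                       NULL,                    /* any object path */
                                       "org.example.Manager",   /* hypothetical interface */
                                       "StateChanged",
                                       handler, userdata);
    }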
We usually enqueue a number of these calls on each service
initialization. Let's do this asynchronously, and thus remove
synchronization points. This both improves performance behaviour and
reduces the chance of deadlocks.
We currently wait for the RemoveMatch() reply, but then ignore what it
actually says. Let's optimize this a bit, and not even ask for an answer
back: just enqueue the RemoveMatch() operation, and neither request nor
wait for any answer.
They do the same thing as their synchronous counterparts, but only
enqueue the operation, thus removing synchronization points during
service initialization.
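For instance, the asynchronous variant additionally takes an install
callback, which may itself be NULL; the names below are made up:

    #include <systemd/sd-bus.h>

    int subscribe_async(sd_bus *bus, sd_bus_message_handler_t handler, void *userdata) {
            /* Only enqueues the AddMatch() call; no round trip to the broker
             * during service initialization. */
            return sd_bus_match_signal_async(bus, NULL,
                                             "org.example.Service",
                                             NULL,                  /* any path */
                                             "org.example.Manager",
                                             "StateChanged",
                                             handler,
                                             NULL,                  /* no install callback */
                                             userdata);
    }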
If the callback function is passed as NULL, we fall back to generic
implementations of the reply handlers, which terminate the connection if
the requested name cannot be acquired, under the assumption that not
being able to acquire the name is a technical problem.
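A sketch of the simple case, relying on the generic reply handler; the
well-known name is illustrative:

    #include <systemd/sd-bus.h>

    int claim_name(sd_bus *bus) {
            /* NULL callback: fall back to the generic reply handler, which
             * treats failure to acquire the name as fatal and closes the
             * connection. */
            return sd_bus_request_name_async(bus, NULL,
                                             "org.example.Service",  /* hypothetical name */
                                             0,                      /* no flags */
                                             NULL, NULL);
    }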