Add "userdata" storage to a bunch of external objects, namely displays and
sessions. Furthermore, add some property retrieval helpers.
This is required if we want external API users to not duplicate our own
object hashtables, but retrieve context from the objects themselves.
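A minimal sketch of the "userdata" pattern being added here; the struct and
helper names are illustrative, not the actual grdev/idev symbols:

    typedef struct Display {
            /* ...internal state owned by the library... */
            void *userdata;         /* opaque pointer owned by the API user */
    } Display;

    void display_set_userdata(Display *d, void *userdata) {
            d->userdata = userdata;
    }

    void *display_get_userdata(Display *d) {
            return d->userdata;
    }

API users can then hang their own per-display context off the object instead
of keeping a parallel hashtable keyed by display.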
The desktop brand is stored in the DESKTOP variable for sessions. It can be
set arbitrarily by the session owner and identifies the desktop environment
running on that session.
All kdbus ioctl arguments must be 8-byte aligned. Make sure we use
alloca_align() and _alignas_(8) in all situations where gcc doesn't
guarantee 8-byte alignment.
Note that objects on the stack are always 8-byte aligned as we put
_alignas_(8) into the structure definition in kdbus.h.
The alloca_align() helper is the alloca() equivalent of posix_memalign().
As there is no such function provided by glibc, we simply account for
additional memory and return a pointer offset into the allocated memory to
guarantee the alignment.
Furthermore, alloca0_align() is added, which behaves like alloca_align()
but additionally clears the allocated memory.
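A rough sketch of how such helpers can be built with the usual
over-allocate-and-round-up trick (this is just an illustration, not
necessarily the exact macros that were added; note that the arguments are
evaluated more than once here):

    #include <alloca.h>
    #include <stdint.h>
    #include <string.h>

    /* Over-allocate by align-1 bytes, then round the pointer up to the
     * requested alignment (which must be a power of two). */
    #define alloca_align(size, align)                                   \
            ({                                                          \
                    void *_ptr_ = alloca((size) + (align) - 1);         \
                    (void*)(((uintptr_t)_ptr_ + (align) - 1) &          \
                            ~((uintptr_t)(align) - 1));                 \
            })

    /* Same, but with the usable memory cleared to zero. */
    #define alloca0_align(size, align)                                  \
            ({                                                          \
                    void *_new_ = alloca_align(size, align);            \
                    memset(_new_, 0, size);                             \
            })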
Instead of raising DEVICE_CHANGE only per device, we now raise it per
device-session attachment. This is what we want for all sysview users,
anyway, as sessions are meant to be independent of each other. Let's avoid
any external session iterators and just do that in sysview itself.
Whenever we resync an evdev device (or disable it), we should send RESYNC
events to the linked upper layers. This allows them to disable key-repeat
and to assume some events got dropped.
The current pause/resume logic kinda intertwines the resume/pause and
enable/disable functions. Let's avoid that non-obvious behavior and always
make resume call into enable, and pause call into disable, if appropriate.
We now verify the existence of UIDs before applying them to device nodes, so
change the test accordingly. We assume that both uid/gid 1 and 2 exist on
the test system.
The GETXY ioctls of DRM are usually called twice by libdrm: once to
retrieve the number of objects, and a second time with suitably sized
buffers to actually retrieve all objects. In grdrm, we avoid these
excessive calls and instead just call the ioctls with cached buffers,
resizing them if they were too small.
However, connectors need to read the mode list via EDID, which is horribly
slow. As the kernel still cannot do that asynchronously (seriously, we
need to fix this!), it has a hack to only do it if count_modes==0. This is
fine with libdrm, as it calls every ioctl twice, anyway. However, we fail
horribly with this as we usually never pass 0.
Fix this by calling the GETCONNECTOR ioctl twice in case we received a
hotplug event. Only in those cases do we need to re-read modes, so this
should be totally fine.
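The resulting call pattern looks roughly like the sketch below (raw DRM
uapi; the helper name and error handling are illustrative, and the include
path may differ depending on how the DRM headers are installed):

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>
    #include <drm/drm_mode.h>

    /* After a hotplug event: pass count_modes==0 first so the kernel
     * re-probes the connector (and re-reads EDID), then fetch the mode
     * list into our cached buffer. */
    static int connector_reprobe(int fd, uint32_t connector_id,
                                 struct drm_mode_modeinfo *modes,
                                 uint32_t n_modes) {
            struct drm_mode_get_connector req;

            memset(&req, 0, sizeof(req));
            req.connector_id = connector_id;
            if (ioctl(fd, DRM_IOCTL_MODE_GETCONNECTOR, &req) < 0)
                    return -errno;

            memset(&req, 0, sizeof(req));
            req.connector_id = connector_id;
            req.modes_ptr = (uintptr_t) modes;
            req.count_modes = n_modes;
            if (ioctl(fd, DRM_IOCTL_MODE_GETCONNECTOR, &req) < 0)
                    return -errno;

            /* If count_modes now exceeds n_modes, the cached buffer was too
             * small; the caller resizes it and retries, as grdrm already
             * does for the other ioctls. */
            return (int) req.count_modes;
    }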
Multiple issues here:
1) Don't print excessive card dumps on each resync. Disable them by
   default and leave it to developers to re-enable them locally when
   needed.
2) Ignore EINVAL on page-flips. Some cards don't support page-flips, so
   we'd print an error on each frame. Maybe, at some point, the kernel
   will add support to retrieve capabilities for that. Until then, simply
   ignore it.
3) Replace the now dropped card-dump with a short message about resyncing
the card.
Whenever we cannot use hardware frame events, we now schedule a virtual
frame event to make sure applications don't have to do this. Usually,
applications render only on data changes, but we can further reduce
render-time by also limiting rendering to vsyncs.
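Scheduling such a virtual frame event boils down to arming a timer at
roughly the refresh rate. A minimal sketch using sd-event (the helper name
and the fixed 60Hz assumption are illustrative, not the actual grdev code):

    #include <stdint.h>
    #include <time.h>
    #include "sd-event.h"

    /* Arm a one-shot timer ~16.7ms from now (one frame at 60Hz) that calls
     * the given handler, emulating a hardware vblank/frame event. */
    static int schedule_virtual_frame(sd_event *event, sd_event_source **source,
                                      sd_event_time_handler_t on_frame,
                                      void *userdata) {
            uint64_t now;
            int r;

            r = sd_event_now(event, CLOCK_MONOTONIC, &now);
            if (r < 0)
                    return r;

            return sd_event_add_time(event, source, CLOCK_MONOTONIC,
                                     now + 16667, 1000, on_frame, userdata);
    }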
Whenever a display is added or changed, frame events were suppressed so
far. Make sure to raise them manually so applications can avoid rendering
when handling anything but FRAME events.
If we get udev-device events via sysview, but they lack devnum
annotations, we know the device cannot be a DRM card. Look through its
parents and treat the event as a hotplug event in case we find such a card.
This will treat any new/removed connectors as sub-devices of the real
card, instead of as devices of their own.
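A rough sketch of that parent lookup with libudev (the helper itself is
illustrative; only the libudev calls are real):

    #include <sys/sysmacros.h>
    #include <libudev.h>

    /* Events without a devnum cannot be DRM cards themselves, but may
     * belong to one (e.g. a hotplugged connector). Find the closest parent
     * in the "drm" subsystem so the event can be handled as a hotplug event
     * on that card. The returned parent is owned by the child device. */
    static struct udev_device *find_parent_drm_card(struct udev_device *d) {
            if (major(udev_device_get_devnum(d)) != 0)
                    return NULL; /* has a devnum, treated as its own device */

            return udev_device_get_parent_with_subsystem_devtype(d, "drm", NULL);
    }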
The high frequency of the color-morphing is kinda irritating. Reduce it
to a much lower frequency so you can actually look at it for longer than a
few seconds.
So far, we only forward DRM cards via sysview APIs. However, with MST,
connectors can be hotplugged, too. Forward the connectors as first-level
devices via sysview so API users can react to changing DRM connectors.
We need to react to "change" events on devices, but we want to avoid
duplicating udev-monitors everywhere. Therefore, make sysview forward
change events to the sysview controllers, which can then properly react
to them.
When deciding what seat a device is on, we have to traverse all parents
to find one with an ID_SEAT tag; otherwise, input devices plugged into a
seated USB hub are not automatically attached to the right seat. But a
tag on the device itself still takes precedence over tags inherited from
its parents, so fix our logic to check the device itself first, before
traversing the parents.
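The lookup order then becomes "the device itself first, then its parents";
roughly (illustrative helper, real libudev calls):

    #include <libudev.h>

    /* Return the ID_SEAT property of the device, or of the closest parent
     * that carries one; fall back to the default seat. */
    static const char *device_get_seat(struct udev_device *d) {
            for (; d; d = udev_device_get_parent(d)) {
                    const char *seat = udev_device_get_property_value(d, "ID_SEAT");

                    if (seat)
                            return seat;
            }

            return "seat0";
    }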
The systemd-modeset tool is meant to debug grdev issues. It simply
displays morphing colors on any found display. This is pretty handy for
spotting tearing in the backends and for debugging hotplug issues.
Note that this tool requires systemd-logind to be compiled from git
(there are important fixes that haven't been released yet).
The grdev-drm backend manages DRM cards for grdev. Any DRM card with
DUMB_BUFFER support can be used. So far, our policy is to configure all
available connectors, but keep pipes inactive as long as users don't
enable the displays on top.
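Checking for that prerequisite boils down to a DRM capability query; a short
sketch (the helper name is made up, the ioctl and capability flag are
standard DRM uapi):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>

    /* Returns >0 if the card supports dumb buffers, 0 if not, <0 on error. */
    static int card_supports_dumb_buffers(int fd) {
            struct drm_get_cap cap = { .capability = DRM_CAP_DUMB_BUFFER };

            if (ioctl(fd, DRM_IOCTL_GET_CAP, &cap) < 0)
                    return -errno;

            return cap.value != 0;
    }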
We hard-code double-buffering so far, but could easily support
single-buffering or n-buffering. We also require the XRGB8888 format, as
all DRM drivers are required to support it and it is what VTs use. This
allows us to switch from VTs to grdev via page-flips instead of deep
modesets.
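Allocating one such framebuffer then looks roughly like the sketch below
(raw uapi structures; error handling is trimmed, and a real implementation
would destroy the dumb buffer again if ADDFB2 fails):

    #include <errno.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>
    #include <drm/drm_mode.h>
    #include <drm/drm_fourcc.h>

    /* Create a 32bpp dumb buffer and wrap it in a framebuffer object with
     * the XRGB8888 format, ready to be used for page-flips. */
    static int create_xrgb8888_fb(int fd, uint32_t width, uint32_t height,
                                  uint32_t *ret_fb_id) {
            struct drm_mode_create_dumb create = {
                    .width = width,
                    .height = height,
                    .bpp = 32,
            };
            struct drm_mode_fb_cmd2 fb = {};

            if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
                    return -errno;

            fb.width = width;
            fb.height = height;
            fb.pixel_format = DRM_FORMAT_XRGB8888;
            fb.handles[0] = create.handle;
            fb.pitches[0] = create.pitch;

            if (ioctl(fd, DRM_IOCTL_MODE_ADDFB2, &fb) < 0)
                    return -errno;

            *ret_fb_id = fb.fb_id;
            return 0;
    }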
There is still a lot of room for improvement in this backend, but it works
smoothly so far, so more advanced features can be added later.
The grdev layer provides graphics-device access via the
libsystemd-terminal library. It will be used by all terminal helpers to
actually access display hardware.
Like idev, the grdev layer is built around session objects. On each
session object you add/remove graphics devices as they appear and vanish.
Any device type can be supported via specific card-backends. The exported
grdev API hides any device details.
Graphics devices are represented by "cards". Those are hidden in the
session and any pipe-configuration is automatically applied. Out of those,
we configure displays, which are then exported to the API user. Displays
are meant to be the lowest hardware entity available outside of grdev. The
underlying pipe configuration is fully hidden and not accessible from the
outside. The grdev tiling layer allows almost arbitrary setups out of
multiple pipes, but so far we only use a small subset of this. More will
follow.
A grdev-display is meant to represent a real connected display/monitor.
Upper-level screen arrangements are user policy and are not controlled by
grdev. Applications are free to apply any policy they want.
Real card-backends will follow in later patches.
If a session controller does not need synchronous VT switches, we allow
it to pass VT control to logind, which acknowledges all VT switches
unconditionally. This works fine with all sessions using the dbus API,
but causes out-of-sync device use if we switch to legacy sessions that
are notified via VT signals. Those are processed before logind notices
the session switch via sysfs, so the old session is left active for a
short amount of time.
This, in fact, may cause the legacy session to prepare graphics devices
before the old session was deactivated, and thus may cause the old
session to interfere with graphics device usage.
Fix this by releasing devices immediately before acknowledging VT
switches. This way, sessions without VT handlers are required to support
async session switching (which they do in that case, anyway).
This makes it possible to spawn socket-activated service instances with
MLS/MCS SELinux labels that are derived from information provided by the
connected peer.
The implementation of label_get_child_mls_label is derived from xinetd.
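Roughly, the derivation takes the context the child would normally run with
and substitutes the MLS/MCS range of the connected peer. A sketch using
libselinux (not the actual implementation; error handling is trimmed):

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <selinux/selinux.h>
    #include <selinux/context.h>

    /* Take the exec context the child would get, replace its MLS/MCS range
     * with the range of the connected peer, and return the resulting label. */
    static int get_child_mls_label(int peer_fd, const char *exec_con, char **ret) {
            char *peer_con = NULL;
            context_t bcon = NULL, pcon = NULL;
            const char *range;
            int r = -EINVAL;

            if (getpeercon(peer_fd, &peer_con) < 0)
                    return -errno;

            bcon = context_new(exec_con);
            pcon = context_new(peer_con);
            if (bcon && pcon &&
                (range = context_range_get(pcon)) &&
                context_range_set(bcon, range) == 0) {
                    *ret = strdup(context_str(bcon));
                    r = *ret ? 0 : -ENOMEM;
            }

            if (pcon)
                    context_free(pcon);
            if (bcon)
                    context_free(bcon);
            freecon(peer_con);

            return r;
    }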
Reviewed-by: Paul Moore <pmoore@redhat.com>
TIOCSIG is Linux-specific, so include the Linux ioctl header to make sure
it's defined. We currently rely on some rather non-obvious recursive
includes. Make sure it's always defined regardless of the system headers.
c > 0 is already guaranteed from earlier checks.
We go from

    ms = ALIGN(l+1) +
         sizeof(char*) +
         (c > 0 ? c : 1) * ALIGN(alen) +
         (c > 0 ? c+1 : 2) * sizeof(char*);

to

    ms = ALIGN(l+1) +
         sizeof(char*) +
         c * ALIGN(alen) +
         (c+1) * sizeof(char*);

to

    ms = ALIGN(l+1) + c * ALIGN(alen) + (c+2) * sizeof(char*);
Found by coverity. Fixes: CID#1237570 and CID#1237610