The cache might contain all kinds of unauthenticated data that we really
shouldn't be using if we upgrade our feature level and are suddenly able
to get authenticated data again.
Might fix: #4866
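Roughly, the intended behaviour, as a minimal sketch with made-up type and
helper names (the actual resolved code differs):

typedef struct Cache Cache;
void cache_flush(Cache *c); /* made-up stand-in for flushing the cache */

typedef struct Scope {
        int feature_level;
        Cache *cache;
} Scope;

static void scope_set_feature_level(Scope *s, int new_level) {
        /* Data cached while downgraded may be unauthenticated, so drop
         * it as soon as we can get authenticated data again, instead of
         * continuing to serve it. */
        if (new_level > s->feature_level)
                cache_flush(s->cache);
        s->feature_level = new_level;
}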
For the wildcard NSEC check we need to generate an "asterisk" domain, by
prepending the common ancestor with "*.". So far we did that with a simple
strappenda(), which is fine for most domains, but doesn't work if the
common ancestor is the root domain, as we usually write that as "." in
normalized form, and "*." joined with "." is "*.." and not "*." as it
should be.
Hence, take the clean way out: let's just use dns_name_concat(), which
exists precisely for this reason, to properly concatenate labels.
There's a good chance this actually fixes #5029, as this NSEC proof is
triggered by lookups in the TLD "example", which doesn't exist on the
Internet.
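To see the difference, here is a small self-contained sketch; name_concat()
is a made-up stand-in for the label-aware concatenation that
dns_name_concat() performs (the real function also validates and escapes
labels):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Joins two DNS names, treating "." (the normalized root) as empty, so
 * that concatenating "*" and "." yields "*." rather than "*..". */
static char *name_concat(const char *a, const char *b) {
        const char *tail = strcmp(b, ".") == 0 ? "" : b;
        char *ret = malloc(strlen(a) + 1 + strlen(tail) + 1);
        if (!ret)
                return NULL;
        sprintf(ret, "%s.%s", a, tail);
        return ret;
}

int main(void) {
        char *w1 = name_concat("*", "foo.example"); /* "*.foo.example" */
        char *w2 = name_concat("*", ".");           /* "*." - not "*.." */
        printf("%s\n%s\n", w1, w2);
        free(w1);
        free(w2);
        return 0;
}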
This ensures that configured NTAs exclude not only the listed domain but
also all domains below it from DNSSEC validation -- except if a positive
trust anchor is defined below (as suggested by RFC7646, section 1.1).
Fixes: #5048
Drop the TEST_DATA_DIR macro, as it used alloca() within a function
call, which is allegedly unsafe. Instead, add a "suffix" argument to
get_testdata_dir() and call that directly.
Rename get_exe_relative_testdata_dir() to get_testdata_dir() and move
the env var check into it, so that everything interesting happens in
the same place.
automake helpfully sets a few variables for use during build. When our
executable is in a directory underneath $(abs_top_builddir), we know that
we're in the build environment: $(abs_top_srcdir) contains the sources,
and test data is under $(abs_top_srcdir)/test. This remains true no matter
where the build directory is relative to the source directory. It also
works if the test executable is invoked as ./test-whatever or
.libs/test-whatever, since the relative path is not used at all.
When running from outside of the build directory, we should be running from the
installed location and we can look for ../testdata relative to the location of
the exe file.
Of course, $SYSTEMD_TEST_DATA always overrides this logic.
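Put together, the lookup order looks roughly like this (a sketch: the
macro values and the way the executable path is obtained are illustrative,
not the actual implementation):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins for the automake-provided variables, which
 * would really be baked in at compile time. */
#define ABS_TOP_BUILDDIR "/home/user/build"
#define ABS_TOP_SRCDIR   "/home/user/systemd"

static char *get_testdata_dir(const char *exe_path, const char *suffix) {
        const char *env = getenv("SYSTEMD_TEST_DATA");
        char *p;

        if (env) {
                /* 1. An explicit override always wins. */
                if (asprintf(&p, "%s%s", env, suffix) < 0)
                        return NULL;
        } else if (strncmp(exe_path, ABS_TOP_BUILDDIR,
                           strlen(ABS_TOP_BUILDDIR)) == 0) {
                /* 2. Running from the build tree: test data lives in
                 *    the source tree, wherever that is. */
                if (asprintf(&p, "%s/test%s", ABS_TOP_SRCDIR, suffix) < 0)
                        return NULL;
        } else {
                /* 3. Installed location: ../testdata relative to the
                 *    executable ("/.." cancels the file name). */
                if (asprintf(&p, "%s/../testdata%s", exe_path, suffix) < 0)
                        return NULL;
        }
        return p;
}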
That way, if the test directory does not exist, we don't leave behind
temporary files (since in that case, or on test failure, the cleanup
actions don't run).
When trying to directly run a test executable in the build tree without
setting $TEST_DIR, some tests fail with a non-obvious error message.
Print a useful one instead.
This is a follow-up for #5359, fixing the error codes in a similar way
for the other NSS modules.
(user/group lookup calls don't have h_errnop, hence we don't update that
in those cases)
A bug exists where the conflict counter is cleared regardless of whether
the next probe attempt leads to a successful address acquisition. This
causes 'bursts' of MAX_CONFLICTS probes followed by a delay of
RATE_LIMIT_INTERVAL, instead of a single probe per RATE_LIMIT_INTERVAL
once MAX_CONFLICTS has been exceeded.
The conflict counter should only be cleared after an
address is successfully acquired. This commit achieves that
goal.
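In code, the fix amounts to something like this (a sketch with made-up
names; the actual sd-ipv4ll state machine is more involved; the constants
are the RFC3927 values):

#define MAX_CONFLICTS 10
#define RATE_LIMIT_INTERVAL_USEC (60ULL * 1000000ULL)

typedef struct LLContext {
        unsigned n_conflicts;
        unsigned long long probe_delay_usec;
} LLContext;

static void on_conflict(LLContext *c) {
        c->n_conflicts++;
        /* Beyond MAX_CONFLICTS, every further probe is rate-limited. */
        if (c->n_conflicts >= MAX_CONFLICTS)
                c->probe_delay_usec = RATE_LIMIT_INTERVAL_USEC;
}

static void on_address_acquired(LLContext *c) {
        /* The fix: reset the counter only here, on success. Resetting it
         * at the start of each probe cycle (the old behaviour) meant the
         * rate limit throttled a single probe, followed by a fresh burst
         * of MAX_CONFLICTS unthrottled ones. */
        c->n_conflicts = 0;
        c->probe_delay_usec = 0;
}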
From RFC3927:
A host should maintain a counter of the number of address
conflicts it has experienced in the process of trying to
acquire an address, and if the number of conflicts exceeds
MAX_CONFLICTS then the host MUST limit the rate at which it
probes for new addresses to no more than one new address per
RATE_LIMIT_INTERVAL. This is to prevent catastrophic ARP
storms in pathological failure cases, such as a rogue host
that answers all ARP probes, causing legitimate hosts to go
into an infinite loop attempting to select a usable address.
Signed-off-by: Jason Reeder <jasonreeder@gmail.com>
The correct error code to report when a provided buffer is too small is
ERANGE. This is recognized by glibc, which will then try again with a
larger buffer. The old behaviour of reporting ENOMEM has no special
meaning for glibc. The error will simply be propagated to the
application, and a later retry will trigger the same error again.
Additionally, h_errnop must be set to NETDB_INTERNAL to have glibc look
at errnop for details.
More information at:
https://www.gnu.org/software/libc/manual/html_node/NSS-Modules-Interface.html
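For illustration, the convention looks like this in an NSS module (the
module name is made up; the calling convention is the one documented at
the link above):

#include <nss.h>
#include <netdb.h>
#include <errno.h>

enum nss_status _nss_example_gethostbyname_r(
                const char *name,
                struct hostent *result,
                char *buffer, size_t buflen,
                int *errnop, int *h_errnop) {

        size_t needed = 512; /* illustrative: space this entry requires */

        if (buflen < needed) {
                /* ERANGE plus NETDB_INTERNAL makes glibc retry with a
                 * larger buffer; ENOMEM would just be handed to the
                 * application, which would fail the same way again. */
                *errnop = ERANGE;
                *h_errnop = NETDB_INTERNAL;
                return NSS_STATUS_TRYAGAIN;
        }

        /* ... fill in *result, using buffer for string storage ... */
        return NSS_STATUS_SUCCESS;
}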
This breaks again, this time for setups where QEMU is not reported via DMI
for whatever reason. So swap the order of CPUID and DMI detection again,
but properly detect Oracle (VirtualBox).
See issue #5318.
$ ./coredumpctl --no-pager -1
TIME                          PID  UID  GID SIG COREFILE EXE
Sun 2016-11-06 10:10:51 EST 29514 1002 1002   -        - /usr/bin/python3.5
$ ./coredumpctl info 29514
           PID: 29514 (python3)
           UID: 1002 (zbyszek)
           GID: 1002 (zbyszek)
        Reason: ZeroDivisionError
     Timestamp: Sun 2016-11-06 10:10:51 EST (3h 22min ago)
  Command Line: python3 systemd_coredump_exception_handler.py
    Executable: /usr/bin/python3.5
 Control Group: /user.slice/user-1002.slice/user@1002.service/gnome-terminal-server.service
          Unit: user@1002.service
     User Unit: gnome-terminal-server.service
         Slice: user-1002.slice
     Owner UID: 1002 (zbyszek)
       Boot ID: 1531fd22ec84429e85ae888b12fadb91
    Machine ID: 519a16632fbd4c71966ce9305b360c9c
      Hostname: laptop
       Storage: none
       Message: Process 29514 (systemd_coredump_exception_handler.py) of user zbyszek failed with ZeroDivisionError: division by
                Traceback (most recent call last):
                  File "systemd_coredump_exception_handler.py", line 134, in <module>
                    g()
                  File "systemd_coredump_exception_handler.py", line 133, in g
                    f()
                  File "systemd_coredump_exception_handler.py", line 131, in f
                    div0 = 1 / 0
                ZeroDivisionError: division by zero
                Local variables in innermost frame:
                a=3
                h=<function f at 0x7efdc14b6ea0>
Embedding sd_id128_t's in constant strings was rather cumbersome. We had
SD_ID128_CONST_STR which returned a const char[], but it had two problems:
- it wasn't possible to statically concatenate this array with a normal string
- gcc wasn't really able to optimize this, and generated code to perform the
"conversion" at runtime.
Because of this, even our own code in coredumpctl wasn't using
SD_ID128_CONST_STR.
Add a new macro to generate a constant string: SD_ID128_MAKE_STR.
It is not as elegant as SD_ID128_CONST_STR, because it requires a repetition
of the numbers, but in practice it is more convenient to use, and allows gcc
to generate smarter code:
$ size .libs/systemd{,-logind,-journald}{.old,}
   text    data   bss      dec     hex filename
1265204  149564  4808  1419576  15a938 .libs/systemd.old
1260268  149564  4808  1414640  1595f0 .libs/systemd
 246805   13852   209   260866   3fb02 .libs/systemd-logind.old
 240973   13852   209   255034   3e43a .libs/systemd-logind
 146839    4984    34   151857   25131 .libs/systemd-journald.old
 146391    4984    34   151409   24f71 .libs/systemd-journald
It is also much easier to check if a certain binary uses a certain MESSAGE_ID:
$ strings .libs/systemd.old|grep MESSAGE_ID
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
MESSAGE_ID=%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x
$ strings .libs/systemd|grep MESSAGE_ID
MESSAGE_ID=c7a787079b354eaaa9e77b371893cd27
MESSAGE_ID=b07a249cd024414a82dd00cd181378ff
MESSAGE_ID=641257651c1b4ec9a8624d7a40a9e1e7
MESSAGE_ID=de5b426a63be47a7b6ac3eaac82e2f6f
MESSAGE_ID=d34d037fff1847e6ae669a370e694725
MESSAGE_ID=7d4958e842da4a758f6c1cdc7b36dcc5
MESSAGE_ID=1dee0369c7fc4736b7099b38ecb46ee7
MESSAGE_ID=39f53479d3a045ac8e11786248231fbf
MESSAGE_ID=be02cf6855d2428ba40df7e9d022f03d
MESSAGE_ID=7b05ebc668384222baa8881179cfda54
MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
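Usage looks roughly like this (the ID is the first one from the listing
above; names are illustrative):

#include <systemd/sd-id128.h>
#include <stdio.h>

/* The 16 bytes are repeated relative to the SD_ID128_MAKE() definition,
 * but the result is a genuine string literal. */
#define MY_MESSAGE_ID_STR \
        SD_ID128_MAKE_STR(c7,a7,87,07,9b,35,4e,aa,a9,e7,7b,37,18,93,cd,27)

int main(void) {
        /* Being a string literal, it can be statically concatenated with
         * other literals - unlike the const char[] produced by
         * SD_ID128_CONST_STR - and the compiler emits it as a single
         * constant string, as the "strings" output above shows. */
        puts("MESSAGE_ID=" MY_MESSAGE_ID_STR);
        return 0;
}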
The submission must be a single entry in the journal export format,
including the terminating double newline. The MESSAGE field is now
generated on the sender side.
The advantage is that the reporter can easily pass additional metadata.
Continuing with the example of the python excepthook:
COREDUMP_PYTHON_EXECUTABLE=/usr/bin/python3
COREDUMP_PYTHON_VERSION=3.5.2 (default, Sep 14 2016, 11:28:32)
[GCC 6.2.1 20160901 (Red Hat 6.2.1-1)]
COREDUMP_PYTHON_THREAD_INFO=sys.thread_info(name='pthread', lock='semaphore', version='NPTL 2.24')
COREDUMP_PYTHON_EXCEPTION_TYPE=ZeroDivisionError
COREDUMP_PYTHON_EXCEPTION_VALUE=division by zero
MESSAGE=Process 29514 (systemd_coredump_exception_handler.py) of user zbyszek failed with ZeroDivisionError: division by zero
Traceback (most recent call last):
  File "systemd_coredump_exception_handler.py", line 134, in <module>
    g()
  File "systemd_coredump_exception_handler.py", line 133, in g
    f()
  File "systemd_coredump_exception_handler.py", line 131, in f
    div0 = 1 / 0
ZeroDivisionError: division by zero
Local variables in innermost frame:
a=3
h=<function f at 0x7efdc14b6ea0>
One consideration is whether to use the Journal Export Format, or to send
packets over a UNIX socket instead. The advantage of the current solution
is that although parsing is more complicated on the receiver side, it is
much easier to use on the sender side. I hope this can be used by various
languages for which writing binary structures to a UNIX socket is harder
and more likely to be done wrong than piping a simple textual format.
Only one test case is added, but it is enough to check the basic sanity of
the code (single-line and binary fields, trusted fields, allocation and
freeing).
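On the sender side this amounts to writing one export-format entry to the
socket, for example (a sketch; multi-line values such as the MESSAGE above
would use the binary, length-prefixed field encoding instead):

#include <string.h>
#include <unistd.h>

static int send_report(int fd) {
        const char *entry =
                "COREDUMP_PYTHON_EXCEPTION_TYPE=ZeroDivisionError\n"
                "COREDUMP_PYTHON_EXCEPTION_VALUE=division by zero\n"
                "MESSAGE=Process failed with ZeroDivisionError\n"
                "\n"; /* empty line: the terminating double newline */
        size_t n = strlen(entry);
        return write(fd, entry, n) == (ssize_t) n ? 0 : -1;
}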
In preparation for subsequent changes...
Various stack allocations are changed to use the heap. This might be minimally
slower, but probably doesn't matter. The upside is that we will now properly
free all memory that is allocated.
In commit 050e65a we swapped the order of detect_vm_{cpuid,dmi}(). That
fixed VirtualBox but broke qemu with kvm, which is expected to return
'kvm'. So check for qemu/kvm first, then DMI, with CPUID last.
This fixes #5318.
Signed-off-by: Christian Hesse <mail@eworm.de>
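The resulting order, as a simplified sketch (detect_vm_cpuid() and
detect_vm_dmi() are the helpers named above, but here they are stubs, and
the return values and glue are illustrative):

/* Stubs standing in for the real probes; 0 means "nothing detected". */
static int detect_vm_cpuid(void) { return 0; }
static int detect_vm_dmi(void)   { return 0; }

#define VIRTUALIZATION_KVM 1 /* illustrative constant */

int detect_vm(void) {
        /* 1. qemu/kvm first: it may not be reported via DMI at all. */
        int c = detect_vm_cpuid();
        if (c == VIRTUALIZATION_KVM)
                return c;

        /* 2. Then DMI, which identifies e.g. Oracle (VirtualBox)
         *    correctly even when CPUID claims KVM. */
        int d = detect_vm_dmi();
        if (d > 0)
                return d;

        /* 3. CPUID last, as the fallback for everything else. */
        return c;
}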
Let's make sure we verify that all BindsTo= dependencies are in order
before we actually go and dispatch a start operation to a unit. Normally
the job queue should already have made sure all deps are in order, but
this might not have been sufficient in two cases: a) when the user changes
deps during runtime and reloads the daemon, and b) when the user placed
BindsTo= dependencies without matching After= dependencies, so that we
don't actually wait for the bound-to unit to be up before also bringing up
the binding unit.
See: #4725
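For case b), the usual remedy in unit files is to pair BindsTo= with a
matching After=, so that the binding unit actually waits for the bound-to
unit (unit names here are made up):

[Unit]
Description=Follows foo.service
BindsTo=foo.service
After=foo.service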
This restores the behaviour of 53fda2bb933694c9bdb1bbf1f5583e39673b74b2:
for mDNS (and mDNS only) we'll match replies to transactions honouring ANY
matches.