It cannot fail in the current hashmap implementation, but it may fail in
alternative implementations (unless a sufficiently large reservation has
been placed beforehand).
With the hashmap implementation that uses chaining, the reservations
merely ensure that the merging won't result in long bucket chains.
With a future alternative implementation, they will additionally reserve
memory to make sure the merging won't fail.
That hashmap_move_one() currently cannot fail with -ENOMEM is an
implementation detail that cannot be guaranteed in general.
Hashmap implementations based on anything other than chaining of
individual entries may have to allocate.
hashmap_move_one() will not fail with -ENOMEM if a proper reservation
has been made beforehand. Use reservations in install.c.
In cgtop.c, simply propagate the error instead of asserting.
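A minimal sketch of the pattern described above, assuming systemd's
internal hashmap API (hashmap_reserve(), hashmap_move_one()); the helper
name is illustrative and the install.c specifics are omitted:

#include <errno.h>
#include "hashmap.h"

static int move_entry_reliably(Hashmap *dest, Hashmap *src, const char *key) {
        int r;

        /* Reserve room in the destination first; with a reservation in
         * place, a subsequent move cannot fail with -ENOMEM even in an
         * implementation that has to allocate. */
        r = hashmap_reserve(dest, 1);
        if (r < 0)
                return r;

        r = hashmap_move_one(dest, src, key);
        if (r == -ENOENT)
                return 0; /* key not present in src, nothing to move */

        return r;
}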
With the current hashmap implementation that uses chaining, placing a
reservation can serve two purposes:
- To optimize putting of entries if the number of entries to put is
known. The reservation allocates buckets, so later resizing can be
avoided.
- To avoid having very long bucket chains after using
hashmap_move(_one).
In an alternative hashmap implementation it will serve an additional
purpose:
- To guarantee a subsequent hashmap_move(_one) will not fail with
-ENOMEM (this never happens in the current implementation).
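A hedged sketch of the first purpose above (bulk puts with a known entry
count), again assuming systemd's internal hashmap API; the helper name
is illustrative:

#include "hashmap.h"

static int put_many(Hashmap *h, char **keys, void **values, unsigned n) {
        int r;

        /* One reservation up front allocates enough buckets, so the table
         * does not need to be resized repeatedly while putting. */
        r = hashmap_reserve(h, n);
        if (r < 0)
                return r;

        for (unsigned i = 0; i < n; i++) {
                r = hashmap_put(h, keys[i], values[i]);
                if (r < 0)
                        return r;
        }

        return 0;
}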
The order of entries may matter here. Oldest entries are evicted first
when the cache is full.
(Though I don't see anything to rejuvenate entries on cache hits.)
-ENOENT is the same return value as if 'other' were an allocated hashmap
that does not contain the key. A NULL hashmap is a valid way of
expressing a hashmap that contains no keys.
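A small hedged illustration of that equivalence, assuming systemd's
internal hashmap.h and assert_se() from macro.h:

#include <errno.h>
#include "hashmap.h"
#include "macro.h"

static void check_move_from_null(Hashmap *dest) {
        /* A NULL source hashmap behaves exactly like an allocated hashmap
         * that simply does not contain the key. */
        assert_se(hashmap_move_one(dest, NULL, "no-such-key") == -ENOENT);
}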
Test more corner cases and error states in several tests.
Add new tests for:
hashmap_move
hashmap_remove
hashmap_remove2
hashmap_remove_value
hashmap_remove_and_replace
hashmap_get2
hashmap_first
In test_hashmap_many additionally test with an intentionally bad hash
function.
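For illustration, a hedged sketch of what an intentionally bad hash
function can look like (not the actual one used in test_hashmap_many,
and the real systemd hash function signature differs): it collapses
almost all keys into a handful of buckets, forcing long collision chains
so the collision-handling paths get exercised.

static unsigned long bad_string_hash(const char *s) {
        /* Only the first character matters, and only modulo 4, so nearly
         * every key collides with another one. */
        return s[0] ? (unsigned long) s[0] % 4 : 0;
}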
test-hashmap-ordered.c is generated from test-hashmap-plain.c simply by
substituting "ordered_hashmap" for "hashmap" etc.
In the cases where tests rely on the order of entries, a distinction
between plain and ordered hashmaps is made using the ORDERED macro,
which is defined only for test-hashmap-ordered.c.
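A hedged, excerpt-style sketch of the pattern this enables (identifiers
are illustrative, not copied from the real tests): order-dependent
checks compile only for the ordered variant.

static void check_iteration_order(OrderedHashmap *m) {
#ifdef ORDERED
        /* Only the ordered variant guarantees iteration in insertion
         * order, so the first value returned must be the first one that
         * was put in. */
        assert_se(streq(ordered_hashmap_first(m), "value 1"));
#endif
}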
Few Hashmaps/Sets need to remember the insertion order. Most don't care
about the order when iterating. It would be possible to use more compact
hashmap storage in the latter cases.
Add OrderedHashmap as a distinct type from Hashmap, with functions
prefixed with "ordered_". For now, the functions are nothing more than
inline wrappers for plain Hashmap functions.
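A hedged sketch of the kind of trivial wrapper meant here (the real
declarations live in systemd's hashmap.h and may differ in detail):

typedef struct OrderedHashmap OrderedHashmap;

/* Distinct type for type safety, but for now each call simply forwards
 * to the plain Hashmap implementation. */
static inline void *ordered_hashmap_get(OrderedHashmap *h, const void *key) {
        return hashmap_get((Hashmap*) h, key);
}

static inline int ordered_hashmap_put(OrderedHashmap *h, const void *key, void *value) {
        return hashmap_put((Hashmap*) h, key, value);
}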
Repetitive messages can be annoying when running with
SYSTEMD_LOG_LEVEL=debug, but they are sometimes very useful
when debugging problems. Add log_trace, which is like log_debug
but becomes a no-op unless LOG_TRACE is defined during compilation.
This makes it easy to enable very verbose logging for a subset
of programs when compiling from source.
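A hedged sketch of how such a macro can be defined (the actual
definition lives in systemd's log.h and may differ slightly):

#ifdef LOG_TRACE
#  define log_trace(...) log_debug(__VA_ARGS__)
#else
#  define log_trace(...) do {} while (0)
#endif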
Systemd 209 started setting $WATCHDOG_PID, and the sd-daemon watchdog
code was modified to check for this variable. This means that
sd_watchdog_enabled() stopped working with previous versions of
systemd. But sd-event is a public library and API, and we must keep it
working even when a program compiled with a newer version of the
library is used on a system running an older version of the manager.
getenv() and unsetenv() are fairly expensive calls, so optimize
sd_watchdog_enabled() by not calling them when unnecessary.
man: centralize the description of $WATCHDOG_PID and $WATCHDOG_USEC in
the sd_watchdog_enabled manpage. It is better not to repeat the same
stuff in two places.
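A simplified, hedged sketch of the compatibility logic described above
(the real code is sd_watchdog_enabled() in sd-daemon.c; the
unset_environment handling and proper error checking are omitted):

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

static int watchdog_enabled_sketch(uint64_t *ret_usec) {
        const char *s, *p;

        s = getenv("WATCHDOG_USEC");
        if (!s)
                return 0; /* watchdog not requested at all */

        p = getenv("WATCHDOG_PID");
        if (p) {
                /* systemd >= 209 sets $WATCHDOG_PID; honor the request only
                 * if it is addressed to this very process. */
                if ((pid_t) strtoul(p, NULL, 10) != getpid())
                        return 0;
        }
        /* An unset $WATCHDOG_PID (systemd <= 208) must still count as
         * enabled, otherwise the library breaks on older managers. */

        *ret_usec = strtoull(s, NULL, 10);
        return 1;
}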
This new command will ask the journal daemon to flush all log data
stored in /run to /var, and wait for it to complete. This is useful so
that, in the case of Storage=persistent, we can order
systemd-tmpfiles-setup after it, to ensure that any newly created
directories in /var/log get the proper access mode and owners.
The runtime journal is migrated to the system journal only when
"/run/systemd/journal/flushed" exists. That is fine, but it means the
system journal directory size (max use) can exceed the configured
limit: if the journal is not rotated for a while, the journal directory
can remain above the configured (or default) size. To avoid this, run
server_vacuum() right after the runtime journal has been migrated to
the system journal.
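A hedged sketch of the ordering meant here (journald-internal function
names such as server_flush_to_var() and server_vacuum(); exact
signatures may differ from the real source):

        /* Migrate the runtime journal from /run into /var ... */
        server_flush_to_var(s);

        /* ... and vacuum immediately afterwards, so the system journal
         * directory does not stay above its configured size limit until
         * the next rotation happens. */
        server_vacuum(s);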