Some users of push_ascii_nstring() (notably name_to_unstring())
expect the output to be truncated if it would exceed the size of
an nstring after conversion. However this broke in 2011 due to
commit d546adeab5 ("Change convert_string_internal() and
convert_string_error() to bool return"). This patch restores the
old behavior.
The issue can be observed in syslog after setting the
``workgroup`` parameter to a string of 16 or more characters,
which triggers a DEBUG() message:
Oct 17 11:28:45 dev nmbd[11716]: name_to_nstring: workgroup name 0123456789ABCDEF0123456789ABCDEF is too long. Truncating to
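For illustration only, the truncating behavior those callers rely on
looks roughly like this self-contained sketch (this is not the Samba
code; the buffer size and helper name are invented for the example):

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for an nstring-sized NetBIOS name buffer. */
#define EXAMPLE_NSTRING_SIZE 16

/*
 * Copy src into a fixed-size buffer, truncating and always
 * NUL-terminating instead of reporting an error.
 */
static void example_push_truncated(char dest[EXAMPLE_NSTRING_SIZE],
                                   const char *src)
{
    strncpy(dest, src, EXAMPLE_NSTRING_SIZE - 1);
    dest[EXAMPLE_NSTRING_SIZE - 1] = '\0';
}

int main(void)
{
    char name[EXAMPLE_NSTRING_SIZE];

    example_push_truncated(name, "0123456789ABCDEF0123456789ABCDEF");
    printf("truncated to: %s\n", name);    /* prints "0123456789ABCDE" */
    return 0;
}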
Signed-off-by: Philipp Gesang <philipp.gesang@intra2net.com>
Reviewed-by: Noel Power <npower@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Autobuild-User(master): Noel Power <npower@samba.org>
Autobuild-Date(master): Tue Oct 25 16:25:40 UTC 2022 on sn-devel-184
fault.h has:
which leads to SMB_ASSERT not being defined when you include
samba_util.h (and thus fault.h) before debug.h.
Bug: https://bugzilla.samba.org/show_bug.cgi?id=15207
Signed-off-by: Volker Lendecke <vl@samba.org>
We now have only one code path that stores the fully
granted lock.
This is no change in behavior, but it will simplify further
changes.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This prepares some helper functions in order to
allow callers of g_lock_lock() to pass in a callback function
that will run under the tdb chainlock once G_LOCK_WRITE has been granted.
The idea is that the caller's callback function runs with
g_lock_ctx->busy == true, and none of the key-based functions are allowed
during the execution of the callback function. Only the
g_lock_lock_cb_state based helper functions are allowed to be used.
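As an illustration of the callback-under-lock idea, here is a
self-contained sketch; it is not the actual g_lock API, and all names
in it (example_lock_ctx, example_cb_store, ...) are invented:

#include <stdbool.h>
#include <stdio.h>

struct example_lock_ctx {
    bool busy;          /* set while a caller's callback is running */
    char value[64];     /* stands in for the locked record */
};

struct example_cb_state {
    struct example_lock_ctx *ctx;
};

/* The only helper a callback is allowed to use in this sketch. */
static void example_cb_store(struct example_cb_state *cb, const char *v)
{
    snprintf(cb->ctx->value, sizeof(cb->ctx->value), "%s", v);
}

typedef void (*example_cb_fn)(struct example_cb_state *cb, void *private_data);

/* "Take the lock", run the callback while busy == true, then "release". */
static void example_lock_with_callback(struct example_lock_ctx *ctx,
                                       example_cb_fn fn, void *private_data)
{
    struct example_cb_state cb = { .ctx = ctx };

    ctx->busy = true;   /* key-based entry points would reject callers now */
    fn(&cb, private_data);
    ctx->busy = false;
}

static void my_callback(struct example_cb_state *cb, void *private_data)
{
    example_cb_store(cb, private_data);
}

int main(void)
{
    struct example_lock_ctx ctx = { .busy = false };
    char new_value[] = "granted";

    example_lock_with_callback(&ctx, my_callback, new_value);
    printf("value=%s busy=%d\n", ctx.value, ctx.busy);
    return 0;
}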
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This prepares some helper functions in order to
allow callers of g_lock_lock() to pass in a callback function
that will run under the tdb chainlock once G_LOCK_WRITE has been granted.
The idea is that the caller's callback function would only be allowed
to use these new helper functions, while all key-based functions are
not allowed.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
First we fully check whether we will get the lock
and only then store the lock.
This is no change in behavior, but it will simplify further changes.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This is useful when we want to wake up the next watcher
without modifying the record.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This can be used if the decision to use dbwrap_watched_watch_skip_alerting()
needs to be reverted...
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
If a watcher was already selected for a wakeup notification, reset it...
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Make it available to replace clistr_is_previous_version_path() in
libsmb/
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
In the common case we don't have any shared lock holders,
so there's no need to allocate memory for the empty array.
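As a generic illustration (not the g_lock code), the pattern is simply
to skip the allocation when the count is zero:

#include <stdio.h>
#include <stdlib.h>

/* Return NULL instead of allocating a zero-length array. */
static unsigned *example_copy_ids(const unsigned *src, size_t count)
{
    unsigned *dst;
    size_t i;

    if (count == 0) {
        return NULL;    /* common case: nothing to copy, no allocation */
    }
    dst = malloc(count * sizeof(*dst));
    if (dst == NULL) {
        return NULL;
    }
    for (i = 0; i < count; i++) {
        dst[i] = src[i];
    }
    return dst;
}

int main(void)
{
    unsigned ids[] = { 1, 2, 3 };
    unsigned *copy = example_copy_ids(ids, 0);  /* no shared holders: NULL */

    printf("empty copy: %p\n", (void *)copy);
    copy = example_copy_ids(ids, 3);
    if (copy != NULL) {
        printf("copy[2]=%u\n", copy[2]);
    }
    free(copy);
    return 0;
}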
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This improves 'time smbtorture3 //foo/bar -U% local-g-lock-ping-pong -o 50000000'
from ~1,400,000 to ~3,400,000 operations per second on a test system.
As we also use TDB_VOLATILE for locking.tdb, this is a much more
realistic test now.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This gives a nice speed up, as it's unlikely for the waiters to hit
contention.
The following test with 256 connections all looping with open/close
on the same inode (share root) is improved drastically:
smbtorture //127.0.0.1/m -Uroot%test smb2.create.bench-path-contention-shared \
--option='torture:bench_path=' \
--option="torture:timelimit=60" \
--option="torture:nprocs=256"
From something like this:
open[num/s=8800,avslat=0.021445,minlat=0.000095,maxlat=0.179786]
close[num/s=8800,avslat=0.021658,minlat=0.000044,maxlat=0.179819]
to:
open[num/s=10223,avslat=0.017922,minlat=0.000083,maxlat=0.106759]
close[num/s=10223,avslat=0.017694,minlat=0.000040,maxlat=0.107345]
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Autobuild-User(master): Ralph Böhme <slow@samba.org>
Autobuild-Date(master): Tue Jul 26 14:32:35 UTC 2022 on sn-devel-184
In case of a highly contended record we will have a lot of watchers,
which will all race to get g_lock_lock() to finish.
If g_lock_unlock() wakes them all, e.g. 250 of them, we get a thundering
herd, where 249 will only find that one of them was able to get the lock
and will re-add their watcher entries (quite possibly in a different order).
With this commit we only wake the first watcher and let it remove
itself once it no longer wants to monitor the record content
(at that time it will wake the new first watcher).
This means the woken watcher doesn't have to race with all the others.
It also means the order of the watchers is kept, so we
most likely get a fair latency distribution across all watchers.
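To make the scheme concrete, here is a toy, self-contained model of the
wake-one approach (not the dbwrap_watch code; all names are invented):

#include <stdbool.h>
#include <stdio.h>

#define MAX_WATCHERS 8

struct toy_queue {
    int watchers[MAX_WATCHERS]; /* watcher ids, index 0 is the head */
    size_t count;
    int woken;                  /* id of the watcher woken last, -1 if none */
};

/* Wake only the first watcher instead of the whole queue. */
static void toy_wake_first(struct toy_queue *q)
{
    q->woken = (q->count > 0) ? q->watchers[0] : -1;
}

/* A watcher that got what it wanted leaves the queue and wakes the new head. */
static void toy_leave(struct toy_queue *q, int id)
{
    size_t i, j;

    for (i = 0; i < q->count; i++) {
        if (q->watchers[i] != id) {
            continue;
        }
        for (j = i; j + 1 < q->count; j++) {
            q->watchers[j] = q->watchers[j + 1];
        }
        q->count--;
        toy_wake_first(q);      /* hand over to the next in line */
        return;
    }
}

int main(void)
{
    struct toy_queue q = { .watchers = { 1, 2, 3 }, .count = 3, .woken = -1 };

    toy_wake_first(&q);         /* unlock path: only watcher 1 is woken */
    printf("woken: %d\n", q.woken);
    toy_leave(&q, 1);           /* watcher 1 is done and wakes watcher 2 */
    printf("woken: %d\n", q.woken);
    return 0;
}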
The following test with 256 connections all looping with open/close
on the same inode (share root) is improved drastically:
smbtorture //127.0.0.1/m -Uroot%test smb2.create.bench-path-contention-shared \
--option='torture:bench_path=' \
--option="torture:timelimit=60" \
--option="torture:nprocs=256"
From something like this:
open[num/s=80,avslat=2.793862,minlat=0.004097,maxlat=46.597053]
close[num/s=80,avslat=2.387326,minlat=0.023875,maxlat=50.878165]
to:
open[num/s=8800,avslat=0.021445,minlat=0.000095,maxlat=0.179786]
close[num/s=8800,avslat=0.021658,minlat=0.000044,maxlat=0.179819]
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Unless the unique_lock_epoch changes via g_lock_lock()/g_lock_unlock()
we try to keep our existing watch instance alive while waiting
for unique_data_epoch to change.
This will become important in the following commits when the
dbwrap_watch layer will only wake up one watcher at a time
and each woken watcher will wake up the next one. Without this
commit we would trigger an endless loop as none of the watchers
will ever change unique_data_epoch.
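Roughly, the waiter-side logic can be pictured with this illustrative
sketch (invented field names, not the real record layout):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-ins for the two counters described above. */
struct toy_record {
    uint64_t lock_epoch;    /* bumped on every lock/unlock */
    uint64_t data_epoch;    /* bumped only when the data changes */
};

struct toy_waiter {
    uint64_t seen_lock_epoch;
    uint64_t seen_data_epoch;
    bool has_watch;
};

/*
 * If only the lock epoch moved, register a new watch instance; if the
 * data epoch moved, the wait is over; otherwise keep the existing
 * watch instance alive.
 */
static void toy_reevaluate(struct toy_waiter *w, const struct toy_record *r)
{
    if (r->data_epoch != w->seen_data_epoch) {
        w->has_watch = false;           /* done waiting */
        return;
    }
    if (r->lock_epoch != w->seen_lock_epoch) {
        w->seen_lock_epoch = r->lock_epoch;
        w->has_watch = true;            /* new watch instance */
        return;
    }
    /* Nothing relevant changed: keep the existing watch instance. */
}

int main(void)
{
    struct toy_record r = { .lock_epoch = 5, .data_epoch = 2 };
    struct toy_waiter w = { .seen_lock_epoch = 5, .seen_data_epoch = 2,
                            .has_watch = true };

    toy_reevaluate(&w, &r);     /* keeps the existing watch */
    r.lock_epoch++;
    toy_reevaluate(&w, &r);     /* lock epoch moved: new instance */
    r.data_epoch++;
    toy_reevaluate(&w, &r);     /* data changed: stop watching */
    printf("has_watch=%d\n", w.has_watch);
    return 0;
}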
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
It changes with every lock and unlock.
This will be needed in the future in order to differentiate between
lock changes and data changes.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The key points are:
1. We keep our position in the watcher queue until we get what
we are waiting for. It means the order is now fair and stable.
2. We only wake up others during g_lock_unlock() and only if
we detect that a pending exclusive lock is able to make progress.
(Note: read lock holders are never waiters on their own.)
This reduces the contention on locking.tdb records drastically,
as waiters are no longer woken 3 times (where the first 2 times were completely useless).
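Point 2 above can be sketched like this (toy model with invented names,
not the g_lock implementation):

#include <stdbool.h>
#include <stdio.h>

/* Toy view of a lock record: one possible exclusive holder, some
 * readers, and a flag for a pending exclusive waiter. */
struct toy_lock {
    bool exclusive_held;
    unsigned num_readers;
    bool exclusive_pending;
};

/*
 * Unlock path: wake the pending exclusive waiter only if it can
 * actually make progress now, i.e. nobody holds the lock anymore.
 */
static bool toy_unlock_should_wake(const struct toy_lock *l)
{
    if (!l->exclusive_pending) {
        return false;
    }
    return !l->exclusive_held && l->num_readers == 0;
}

int main(void)
{
    struct toy_lock l = { .exclusive_held = false,
                          .num_readers = 1,
                          .exclusive_pending = true };

    printf("wake? %d\n", toy_unlock_should_wake(&l)); /* 0: a reader is left */
    l.num_readers = 0;
    printf("wake? %d\n", toy_unlock_should_wake(&l)); /* 1: exclusive can progress */
    return 0;
}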
The following test with 256 connections all looping with open/close
on the same inode (share root) is improved drastically:
smbtorture //127.0.0.1/m -Uroot%test smb2.create.bench-path-contention-shared \
--option='torture:bench_path=' \
--option="torture:timelimit=60" \
--option="torture:nprocs=256"
From something like this:
open[num/s=50,avslat=6.455775,minlat=0.000157,maxlat=55.683846]
close[num/s=50,avslat=4.563605,minlat=0.000128,maxlat=53.585839]
to:
open[num/s=80,avslat=2.793862,minlat=0.004097,maxlat=46.597053]
close[num/s=80,avslat=2.387326,minlat=0.023875,maxlat=50.878165]
Note that the real effect of this commit will only be revealed together
with a following commit that wakes just one waiter at a time.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This matches the behavior of g_lock_cleanup_shared(), which also
only operates on the in-memory struct g_lock.
We do a g_lock_store() later during g_lock_trylock() anyway
when we make any progress.
In the case where we were a pending exclusive lock holder,
we now force a g_lock_store() if g_lock_cleanup_dead()
removed the dead blocker.
This will be useful for the following changes...
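The idea can be illustrated with a small self-contained sketch
(invented names; the real liveness check and storage layer differ):

#include <stdbool.h>
#include <stdio.h>

#define MAX_HOLDERS 4

/* Toy in-memory lock state: a list of holder pids plus a dirty flag. */
struct toy_glock {
    int holders[MAX_HOLDERS];
    size_t num_holders;
    bool must_store;    /* force a store back to the database later */
};

static bool toy_process_is_dead(int pid)
{
    return pid < 0;     /* stand-in liveness check for the example */
}

/* Drop dead holders from the in-memory copy only, and remember that
 * the stored record must be rewritten if we removed a blocker while
 * being a pending exclusive lock holder. */
static void toy_cleanup_dead(struct toy_glock *g, bool we_are_pending_exclusive)
{
    size_t i = 0;

    while (i < g->num_holders) {
        if (!toy_process_is_dead(g->holders[i])) {
            i++;
            continue;
        }
        g->holders[i] = g->holders[g->num_holders - 1];
        g->num_holders--;
        if (we_are_pending_exclusive) {
            g->must_store = true;
        }
    }
}

int main(void)
{
    struct toy_glock g = { .holders = { 100, -1 }, .num_holders = 2 };

    toy_cleanup_dead(&g, true);
    printf("holders=%zu must_store=%d\n", g.num_holders, g.must_store);
    return 0;
}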
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The destructor triggered by dbwrap_watched_watch_recv() will
remove the watcher instance via a dedicated dbwrap_do_locked(),
just calling dbwrap_watched_watch_remove_instance() inside.
But the typical caller triggers a dbwrap_do_locked() again after
dbwrap_watched_watch_recv() returned, which means we call
dbwrap_do_locked() twice.
We now allow dbwrap_watched_watch_recv() to return the existing
instance id (if it still exists) and to remove the destructor.
That way the caller can pass the given instance id to
dbwrap_watched_watch_remove_instance() from within its own dbwrap_do_locked(),
when it decides to leave the queue, because it's happy with the new
state of the record. In order to get the best performance
dbwrap_watched_watch_remove_instance() should be called before any
dbwrap_record_storev() or dbwrap_record_delete(),
because that will only trigger a single low level storev/delete.
If the caller found out that the state of the record doesn't meet the
expectations and the caller wants to continue watching the
record (from its current position, most likely the first one),
dbwrap_watched_watch_remove_instance() can be skipped and the
instance id can be passed to dbwrap_watched_watch_send() again,
in order to resume waiting on the existing instance.
Currently the watcher instance was always removed (most likely from
the first position) and re-added (at the last position), which may
cause unfair latencies.
In order to reduce the overhead of adding a new watcher instance,
the caller can call dbwrap_watched_watch_add_instance() before
any dbwrap_record_storev() or dbwrap_record_delete(), which
will only result in a single low level storev/delete.
The returned instance id is then passed to dbwrap_watched_watch_send(),
within the same dbwrap_do_locked() run.
It also adds a way to avoid alerting any callers during
the current dbwrap_do_locked() run.
Layers above may only want to wake up watchers
in specific situations, while in other situations waking
them would be useless.
This will soon be used to add more fairness to the g_lock code.
Note that this commit only prepares the API for the above to be useful;
the instance id returned by dbwrap_watched_watch_recv() is most likely 0,
which means the watcher entry was already removed, but that will change
in the following commits.
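The caller-side protocol described above, modelled as a self-contained
toy example (all names and structures here are invented for
illustration; the real API works on dbwrap records and tevent requests):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_watched_record {
    int value;
    uint64_t instances[8];  /* pending watcher instance ids, in queue order */
    size_t num_instances;
};

/* Remove our own instance from the in-memory list; the following single
 * store/delete then writes the result in one pass. */
static void toy_remove_instance(struct toy_watched_record *r, uint64_t id)
{
    size_t i;

    for (i = 0; i < r->num_instances; i++) {
        if (r->instances[i] != id) {
            continue;
        }
        for (; i + 1 < r->num_instances; i++) {
            r->instances[i] = r->instances[i + 1];
        }
        r->num_instances--;
        return;
    }
}

/*
 * Woken with our instance id still in place: if the record now looks
 * the way we want, drop the instance *before* storing so there is only
 * one low-level write; otherwise keep the id and resume waiting with
 * it, keeping our position in the queue.
 */
static bool toy_handle_wakeup(struct toy_watched_record *r,
                              uint64_t my_instance, int wanted_value)
{
    if (r->value != wanted_value) {
        return false;       /* resume waiting on the same instance */
    }
    toy_remove_instance(r, my_instance);
    /* ... modify and store the record here, in the same locked section ... */
    return true;
}

int main(void)
{
    struct toy_watched_record r = { .value = 0,
                                    .instances = { 7, 8 },
                                    .num_instances = 2 };
    bool done;

    printf("done=%d\n", toy_handle_wakeup(&r, 7, 1));   /* 0: keep waiting */
    r.value = 1;
    done = toy_handle_wakeup(&r, 7, 1);                 /* instance 7 removed */
    printf("done=%d instances=%zu\n", done, r.num_instances);
    return 0;
}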
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The new dbwrap_watched_watch_remove_instance() will just remove ourselves
from the in-memory array and let db_watched_record_fini() call
dbwrap_watched_record_storev() in order to write the modified version
into the low level backend record.
For now there's no change in behavior, but it allows us to change it
soon....
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
It means we only have one code path storing the low level record
and that's dbwrap_watched_record_storev on the main record.
It avoids the nested dbwrap_do_locked() and only uses
dbwrap_parse_record() and talloc_memdup() when needed.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
dbwrap_watched_record_storev() will handle the high-level storev and
delete; it will find out whether we can remove the record because there is
no value and also no watchers to be stored.
This is no real change for now, as dbwrap_watched_record_wakeup() will
always exit with wrec->watchers.count = 0, but that will change in the next
commits.
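The store-or-delete decision boils down to something like this
(illustrative sketch, invented names):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy decision helper: if neither watchers nor payload remain, the
 * backend record can be deleted instead of stored. */
static bool toy_should_delete(size_t num_watchers, size_t payload_len)
{
    return num_watchers == 0 && payload_len == 0;
}

int main(void)
{
    printf("%d\n", toy_should_delete(0, 0));    /* 1: delete the record */
    printf("%d\n", toy_should_delete(3, 0));    /* 0: keep it for the watchers */
    return 0;
}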
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
We will soon have records with just a number of watchers, but without
payload. These records should not be visible during traverse.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
It will also delete the low-level record in case no watchers
should be stored and no data buffers are given.
This is no real change for now as dbwrap_watched_record_wakeup() will
always exit with wrec->watchers.count = 0, but that will change in the next
commits.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
dbwrap backends are unlikely to be able to store
UINT32_MAX*DBWRAP_WATCHER_BUF_LENGTH bytes in a single record,
and most likely not even in the whole database!
DBWRAP_MAX_WATCHERS = INT32_MAX/DBWRAP_WATCHER_BUF_LENGTH should be
enough, and it makes further changes easier as we don't need to care
about size_t overflows.
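For illustration, a bound derived this way stays comfortably within
size_t on all platforms; the buffer length below is an assumed example
value, not the real DBWRAP_WATCHER_BUF_LENGTH:

#include <stdint.h>
#include <stdio.h>

/* Example value only; the real constant is defined in Samba's dbwrap_watch. */
#define EXAMPLE_WATCHER_BUF_LENGTH 32

/* Capping at INT32_MAX/record-size keeps every count*size product at or
 * below INT32_MAX, so later size_t arithmetic cannot overflow. */
#define EXAMPLE_MAX_WATCHERS (INT32_MAX / EXAMPLE_WATCHER_BUF_LENGTH)

int main(void)
{
    size_t worst_case = (size_t)EXAMPLE_MAX_WATCHERS * EXAMPLE_WATCHER_BUF_LENGTH;

    printf("max watchers: %d, worst-case bytes: %zu\n",
           EXAMPLE_MAX_WATCHERS, worst_case);
    return 0;
}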
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This is never set...
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>