The environment variable is a string, but we expect an integer.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15074
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andreas Schneider <asn@samba.org>
The target principal and realm fields of the setpw structure are
supposed to be optional, but in MIT Kerberos they are mandatory. For
better compatibility and ease of testing, fall back to parsing the
simpler structure (containing only the new password) if the MIT
function fails to decode it.
Although the target principal and realm fields should be optional, one
is not supposed to be specified without the other, so we don't have to
deal with the case where only one is specified.
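As a rough sketch of the fallback (the decoder names here are
hypothetical stand-ins for the actual ASN.1 helpers):

/* Try the full setpw structure first (new password plus target
 * principal and realm).  MIT's decoder treats the optional fields
 * as mandatory, so it fails on the short form. */
ret = decode_setpw_req_full(input, &new_password, &target, &realm);
if (ret != 0) {
        /* Fall back to the simpler structure carrying only the new
         * password.  Target and realm are absent together, so there
         * is no one-field-only case to handle. */
        ret = decode_setpw_req_simple(input, &new_password);
}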
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15047
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15049
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15074
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andreas Schneider <asn@samba.org>
To use memcpy(), we need to specify the number of bytes to copy, rather
than the number of ldb_val structures.
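For illustration (buffer names hypothetical):

/* Copy 'count' ldb_val structures: memcpy() takes a byte count, so
 * scale by the element size instead of passing 'count' directly. */
memcpy(dst, src, count * sizeof(struct ldb_val));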
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15008
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Currently, we can crash the server by sending a large number of values
of a specific attribute (such as sAMAccountName) spread across a few
message elements. If val_count is larger than the total number of
elements, we get an access beyond the elements array.
Similarly, we can include unrelated message elements prior to the
message elements of the attribute in question, so that not all of the
attribute's values are copied into the returned elements values array.
This can cause the server to access uninitialised data, likely resulting
in a crash or unexpected behaviour.
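The hardening implied above can be sketched as a hypothetical helper
(not the actual patch): both the counting and the copying must be
bounded by the elements actually present and the expected value count.

static int gather_attr_values(const struct ldb_message *msg,
                              const char *attr,
                              struct ldb_val *values,
                              unsigned val_count)
{
        unsigned i, j, found = 0;

        for (i = 0; i < msg->num_elements; i++) {
                const struct ldb_message_element *el = &msg->elements[i];

                if (ldb_attr_cmp(el->name, attr) != 0) {
                        continue; /* unrelated element, skip it */
                }
                for (j = 0; j < el->num_values; j++) {
                        if (found >= val_count) {
                                /* more values than expected */
                                return LDB_ERR_OPERATIONS_ERROR;
                        }
                        values[found++] = el->values[j];
                }
        }
        if (found != val_count) {
                /* fewer values than expected: don't leave the tail
                 * of 'values' uninitialised */
                return LDB_ERR_OPERATIONS_ERROR;
        }
        return LDB_SUCCESS;
}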
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15008
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
This aims to minimise usage of the error-prone pattern of searching for
a just-added message element in order to make modifications to it (and
potentially finding the wrong element).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Currently, there are many places where we use ldb_msg_add_empty() to add
an empty element to a message, and then call ldb_msg_add_value() or
similar to add values to that element. However, this performs an
unnecessary search of the message's elements to locate the new element.
Moreover, if an element with the same attribute name already exists
earlier in the message, the values will be added to that element,
instead of to the intended newly added element.
A similar pattern exists where we add values to a message, and then
call ldb_msg_find_element() to locate that message element and set its
flags to (e.g.) LDB_FLAG_MOD_REPLACE. This also performs an unnecessary
search, and may locate the wrong message element for setting the flags.
To avoid these problems, add functions for appending a value to a
message, so that a particular value can be added to the end of a message
in a single operation.
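A hedged before/after sketch (ldb_msg_append_value() stands for the
new append-style functions; see ldb.h for the exact signatures):

/* Error-prone pattern: the second call searches the message by name
 * and may find an earlier element with the same attribute. */
ret = ldb_msg_add_empty(msg, "description", LDB_FLAG_MOD_REPLACE, NULL);
if (ret == LDB_SUCCESS) {
        ret = ldb_msg_add_value(msg, "description", &val, NULL);
}

/* Appending variant: the value lands in a new element at the end of
 * the message, flags included, in a single operation. */
ret = ldb_msg_append_value(msg, "description", &val, LDB_FLAG_MOD_REPLACE);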
For ADD requests, it is important that no two message elements share the
same attribute name, otherwise things will break. (Normally,
ldb_msg_normalize() is called before processing the request to help
ensure this.) Thus, we must be careful not to append an attribute to an
ADD message, unless we are sure (e.g. through ldb_msg_find_element())
that an existing element for that attribute is not present.
These functions will be used in the next commit.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Using the newly added ldb flag, we can now detect when a message has
been shallow-copied so that its elements share their values with the
original message elements. Then when adding values to the copied
message, we now make a copy of the shared values array first.
This should prevent a use-after-free that occurred in LDB modules when
new values were added to a shallow copy of a message by calling
talloc_realloc() on the original values array, invalidating the 'values'
pointer in the original message element. The original values pointer can
later be used in the database audit logging module which logs database
requests, and potentially cause a crash.
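A minimal sketch of the hazard (simplified; the real crash path runs
through LDB modules and the audit logging module):

struct ldb_message *copy = ldb_msg_copy_shallow(mem_ctx, msg);
struct ldb_message_element *el = &copy->elements[0];

/* 'copy' shares el->values with 'msg'.  Growing the array on behalf
 * of the copy ... */
el->values = talloc_realloc(copy, el->values, struct ldb_val,
                            el->num_values + 1);

/* ... may move it, leaving msg->elements[0].values dangling, so a
 * later reader of 'msg' hits freed memory. */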
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
When making a shallow copy of an ldb message, mark the message elements
of the copy as sharing their values with the message elements in the
original message.
This flag value will be heeded in the next commit.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
LDB_FLAG_MOD_* values are not actually flags, and the previous
comparison was equivalent to
(el->flags & LDB_FLAG_MOD_MASK) == 0
which is only true if none of the LDB_FLAG_MOD_* values are set, so we
would not successfully return if the element was a DELETE. Correct the
expression to what it was intended to be.
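For context, LDB_FLAG_MOD_ADD, LDB_FLAG_MOD_REPLACE and
LDB_FLAG_MOD_DELETE are the values 1, 2 and 3 under LDB_FLAG_MOD_MASK,
not independent bits, so a bitwise test cannot single one of them out.
A hedged reconstruction of the check:

/* Returns true if the element is a DELETE. */
static bool element_is_delete(const struct ldb_message_element *el)
{
        /* Broken: LDB_FLAG_MOD_DELETE equals the whole mask, so
         * (el->flags & LDB_FLAG_MOD_DELETE) != 0 is true for ADD,
         * REPLACE and DELETE alike, and its negation is true only
         * when no LDB_FLAG_MOD_* value is set at all. */

        /* Correct: mask out the modification type, then compare. */
        return (el->flags & LDB_FLAG_MOD_MASK) == LDB_FLAG_MOD_DELETE;
}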
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
LDB_FLAG_MOD_* values are not actually flags, and the previous
comparison was equivalent to
(el->flags & LDB_FLAG_MOD_MASK) == 0
which is only true if none of the LDB_FLAG_MOD_* values are set. Correct
the expression to what it was probably intended to be.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
LDB_FLAG_MOD_* values are not actually flags, and the previous
comparison was equivalent to
(req_msg->elements[el_idx].flags & LDB_FLAG_MOD_MASK) != 0
which is true whenever any of the LDB_FLAG_MOD_* values are set. Correct
the expression to what it was probably intended to be.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
If an account has an SPN that requires Write Property to set, we should
still be able to delete it with just Validated Write.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15009
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
This gives a nice speed up, as it's unlikely for the waiters to hit
contention.
The following test with 256 connections all looping with open/close
on the same inode (share root) is improved drastically:
smbtorture //127.0.0.1/m -Uroot%test smb2.create.bench-path-contention-shared \
--option='torture:bench_path=' \
--option="torture:timelimit=60" \
--option="torture:nprocs=256"
From something like this:
open[num/s=8800,avslat=0.021445,minlat=0.000095,maxlat=0.179786]
close[num/s=8800,avslat=0.021658,minlat=0.000044,maxlat=0.179819]
to:
open[num/s=10223,avslat=0.017922,minlat=0.000083,maxlat=0.106759]
close[num/s=10223,avslat=0.017694,minlat=0.000040,maxlat=0.107345]
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Autobuild-User(master): Ralph Böhme <slow@samba.org>
Autobuild-Date(master): Tue Jul 26 14:32:35 UTC 2022 on sn-devel-184
In case of a highly contended record we will have a lot of watchers,
which will all race to get g_lock_lock() to finish.
If g_lock_unlock() wakes them all, e.g. 250 of them, we get a thundering
herd, where 249 will only find that one of them was able to get the lock,
and will re-add their watcher entries (quite possibly in a different order).
With this commit we only wake the first watcher and let it remove
itself once it no longer wants to monitor the record content
(at that time it will wake the new first watcher).
This means the woken watcher doesn't have to race with all the others,
and the order of watchers is preserved, which most likely gives a fair
latency distribution across all watchers.
The following test with 256 connections all looping with open/close
on the same inode (share root) is improved drastically:
smbtorture //127.0.0.1/m -Uroot%test smb2.create.bench-path-contention-shared \
--option='torture:bench_path=' \
--option="torture:timelimit=60" \
--option="torture:nprocs=256"
From something like this:
open[num/s=80,avslat=2.793862,minlat=0.004097,maxlat=46.597053]
close[num/s=80,avslat=2.387326,minlat=0.023875,maxlat=50.878165]
to:
open[num/s=8800,avslat=0.021445,minlat=0.000095,maxlat=0.179786]
close[num/s=8800,avslat=0.021658,minlat=0.000044,maxlat=0.179819]
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This will become important in the following commits when the
dbwrap_watch layer will only wake up one watcher at a time
and each woken watcher will wake up the next one.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This makes sure we clean up the locked record in all cases.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This will become important in the following commits when the
dbwrap_watch layer will only wake up one watcher at a time
and each woken watcher will wake up the next one.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Unless the unique_lock_epoch changes via g_lock_lock()/g_lock_unlock()
we try to keep our existing watch instance alive while waiting
for unique_data_epoch to change.
This will become important in the following commits when the
dbwrap_watch layer will only wake up one watcher at a time
and each woken watcher will wake up the next one. Without this
commit we would trigger an endless loop, as none of the watchers
would ever change unique_data_epoch.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
It changes with every lock and unlock.
This will be needed in the future in order to differentiate between
lock changes and data changes.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The key points are:
1. We keep our position in the watcher queue until we have got what
we were waiting for. It means the order is now fair and stable.
2. We only wake up others during g_lock_unlock(), and only if
we detect that a pending exclusive lock is able to make progress.
(Note: read lock holders are never waiters on their own)
This reduces contention on locking.tdb records drastically,
as waiters are no longer woken 3 times (where the first 2 wake-ups were
completely useless).
The following test with 256 connections all looping with open/close
on the same inode (share root) is improved drastically:
smbtorture //127.0.0.1/m -Uroot%test smb2.create.bench-path-contention-shared \
--option='torture:bench_path=' \
--option="torture:timelimit=60" \
--option="torture:nprocs=256"
From something like this:
open[num/s=50,avslat=6.455775,minlat=0.000157,maxlat=55.683846]
close[num/s=50,avslat=4.563605,minlat=0.000128,maxlat=53.585839]
to:
open[num/s=80,avslat=2.793862,minlat=0.004097,maxlat=46.597053]
close[num/s=80,avslat=2.387326,minlat=0.023875,maxlat=50.878165]
Note that the real effect of this commit will be revealed together
with a following commit that only wakes one waiter at a time.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This matches the behavior of g_lock_cleanup_shared(), which also
only operates on the in-memory struct g_lock.
We do a g_lock_store() later during g_lock_trylock() anyway
when we make any progress.
In the case where we are a pending exclusive lock holder,
we now force a g_lock_store() if g_lock_cleanup_dead()
removed the dead blocker.
This will be useful for the following changes...
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The destructor triggered by dbwrap_watched_watch_recv() will
remove the watcher instance via a dedicated dbwrap_do_locked(),
just calling dbwrap_watched_watch_remove_instance() inside.
But the typical caller triggers a dbwrap_do_locked() again after
dbwrap_watched_watch_recv() returned, which means we call
dbwrap_do_locked() twice.
We now allow dbwrap_watched_watch_recv() to return the existing
instance id (if it still exists) and remove the destructor.
That way the caller can pass the given instance id to
dbwrap_watched_watch_remove_instance() from within its own dbwrap_do_locked(),
when it decides to leave the queue, because it's happy with the new
state of the record. In order to get the best performance
dbwrap_watched_watch_remove_instance() should be called before any
dbwrap_record_storev() or dbwrap_record_delete(),
because that will only trigger a single low level storev/delete.
If the caller finds that the state of the record doesn't meet its
expectations and wants to continue watching the
record (from its current position, most likely the first one),
dbwrap_watched_watch_remove_instance() can be skipped and the
instance id can be passed to dbwrap_watched_watch_send() again,
in order to resume waiting on the existing instance.
Currently the watcher instances were always removed (most likely from
the first position) and re-added (to the last position), which may
cause unfair latencies.
In order to reduce the overhead of adding a new watcher instance
the caller can call dbwrap_watched_watch_add_instance() before
any dbwrap_record_storev() or dbwrap_record_delete(), which
will only result in a single low level storev/delete.
The returned instance id is then passed to dbwrap_watched_watch_send(),
within the same dbwrap_do_locked() run.
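A hedged sketch of that caller pattern as a dbwrap_do_locked()
callback (struct my_state and record_state_is_ok() are hypothetical;
the instance id was returned by dbwrap_watched_watch_recv()):

struct my_state {
        uint64_t instance;   /* from dbwrap_watched_watch_recv() */
        TDB_DATA new_value;
        bool resume;         /* rewatch with the same instance id */
};

static void check_fn(struct db_record *rec, TDB_DATA value,
                     void *private_data)
{
        struct my_state *s = private_data;

        if (record_state_is_ok(value)) {
                /* Happy with the record: drop our watcher instance
                 * before storing, so only a single low level
                 * storev/delete hits the backend. */
                dbwrap_watched_watch_remove_instance(rec, s->instance);
                dbwrap_record_storev(rec, &s->new_value, 1, 0);
                return;
        }
        /* Keep our (likely first) queue position: skip the removal
         * and pass s->instance to dbwrap_watched_watch_send() again
         * to resume waiting. */
        s->resume = true;
}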
It also adds a way to avoid alerting any watchers during
the current dbwrap_do_locked() run.
Layers above may want to wake up watchers only in specific
situations, since waking them in other situations is useless.
This will soon be used to add more fairness to the g_lock code.
Note that this commit only prepares the API for the above to be useful;
the instance returned by dbwrap_watched_watch_recv() is most likely 0,
which means the watcher entry was already removed, but that will change
in the following commits.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The new dbwrap_watched_watch_remove_instance() will just remove ourselves
from the in-memory array and let db_watched_record_fini() call
dbwrap_watched_record_storev() in order to write the modified version
into the low level backend record.
For now there's no change in behavior, but it allows us to change it
soon....
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This means we only have one code path storing the low level record,
and that's dbwrap_watched_record_storev() on the main record.
It avoids the nested dbwrap_do_locked() and only uses
dbwrap_parse_record() and talloc_memdup() when needed.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
dbwrap_watched_record_storev() will handle the high level storev and
delete; it will find out whether we can remove the record because there
is no value and also no watchers to be stored.
This is no real change for now, as dbwrap_watched_record_wakeup() will
always exit with wrec->watchers.count = 0, but that will change in the
next commits.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
We will soon have records with just a number of watchers, but without
payload. These records should not be visible during traverse.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
It will also delete the low level record in case no watchers should
be stored and no data buffers are given.
This is no real change for now as dbwrap_watched_record_wakeup() will
always exit with wrec->watchers.count = 0, but that will change in the next
commits.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
dbwrap backends are unlikely to be able to store
UINT32_MAX*DBWRAP_WATCHER_BUF_LENGTH bytes in a single record,
and most likely not even in the whole database!
DBWRAP_MAX_WATCHERS = INT32_MAX/DBWRAP_WATCHER_BUF_LENGTH should be
enough and makes further changes easier as we don't need to care
about size_t overflows.
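As a sketch, the bound from the text (the exact definition in
dbwrap_watch.c may differ):

/* Each watcher occupies DBWRAP_WATCHER_BUF_LENGTH bytes in the
 * record, so this cap keeps num_watchers * DBWRAP_WATCHER_BUF_LENGTH
 * comfortably inside size_t range. */
#define DBWRAP_MAX_WATCHERS (INT32_MAX/DBWRAP_WATCHER_BUF_LENGTH)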
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This is never set...
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
At the end of dbwrap_watched_watch_recv() all temporary state should
be destroyed. It also means dbwrap_watched_watch_state_destructor() was
triggered.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Async functions always have their 'state' context for temporary memory.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This will help in the next commits.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This will be used in other places soon.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This removes quite a bit of complexity and will make further changes
(which will follow soon) easier.
Review with git show --patience
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
This is no change in behavior, because:
- The first dbwrap_do_locked(dbwrap_watched_record_wakeup_fn) is
called at the start of dbwrap_watched_record_{storev,delete}().
That means the nested dbwrap_do_locked() will pass the
exact same (unchanged) value to dbwrap_watched_record_wakeup_fn.
- After the first change we have either removed the whole backend
record in dbwrap_watched_record_delete(), or dbwrap_watched_record_storev()
removed all watchers and stored num_watchers = 0.
- With that, any further updates will have no watchers in the backend
record, so dbwrap_do_locked(dbwrap_watched_record_wakeup_fn) will
never do anything useful. It only burns CPU time and may cause memory
fragmentation.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
That makes it easier to understand that db_watched_record_init() and
db_watched_record_fini() wrap any caller activity on the record,
either during do_locked or between fetch_locked and the related
destructor.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
The code to construct a struct db_watched_record is mostly common
between dbwrap_watched_fetch_locked() and dbwrap_watched_do_locked_fn().
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
dbwrap_watched_do_locked_{storev,delete}() are now exactly the
same as dbwrap_watched_{storev,delete}().
We only need to know if dbwrap_watched_record_wakeup() is called from
within dbwrap_watched_do_locked_fn().
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
Both dbwrap_watched_record_storev() and dbwrap_watched_record_delete()
call dbwrap_watched_record_wakeup() as their first action.
So the behavior stays the same, but dbwrap_watched_do_locked_storev()
and dbwrap_watched_do_locked_delete() are now trivial, and we
have the wakeup logic isolated in dbwrap_watched_record_wakeup() only.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15125
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>