In the child, we fully re-open serverid.tdb, which leads to one fcntl lock for
CLEAR_IF_FIRST detection per smbd. This patch opens the tdb in the parent and
holds it open, so that tdb_reopen_all correctly catches the CLEAR_IF_FIRST bit.
In the child, we fully re-open messaging.tdb, which leads to one fcntl lock for
CLEAR_IF_FIRST detection per smbd. This patch opens the tdb in the parent and
holds it open, so that tdb_reopen_all correctly catches the CLEAR_IF_FIRST bit.
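A minimal sketch of the pattern using the plain tdb API (the file name, flags
and error handling are illustrative, not Samba's actual dbwrap code):

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <tdb.h>

    int main(void)
    {
            /* The parent opens the tdb with CLEAR_IF_FIRST and keeps the
             * handle, so the fcntl lock used for CLEAR_IF_FIRST detection
             * is held once for the lifetime of the parent rather than
             * being re-taken by every forked child. */
            struct tdb_context *tdb = tdb_open("serverid.tdb", 0,
                                               TDB_CLEAR_IF_FIRST,
                                               O_RDWR | O_CREAT, 0644);
            if (tdb == NULL) {
                    perror("tdb_open");
                    return 1;
            }

            if (fork() == 0) {
                    /* Child: re-open all inherited tdb handles. Passing 1
                     * means "the parent is long-lived and keeps its handle
                     * open", so the CLEAR_IF_FIRST state is carried over
                     * correctly instead of being re-triggered. */
                    if (tdb_reopen_all(1) != 0) {
                            fprintf(stderr, "tdb_reopen_all failed\n");
                            _exit(1);
                    }
                    /* ... child uses the tdb as usual ... */
                    _exit(0);
            }

            /* Parent would normally enter its main loop here, keeping the
             * handle open for its whole lifetime. */
            return 0;
    }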
If thousands of smbds try to gencache_stabilize at the same time because the
network died, all of them might be sitting in transaction_start. Don't do the
stabilize transaction if nothing has changed in gencache_notrans.tdb.
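One way to implement the "nothing has changed" check is to compare tdb
sequence numbers; a sketch, assuming gencache_notrans.tdb was opened with
TDB_SEQNUM (the handle and the cached variable below are hypothetical):

    #include <stdbool.h>
    #include <tdb.h>

    /* Hypothetical handle to gencache_notrans.tdb, opened elsewhere with
     * the TDB_SEQNUM flag so that tdb_get_seqnum() is meaningful. */
    extern struct tdb_context *gencache_notrans;

    /* Sequence number seen at the last successful stabilize. */
    static int last_stabilized_seqnum = -1;

    bool gencache_stabilize_sketch(void)
    {
            int seqnum = tdb_get_seqnum(gencache_notrans);

            if (seqnum == last_stabilized_seqnum) {
                    /* Nothing was written since the last stabilize, so
                     * skip the expensive transaction entirely instead of
                     * piling up in transaction_start. */
                    return true;
            }

            /* ... transaction on gencache.tdb: copy the entries over,
             * commit, and only then remember the new sequence number ... */

            last_stabilized_seqnum = seqnum;
            return true;
    }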
Volker
Fix this by moving canonicalization into lib/sharesec.c. Update the
db version to 3. This ensures we always find share names with security
descriptors attached.
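The canonicalization itself boils down to lower-casing the share name before
it is used as the record key; a sketch (the key prefix and helper name are
illustrative):

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Build the canonical key under which a share's security descriptor
     * is stored: a fixed prefix plus the lower-cased share name, so that
     * "Public", "PUBLIC" and "public" all map to the same record. */
    static char *sharesec_key(const char *servicename)
    {
            const char *prefix = "SECDESC/";        /* illustrative */
            size_t len = strlen(prefix) + strlen(servicename) + 1;
            char *key = malloc(len);
            size_t i;

            if (key == NULL) {
                    return NULL;
            }
            snprintf(key, len, "%s%s", prefix, servicename);

            /* Canonicalize the share name part of the key. */
            for (i = strlen(prefix); key[i] != '\0'; i++) {
                    key[i] = (char)tolower((unsigned char)key[i]);
            }
            return key;
    }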
Jeremy.
Open winbindd_cache.tdb with read/write access when validating the cache;
otherwise, validation fails to get the lock in tdb_check. This results in
validation failure even when the cache is good.
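A sketch of the validation open using the raw tdb API (path and flags are
illustrative): tdb_check needs to take locks on the database, which is why a
read-only open is not enough here.

    #include <stdio.h>
    #include <fcntl.h>
    #include <tdb.h>

    /* Per-record callback: accept every record; tdb_check() itself
     * verifies the structural integrity of the database. */
    static int check_fn(TDB_DATA key, TDB_DATA data, void *private_data)
    {
            return 0;
    }

    /* Validate a cache tdb. Per the commit above, the tdb has to be
     * opened read/write so that tdb_check() can take the locks it
     * needs; with O_RDONLY the check fails even on a healthy cache. */
    int validate_cache_sketch(const char *path)
    {
            struct tdb_context *tdb =
                    tdb_open(path, 0, TDB_DEFAULT, O_RDWR, 0600);

            if (tdb == NULL) {
                    fprintf(stderr, "could not open %s\n", path);
                    return -1;
            }
            if (tdb_check(tdb, check_fn, NULL) != 0) {
                    fprintf(stderr, "tdb_check failed on %s\n", path);
                    tdb_close(tdb);
                    return -1;
            }
            tdb_close(tdb);
            return 0;
    }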
Signed-off-by: Bo Yang <boyang@samba.org>
When a samba server process dies hard, it has no chance to clean up its entries
in locking.tdb, brlock.tdb, connections.tdb and sessionid.tdb.
For locking.tdb and brlock.tdb, Samba is robust: every time we read an entry
from the database, we check whether the corresponding process still exists. If
it does not exist anymore, the entry is deleted. This is not 100% failsafe though:
On systems with a limited PID space there is a non-zero chance that between the
smbd's death and the fresh access, the PID is recycled by another long-running
process. This renders all files that had been locked by the killed smbd
potentially unusable until the new process also dies.
This patch is supposed to fix the problem the following way: Every process ID
in every database is augmented by a random 64-bit number that is stored in
serverid.tdb. Whenever we need to check if a process still exists we know its
PID and the 64-bit number. We look up the PID in serverid.tdb and compare the
64-bit number. If it's the same, the process is still a valid smbd holding the
lock. If it is different, a new smbd has taken over.
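A sketch of the check (the record layout and names are simplified; the real
struct server_id carries more fields):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <tdb.h>

    /* Simplified record stored under the PID in serverid.tdb. */
    struct serverid_rec {
            uint64_t unique_id;     /* random number chosen at registration */
    };

    /* Does (pid, unique_id) still refer to a live, registered smbd? */
    static bool serverid_exists_sketch(struct tdb_context *serverid_tdb,
                                       pid_t pid, uint64_t unique_id)
    {
            TDB_DATA key = { (unsigned char *)&pid, sizeof(pid) };
            TDB_DATA val;
            struct serverid_rec rec;

            if (kill(pid, 0) == -1) {
                    /* PID is gone entirely (EPERM subtleties ignored
                     * in this sketch). */
                    return false;
            }

            val = tdb_fetch(serverid_tdb, key);
            if (val.dptr == NULL || val.dsize != sizeof(rec)) {
                    free(val.dptr);
                    return false;   /* not registered or already cleaned up */
            }
            memcpy(&rec, val.dptr, sizeof(rec));
            free(val.dptr);

            /* Same PID but a different unique_id means the PID has been
             * recycled by a new smbd: the old lock holder is dead. */
            return rec.unique_id == unique_id;
    }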
I believe this is safe against an smbd that has died hard and whose PID has
been taken over by a non-Samba process. Such a process would not have
registered itself with a fresh 64-bit number in serverid.tdb, so the old one
still exists in serverid.tdb. We protect against this case by having the parent
smbd take care of deregistering PIDs from serverid.tdb, and by the fact that
serverid.tdb is CLEAR_IF_FIRST.
CLEAR_IF_FIRST does not work in a cluster, so the automatic cleanup does not
work when all smbds are restarted. For this, "net serverid wipe" has to be run
before smbd starts up. As a convenience, "net serverid wipedbs" also cleans up
sessionid.tdb and connections.tdb.
While there, this also cleans up overloading connections.tdb with all the
process entries just for messaging_send_all().
Volker
This reverts commit a6ae7a552f.
This fixes bug #7222 (All users have full rights on all shares) (CVE-2010-0728).
(cherry picked from commit 1c9494c76c)
In a cluster, this makes a large difference: For r/w traverse, we have to do a
fetch_locked on every record, which for most users of connections_forall is
just overkill.
connections_forall is called from count_current_connections(), which
potentially deletes dead records. This needs r/w access to connections.tdb.
connections_traverse says it does not provide this. It does not really matter
in the smbd case, because we have already opened it r/w, so this is "just"
cleanup.
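At the tdb level the distinction looks roughly like this (callback and handle
are placeholders): a read-only traverse never needs the per-record write locks
that a r/w traverse implies, which is what makes it so much cheaper, especially
on a clustered backend.

    #include <stdbool.h>
    #include <tdb.h>

    /* Placeholder callback: just count the records. */
    static int count_fn(struct tdb_context *tdb, TDB_DATA key,
                        TDB_DATA value, void *private_data)
    {
            int *count = (int *)private_data;
            (*count)++;
            return 0;               /* keep traversing */
    }

    /* tdb analogue of the r/w vs read-only traverse distinction:
     * tdb_traverse_read() only takes read locks, while tdb_traverse()
     * allows records to be modified or deleted during the walk and is
     * therefore more expensive. */
    static int count_records(struct tdb_context *tdb, bool rw)
    {
            int count = 0;

            if (rw) {
                    tdb_traverse(tdb, count_fn, &count);
            } else {
                    tdb_traverse_read(tdb, count_fn, &count);
            }
            return count;
    }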
In g_lock_unlock we have a little race between the process_exists and
messaging_send calls: We only send to 5 waiters now, and all of them might have
died between us checking their existence and sending the message. This change
makes g_lock_lock retry at least once every minute.
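A sketch of the retry shape (the try-lock and wait primitives below are
hypothetical stand-ins for Samba's g_lock/tevent code):

    #include <stdbool.h>

    /* Hypothetical primitives standing in for the real g_lock code. */
    extern bool g_lock_try(const char *name);          /* true when acquired */
    extern bool wait_for_wakeup(unsigned int seconds); /* true on message   */

    /* Acquire the lock, retrying on a wakeup message but never waiting
     * longer than 60 seconds: even if the wakeup was lost because the
     * notified waiters died, the timeout gets us going again. */
    bool g_lock_lock_sketch(const char *name)
    {
            for (;;) {
                    if (g_lock_try(name)) {
                            return true;
                    }
                    /* Returns early on a wakeup message, otherwise after
                     * at most one minute; either way we retry. */
                    (void)wait_for_wakeup(60);
            }
    }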
Only notify the first 5 pending lock waiters. This avoids a thundering herd
problem that is really nasty in a cluster. It also makes acquiring a lock a bit
more FIFO: lock waiters are added to the end of the array.
Only check the existence of the lock owner in g_lock_parse; check the rest of
the records only once we have got the lock successfully. This reduces the load
on process_exists, which can involve a network roundtrip in the clustered case.
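A sketch of the notification cap (the waiter array and messaging call are
simplified stand-ins): because waiters are appended at the end, waking only the
first few is roughly FIFO and avoids waking everyone at once.

    #include <stddef.h>
    #include <sys/types.h>

    #define G_LOCK_MAX_NOTIFY 5     /* only wake the first few waiters */

    /* Hypothetical stand-in for sending a "retry the lock" message. */
    extern void send_retry_message(pid_t pid);

    /* Waiters queue at the end of the array, so the front holds the
     * longest-waiting processes. Notifying only the first
     * G_LOCK_MAX_NOTIFY of them keeps wakeups roughly FIFO and avoids
     * a thundering herd when the lock is released. */
    static void notify_waiters(const pid_t *waiters, size_t num_waiters)
    {
            size_t i, n = num_waiters;

            if (n > G_LOCK_MAX_NOTIFY) {
                    n = G_LOCK_MAX_NOTIFY;
            }
            for (i = 0; i < n; i++) {
                    send_retry_message(waiters[i]);
            }
    }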