This is common between both implementations of tdb_oob(). It's
faster if we don't have to dereference function pointers.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Fixes:
lib/tdb/common/transaction.c:354:2: warning: Null pointer passed as an argument to a 'nonnull' parameter <--[clang]
memcpy(tdb->transaction->blocks[blk] + off, buf, len);
&
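As an illustration only (not the literal patch), the usual way to silence this
class of warning is to skip the call when there is nothing to copy, since
passing NULL to memcpy() is undefined behaviour even with a zero length:

    /* Hedged sketch, not the actual diff: only call memcpy() when buf is
     * really a source buffer; NULL with len == 0 is still undefined. */
    if (buf != NULL) {
            memcpy(tdb->transaction->blocks[blk] + off, buf, len);
    }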
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Gary Lockyer <gary@catalyst.net.nz>
The call to tdb_repack() inside tdb_transaction_commit()
is an optimization, not part of the transaction itself,
so a failure there due to lock or other errors is not a
fatal error and should not make the caller think the
transaction failed by returning -1.
The tdb transaction itself has finished and been committed
onto stable storage via fsync and all locks released at the
point tdb_repack() is called.
tdb_repack() is only called here as it's a convenient point
to attempt to reduce tdb fragmentation without having to add
a timer call to repack in all users of tdb.
This causes lock ordering issues in Samba, showing up as:
ldb: ltdb: tdb(../private/sam.ldb.d/DC=SAMBA2008R2,DC=EXAMPLE,DC=COM.ldb): tdb_transaction_prepare_commit: failed to upgrade hash locks: Locking error
This is because Samba has multiple tdb databases open, and the lock order between them
is important.
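A minimal sketch of the intended shape of the change (the need_repack variable
and the exact logging are illustrative assumptions, not the literal diff):

    /* Hypothetical tail of tdb_transaction_commit() after this change. */
    if (need_repack && tdb_repack(tdb) != 0) {
            /* Best-effort defragmentation only: the transaction is already
             * durable and unlocked, so don't report -1 to the caller. */
            TDB_LOG((tdb, TDB_DEBUG_WARNING,
                     "tdb_transaction_commit: tdb_repack failed\n"));
    }
    return 0;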
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13952
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Many of those warnings are difficult to fix, but this one was easy :-)
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Autobuild-User(master): Jeremy Allison <jra@samba.org>
Autobuild-Date(master): Thu Mar 22 07:21:44 CET 2018 on sn-devel-144
This fixes a GCC warning.
Signed-off-by: Andreas Schneider <asn@samba.org>
Reviewed-by: Volker Lendecke <vl@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Thu Aug 10 02:26:09 CEST 2017 on sn-devel-144
This will allow callers to avoid their own reference counting of transactions.
Additionally, this will always line up with the actual transaction state, even
in the error cases where tdb cancels the transaction itself.
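Assuming the accessor added here is tdb_transaction_active() (the message does
not name it), caller code can replace its own counter with a direct query:

    /* Hedged usage sketch: ask tdb instead of tracking nesting ourselves. */
    if (!tdb_transaction_active(tdb)) {
            if (tdb_transaction_start(tdb) != 0) {
                    return -1;
            }
    }
    /* ... do work, then tdb_transaction_commit() or _cancel() ... */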
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andreas Schneider <asn@samba.org>
There's no real reason to disallow transactions as the
allrecord lock is also available with mutexes enabled.
ctdbd, for example, requires transactions even on non-persistent databases
opened with TDB_CLEAR_IF_FIRST and TDB_MUTEX_LOCKING.
Bug: https://bugzilla.samba.org/show_bug.cgi?id=11004
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>
This adds optional support for locking based on
shared robust mutexes.
The caller can use the TDB_MUTEX_LOCKING flag
together with TDB_CLEAR_IF_FIRST after verifying
with tdb_runtime_check_for_robust_mutexes() that
it's supported by the current system.
The caller should be aware that using TDB_MUTEX_LOCKING
implies some limitations, e.g. it's not possible to
have multiple read chainlocks on a given hash chain
from multiple processes.
Note that this doesn't make tdb thread-safe!
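A minimal usage sketch under those rules (the flag names and runtime check come
from the text above; hash size, open flags and mode are arbitrary choices for
illustration):

    #include <fcntl.h>
    #include "tdb.h"

    static struct tdb_context *open_db(const char *path)
    {
            int tdb_flags = TDB_CLEAR_IF_FIRST;

            /* Only request mutex locking where robust mutexes really work. */
            if (tdb_runtime_check_for_robust_mutexes()) {
                    tdb_flags |= TDB_MUTEX_LOCKING;
            }
            return tdb_open(path, 0, tdb_flags, O_RDWR | O_CREAT, 0600);
    }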
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
Pair-Programmed-With: Michael Adam <obnox@samba.org>
Signed-off-by: Volker Lendecke <vl@samba.org>
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Michael Adam <obnox@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
We can have offsets > 2G, so use unsigned values. This also fixes other
prints to use native types rather than casts.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Tue May 28 11:22:14 CEST 2013 on sn-devel-104
realloc(NULL, ...) is equivalent to malloc. We are already using this
realloc property for tdb->lockrecs. It should not make any difference
in speed, it just makes for a little simpler code.
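In generic form (a sketch, not the tdb code itself), the simplification looks
like this:

    /* Before: special-case the very first allocation. */
    if (tdb->lockrecs == NULL) {
            newrecs = malloc(sizeof(*newrecs));
    } else {
            newrecs = realloc(tdb->lockrecs,
                              (tdb->num_lockrecs + 1) * sizeof(*newrecs));
    }

    /* After: realloc(NULL, size) is defined to behave like malloc(size). */
    newrecs = realloc(tdb->lockrecs,
                      (tdb->num_lockrecs + 1) * sizeof(*newrecs));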
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Autobuild-User(master): Stefan Metzmacher <metze@samba.org>
Autobuild-Date(master): Tue Feb 19 17:30:13 CET 2013 on sn-devel-104
header.hash_size was the only thing we ever referenced outside of
tdb_open_ex and its direct callees. So this shrinks the tdb_context by
164 bytes.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Autobuild-User(master): Stefan Metzmacher <metze@samba.org>
Autobuild-Date(master): Tue Feb 5 13:18:28 CET 2013 on sn-devel-104
(As suggested by Stefan Metzmacher, based on the change to ntdb.)
Since commit ec96ea690e, we handle the case
where a process dies during a transaction commit. Unfortunately, TDB_NOSYNC
means this no longer works, as it disables the recovery area as well as the
actual msync/fsync. We should do everything except the syncs.
This also means we can do a complete test with $TDB_NO_FSYNC set; just
to get more complete coverage, we disable it explicitly for one test
(where we override the actual sync calls anyway).
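The resulting rule can be sketched as follows (a simplified shape, not the
exact helper in transaction.c):

    /* Hedged sketch: with TDB_NOSYNC the recovery area is still written,
     * but the expensive flush to stable storage is skipped. */
    static int transaction_sync_sketch(struct tdb_context *tdb)
    {
            if (tdb->flags & TDB_NOSYNC) {
                    return 0;
            }
            if (fsync(tdb->fd) != 0) {
                    tdb->ecode = TDB_ERR_IO;
                    return -1;
            }
            return 0;
    }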
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We allocate a new recovery area by expanding the file. But if the
recovery area is already at the end of file (as shown in at least one
client case), we can simply expand the record, rather than freeing it
and creating a new one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Wed Dec 21 06:25:40 CET 2011 on sn-devel-104
If we're expanding because the current recovery area is too small, we
expand only the amount we need. This can quickly lead to exponential
growth when we have a slowly-expanding record (hence a
slowly-expanding transaction size).
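One way out (and presumably the intent here, though the exact factor is an
assumption) is to over-allocate whenever a new recovery area has to be created,
so a slowly growing transaction doesn't force a fresh expansion on every
commit:

    /* Hedged sketch: add headroom rather than allocating exactly what this
     * commit needs; 25% is an illustrative factor, not the patch's value. */
    static tdb_len_t recovery_alloc_size(tdb_len_t needed)
    {
            return needed + needed / 4;
    }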
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I came across a tdb which had wrapped to 4G + 4K, and the contents had been
destroyed by processes which thought it was only 4k long. Fix this by checking
on open, and making tdb_oob() check for wrap itself.
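The wrap check amounts to something like this sketch (field and type names as
used elsewhere in tdb; the real tdb_oob() does more):

    static int oob_check_sketch(struct tdb_context *tdb,
                                tdb_off_t off, tdb_len_t len)
    {
            /* Offsets are 32-bit: off + len can wrap past 4G and then
             * compare as 'small', so catch the wrap explicitly. */
            if (off + len < off || off + len > tdb->map_size) {
                    tdb->ecode = TDB_ERR_IO;
                    return -1;
            }
            return 0;
    }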
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Mon Dec 19 07:52:01 CET 2011 on sn-devel-104
If it's really the recovery area, we can trust the rec_len field, and
don't have to go groping for bitpatterns.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Tue Apr 19 14:15:22 CEST 2011 on sn-devel-104
tdb_repack() is expensive and consumes memory, so we can spend some
effort to see if it's worthwhile. In particular, tdbbackup doesn't
need to repack: it started with an empty database!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tdb transactions were designed to be robust against the machine
powering off, but interestingly were never designed to handle the case
where an administrator kill -9's a process during commit. Because
recovery is only done on tdb_open, processes with the tdb already
mapped will simply use it despite it being corrupt and needing
recovery.
The solution to this is to check for recovery every time we grab a
data lock: we could have gained the lock because a process just died.
This has no measurable cost: here is the time for tdbtorture -s 0 -n 1
-l 10000:
Before:
2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:
2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74
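In sketch form (the helper names here are assumptions about the internals),
the idea is:

    /* Hedged sketch of 'check for recovery whenever we win a data lock'. */
    static int lock_list_sketch(struct tdb_context *tdb, int list, int ltype)
    {
            if (tdb_nest_lock(tdb, lock_offset(list), ltype,
                              TDB_LOCK_WAIT) != 0) {
                    return -1;
            }
            /* We may only hold this lock because its holder was kill -9'd
             * mid-commit: recover before trusting any on-disk structures. */
            if (tdb_needs_recovery(tdb) && tdb_lock_and_recover(tdb) != 0) {
                    tdb_nest_unlock(tdb, lock_offset(list), ltype, false);
                    return -1;
            }
            return 0;
    }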
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The current recovery code truncates the tdb file on recovery. This is
fine if recovery is only done on first open, but is a really bad idea
as we move to allowing recovery on "live" databases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that the transaction code uses the standard allrecord lock, which already
stops us from trying to grab any per-record locks, we no longer need
special no-op lock ops for transactions.
This is a nice simplification: if you see brlock, you know it's really
going to grab a lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tdb_release_extra_locks() is too general: it carefully skips over the
transaction lock, even though the only caller then drops it. Change
this, and rename it to show it's clearly transaction-specific.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that the transaction allrecord lock is the standard one, and is thus
cleaned up in tdb_release_extra_locks(), _tdb_transaction_cancel() doesn't
need to know what type it is.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Centralize locking of all chains of the tdb; rename _tdb_lockall to
tdb_allrecord_lock and _tdb_unlockall to tdb_allrecord_unlock, and
tdb_brlock_upgrade to tdb_allrecord_upgrade.
Then we use this in the transaction code. Unfortunately, if the transaction
code records that it has grabbed the allrecord lock read-only, write locks
will fail, so we treat this upgradable lock as a write lock, and mark it
as upgradable using the otherwise-unused offset field.
One subtlety: now the transaction code is using the allrecord_lock, the
tdb_release_extra_locks() function drops it for us, so we no longer need
to do it manually in _tdb_transaction_cancel.
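In sketch form (the field names follow the description above; the exact values
are assumptions):

    /* Hedged sketch: record the transaction's upgradable read lock as a
     * write lock so later per-record write locks don't fail, and reuse
     * the otherwise-unused offset field to remember it can be upgraded. */
    tdb->allrecord_lock.count = 1;
    tdb->allrecord_lock.ltype = upgradable ? F_WRLCK : ltype;
    tdb->allrecord_lock.off   = upgradable ? 1 : 0;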
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This never nests, so it's overkill, but it centralizes the locking into
lock.c and removes the ugly flag in the transaction code to track whether
we have the lock or not.
Note that we have a temporary hack so this places a real lock, despite
the fact that we are in a transaction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Move locking intelligence back into lock.c, rather than open-coding the
lock release in transaction.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In many places we check whether locks are held: add a helper to do this.
The _tdb_lockall() case has already checked for the allrecord lock, so
the extra work done by tdb_have_extra_locks() is merely redundant.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Because fcntl locks don't nest, we track them in the tdb->lockrecs array
and only place/release them when the count goes to 1/0. We only do this
for record locks, so we simply place the list number (or -1 for the free
list) in the structure.
To generalize this:
1) Put the offset rather than list number in struct tdb_lock_type.
2) Rename _tdb_lock() to tdb_nest_lock, make it non-static and move the
allrecord check out to the callers (except the mark case which doesn't
care).
3) Rename _tdb_unlock() to tdb_nest_unlock(), make it non-static and
move the allrecord out to the callers (except mark again).
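A sketch of the generalized shape (the struct fields follow the text;
find_nestlock() and append_lockrec() are illustrative assumptions):

    /* Hedged sketch of offset-based nested lock tracking. */
    struct tdb_lock_type {
            uint32_t off;     /* byte offset locked (was: list number) */
            uint32_t count;   /* nesting depth for this offset */
            uint32_t ltype;   /* F_RDLCK or F_WRLCK */
    };

    int tdb_nest_lock(struct tdb_context *tdb, uint32_t offset, int ltype,
                      enum tdb_lock_flags flags)
    {
            struct tdb_lock_type *lck = find_nestlock(tdb, offset);

            if (lck != NULL) {
                    lck->count++;            /* already held: just nest */
                    return 0;
            }
            if (tdb_brlock(tdb, ltype, offset, 1, flags) != 0) {
                    return -1;               /* place the real fcntl lock */
            }
            /* remember it so the matching tdb_nest_unlock() releases at 0 */
            return append_lockrec(tdb, offset, ltype);
    }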
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The word global is overloaded in tdb. The global_lock inside struct
tdb_context is used to indicate we hold a lock across all the chains.
Rename it to allrecord_lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The word global is overloaded in tdb. The GLOBAL_LOCK offset is used at
open time to serialize initialization (and by the transaction code to block
open).
Rename it to OPEN_LOCK.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that tdb_open() calls tdb_transaction_cancel() instead of
_tdb_transaction_cancel(), we can make the latter static.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>