When we have a system talloc library, we still need to grab pytalloc.h
from lib/talloc. We don't want to just use -Ilib/talloc, as otherwise
we'll get the in-tree talloc.h, which may not be compatible with the
system talloc.h.
So we need to give the path to pytalloc.h explicitly.
This allows us to re-initialise a tevent context without destroying
the pointer. That means that if someone keeps a long-term pointer to the
event context across a fork, it will still work.
This also brings the memory handling in the single and standard process
models much closer together, which means fewer bugs that slip past
make test.
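A minimal sketch of the intended usage, assuming the tevent_re_initialise()
entry point (the function name is an assumption here, not quoted from the
patch):

    /* Hedged sketch: keep one long-lived tevent context pointer across a
     * fork and re-initialise it in the child rather than recreating it.
     * tevent_re_initialise() is assumed to be the entry point added here. */
    #include <tevent.h>

    struct tevent_context *ev;   /* long-term pointer held by parent code */

    static void after_fork_child(void)
    {
            /* The pointer itself stays valid; only its internals are reset. */
            if (tevent_re_initialise(ev) != 0) {
                    /* handle error */
            }
    }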
We had a crash bug where a cached copy of an iconv convenience pointer
was used after being freed when loadparm asked for iconv to
reload. This could happen if a python module used an iconv-based
function before loadparm had completed.
The fix is to ensure that any use of this pointer remains valid, by
reusing the pointer itself when it has already been initialised, but
filling in the child elements with the updated values.
In smbd there's a small gap between TALLOC_FREE(frame); and the call to
smbd_parent_loop() where we don't have a valid talloc stackframe.
smbd_parent_loop() calls talloc_stackframe() only within its while(1) loop.
As DEBUG(2,("waiting for connections")) uses talloc_tos() to construct
the time header for the debug message, we crash on some systems.
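A hypothetical sketch of the kind of fix implied here (not the actual smbd
patch, and assuming the usual Samba includes for DEBUG, talloc_stackframe()
and TALLOC_FREE()): hold a stackframe across the DEBUG() call so that
talloc_tos() always has a frame to allocate the timestamp header on.

    /* Hypothetical sketch: keep a talloc stackframe alive around the
     * DEBUG() call so talloc_tos() never runs without one. */
    static void parent_wait_banner(void)
    {
            TALLOC_CTX *frame = talloc_stackframe();
            DEBUG(2, ("waiting for connections\n"));
            TALLOC_FREE(frame);
    }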
metze
tdb transactions were designed to be robust against the machine
powering off, but interestingly were never designed to handle the case
where an administrator kill -9's a process during commit. Because
recovery is only done on tdb_open, processes with the tdb already
mapped will simply use it despite it being corrupt and needing
recovery.
The solution to this is to check for recovery every time we grab a
data lock: we could have gained the lock because a process just died.
This has no measurable cost: here is the time for tdbtorture -s 0 -n 1
-l 10000:
Before:
2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:
2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
To test the case of death of a process during transaction commit, add
a -k (kill random) option to tdbtorture. The easiest way to do this
is to make every worker a child (unless there's only one child), which
is why this patch is bigger than you might expect.
Using -k without -t (always use transactions), you expect corruption,
though it doesn't happen every time. With -t, we currently get corruption,
but the next patch fixes that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The current recovery code truncates the tdb file on recovery. This is
fine if recovery is only done on first open, but is a really bad idea
as we move to allowing recovery on "live" databases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that the transaction code uses the standard allrecord lock, that stops
us from trying to grab any per-record locks anyway. We don't need
special no-op lock ops for transactions.
This is a nice simplification: if you see brlock, you know it's really
going to grab a lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tdb_release_extra_locks() is too general: it carefully skips over the
transaction lock, even though the only caller then drops it. Change
this, and rename it to show it's clearly transaction-specific.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that the transaction allrecord lock is the standard one, and thus is
cleaned up in tdb_release_extra_locks(), _tdb_transaction_cancel() doesn't
need to know what type it is.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Centralize locking of all chains of the tdb; rename _tdb_lockall to
tdb_allrecord_lock and _tdb_unlockall to tdb_allrecord_unlock, and
tdb_brlock_upgrade to tdb_allrecord_upgrade.
Then we use this in the transaction code. Unfortunately, if the transaction
code records that it has grabbed the allrecord lock read-only, write locks
will fail, so we treat this upgradable lock as a write lock, and mark it
as upgradable using the otherwise-unused offset field.
One subtlety: now the transaction code is using the allrecord_lock, the
tdb_release_extra_locks() function drops it for us, so we no longer need
to do it manually in _tdb_transaction_cancel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Records themselves get (read) locked by the traversal code against delete.
Interestingly, this locking isn't done when the allrecord lock has been
taken, though the allrecord lock until recently didn't cover the actual
records (it now goes to end of file).
The write record lock, grabbed by the delete code, is not suppressed
by the allrecord lock. This is now bad: it causes us to punch a hole
in the allrecord lock when we release the write record lock. Make this
consistent: *no* record locks of any kind when the allrecord lock is
taken.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were previously inconsistent with our "global" lock: the
transaction code grabbed it from FREELIST_TOP to end of file, and the
rest of the code grabbed it from FREELIST_TOP to end of the hash
chains. Change it to always grab to end of file for simplicity and
so we can merge the two.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was redundant before this patch series: it mirrored num_lockrecs
exactly. It still does.
Also, skip useless branch when locks == 1: unconditional assignment is
cheaper anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is pure overhead, but it centralizes the locking. Realloc (especially
as most implementations are lazy) is fast compared to the fcntl anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Use our newly-generic nested lock tracking for the active lock.
Note that the tdb_have_extra_locks() and tdb_release_extra_locks()
functions have to skip over this lock now that it is tracked.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This never nests, so it's overkill, but it centralizes the locking into
lock.c and removes the ugly flag in the transaction code to track whether
we have the lock or not.
Note that we have a temporary hack so this places a real lock, despite
the fact that we are in a transaction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rather than a boutique lock and a separate nest count, use our
newly-generic nested lock tracking for the transaction lock.
Note that the tdb_have_extra_locks() and tdb_release_extra_locks()
functions have to skip over this lock now that it is tracked.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Factor out two loops which find locks; we are going to introduce a couple
more so a helper makes sense.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Move locking intelligence back into lock.c, rather than open-coding the
lock release in transaction.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In many places we check whether locks are held: add a helper to do this.
The _tdb_lockall() case has already checked for the allrecord lock, so
the extra work done by tdb_have_extra_locks() is merely redundant.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tdb_transaction_lock() and tdb_transaction_unlock() do nothing if we
hold the allrecord lock. However, the two locks don't overlap, so
this is wrong.
This simplification makes the transaction lock a straight-forward nested
lock.
There are two callers for these functions:
1) The transaction code, which already makes sure the allrecord_lock
isn't held.
2) The traverse code, which wants to stop transactions whether it has the
allrecord lock or not. There have been deadlocks here before, however
this should not bring them back (I hope!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Because fcntl locks don't nest, we track them in the tdb->lockrecs array
and only place/release them when the count goes to 1/0. We only do this
for record locks, so we simply place the list number (or -1 for the free
list) in the structure.
To generalize this:
1) Put the offset rather than the list number in struct tdb_lock_type
(see the sketch below the list).
2) Rename _tdb_lock() to tdb_nest_lock, make it non-static and move the
allrecord check out to the callers (except the mark case which doesn't
care).
3) Rename _tdb_unlock() to tdb_nest_unlock(), make it non-static and
move the allrecord check out to the callers (except mark again).
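A rough sketch of step 1); the field names are assumed rather than copied
from the tree:

    /* Rough sketch (field names assumed): the per-lock record now carries
     * a byte offset instead of a hash-list number, so any fcntl lock in
     * the file can be tracked the same way. */
    struct tdb_lock_type {
            uint32_t off;    /* byte offset of the lock (was: list number) */
            uint32_t count;  /* nesting count for this offset */
            uint32_t ltype;  /* F_RDLCK or F_WRLCK */
    };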
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The word global is overloaded in tdb. The global_lock inside struct
tdb_context is used to indicate we hold a lock across all the chains.
Rename it to allrecord_lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The word global is overloaded in tdb. The GLOBAL_LOCK offset is used at
open time to serialize initialization (and by the transaction code to block
open).
Rename it to OPEN_LOCK.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that tdb_open() calls tdb_transaction_cancel() instead of
_tdb_transaction_cancel(), we can make the latter static.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is taken from the CCAN code base: rather than using tdb_brlock for
locking and unlocking, we split it into brlock and brunlock functions.
For extra debugging information, brunlock says what kind of lock it is
unlocking (even though fcntl locks don't need this). This requires an
extra argument to tdb_transaction_unlock() so we know whether the
lock was upgraded to a write lock or not.
We also use a "flags" argument to tdb_brlock (see the sketch below the list):
1) TDB_LOCK_NOWAIT replaces lck_type = F_SETLK (vs F_SETLKW).
2) TDB_LOCK_MARK_ONLY replaces setting TDB_MARK_LOCK bit in ltype.
3) TDB_LOCK_PROBE replaces the "probe" argument.
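A sketch of what such a flags argument could look like; the numeric values
are illustrative, not taken from tdb_private.h:

    /* Illustrative sketch of the flags named above; values are assumed. */
    enum tdb_lock_flags {
            TDB_LOCK_NOWAIT    = 0,  /* fail instead of blocking (F_SETLK) */
            TDB_LOCK_WAIT      = 1,  /* block until granted (F_SETLKW) */
            TDB_LOCK_MARK_ONLY = 2,  /* record the lock without placing it */
            TDB_LOCK_PROBE     = 4,  /* probing: expected failures aren't logged */
    };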
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Which was:
tsocket/bsd: fix bug #7115 FreeBSD includes the UDP header in FIONREAD
Metze, this has to have been wrong - you are throwing away the pointer
returned by talloc_realloc(). There is also no error checking. Please review.
Thank goodness for gcc warnings :-).
Jeremy.
This is needed because we can't use sizeof(sockaddr_storage) for AF_UNIX
sockets. Also, some platforms require exact values for AF_INET and AF_INET6.
metze
This allows us to run a child command in an async fashion, with
control over logging of stdout and stderr (which appears in the Samba
log file). This is useful for ensuring we don't miss important
messages from rndc commands (for example).
Pair-Programmed-With: Andrew Bartlett <abartlet@samba.org>
This changes the meaning of the ->prev pointer in our doubly linked
lists to point at the end of the list from the front of the list. That
allows us to implement DLIST_ADD_END() and related functions in O(1)
time, which can be a huge saving in many places in Samba.
This also means that the 'type' argument to various DLIST_*() macros
is no longer needed, but I have left it in for now to keep the
patchset small, which will make it easier to revert if any problems
are found. In the future we should remove the 'type' arguments.
(jra. Move the one use of DLIST_TAIL over to the new macros).
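A minimal sketch of the convention (not the actual dlinklist.h macros): the
head element's ->prev points at the tail, so appending needs no list walk.

    /* Minimal sketch, not the real DLIST_*() macros: the head's ->prev
     * points at the tail element, which makes append O(1). */
    struct item {
            struct item *next, *prev;
            /* payload ... */
    };

    static void list_add_end(struct item **list, struct item *p)
    {
            p->next = NULL;
            if (*list == NULL) {
                    p->prev = p;              /* single element is its own tail */
                    *list = p;
                    return;
            }
            p->prev = (*list)->prev;          /* old tail */
            (*list)->prev->next = p;
            (*list)->prev = p;                /* head->prev now points at new tail */
    }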
If a process (or the machine) dies just after writing the
recovery head (pointing at the end of file), the recovery record will be
filled with 0x42. This will not invoke a recovery on open, since rec.magic
!= TDB_RECOVERY_MAGIC.
Unfortunately, the first transaction commit will happily reuse that
area: tdb_recovery_allocate() doesn't check the magic. The recovery
record has length 0x42424242, and it writes that back into the
now-valid-looking transaction header for the next comer (which
happens to be tdb_wipe_all in my tests).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This should make it easier to keep all release scripts aligned, as it will
reduce the difference between them to, ideally, a few variables.
This also moves the tdb script into the scripts directory.
On my older CentOS 4 installation I had a problem with the missing
substitution prototypes ("uwrap_*"), so I added them to "uid_wrapper.h".
Also, I made the head of the "uid_wrapper.c" file more like the one of
"nss_wrapper.c" - it shouldn't change that much, I did it only to be consistent.
This patch should fix the build on older distributions while keeping it
running on newer ones.
The includes of the UID wrapper headers weren't really efficient, according
to metze's post on the technical mailing list (http://lists.samba.org/archive/samba-technical/2010-February/069165.html).
To achieve this, move the "uid_wrapper.h" includes into "lib/util/unix_privs.c",
"lib/util/util.c", "ntvfs/posix/pvfs_acl.c" and "ntvfs/unixuid/vfs_unixuid.c".
There was a bug in tdb where the
tdb_brlock(tdb, GLOBAL_LOCK, F_UNLCK, F_SETLKW, 0, 1);
(ending the transaction-"mutex") was done before the
/* remove the recovery marker */
This means that when a transaction is committed there is a window where another
opener of the file sees the transaction marker while the transaction committer
is still fully functional and working on it. This led to the transaction being
rolled back by that second opener of the file while transaction_commit() gave
no error to the caller.
This patch moves the F_UNLCK to after the recovery marker was removed, closing
this window.
This was moved from the schema_query code. It will now be used in more
than one place, so best to make it a library macro. I think there are
quite a few places that could benefit from this.
If ALWAYS_REALLOC is defined and we are to 'shrink' a memory block,
memcpy() will write outside the memory just allocated.
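A hedged sketch of the issue and the fix; the function name is illustrative,
not the actual talloc internals: when realloc is simulated with
malloc+memcpy, the copy must be bounded by the smaller of the two sizes.

    #include <stdlib.h>
    #include <string.h>

    /* Hedged sketch, not the real talloc code: copying the *old* size into
     * a smaller new block writes past the allocation, so copy the minimum. */
    static void *simulated_realloc(void *old_ptr, size_t old_size, size_t new_size)
    {
            void *new_ptr = malloc(new_size);
            if (new_ptr == NULL) {
                    return NULL;
            }
            memcpy(new_ptr, old_ptr, old_size < new_size ? old_size : new_size);
            free(old_ptr);
            return new_ptr;
    }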
Signed-off-by: Andrew Tridgell <tridge@samba.org>
We need to keep TDB_ALLOW_NESTING as default behavior,
so that existing code continues to work.
However we may change the default together with a major version
number change in future.
metze
Make the default be that transaction nesting is not allowed, so any attempt
to create a nested transaction will fail with TDB_ERR_NESTING.
If an application can cope with transaction nesting and the implicit
semantics of tdb_transaction_commit(), it can enable transaction nesting
by using the TDB_ALLOW_NESTING flag.
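A hedged usage sketch, assuming the tdb_open()/tdb_transaction_start() API
and the flag and error names given above:

    #include <fcntl.h>
    #include <stdio.h>
    #include <tdb.h>

    /* Hedged usage sketch: nesting must be requested with TDB_ALLOW_NESTING;
     * without the flag the inner start fails with TDB_ERR_NESTING. */
    int main(void)
    {
            struct tdb_context *tdb = tdb_open("nest.tdb", 0, TDB_ALLOW_NESTING,
                                               O_RDWR | O_CREAT, 0600);
            if (tdb == NULL) {
                    return 1;
            }
            if (tdb_transaction_start(tdb) != 0) {
                    return 1;
            }
            if (tdb_transaction_start(tdb) != 0) {
                    /* Without TDB_ALLOW_NESTING we would land here and
                     * tdb_error(tdb) would be TDB_ERR_NESTING. */
                    fprintf(stderr, "nested start failed: %s\n", tdb_errorstr(tdb));
            } else {
                    tdb_transaction_commit(tdb);   /* inner commit */
            }
            tdb_transaction_commit(tdb);           /* outer commit */
            tdb_close(tdb);
            return 0;
    }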
(cherry picked from ctdb commit 3e49e41c21eb8c53084aa8cc7fd3557bdd8eb7b6)
Signed-off-by: Stefan Metzmacher <metze@samba.org>
The difference from the previous test for str_list_unique() is
that this test allows the number of elements and the number
of duplicates to be supplied on the command line using
--option="list_unique:count=47"
--option="list_unique:dups=7"
Rather than have a repeat of the bugs we found at the plugfest where
hexadecimal strings must be in upper or lower case in particular
places, ensure that each caller chooses which case they want.
This reverts most of the callers back to upper case, as things were
before tridge's patch. The critical call in the extended DN code is
of course handled in lower case.
Andrew Bartlett
This is intended to replace our rfc1738_unescape(), and give us an
rfc1738_escape implementation (and hopefully is better tested and more
secure).
Andrew Bartlett
While studying tdb, I've noticed a couple of mismatches between the README
and the actual code:
- tdb_open_ex changed its log_fn argument to log_ctx
- there is now no tdb_update(), which it seems was transformed into
non-exported tdb_update_hash()
There were other mismatches, but I don't remember them now, sorry.
Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The reason I do it is that when using the older python-tdb shipped in
Debian Lenny, the python interpreter crashes on this test:
(gdb) bt
#0 0xb7f8c424 in __kernel_vsyscall ()
#1 0xb7df5640 in raise () from /lib/i686/cmov/libc.so.6
#2 0xb7df7018 in abort () from /lib/i686/cmov/libc.so.6
#3 0xb7e3234d in __libc_message () from /lib/i686/cmov/libc.so.6
#4 0xb7e38624 in malloc_printerr () from /lib/i686/cmov/libc.so.6
#5 0xb7e3a826 in free () from /lib/i686/cmov/libc.so.6
#6 0xb7b39c84 in tdb_close () from /usr/lib/libtdb.so.1
#7 0xb7b43e14 in ?? () from /var/lib/python-support/python2.5/_tdb.so
#8 0x0a038d08 in ?? ()
#9 0x00000000 in ?? ()
master's pytdb does not (we have a check for self->closed in obj_close()),
but still...
Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
So that erroneous double tdb_close() calls do not try to close() the same
fd again. This is like SAFE_FREE(), but for the fd.
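A sketch of the idiom; the macro name here is an assumption by analogy with
SAFE_FREE(), not necessarily what the patch uses:

    #include <unistd.h>

    /* Sketch of the idiom (macro name assumed): close the fd once and mark
     * it invalid so a second tdb_close() can't close someone else's fd. */
    #define SAFE_CLOSE(fd) do {      \
            if ((fd) != -1) {        \
                    close(fd);       \
                    (fd) = -1;       \
            }                        \
    } while (0)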
Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We no longer use swig for pytdb, so there is no need for swig make
rules. Also, the pytdb.c header should be updated.
Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
ctdb wants a quick way to detect corrupt tdbs; particularly, tdbs with
loops in their hash chains. tdb_check() provides this.
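A hedged usage sketch of tdb_check(); treat the callback signature shown
here as an assumption rather than a definitive statement of the API:

    #include <stdio.h>
    #include <tdb.h>

    /* Hedged usage sketch: the per-record callback returns 0 if the record
     * looks sane; tdb_check() returns 0 when the database (including its
     * hash chains) checks out. */
    static int check_record(TDB_DATA key, TDB_DATA data, void *private_data)
    {
            (void)key; (void)data; (void)private_data;
            return 0;               /* record is acceptable */
    }

    static int database_is_healthy(struct tdb_context *tdb)
    {
            if (tdb_check(tdb, check_record, NULL) != 0) {
                    fprintf(stderr, "tdb corrupt: %s\n", tdb_errorstr(tdb));
                    return 0;
            }
            return 1;
    }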
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was a regrettable hack which I used to reduce line count in tdb; in fact it caused confusion as can be seen in this patch.
In particular, ecode now needs to be set before TDB_LOG anyway, and having it exposed in
the header is useless (the struct tdb_context isn't defined, so it's doubly useless).
Also, we should never set errno, as io.c was doing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When TDB_TRACE is defined (in tdb_private.h), verbose tracing of tdb operations is enabled.
This can be replayed using "replay_trace" from http://ccan.ozlabs.org/info/tdb.
The majority of this patch comes from moving internal functions to _<funcname> to
avoid double-tracing. There should be no additional overhead for the normal (!TDB_TRACE)
case.
Note that the verbose traces compress really well with rzip.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There was a race condition that caused the torture.tdb to be left in a
state that needed recovery. The torture code thought that any message
from the tdb code was an error, so the "recovered" message, which is a
TDB_DEBUG_TRACE message, marked the run as being an error when it
isn't.
Make sure we always have a sorted (per file) export file.
This way we can directly compare the real export and the check file without
having to further sort things.
Also return an error code from abi_checks.sh if warnings were reported.
Make sure we do not reference our internal tdb directly.
Let configure define which tdb.h file to use so that builds that use an
external tdb do not include 2 different versions of the tdb header.
Make sure we do not reference our internal talloc directly.
Let configure define which talloc.h file to use so that builds that use an
external talloc do not include 2 different versions of the talloc header.
The modified implementation, _ber_read_OID_String_impl(),
returns how many bytes were converted.
The intention is to use this implementation for both
reading OIDs and partial OIDs in the future.
Revert 23abcd2318 and fix logic bug.
The current code loops through the event contexts, when it sees a different
one, it notifies the current one (ev) and updates ev to point to the new one.
This is dumb, because:
(1) ev starts as NULL, so this code crashes, and
(2) The final context will not be notified.
The correct fix for this is to update ev to the new one, then notify it.
Volker's fix works because we currently always have one event context.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we disable null tracking, we need to move any existing objects
that are under the null_context to be parented by the true NULL
context.
We also need a new talloc_enable_null_tracking_no_autofree() function,
as the talloc testsuite cannot cope with the autofree context being moved
under the null_context: it wants to check exact counts of objects under
the null_context, and smbtorture has a large number of objects in the
autofree_context from .init functions.
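A hedged sketch of how the two tracking modes might be used; a minimal
example, not taken from the testsuite:

    #include <stdio.h>
    #include <talloc.h>

    /* Hedged sketch: normal code can enable null tracking and later
     * disable it (survivors are reparented to the true NULL context, as
     * described above); the testsuite variant leaves the autofree context
     * alone so object counts stay exact. */
    int main(void)
    {
            talloc_enable_null_tracking();     /* or: talloc_enable_null_tracking_no_autofree(); */
            char *orphan = talloc_strdup(NULL, "tracked under null_context");
            talloc_report_full(NULL, stderr);  /* report includes the orphan */
            talloc_disable_null_tracking();    /* orphan moves to the real NULL context */
            talloc_free(orphan);
            return 0;
    }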
It's annoying when you use
p talloc_report_full(ctx, fopen("/tmp/xx","w"))
in gdb: if you don't have write permission on the file, fopen() returns
NULL and you get a segv.
These macros allow the compiler to better optimise code that has a lot
of if statements. I particularly want to use this for our low-level
generated NDR code.
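The message doesn't name the macros; a sketch of the usual likely()/unlikely()
pattern built on __builtin_expect (macro names and the NDR usage are assumed):

    /* Sketch of the usual pattern (names assumed): hint the compiler about
     * which branch is expected, e.g. error paths in generated code. */
    #ifdef __GNUC__
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)
    #else
    #define likely(x)   (x)
    #define unlikely(x) (x)
    #endif

    static int pull_field(int ndr_err)
    {
            if (unlikely(ndr_err != 0)) {   /* error path: rarely taken */
                    return ndr_err;
            }
            return 0;
    }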
We previously only allowed a commit to happen after a prepare
commit. It is in fact safe to allow reads between a prepare and a
commit, and the s4 replication code can make use of that, so allow it.
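A hedged sketch of the pattern this enables, assuming the
tdb_transaction_prepare_commit() API:

    #include <stdlib.h>
    #include <tdb.h>

    /* Hedged sketch of the newly-allowed pattern: a read between
     * prepare_commit and commit, e.g. to inspect the about-to-be-committed
     * state. */
    static int store_and_check(struct tdb_context *tdb, TDB_DATA key, TDB_DATA val)
    {
            if (tdb_transaction_start(tdb) != 0) {
                    return -1;
            }
            if (tdb_store(tdb, key, val, TDB_REPLACE) != 0 ||
                tdb_transaction_prepare_commit(tdb) != 0) {
                    tdb_transaction_cancel(tdb);
                    return -1;
            }
            TDB_DATA check = tdb_fetch(tdb, key);   /* read: now allowed here */
            free(check.dptr);
            return tdb_transaction_commit(tdb);
    }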
If NULL tracking is enabled after the autofree context is initialised
then autofree ends up separate from the null_context. This means that
talloc_report_full() doesn't report the autofree context. Fix this by
reparenting the autofree context when we create the null_context.