This applies to LDB add operations as well; we only have to be careful with
"@ATTRIBUTES" entries.
E.g.
dn: cn=testperson,cn=users,dc=...,dc=...
objectClass: person
url: www.example.com
url: www.example.com
should not work.
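For illustration, a minimal standalone sketch against the ldb C API (the DN,
database URL and return handling here are invented for the example, not taken
from the patch):

#include <stdio.h>
#include <talloc.h>
#include <ldb.h>

int main(void)
{
    TALLOC_CTX *mem_ctx = talloc_new(NULL);
    struct ldb_context *ldb = ldb_init(mem_ctx, NULL);
    struct ldb_message *msg;
    int ret;

    if (ldb == NULL ||
        ldb_connect(ldb, "tdb://example.ldb", 0, NULL) != LDB_SUCCESS) {
        fprintf(stderr, "could not open example.ldb\n");
        return 1;
    }

    msg = ldb_msg_new(mem_ctx);
    msg->dn = ldb_dn_new(msg, ldb, "cn=testperson,cn=users,dc=example,dc=com");
    ldb_msg_add_string(msg, "objectClass", "person");
    ldb_msg_add_string(msg, "url", "www.example.com");
    ldb_msg_add_string(msg, "url", "www.example.com"); /* duplicate value */

    /* With the duplicate-value check in place this add is expected to be
     * refused, typically with LDB_ERR_ATTRIBUTE_OR_VALUE_EXISTS. */
    ret = ldb_add(ldb, msg);
    printf("ldb_add returned %d (%s)\n", ret,
           ldb_errstring(ldb) ? ldb_errstring(ldb) : "no error string");

    talloc_free(mem_ctx);
    return (ret == LDB_SUCCESS) ? 1 : 0;
}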
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Also "<=", ">=", "~"... are allowed as well. Enumeration taken from
ldb_parse_filtertype().
This was the cause of the search filter not being identified, as described in bug
https://bugzilla.samba.org/show_bug.cgi?id=8647.
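As an illustration that these operators are valid filter syntax (this is not
the patched code itself, and the filter string is invented):

#include <stdio.h>
#include <talloc.h>
#include <ldb.h>

int main(void)
{
    TALLOC_CTX *mem_ctx = talloc_new(NULL);
    struct ldb_parse_tree *tree = ldb_parse_tree(mem_ctx, "(uidNumber>=1000)");

    if (tree == NULL) {
        fprintf(stderr, "not recognised as a filter\n");
        talloc_free(mem_ctx);
        return 1;
    }
    /* ">=" parses as a comparison, not an equality match */
    printf("parsed, operation is %s\n",
           tree->operation == LDB_OP_GREATER ? "LDB_OP_GREATER" : "something else");
    talloc_free(mem_ctx);
    return 0;
}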
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Except in the formatting of the selftest output, this removes the special case
of the build farm, so that an autobuild, a manual make test and the build farm
are more similar.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Tue Aug 21 06:39:04 CEST 2012 on sn-devel-104
Utility functions for handling the special placeholder server-id value
for disconnected clients (to be used for durable handles).
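The shape of such helpers, sketched with invented names and an invented
sentinel value rather than the actual Samba server-id definitions:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical illustration only: reserve one impossible pid value as the
 * marker for "the owner of this handle is disconnected", and give callers
 * small predicates instead of open-coded comparisons. */
struct example_server_id {
    uint64_t pid;
    uint32_t task_id;
    uint32_t vnn;
};

#define EXAMPLE_DISCONNECTED_PID UINT64_MAX   /* no real process has this pid */

static void example_id_set_disconnected(struct example_server_id *id)
{
    id->pid = EXAMPLE_DISCONNECTED_PID;
    id->task_id = 0;
    id->vnn = 0;
}

static bool example_id_is_disconnected(const struct example_server_id *id)
{
    return id->pid == EXAMPLE_DISCONNECTED_PID;
}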
Pair-Programmed-With: Stefan Metzmacher <metze@samba.org>
there is a libsecurity on OSF1 which clashes with our security lib. see bug #9023.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Autobuild-User(master): Björn Jacke <bj@sernet.de>
Autobuild-Date(master): Fri Aug 10 14:22:21 CEST 2012 on sn-devel-104
This malloc() and free() actually show up quite high in a call profile of
provisioning the AD DC.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Thu Aug 9 13:51:06 CEST 2012 on sn-devel-104
This malloc() and free() actually show up quite high in a call profile of
provisioning the AD DC.
This allows the debug handler to decide if the argument list should be
printed.
Andrew Bartlett
This reverts commit cddcaf7bd2.
-Werror=format is no longer used when it would cause this to fail.
This is a legitimate use of the ldb_search API.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Mon Aug 6 14:47:27 CEST 2012 on sn-devel-104
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Autobuild-User(master): Stefan Metzmacher <metze@samba.org>
Autobuild-Date(master): Fri Aug 3 12:52:38 CEST 2012 on sn-devel-104
This is all in a transaction, but when we are handling rename errors in the repl_meta_data
module, we key off the error, and do not close the transaction. We found that the
old record was gone and so could not try renaming it again to a conflict DN.
Andrew Bartlett
This parameter is only used in our NBT client code and in nmbd as a
fallback when we fail to select a better interface from "interfaces"
to use directly.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Fri Jul 27 12:16:25 CEST 2012 on sn-devel-104
This matters as we sync up the declarations with the source3/param code.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Fri Jul 27 05:26:45 CEST 2012 on sn-devel-104
This essentially reverts commit 611ef42053:
Author: Yasuma Takeda <yasuma@osstech.co.jp>
Date: Fri Dec 5 13:37:51 2008 -0800
Fix bug #5944 - nmbd does not boot if socket adress = "" is defined in smb.conf
Instead, the documentation is fixed so that the correct default is
recorded. Removing the special case handling here allows this to be
dealt with in the same way as all other parameters.
Andrew Bartlett
This finishes the series to remove the various warnings from smb.conf
loading when a source3 parameter is used.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Thu Jul 26 14:16:02 CEST 2012 on sn-devel-104
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Tue Jul 24 12:53:17 CEST 2012 on sn-devel-104
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This will make the merge of the whole table smoother.
Based on an earlier patch
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
This makes the merge with the source3 table less disruptive
Based on an earlier patch:
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Andrew Bartlett
These #defines allow us to merge the parameter table without providing the
handler functions quite yet. This helps us do this task in stages.
Andrew Bartlett
This is in preparation for the parameter table being merged into this file.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Tue Jul 24 09:16:44 CEST 2012 on sn-devel-104
We use the slightly confusing 'case sensitive = yes' option for the same behaviour.
This avoids adding even more confusing documentation for the ntvfs-only option.
Andrew Bartlett
This changes the default based on the #ifdef rather than an override
on a parameter value of "".
The fewer special override functions we have, the easier it is to merge
the loadparm tables.
Andrew Bartlett
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Thu Jul 19 09:56:01 CEST 2012 on sn-devel-104
This allows this parameter, one of the few with differing declarations
between the loadparm systems, to be brought into common.
Andrew Bartlett
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
When we revamp the idmap layer, we will end up just following the s3
options, and this option is not used there either.
Andrew Bartlett
Pair-Programmed-With: Andrew Tridgell <tridge@samba.org>
They use talloc_tos() internally: hoist that up to the callers, some
of whom don't want to use talloc_tos().
A simple patch, but hits a lot of files.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
talloc_stackframe() stacks, so if you forget to free one, the outer
one will free it. However, it's not a good idea to rely too heavily
on this behaviour: it can delay the release of memory and the running
of destructors.
I had an elaborate hack to make sure every talloc_stackframe() was
freed in the exact same function it was allocated; however, all the bugs it
caught were simply lazy freeing, so this patch just checks for that.
This doesn't check for stackframes we don't free up on exit: that would
be nice, but uncovers some uncomfortable (but probably harmless) cases.
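A minimal sketch of the discipline being checked, assuming Samba's
talloc_stack helpers (talloc_stackframe()/talloc_tos()); the function and
names here are invented:

#include <talloc.h>
#include "lib/util/talloc_stack.h"

static char *describe_thing(TALLOC_CTX *mem_ctx, int id)
{
    TALLOC_CTX *frame = talloc_stackframe();
    char *tmp, *result;

    /* scratch data hangs off the current stackframe */
    tmp = talloc_asprintf(talloc_tos(), "thing-%d", id);
    result = talloc_strdup(mem_ctx, tmp);

    /* free the frame in the same function that created it, rather than
     * lazily relying on an outer frame to clean up */
    TALLOC_FREE(frame);
    return result;
}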
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The only reason we make one stackframe parent of the next is so we use
our parent's pool. That doesn't make sense if we're a new pool, and
wouldn't work anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We explicitly call free() on a pool whose count falls to zero, assuming it's
not inside another pool (if it is, we crash). Check for this on creation and
explicitly document the restriction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This neatens the code a bit (we should do a similar thing for all the
TALLOC_CHUNK macros).
Two subtler changes:
(1) As a result of the struct, we actually pack object_count into the
talloc header on 32-bit platforms (since the header is 40 bytes, but
needs to be 16-byte aligned).
(2) I avoid VALGRIND_MAKE_MEM_UNDEFINED on memmove when we resize the
only entry in a pool; that's done later anyway.
With -O2 on my 11.04 Ubuntu 32-bit x86 laptop, the talloc_pool speed as
measured by testsuite.c actually increases 10%.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If .close() has already been called, we have to play dead - the
self->ctx is just not valid any more, as we have been shut down to
allow some other part of Samba to open the tdb.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Mon Jul 16 13:51:52 CEST 2012 on sn-devel-104
We strictly need these changes to pass make test, and the concat change is
backwards incompatible, so we really want to use the right version.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Mon Jul 9 04:34:06 CEST 2012 on sn-devel-104
As per bug #9024, make --disable-ntdb work again.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Wed Jul 4 08:11:33 CEST 2012 on sn-devel-104
We need this version, not the previous release, for Samba.
Andrew Bartlett
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Tue Jul 3 17:20:32 CEST 2012 on sn-devel-104
Andrew Bartlett pointed out that making CCAN a non-library will break
the build in a different way in future: when two separate private
libraries start using the same CCAN module, the symbol duplicate
detection will fire (since private libraries don't use any symbol
hiding). That doesn't happen yet, but it will surely happen
eventually.
So, for now at least, we build as a private library again. This
unfortunately means the top-level build creates a libccan.so, which
contains all the ccan modules whether you need them or not. Given the
size of the library, I don't think this is a win. But it's simple.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Sat Jun 30 11:19:04 CEST 2012 on sn-devel-104
Their AC_TRY_RUN doesn't include any current CPPFLAGS. Make
the set[res]uid checks independent of this. Needs a small
change to the waf build in order to cope with the change.
Autobuild-User(master): Jeremy Allison <jra@samba.org>
Autobuild-Date(master): Sat Jun 30 00:32:36 CEST 2012 on sn-devel-104
Don't expose a libccan.so; it would produce clashes if someone else
does the same thing. Unfortunately, if we just change it from a
SAMBA_LIBRARY to a SAMBA_SUBSYSTEM, it doesn't create a static library
as we'd like, but links all the object files in. This means we get
many duplicates (eg. everyone gets a copy of tally, even though only
ntdb wants it).
So, the solution is threefold:
1) Make the ccan modules separate.
2) Make the ccan modules SAMBA_SUBSYSTEMs not SAMBA_LIBRARYs so we don't
build shared libraries which we can't share.
3) Make the places which use ccan explicit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Fri Jun 29 06:22:44 CEST 2012 on sn-devel-104
Will allow thread-specific credentials to be added by modifying
the central definitions. Deliberately left the setXX[ug]id()
call in popt as this is not used in Samba.
Tru64 doesn't have any stdint.h
Autobuild-User(master): Björn Jacke <bj@sernet.de>
Autobuild-Date(master): Thu Jun 28 00:45:58 CEST 2012 on sn-devel-104
This helps clarify the role of this structure and wrapper function.
The purpose here is to provide helper functions to the lib/param
loadparm_context that point back at the s3 lp_ functions. This allows
a struct loadparm_context to be passed to any point in the code, and
always refer to the correct loadparm system. If this has not been
set, the variables loaded in the lib/param code will be returned.
As requested by Michael Adam.
Andrew Bartlett
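The pattern, sketched with invented structures (not the real lib/param or
source3/param declarations):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical illustration: a table of callbacks a context can carry, so
 * code holding only the lib/param context still ends up calling the s3 lp_
 * functions when those are the authoritative ones. */
struct lp_helpers {
    const char *(*get_string)(const char *name);
    bool (*get_bool)(const char *name);
};

struct my_loadparm_context {
    const struct lp_helpers *s3_helpers; /* NULL until the s3 code registers itself */
    /* ... the context's own loaded values ... */
};

static bool ctx_get_bool(struct my_loadparm_context *ctx, const char *name,
                         bool own_value)
{
    if (ctx->s3_helpers != NULL) {
        return ctx->s3_helpers->get_bool(name); /* forward to the s3 lp_ layer */
    }
    return own_value; /* otherwise fall back to what lib/param itself loaded */
}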
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Wed Jun 27 17:11:16 CEST 2012 on sn-devel-104
Commit 3c4263e758 said it removed err.h
from tdb; unfortunately it didn't: tap-interface.h still included it.
This finishes it properly!
Reported-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Tue Jun 26 10:22:03 CEST 2012 on sn-devel-104
We used to handle UTCTime and generalized time the same way. In fact
they are not the same: they are set (most of the time) in different
formats, and they are also stored and returned in different formats.
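For illustration, the two textual forms involved, produced here with plain
strftime() rather than the ldb helpers; the generalized-time form shown is
the "YYYYMMDDHHMMSS.0Z" style ldb uses, while UTCTime drops the century and
the fraction:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm *tm = gmtime(&now);
    char gen[32], utc[32];

    strftime(gen, sizeof(gen), "%Y%m%d%H%M%S.0Z", tm); /* e.g. 20120626101112.0Z */
    strftime(utc, sizeof(utc), "%y%m%d%H%M%SZ", tm);    /* e.g. 120626101112Z */

    printf("generalized time: %s\nutc time:         %s\n", gen, utc);
    return 0;
}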
It's not portable. While we could use ccan/err, it seems overkill since
we actually only use it in one test (I obviously cut & pasted the #include).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Fri Jun 22 09:22:28 CEST 2012 on sn-devel-104
(As suggested by Stefan Metzmacher, based on the change to ntdb.)
Since commit ec96ea690e, we handle the case
where a process dies during a transaction commit. Unfortunately, TDB_NOSYNC
means this no longer works, as it disables the recovery area as well as the
actual msync/fsync. We should do everything except the syncs.
This also means we can do a complete test with $TDB_NO_FSYNC set; just
to get more complete coverage, we disable it explicitly for one test
(where we override the actual sync calls anyway).
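A sketch of the kind of use this protects, with an invented file name; after
this change TDB_NOSYNC should skip only the fsync/msync calls while the
recovery area is still written:

#include <fcntl.h>
#include <tdb.h>

int main(void)
{
    struct tdb_context *tdb = tdb_open("nosync-example.tdb", 0,
                                       TDB_NOSYNC, O_RDWR | O_CREAT, 0600);
    TDB_DATA key = { .dptr = (unsigned char *)"key", .dsize = 3 };
    TDB_DATA val = { .dptr = (unsigned char *)"value", .dsize = 5 };

    if (tdb == NULL) {
        return 1;
    }
    if (tdb_transaction_start(tdb) != 0) {
        tdb_close(tdb);
        return 1;
    }
    tdb_store(tdb, key, val, TDB_REPLACE);
    tdb_transaction_commit(tdb);   /* recovery area written, syncs skipped */
    tdb_close(tdb);
    return 0;
}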
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The flags returned were TDB-specific: this was only used for detecting
the endianness of obsolete databases (the conversion code was put in in
2003, with reference to Samba 2.3).
It's easier to remove it than to translate the NTDB flags to TDB flags,
and it's a really weird thing to ask for anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Similar to the util_tdb versions, but return the error code.
ntdb_add_int32_atomic seems a clearer name than tdb_change_int32_atomic.
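The underlying pattern, sketched here with the plain tdb calls rather than
the new util_ntdb wrappers (the helper name and signature are invented):

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <tdb.h>

/* Lock the chain so the fetch+store pair is atomic against other users. */
static bool example_add_int32_atomic(struct tdb_context *tdb,
                                     const char *keystr, int32_t addend,
                                     int32_t *result)
{
    TDB_DATA key = { .dptr = (unsigned char *)keystr, .dsize = strlen(keystr) };
    TDB_DATA old, new_val;
    int32_t value = 0;
    bool ok;

    if (tdb_chainlock(tdb, key) != 0) {
        return false;
    }

    old = tdb_fetch(tdb, key);
    if (old.dptr != NULL && old.dsize == sizeof(value)) {
        memcpy(&value, old.dptr, sizeof(value));  /* existing counter */
    }
    free(old.dptr);

    value += addend;
    new_val.dptr = (unsigned char *)&value;
    new_val.dsize = sizeof(value);

    ok = (tdb_store(tdb, key, new_val, TDB_REPLACE) == 0);
    if (ok) {
        *result = value;
    }
    tdb_chainunlock(tdb, key);
    return ok;
}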
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Very similar to the util_tdb versions, but these return the error.
I've only implemented those functions actually used.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There are various issues with NTDB_CLEAR_IF_FIRST which make it
better if we don't have to use it, but much of the code does, so
we fake up support here.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The first function is ntdb_new: this is preferred over ntdb_open, as
it makes the ntdb_context returned (and all NTDB_DATA returned from
ntdb_fetch) valid talloc pointers.
The API is very similar to tdb_wrap_open().
Note that we handle $TDB_NO_FSYNC here, since ntdb doesn't do that
hack (and it's great for speeding up testing!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In particular, this tests that we can store enough records to make the
database expand while we map the given record. We use a global lock for
this, but it could happen in theory with another process.
It also tests that we can recurse inside ntdb_parse_record().
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we have a readlock, any write will grab a write lock: if it happens
to be on the same bucket, we'll fail.
For that reason, enforce read-only so every write operation fails
(even for NTDB_NOLOCK or NTDB_INTERNAL dbs), and document it!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
NTDB_INTERNAL databases need to malloc and copy to keep old versions
around if we expand, much as we keep old mmaps around.
Of course, it only works for read-only accesses, since the two copies
are not synced.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means keeping the old mmap around when we expand the database.
We could revert to read/write, except for platforms with incoherent
mmap (ie. OpenBSD), where we need to use mmap for all accesses.
Thus we keep a linked list of old maps, and unmap them when the last access
finally goes away.
This is required if we want ntdb_parse_record() callbacks to be able
to expand the database.
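Conceptually (invented structures, not ntdb's real ones): each superseded
mapping is parked on a list with a count of the accesses still using it, and
is only munmap()ed when that count drops to zero:

#include <stddef.h>
#include <stdlib.h>
#include <sys/mman.h>

struct old_map {
    struct old_map *next;
    void *ptr;
    size_t len;
    unsigned int users;    /* accesses still reading through this mapping */
};

/* Called when one access that was using an old mapping finishes. */
static void old_map_put(struct old_map **list, struct old_map *m)
{
    if (--m->users > 0) {
        return;
    }
    /* last user gone: unlink and unmap */
    while (*list != NULL && *list != m) {
        list = &(*list)->next;
    }
    if (*list == m) {
        *list = m->next;
    }
    munmap(m->ptr, m->len);
    free(m);
}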
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we can have multiple openers, we should leave the mmap in place
for the other openers to use. Enhance the test to check the bug (it
still works, because without mmap we fall back to read/write, but
performance would be terrible!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Then we unset it inside the tdb test target itself. This means that
new code can't accidentally forget it, and we can set it in the
'buildnice' script on sn-devel, for example.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Occasionally, the capability test inserts multiple used records and they
clash, but our primitive test layout engine doesn't handle hash clashes
and aborts.
Force a seed value which we know doesn't clash.
Reported-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Wed Jun 20 16:50:20 CEST 2012 on sn-devel-104
This is copied from tdb; we build the utilities, but as nothing else
links against it, we shouldn't be adding anything to the normal samba
binary sizes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User(master): Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date(master): Tue Jun 19 07:31:06 CEST 2012 on sn-devel-104
Update the design.lyx file with the latest status and the change in hashing.
Also, refresh and add examples to the TDB_porting.txt file.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we've given up on expansion, let them frob the hashsize again.
We have attributes, so we should use them for optional stuff like
this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
TDB2 started with a top-level hash of 1024 entries, divided into 128
groups of 8 buckets. When a bucket filled, the 8 bucket group
expanded into pointers into 8 new 64-entry hash tables. When these
filled, they expanded in turn, etc.
It's a nice idea to automatically expand the hash tables, but it
doesn't pay off. Remove it for NTDB.
1) It only beats TDB performance when the database is huge and the
TDB hashsize is small. We are about 20% slower on medium-size
databases (1000 to 10000 records), worse on really small ones.
2) Since we're 64 bits, our hash tables are already twice as expensive
as TDB.
3) Since our hash function is good, it means that all groups tend to
fill at the same time, meaning the hash enlarges by a factor of 128
all at once, leading to a very large database at that point.
4) Our efficiency would improve if we enlarged the top level, but
that makes our minimum db size even worse: it's already over 8k,
and jumps to 1M after about 1000 entries!
5) Making the sub group size larger gives a shallower tree, which
performs better, but makes the "hash explosion" problem worse.
6) The code is complicated, having to handle delete and reshuffling
groups of hash buckets, and expansion of buckets.
7) We have to handle the case where all the records somehow end up with
the same hash value, which requires special code to chain records for
that case.
On the other hand, it would be nice if we didn't degrade as badly as
TDB does when the hash chains get long.
This patch removes the hash-growing code, but instead of chaining like
TDB does when a bucket fills, we point the bucket to an array of
record pointers. Since each on-disk NTDB pointer contains some hash
bits from the record (we steal the upper 8 bits of the offset), 99.5%
of the time we don't need to load the record to determine if it
matches. This makes an array of offsets much more cache-friendly than
a linked list.
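A sketch of the trick with invented constants (ntdb's real on-disk layout is
not reproduced here): stealing the top 8 bits of a 64-bit offset gives
roughly a 1-in-256 false-positive rate, which matches the "99.5%" figure
above:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HASH_BITS   8
#define OFFSET_MASK ((UINT64_C(1) << (64 - HASH_BITS)) - 1)

static uint64_t encode_entry(uint64_t offset, uint64_t hash)
{
    /* keep the low 56 bits of the offset, stash the hash's top byte above it */
    return (offset & OFFSET_MASK) |
           ((hash >> (64 - HASH_BITS)) << (64 - HASH_BITS));
}

static bool maybe_matches(uint64_t entry, uint64_t hash)
{
    /* cheap filter: compare the cached byte before touching the record */
    return (entry >> (64 - HASH_BITS)) == (hash >> (64 - HASH_BITS));
}

static uint64_t entry_offset(uint64_t entry)
{
    return entry & OFFSET_MASK;
}

int main(void)
{
    uint64_t e = encode_entry(0x12340, 0xabcdef0011223344ULL);
    printf("offset=0x%llx matches=%d\n",
           (unsigned long long)entry_offset(e),
           maybe_matches(e, 0xabcdef0011223344ULL));
    return 0;
}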
Here are the times (in ns) for tdb_store of N records, tdb_store of N
records the second time, and a fetch of all N records. I've also
included the final database size and the smbtorture local.[n]tdb_speed
results.
Benchmark details:
1) Compiled with -O2.
2) assert() was disabled in TDB2 and NTDB.
3) The "optimize fetch" patch was applied to NTDB.
4) 10 runs, using tmpfs (otherwise massive swapping as db hits ~30M,
despite plenty of RAM).
                     Insert  Re-ins   Fetch    Size    dbspeed
                     (nsec)  (nsec)  (nsec)    (Kb)  (ops/sec)
TDB (10000 hashsize):
  100 records:         3882    3320    1609      53     203204
  1000 records:        3651    3281    1571     115     218021
  10000 records:       3404    3326    1595     880     202874
  100000 records:      4317    3825    2097    8262     126811
  1000000 records:    11568   11578    9320   77005      25046
TDB2 (1024 hashsize, expandable):
  100 records:         3867    3329    1699      17     187100
  1000 records:        4040    3249    1639     154     186255
  10000 records:       4143    3300    1695    1226     185110
  100000 records:      4481    3425    1800   17848     163483
  1000000 records:     4055    3534    1878  106386     160774
NTDB (8192 hashsize):
  100 records:         4259    3376    1692      82     190852
  1000 records:        3640    3275    1566     130     195106
  10000 records:       4337    3438    1614     773     188362
  100000 records:      4750    5165    1746    9001     169197
  1000000 records:     4897    5180    2341   83838     121901
Analysis:
1) TDB wins on small databases, beating TDB2 by ~15%, NTDB by ~10%.
2) TDB starts to lose when hash chains get 10 long (fetch 10% slower
than TDB2/NTDB).
3) TDB does horribly when hash chains get 100 long (fetch 4x slower
than NTDB, 5x slower than TDB2, insert about 2-3x slower).
4) TDB2 databases are 40% larger than TDB1. NTDB is about 15% larger
than TDB1.
This is designed to allow us to make ntdb_context (and NTDB_DATA returned
from ntdb_fetch) a talloc pointer. But it can also be used for any other
alternate allocator.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
NTDB_NOSYNC now just prevents the fsync/msync calls, which speeds
testing while still providing full coverage. It also provides safety
against processes dying during transaction commit (though obviously,
not against the machine dying).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
TDB allows this for internal databases, but it's a bad idea, since the
name is useful for logging.
They're a hassle to deal with, and we'd just end up putting "unnamed"
in there, so let the user deal with it. If they don't, they get an
informative core dump.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The performance numbers for transaction pagesize are indeterminate:
a larger pagesize means a smaller transaction array and a better
chance of having a contiguous record (more efficient for
ntdb_parse_record and some internal operations inside a transaction).
On the other hand, a large pagesize means more I/O even if we change a
few bytes.
But it also controls the multiple by which we will enlarge the file,
and hence the minimum db size. It's 4k for tdb1, but 16k seems
reasonable in these modern times.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that our database is always a multiple of NTDB_PGSIZE, we can remove the
special handling for the last block.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As copied from tdb1, there is logic in the transaction code to handle
a non-PGSIZE multiple db, but in fact this only happens for a
completely unused database: as soon as we add anything to it, it is
expanded to a NTDB_PGSIZE multiple.
If we create the database with a free record which pads it out to
NTDB_PGSIZE, we can remove this last-page-is-different logic.
Of course, the fake ntdbs we create in our tests now also need to be
multiples of NTDB_PGSIZE, so we change some numbers there too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
ntdb uses tdb's transaction code, and it has an undocumented but implicit
assumption: that the transaction recovery area is always aligned to the
transaction pagesize. This means that no block will overlap the recovery
area.
This is maintained by rounding the size of the database up, so do the same
for ntdb.
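The usual power-of-two round-up; 16384 here just mirrors the "16k seems
reasonable" choice mentioned above and is not read from the ntdb headers:

#include <assert.h>
#include <stdio.h>

#define EXAMPLE_PGSIZE 16384u

static unsigned long long round_up_to_pgsize(unsigned long long size)
{
    /* round up to the next multiple of a power-of-two page size */
    return (size + EXAMPLE_PGSIZE - 1) &
           ~(unsigned long long)(EXAMPLE_PGSIZE - 1);
}

int main(void)
{
    assert(round_up_to_pgsize(1) == EXAMPLE_PGSIZE);
    assert(round_up_to_pgsize(EXAMPLE_PGSIZE) == EXAMPLE_PGSIZE);
    assert(round_up_to_pgsize(EXAMPLE_PGSIZE + 1) == 2 * EXAMPLE_PGSIZE);
    printf("round-up checks passed\n");
    return 0;
}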
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were missing the last few bytes. Found by 100 runs of ntdbtorture
-t -k.
The transaction test code didn't catch this, because usually those
last few bytes are irrelevant to the actual contents of the database.
We fix the test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our external test helper is a bit primitive when it comes to doing STORE or
FETCH commands: let us specify the data we expect, instead of assuming it's
the same as the key.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a fairly common pattern in Samba, and if we log an error on
every open it spams the logs. On the other hand, other errors are
potentially more serious, so we still use NTDB_LOG_ERROR on them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We document that the child of a fork() can do a brunlock() if the parent
does a brlock: we should not log an error when they do this.
Also, test the case where we fork() and return inside a parse function
(which is allowed).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In tdb, we grab the open lock immediately after we open the file. In
ntdb, we usually did some work first. tdbtorture managed to get in
before the creator grabbed the lock:
testing with 3 processes, 5000 loops, seed=1338246020
ntdb:torture.ntdb:IO Error:ntdb_open: torture.ntdb is not a ntdb file
29023:torture.ntdb:db open failed
At the cost of a little duplicated code, we can reduce the race.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was a hack to make compatibility easier. Since we're not doing that,
it can go away: all callers must use the return value now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This renames everything from tdb2 to ntdb: importantly, we no longer
use the tdb_ namespace, so you can link against both ntdb and tdb if
you want to.
This also enables building of standalone ntdb by the autobuild script.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Judging by the code, it's from tdb1, where you needed to free the old key's
dptr manually.
Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes finding of ATTR_ROOT on GNU/kFreeBSD.
Autobuild-User(master): Jelmer Vernooij <jelmer@samba.org>
Autobuild-Date(master): Sat Jun 16 18:54:27 CEST 2012 on sn-devel-104
This commit changes the default file server to be s3fs. Existing
installs wishing to keep the ntvfs file server need to set this in
their smb.conf:
server services = +smb -s3fs
dcerpc endpoint servers = +winreg +srvsvc
Andrew Bartlett
Signed-off-by: Michael Adam <obnox@samba.org>
Autobuild-User(master): Michael Adam <obnox@samba.org>
Autobuild-Date(master): Fri Jun 15 14:20:04 CEST 2012 on sn-devel-104