Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>
Autobuild-User(master): Noel Power <npower@samba.org>
Autobuild-Date(master): Fri Dec 14 18:00:40 CET 2018 on sn-devel-144
Signed-off-by: Philipp Gesang <philipp.gesang@intra2net.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Add a more thorough test case that .deltree works as expected.
Note that we get a slightly different NT_STATUS error in file_exists()
if the parent directory doesn't exist, e.g.
/non-existent-dir/nonexistent.txt
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Wed Dec 12 08:23:07 CET 2018 on sn-devel-144
Extend the test case to check overwriting a file as well. Currently this
has the behaviour of appending to the existing file, rather than
overwriting the file with new contents.
It's not clear from the API that this is the intended behaviour in this
case, so I've marked it as a failure.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Extend the tests to better reflect some of the .list() functionality we
expect.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The current assertion would never detect if the unlink API is broken.
The chkpath() API is only useful for checking if directories exist, so
it will always return False for a regular file (regardless of whether
the file actually exists or not).
Rework the test case so we assert that the file exists by trying to read
its contents (which will throw an error if the file doesn't exist).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
chkpath was only tested incidentally (and that assertion was wrong). Add
a proper test to prove it works correctly. We can then clean up the
incorrect assertion in the next patch.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
* Fix calling samba-tool with the correct PYTHON version
* Fix integer division: the '//' operator is needed (this was causing
'uncaught exception - list indices must be integers or slices,
not float')
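A minimal illustration of the division issue (not the actual samba-tool code):
    items = ['a', 'b', 'c', 'd']
    mid = len(items) // 2     # floor division: an int on both PY2 and PY3
    print(items[mid])         # items[len(items) / 2] raises TypeError on PY3,
                              # because / yields a float there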
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Fix various ldb.bytes that need to be stringified in order to get
tests to pass
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The order of the values in the TrafficModel is different, but
unfortunately the output of json.dump is also different (even when
using sorted versions of the associated dictionaries before dumping),
so these changes re-import the output files into TrafficModel objects
rather than comparing the actual raw files.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Also, the path to traffic_learner is not under the normal 'bin' path,
so the insertion of the PYTHON version has been adjusted to cover this
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Some tests (especially samba.tests.auth_log_netlogon_bad_creds) are
failing due to not receiving expected messages. There seems to be
some timing issue or race around the messaging bus being set up and
getting the expected events resulting from the failed netlogon.
Specifically, the order of destruction of the messaging.Messaging()
c-py objects is different under python2. Under python2 all of the
messaging.Messaging() objects are destructed *after* all the tests
are run. Note: each instance of the TestCase has its own Messaging()
instance, created by TestCaseXYZ.setUp, so it appears unittest
destroys the test instances only once all the tests have run, whereas
in python3 we see each messaging.Messaging() instance destroyed after
each test runs.
What difference does that make? In python3, because each Messaging()
instance is destructed after a test runs, the associated
messaging_dgm_destroy() also runs. This destroys the global_dgm_context,
which means that when the next test runs, the whole messaging
infrastructure needs to be rebuilt when the next Messaging() object is
created. On the server side this seems to result in attempts to send
messages to the listener failing, first with
get_event_server: Failed to find 'auth_event' registered on the message bus to send JSON audit events to: NT_STATUS_CONNECTION_REFUSED
and subsequently with
get_event_server: Failed to find 'auth_event' registered on the message bus to send JSON audit events to: NT_STATUS_UNSUCCESSFUL
The client doesn't get any more messages, and the test fails.
The difference in python2 is that the destructors for the (4, in the
case of netlogon_bad_creds) instances of Messaging() don't run until
the end of the tests, so this doesn't happen and the global_dgm_context
never gets destroyed until all the tests complete. There is some race
condition at play here: a simple sleep at the start of a failing test
fixes the problem. But that isn't an acceptable solution, so instead
I have adjusted the base auth tests to store the Messaging() objects in a
global list, forcing them to remain in scope until the tests are complete.
This ensures the behaviour is consistent across python2 & python3.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Make sure we correctly encode the password to UTF-16 and do not use
unicode() (which doesn't exist in PY3)
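For illustration, one common pattern (setting an AD unicodePwd-style value) is
to UTF-16-LE encode the quoted password; this sketch is not necessarily the
code touched by this patch:
    # works on PY2 and PY3 without the PY2-only unicode() builtin
    new_password = 'Passw0rd!'
    unicode_pwd = ('"%s"' % new_password).encode('utf-16-le')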
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Note: Fix needed also for gpo.apply
minPwdAge, maxPwdAge, minPwdLength & set_pwdProperties all
have a line like
    value = str(value).encode('utf8')
This is a generic statement, presumably there to convert int, float etc.
to utf8-encoded bytes representing the string value.
This worked fine in PY2, but in PY3 some routines are already passing
bytes into these methods; in those cases e.g. b'200' will get converted
to "b'200'". This change only performs the conversion above for non-bytes
(and non-str) types, by replacing the above with
    if not isinstance(value, binary_type):
        value = str(value).encode('utf8')
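A small illustration of why the guard is needed (binary_type is bytes on PY3):
    value = b'200'
    str(value)          # PY3: "b'200'" -- the repr, not the number
    # with the guard, bytes values pass through untouched and only other
    # types are converted:
    if not isinstance(value, bytes):
        value = str(value).encode('utf8')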
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
1) configparser.set requires string values
2) self.gp_db.store() etc. need to pass a str object for the
xml.etree.ElementTree.Element text attribute, which needs
to be text
3) the tdb delete method needs a bytes key
4) configparser.write needs a file opened in text mode
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The default params for the python3 version of the compat ConfigParser
are not correct. Code like
    foo = ConfigParser()
fails because of this.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
In python3 we are using ldb.bytes where we need strings
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
filter in PY2 returns a list; in PY3 it returns an iterator
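A minimal illustration:
    evens = filter(lambda x: x % 2 == 0, range(10))
    # PY2: evens is a list; PY3: evens is a lazy iterator, so wrap it
    # with list() before indexing or calling len() on it
    evens = list(evens)
    print(len(evens), evens[0])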
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Misc changes needed to get 'make test TEST=samba.tests.dns &
samba.tests.dns_forwarder' to run and pass under PY3:
* socket.send needs bytes, not a string
* rec.dwTimeStamp expects an int, not a float (in PY3 the / operator
will give float results; for int results use '//' instead)
* re.match on bytes needs a bytes search term
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
SocketServer symbol changed in PY3 to socketserver so
we need to use a compat symbol for PY2/PY3 code.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
SocketServer was renamed to socketserver in PY3; this patch
creates a samba.compat.SocketServer which can be used in PY2 or
PY3
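A minimal sketch of such a compat shim (the real samba.compat code may differ):
    try:
        import socketserver as SocketServer   # PY3 name
    except ImportError:
        import SocketServer                   # PY2 name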
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
string_to_byte_array does not return a bytearray (as the name suggests)
but a list of byte values (ints). Some code expects the list, so even
using a 'real' bytearray won't work.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
* remove the unnecessary 'bin/' part of the path, as the base BlackBox
class will do this anyway, and also ensure correct detection that the
command needs to have 'PYTHON=blah' added
* modify the shell script so the PYTHON variable, if set, is prepended
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Tests that prepare complex ldap expressions and equivalent python expressions,
then compare the results of the two.
Signed-off-by: Aaron Haslett <aaronhaslett@catalyst.net.nz>
Reviewed-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Noel Power <noel.power@suse.com>
Autobuild-User(master): Gary Lockyer <gary@samba.org>
Autobuild-Date(master): Fri Dec 7 07:07:08 CET 2018 on sn-devel-144
The parameter is added to the lists of ignored parameters in the
samba.docs tests, as the given default "aio max threads * 2" works only
as a manpage string.
"aio max threads" can only be calculated at run time and requires a
handle to a pthreadpool_tevent which loadparm will never have.
Because of that lp_smbd_max_async_dosmode() will always return 0 as
default and it's up to the caller to calculate "aio max threads * 2" if
lp_smbd_max_async_dosmode() returns 0. Cf the next commit.
Signed-off-by: Ralph Boehme <slow@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
traffic_replay tries to distribute the users among the groups in a
realistic manner - some groups will have almost all users in them.
However, this becomes a problem when testing a really large database,
e.g. we may want 100K users, but no more than 5K users in each group.
This patch adds a max-member option so we can limit how big the groups
actually get.
If we detect that a group exceeds the max-members, we reset the group's
probability (of getting selected) to zero, and then recalculate the
cumulative distribution. This means that the group should no longer get
selected by generate_random_membership(). (Note we can't completely
remove the group from the list because that changes the
list-index-to-group-ID mapping).
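A hedged sketch of the recalculation step (names illustrative, not the actual
traffic_replay code):
    from itertools import accumulate   # PY3

    group_probabilities = [0.5, 0.3, 0.2]
    group_probabilities[1] = 0.0            # group 1 has hit max-members
    # recompute the cumulative distribution; group 1 keeps its list index
    # but its slice now has zero width, so it can never be selected
    cumulative = list(accumulate(group_probabilities))   # [0.5, 0.5, 0.7]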
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Tue Dec 4 12:22:50 CET 2018 on sn-devel-144
We want to cap the number of members that can be in a group. But first,
we need to tweak how the assignment dict gets generated, so that we get
rid of the intermediary set.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Check that the number of members reported is correct.
(This change somehow got left off the ca570bd482 commit that was
actually delivered).
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
These changes were inadvertently left off 0c910245fc.
(They were made to the 2nd patch-set iteration posted to the
mailing-list, but for some reason the first patch-set got delivered).
Changes are:
+ rework some variable names for better readability
+ Average members defaulted to int, so lost any floating point
precision.
+ Replace 'Min members' (which was fairly meaningless) with 'Median
members per group'.
+ Fix flake8 long line warnings
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
These tests expose the regression described by Stefan Metzmacher in
discussion on the bugzilla page linked below.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13600
Signed-off-by: Aaron Haslett <aaronhaslett@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
Tests to confirm the standard process model honours the smb.conf
variable "max smbd processes", when forking a new process on accept.
Signed-off-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Count the number of answers generated by the internal DNS query routine and
stop at 20 to match Microsoft's loop prevention mechanism.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13600
Signed-off-by: Aaron Haslett <aaronhaslett@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Garming Sam <garming@catalyst.net.nz>
Stops the user from adding a self-referencing CNAME over RPC, which is an easy
mistake to make with samba-tool.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13600
Signed-off-by: Aaron Haslett <aaronhaslett@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Garming Sam <garming@catalyst.net.nz>
The backup tests have a special constraint where we always want to use
check_output() over runcmd(). The reason is we need the samba-tool
backup/restore commands executed in a separate process. Otherwise the
global underlying LoadParm can accumulate settings from earlier test
case runs.
We can avoid someone in future inadvertently running runcmd() by simply
changing the inheritance so we no longer inherit from
SambaToolCmdTest (so the runcmd functions are no longer present).
The comment explaining this has been moved to the top of the file.
Note that the TestCaseInTempDir inheritance was redundant.
BlackboxTestCase inherits from TestCaseInTempDir (and SambaToolCmdTest
was inheriting from BlackboxTestCase).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Autobuild-User(master): Tim Beale <timbeale@samba.org>
Autobuild-Date(master): Tue Nov 27 06:57:03 CET 2018 on sn-devel-144
Not all testenvs have the DOMSID set as an environment variable.
However, it's easy enough to work out from querying the samdb.
This is a slight change in that we use a source4-generated loadparm
to connect to the DB (self.lp is source3-generated, presumably for
some SMB connection dependency).
This change is so we can run the ntacls_backup tests against a DC with
SMBv1 disabled (the restoredc). Note that currently the tests fail in
the smb.SMB() connection in the setUp(), so we can't run them as part
of autobuild just yet (because we can't known-fail test errors).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The restoredc already runs under python3, so before we can run the
domain_backup tests against the restoredc, we need to make sure they
work under python3.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Noel Power <noel.power@suse.com>
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
If the backup command fails (i.e. throws an exception), we want the test
to fail. This makes it easier to mark tests as 'knownfail' (because we
can't knownfail test errors).
In theory, this should just involve updating run_cmd() to catch any
exceptions from the command and then call self.fail().
However, if the backup command fails, it can leave behind files in the
targetdir. Partly this is intentional, as these files may provide clues
to users as to why the command failed. However, in selftest, it causes
the TestCaseInTempDir._remove_tempdir() assertion to fire. Because this
assert actually gets run as part of the teardown, the assertion gets
treated as an error rather than a failure (and so we can't knownfail the
backup tests). To get around this, we remove any files in the tempdir
prior to calling self.fail().
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
self.create_backup() uses self.run_cmd(), which is a wrapper around
self.check_output(). Rework the code to call the underlying
check_output() function directly instead.
The reason we're doing this is we want run_cmd() to catch exceptions and
fail the test (i.e. in the next patch). However, we can't do that because
this test case relies on receiving the exceptions.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13676
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Tag prefork worker processes with "(worker 0)", and sort the process list
on server name to get a consistent order.
Service: PID
--------------------------------------
cldap_server 15588
...
ldap_server 15584
ldap_server(worker 0) 15627
ldap_server(worker 1) 15630
ldap_server(worker 2) 15632
ldap_server(worker 3) 15634
nbt_server 15576
notify-daemon 15638
...
samba 0
...
wrepl_server 15580
Signed-off-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Add tests for the restarting of failed/terminated process, by the
pre-fork process model.
Signed-off-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Autobuild-User(master): Andrew Bartlett <abartlet@samba.org>
Autobuild-Date(master): Wed Nov 21 10:46:20 CET 2018 on sn-devel-144
Rather than unstable hash order. Ideally we'd do them in proper DN order.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Then we can reuse the re obj.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
This will simplify the logic and improve performance.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Two mistakes here:
- res[:-1] will copy, but lose the last char
- strings are immutable in python, so there is no need to copy them explicitly
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
This option has default value False, and was actually not passed down from cli
to LDAPBase. However, LDAPBase.__init__ has default value True for it.
After the change, a few tests using ldapcmp are affected.
Add --skip-missing-dn explicitly to keep the behaviour consistent,
otherwise the tests will fail.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Just define another dict for the return value; there seems to be no need
to modify the original dict.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
This simplifies the logic and improves performance a lot.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The list comprehension will be repeated for each item.
For a large database, this makes the command freeze.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
So we don't need to validate ourselves.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
So we don't need to validate ourselves.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
This method actually changes both objects and prints info.
__eq__ is not a proper name for it and is not designed for this case.
Rename it to diff.
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Bundel -> Bundle
Signed-off-by: Joe Guo <joeg@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Add extra tests to test the content returned by samr_EnumDomainUsers,
and tests for the result caching added in the following commit.
Signed-off-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Add extra tests to test the content returned by samr_EnumDomainGroups,
and tests for the result caching added in the following commit.
Signed-off-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Add extra tests to test the content returned by samr_QueryDisplayInfo,
which is not tested for the ADDC. Also adds tests for the result
caching added in the following commit.
Signed-off-by: Gary Lockyer <gary@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Under normal operation, users shouldn't see giant cookies in their logs.
We still log the initial cookie retrieved from the cache database, which
should still be helpful for identifying corrupt cookies.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13686
Signed-off-by: Garming Sam <garming@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The replUpToDateVector could be incorrect after an offline backup was
restored. This means replication propagation dampening doesn't work
properly. In the worst case, a singleton DC would have no
replUpToDateVector at all, and so *all* objects created on that DC get
replicated every time a new DRS connection is established between 2 DCs.
This becomes a real problem if you used that singleton DC to create 100K
objects...
This patch flushes the replUpToDateVector when an offline backup gets
restored. We need to do this before we add in the new DC and remove the
old DCs.
Note that this is only a problem for offline backups. The online/rename
backups are received over DRS, and as part of the replication they
receive the latest replUpToDateVector from the DC being backed up.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
It will be easy to forget that the backupType marker doesn't exist on
v4.9. However, this seems like a dumb reason not to support v4.9
backup-files. Add a wrapper function to avoid potential problems
cropping up in future.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
We are starting to hit restore cases that are only applicable to a
particular type of backup. We already had a marker to differentiate
renames, but differentiating offline backups would also be useful.
Note that this raises a slight compatibility issue for backups created
on v4.9, as the marker won't exist. However, it's only offline backups
we will use this marker for (at the moment), and this option doesn't
exist on v4.9, so there's no problem.
Removing the markers has been refactored out into a separate function to
handle the optional presence of the new marker.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
We noticed that offline backups were missing a replUpToDateVector for
the original DC, if the backup was taken on a singleton DC. This patch
adds an assertion to the existing test-cases to highlight the problem.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
We need to examine the contents of the PYTHON env variable, which should
define the python version to be used when running tests.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Python 3.4 seems to need a string
    parsed = json.loads (out_jsobj)
      File "/usr/lib/python3.4/json/__init__.py", line 312, in loads
        s.__class__.__name__))
    TypeError: the JSON object must be str, not 'bytes'
however Python 3.5 seems to be happy to consume bytes (or string)
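A portable workaround is to decode the command output before parsing, e.g.:
    import json
    out_jsobj = b'{"result": "ok"}'       # blackbox command output is bytes on PY3
    parsed = json.loads(out_jsobj.decode('utf-8'))
    print(parsed['result'])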
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Comparing a string against an int value is illegal in PY3 and
is not necessary in this case, so this patch removes the
problematic test.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Make sure the output of tests and/or the item we are searching for match
in type. The output of a cmd in python3 is bytes; depending on the way the
test is written, it may be easier just to convert all output, or just the
single string that is used in the test
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Change all instances where python scripts are called so that the
correct python version, as specified by $PYTHON, is used
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
mdb_copy() was dutifully checking the PATH for the mdb_copy executable,
then, if it didn't find it, blindly proceeding anyway and trying to run
a non-existent executable. This resulted in a cryptic error:
ERROR(<type 'exceptions.OSError'>): uncaught exception - [Errno 2] No
such file or directory
Add in an extra check that we actually find the executable and raise a
better human-readable exception if we don't.
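A minimal sketch of that kind of guard (PY3 stdlib only; the real code and
error type differ):
    import shutil

    mdb_copy_bin = shutil.which('mdb_copy')
    if mdb_copy_bin is None:
        raise RuntimeError("mdb_copy not found on PATH; "
                           "please install it (e.g. lmdb-utils) and retry")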
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andreas Schneider <asn@samba.org>
Reviewed-by: Alexander Bokovoy <ab@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Fri Nov 9 21:07:47 CET 2018 on sn-devel-144
Signed-off-by: Andreas Schneider <asn@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Thu Nov 8 11:03:11 CET 2018 on sn-devel-144
generate_users_and_groups() now generates the machine accounts as well as
the user accounts, so there seems to be no need to also have
generate_traffic_accounts(), which does the same job.
Instead, we can just pass the number of machine accounts through to
generate_users_and_groups() and delete the other function.
Also updated generate_users_and_groups() so that machine_accounts is
no longer optional (we want to create machine accounts in all cases).
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Generate separate machine accounts for populating a large DB vs
replaying network traffic.
We want to use different userAccountControl flags in each of the above
cases (i.e. commit 3338a3e257). However, this means that once you
use the --generate-users-only option, you can't replay network packets
against the machine accounts.
We can avoid this problem by creating separate machine accounts for each
of 2 different cases, e.g. STGM-0-x machines for traffic-replay, and
PC-0-x machines for padding out the database.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The traffic_replay group/user/machine account names follow a standard
format. This adds a function to generate the machine-name. It also makes
sure the existing user_name() function gets called in all applicable
places.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
We create machine accounts for 2 different purposes:
1). For traffic generation, i.e. testing realistic network packets.
2). For generating a realistic large DB.
Unfortunately, we want to use different userAccountControl flags for
the 2 different cases. Commit 3338a3e257 changed the flags used
for case #2, but this breaks case #1.
The problem is generate_users_and_groups() is called in both cases,
so we want the 'traffic account' flag passed into that function.
This ensures that the machine accounts get created with the appropriate
userAccountControl flags for the particular case you want to test.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
I was assuming that generate_users_and_groups() only gets called in the
--generate-users-only case. However, it also gets called in the default
traffic replay case.
This patch reworks the code so that the number of machine accounts to
create gets passed in, and the 'create 25% more computers than users'
assumption only applies to the --generate-users-only case.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
A few of the gpo commands use an identical temporary directory structure
that can be constructed using shared code.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
req.more_flags only exists for v10 requests, so we throw an exception if
we try to dereference that field on a v8 (or v5) request. Unfortunately,
we were checking that we support v10 *after* we had tried to access the
more_flags. This patch fixes up the order of the checks.
This may be a problem trying to replicate with an older Windows DC
(pre-2008R2), and was reported on the samba mailing-list at one point:
https://lists.samba.org/archive/samba/2018-June/216541.html
Unfortunately this patch doesn't help the overall situation at all (the
join will fail because we can't resolve the link target and we can't use
GET_TGT). But it now gives you a more meaningful error, i.e.
ERROR(runtime): uncaught exception - (8639, "Failed to process 'chunk'
of DRS replicated objects: DOS code 0x000021bf"
instead of:
ERROR(<type 'exceptions.AttributeError'>): uncaught exception -
'drsuapi.DsGetNCChangesRequest8' object has no attribute 'more_flags'
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Autobuild-User(master): Tim Beale <timbeale@samba.org>
Autobuild-Date(master): Tue Nov 6 07:15:33 CET 2018 on sn-devel-144
Tweak the code slightly to avoid some 80+ character lines.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
The LDAP connection can also timeout when trying to join a Windows DC
with a very large database. However, in this case Windows gives a
slightly different error message (NT_STATUS_CONNECTION_RESET instead of
NT_STATUS_CONNECTION_DISCONNECTED).
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13612
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
In python2 we decode str types in load_xml; in python3 these are
already str instances, which we cannot decode.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
In python2, as far as I can see, GptTmplInfParser.write_binary more
or less works by accident.
write_binary creates a writer for the 'utf8' codec; such a writer
should consume unicode and emit utf8-encoded bytes. This writer
is passed to each of the sections managed by GptTmplInfParser as
follows:
    def write_binary(self, filename):
        with codecs.open(filename, 'wb+',
                         self.encoding) as f:
            for s in self.sections:
                self.sections[s].write_section(s, f)
And each section type itself encodes its result to 'utf-16-le',
e.g.
    class UnicodeParam(AbstractParam):
        def write_section(self, header, fp):
            fp.write(u'[Unicode]\r\nUnicode=yes\r\n'.encode(self.encoding))
But this makes little sense: it seems like sections are encoded with one
encoding, but the file as a whole is supposed to be encoded as utf8? Also,
having an encoding per ParamType doesn't seem correct.
Bizarrely, in PY2 this works and it actually encodes the whole file as
utf-16-le. In PY3 you can't do this, as the writer wants to deal with
strings, not bytes (after the extra encode phase in 'write_section').
So, the changes here are to remove the unnecessary encoding in each
'write_section' method; additionally, in GptTmplInfParser.write_binary the
codecs.open call now uses the correct codec (e.g. 'utf-16-le') to write
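A hedged, simplified sketch of the reworked approach (the real
GptTmplInfParser has many more section types):
    import codecs

    class UnicodeParam(object):
        def write_section(self, header, fp):
            # write text only; the codecs writer does the utf-16-le encoding
            fp.write(u'[Unicode]\r\nUnicode=yes\r\n')

    def write_binary(filename, sections):
        with codecs.open(filename, 'w', 'utf-16-le') as f:
            for name in sections:
                sections[name].write_section(name, f)

    write_binary('GptTmpl.inf', {'Unicode': UnicodeParam()})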
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Fixes:
1) various ldb.bytes that should be displayed as strings in PY3
2) sorting of lists of xml Elements in PY3
3) various files need to be opened in binary mode (to accept binary
data)
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
The previous version here was using UnicodeReader, which was
wrapping the UTF8Recoder class and passing that to csv.reader.
It looks like the intention was to read a byte stream in a
certain encoding and then re-encode it to a different encoding,
and then UnicodeReader creates unicode from the newly encoded stream.
This is unnecessary: we know the encoding of the byte stream, and
codecs.getreader will happily consume the byte stream and give back
unicode. The unicode can be fed directly into csv.reader.
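A stdlib-only sketch of the simpler approach (utf-8 assumed here; the real
code takes the encoding from the caller):
    import codecs
    import csv
    import io

    raw = io.BytesIO(u'name,value\r\nfoo,1\r\n'.encode('utf-8'))
    reader = codecs.getreader('utf-8')(raw)   # yields unicode lines from the byte stream
    for row in csv.reader(reader):
        print(row)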
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Fixes:
1) use compat versions of ConfigParser and StringIO
2) fix sorting of a list of XML Elements
3) the file needs to be opened in binary mode, as the write_pretty_xml
routine uses a BytesIO() object.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Although this is unintuitive, it's because we are writing unicode,
not bytes (both in PY2 & PY3). Using the 'b' mode causes an error in
PY3.
In PY3 we can specify the encoding, but not in PY2.
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
'file' no longer exists in PY3; replace it with 'open'
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Fixes:
1) sorting of xml.etree.ElementTree.Element: in PY2, sort
seems to handle lists of these, but in PY3 this no longer works.
Choosing tag as the sort key for PY3, so at least in python3
there is a consistent sort (it probably won't match how it is
sorted in PY2, but nothing seems to depend on that)
2) md5 requires bytes
3) tostring returns bytes in PY3; adjust code for that
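Small stdlib illustrations of each point (hedged; not the actual Samba code):
    import hashlib
    import xml.etree.ElementTree as ET

    root = ET.fromstring('<r><b/><a/></r>')
    ordered = sorted(root, key=lambda e: e.tag)                    # 1) sort by tag
    digest = hashlib.md5('some text'.encode('utf-8')).hexdigest()  # 2) md5 wants bytes
    xml_text = ET.tostring(root).decode('utf-8')                   # 3) tostring gives bytes on PY3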
Signed-off-by: Noel Power <noel.power@suse.com>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Due to the userAccountControl flags we were specifying, the machine
accounts were all created as critical objects. When trying to populate
1000s of machine accounts in a DB, this makes replication unnecessarily
slow (because it has to replicate them all twice).
This patch changes it so that when we're just creating machine accounts for
the purpose of populating a semi-realistic DB, we just use the default
WORKSTATION_TRUST_ACCOUNT flag.
Note that for the accounts used for traffic-replay, we apparently need
the existing flags in order for the DC to accept certain requests.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Autobuild-User(master): Tim Beale <timbeale@samba.org>
Autobuild-Date(master): Mon Nov 5 03:43:24 CET 2018 on sn-devel-144
Currently the tool only generates the machine accounts needed for
traffic generation. However, this isn't realistic if we're trying to use
the tool to generate users to simulate a large network.
This patch generates machine accounts along with the user accounts.
Note we assume there will be more computer accounts than users in a real
network (e.g. work laptops, servers, etc), so generate slightly more
computer accounts.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
When creating 1000s of users you currently get a lot of debug, but at
the same time you have no idea how far through creating the users you
actually are.
Instead of logging every single user account that's created, log every
50th (as well as how far through the overall generation we are).
Logger already includes timestamps, so we can remove generating the
timestamp diff manually. User creation is the slowest operation - adding
groups/memberships is much faster, so we don't need to log as
frequently.
Note that there is a usability trade-off on how frequently we log
depending on whether the user is using the slower (but more common)
method of going via LDAP, vs the much faster (but more obscure) method
of writing directly to sam.ldb with ldb:nosync=true. In my tests, we end
up logging every ~30-ish secs with LDAP, and every ~3 seconds with
direct file writes.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Using logger is more helpful here because it includes timestamps, so we
can see how long things are taking. It's also more consistent with the
rest of the traffic_replay logging.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Each user-group membership was being written to the DB in a single
operation. With large numbers of users (e.g. 10,000 in average 15 groups
each), this becomes a lot of operations (e.g. 150,000). This patch
reworks the code so that we write the memberships for a group in
one operation. E.g. instead of 150,000 DB operations, we might make
1,500. This makes writing the group memberships several times
faster.
Note that there is a performance vs memory trade-off. When we hit
10,000+ members in a group, memory-usage in the underlying DB modify
operation becomes very inefficient/costly. So we avoid potential memory
usage problems by writing no more than 1,000 users to a group at once.
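A hedged sketch of the batching idea (DN strings and chunk size illustrative):
    def add_users_to_group(samdb, group_dn, user_dns, chunk_size=1000):
        # one LDB modify per chunk of members, instead of one per user
        for i in range(0, len(user_dns), chunk_size):
            chunk = user_dns[i:i + chunk_size]
            members = "".join("member: %s\n" % dn for dn in chunk)
            ldif = "dn: %s\nchangetype: modify\nadd: member\n%s" % (group_dn, members)
            samdb.modify_ldif(ldif)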
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
We can speed up writing the group memberships by adding multiple users
to a group in a single DB modify operation.
To do this, we first need to reorganize the assignments so instead
of being a set of tuples, it's a dictionary where key=group and
value=list-of-users-in-group.
add_users_to_groups() now iterates through the users/groups slightly
differently, but mostly it's just indentation changes. We haven't
changed the number of DB operations yet - we'll do that in the next
patch.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
When adding 10,000 users, one user would end up in over 1000 groups.
With 100,000 users, it would be more like 10,000 groups. While it makes
sense to have groups with large numbers of users, having a single user
in 1000s of groups is probably less realistic.
This patch changes the shape of the Pareto distribution that we use to
assign users to groups. The aim is to cap users at belonging to at most
~500 groups. Increasing the shape of the Pareto distribution pushes the
user assignments so they're closer to the average, and the tail (with
users in lots of groups) is not so large.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
The current probability we were assigning to users roughly approximates
the Pareto Distribution (with shape=1.0). This means the code now uses a
documented algorithm (i.e. explanation on Wikipedia). It also allows us
to vary the distribution by changing the shape parameter.
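For reference, a rough stdlib illustration of Pareto-shaped weights (not
necessarily the actual traffic_replay implementation):
    import random

    def group_weights(num_groups, shape=1.0):
        # larger `shape` pulls the tail in, so extreme memberships get rarer
        return [random.paretovariate(shape) for _ in range(num_groups)]

    weights = group_weights(10, shape=2.0)
    probabilities = [w / sum(weights) for w in weights]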
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
When assigning 10,000 users to 15 groups each (on average),
assign_groups() would take over 30 seconds. This did not include any DB
operations whatsoever. This patch improves things, so that it takes less
than a second in the same situation.
The problem was the code was looping ~23 million times where the
'random.random() < probability * 10000' condition was not met. The
problem is individual group/user probabilities get lower as the number
of groups/users increases. And so with large numbers of users, most of
the time the calculated probability was very small and didn't meet the
threshold.
This patch changes it so we can select a user/group in one go, avoiding
the need to loop multiple times.
Basically we distribute the users (or groups) between 0.0 and 1.0, so
that each user has their own 'slice', and this slice is proportional to
their weighted probability. random.random() generates a value between
0.0 and 1.0, so we can use this to pick a 'slice' (or rather, we use
this as an index into the list, using .bisect()). Users/groups with
larger probabilities end up with larger slices, so are more likely to
get picked.
The end result is roughly the same distribution as before, although the
first 10 or so user/groups seem to get picked more frequently, so the
weighted-probability calculations may need tweaking some more.
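A hedged, simplified sketch of the slice-and-bisect selection:
    import bisect
    import random

    def build_cumulative(weights):
        total, cumulative = 0.0, []
        for w in weights:
            total += w
            cumulative.append(total)
        return cumulative

    def pick_index(cumulative):
        # random() is in [0.0, 1.0); scale to the total weight and find the slice
        r = random.random() * cumulative[-1]
        return bisect.bisect_right(cumulative, r)

    cumulative = build_cumulative([5.0, 1.0, 1.0, 3.0])
    counts = [0, 0, 0, 0]
    for _ in range(10000):
        counts[pick_index(cumulative)] += 1   # index 0 is picked ~50% of the time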
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
This doesn't change functionality at all. It just moves the probability
calculations out into separate functions.
We want to tweak the logic/implementation behind this code, but the
rest of assign_groups() doesn't really care how the underlying
probabilities are worked out, so long as it gets a suitably random
user/group membership each time round the loop.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Wrap up the group assignment calculations in a helper class. We're going
to tweak the internals a bit in subsequent patches, but the rest of the
code doesn't really need to know about these changes.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
In this code
    def f(a, b=[]):
        b.append(a)
        return b
all single-argument calls to f() will affect the same copy of b.
In the controls case, controls=None has the same effect as
controls=[].
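The usual fix, for reference:
    def f(a, b=None):
        if b is None:
            b = []        # a fresh list per call, not one shared default
        b.append(a)
        return b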
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
When given a list, it will use the list directly as an argument list,
avoiding shell expansion and the intermediary process.
This removes shell-expansion trouble, and saves the machine a little
bit of work.
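A generic illustration (not the BlackboxTestCase code itself):
    import subprocess

    # argv list: no /bin/sh, and no word-splitting or glob expansion of arguments
    subprocess.check_call(['echo', 'no shell expansion of *.txt here'])

    # shell string: spawns an extra shell process and re-parses the command line
    subprocess.check_call('echo "goes via /bin/sh"', shell=True)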
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
Autobuild-User(master): Douglas Bagnall <dbagnall@samba.org>
Autobuild-Date(master): Thu Nov 1 09:40:02 CET 2018 on sn-devel-144
This was unused and broken. E.g. here:
    - def init(self):
    -     # Check to see that this 'existing' LDAP backend in fact exists
    -     ldapi_db = Ldb(self.ldapi_uri)
there is no attribute self.ldapi_uri, so this would always raise an
exception.
It was being left around in case it became useful, but that doesn't
seem to be happening.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
There were 2 % formats and 3 arguments.
Also reformat for line length
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
previously these would have raised an exception
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
but str(e) is the same as str(e.message), so we can use that
on Python 2 and 3.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
We were skipping a level in the inheritance chain, which had no effect
in this case (no .setUps or .tearDowns were missed) but it would be
confusing if the parents ever changed.
Note: in python 3, you just call super() with no args, and it works
out the right thing.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <noel.power@suse.com>
With large domains it's hard to get an idea of how many groups there
are, and how many users are in each group, on average. However, this
could have a big impact on whether a problem can be reproduced or not.
This patch dumps out some summary information so that you can get a
quick idea of how big the groups are.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Autobuild-User(master): Douglas Bagnall <dbagnall@samba.org>
Autobuild-Date(master): Wed Oct 31 03:40:41 CET 2018 on sn-devel-144
This adds an easy way for users to see (via samba-tool) how many members
are in various groups, without querying the members for each individual
group.
For example, you could pipe this output to grep to check for groups with
zero or one members (i.e. historic groups that may no longer make
sense).
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Currently the online/rename backup files always use the default backend
(TDB) and there is no way to change this.
This patch adds the backend-store option to the backup commands so that
you can create a backup with an MDB backend, if needed.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
This reduces noise, so the messages only come out if you specify
--debug.
Signed-off-by: Tim Beale <timbeale@catalyst.net.nz>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
This makes it easier to debug dbcheck problems.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=13418
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
This is not always going to work, and is not guaranteed to be
consistent even between minor versions.
Here is a simple counterexample:
>>> a = 'hello'
>>> a is 'hello'
True
>>> a is 'hello'.lower()
False
>>> a == a.lower()
True
Possibly it always works for the empty string, but we cannot rely
on that.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andreas Schneider <asn@samba.org>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Mon Oct 29 23:13:36 CET 2018 on sn-devel-144
Python's property() function works like this:
property([getter[, setter[, delete[, doc]]]])
but we have been forgetting the delete function, or rather setting it
to be a string. A string is not callable and is unlikely to succeed at
deleting the property.
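For reference, a minimal correct use of all three functions:
    class C(object):
        def __init__(self):
            self._x = None

        def _get_x(self):
            return self._x

        def _set_x(self, value):
            self._x = value

        def _del_x(self):
            del self._x

        # passing a string as the third argument would make `del c.x` fail,
        # because a str is not callable
        x = property(_get_x, _set_x, _del_x, "the 'x' property")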
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Andreas Schneider <asn@samba.org>
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>
Autobuild-User(master): Noel Power <npower@samba.org>
Autobuild-Date(master): Fri Oct 26 00:50:37 CEST 2018 on sn-devel-144
This test will be revealed to the world in the next commit.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>
Pair-programmed-with: Garming Sam <garming@catalyst.net.nz>
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>
The 'from_xml()' definition is replaced a few lines down
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>
The error is in the value, and StandardError is not in Python 3
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>
We had our own special version with very few entries, but only
used it in one place.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Noel Power <npower@samba.org>