commit fdccaab2a9 (Martin Schwenke):
ctdb/eventscripts: Do not reconfigure in "monitor" events

"monitor" events can be cancelled.  If a reconfigure action does a
service restart then the "monitor" event can be cancelled at the
inconvenient moment after the service is stopped.  In this case the
service stays down and the node may become unhealthy (depending on
whether there are any repair actions in the monitor event).

A long time ago we did service reconfiguration in "monitor" events
following failovers.  Service reconfiguration was then moved to the
"ipreallocated" event.  However, reconfiguration in "monitor" events
has been kept as a last resort in case an "ipreallocate" event does
not occur.  The only important case that this covers is "ctdb
deleteip", where "releaseip" events are generated without a
corresponding "ipreallocated".  Therefore, IPs can be deleted without
running the required service reconfiguration.

The supported way of removing IP addresses is now via "ctdb
reloadips", which always causes a takeover run with a corresponding
"ipreallocate" event.

This means that service reconfiguration in "monitor" events is no
longer required and should be removed because it is unsafe.

Also update the associated tests.  Make the first test confirm that
the "monitor" event no longer does reconfiguration.  Change the
others to test that monitor status is correctly replayed when
something else is doing a reconfigure and currently holds the
reconfigure lock.

Signed-off-by: Martin Schwenke <martin@meltin.net>
Reviewed-by: Amitay Isaacs <amitay@gmail.com>

Autobuild-User(master): Amitay Isaacs <amitay@samba.org>
Autobuild-Date(master): Tue Dec 17 06:32:35 CET 2013 on sn-devel-104

Introduction
------------

For a developer, the simplest way of running most tests on a local
machine from within the git repository is:

  make test

This runs all unit tests (onnode, takeover, tool, eventscripts) and
the tests against local daemons (simple) using the script
tests/run_tests.sh.

When running tests against a real or virtual cluster the script
tests/run_cluster_tests.sh can be used.  This runs all integration
tests (simple, complex).

Both of these scripts can also take a list of tests to run.  You can
also pass options, which are then passed through to run_tests.
However, if you pass options then the default list of tests is not
used, so you also need to specify which tests to run.  You can't have
everything...
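
For example, something like the following (an illustrative
invocation, using options and tests that are described below) runs
the "tool" unit tests plus a single onnode test, exiting on the first
failure:

  ./tests/run_tests.sh -e tool ./tests/onnode/0001.sh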

tests/run_tests.sh
------------------

This script can be used to manually run all or selected unit tests
and the simple integration tests against local daemons.  Tests are
selected by specifying optional command-line arguments.  If no
arguments are given, all unit tests and simple integration tests are
run.

This runs all unit tests of the "tool" category:

  ./tests/run_tests.sh tool

To run a single test, simply specify the path of the test script as
the last argument, e.g.:

  ./tests/run_tests.sh ./tests/eventscripts/00.ctdb.monitor.001
  ./tests/run_tests.sh ./tests/simple/76_ctdb_pdb_recovery.sh

One can also specify multiple test suites and tests:

  ./tests/run_tests.sh eventscripts tool ./tests/onnode/0001.sh

The script also has a number of command-line switches.  Some of the
more useful options are described below (a combined example follows
the list):

  -s  Print a summary of test results after running all tests

  -l  Use local daemons for integration tests

      This allows the tests in "simple" to be run against local
      daemons.

      All integration tests communicate with cluster nodes using
      onnode or the ctdb tool, which both have some test hooks to
      support local daemons.

      By default 3 daemons are used.  If you want to use a different
      number of daemons then do not use this option but set
      TEST_LOCAL_DAEMONS to the desired number of daemons instead.
      The -l option just sets TEST_LOCAL_DAEMONS to 3...  :-)

  -e  Exit on the first test failure

  -C  Clean up - kill daemons and remove $TEST_VAR_DIR when done

      Tests use a temporary/var directory for test state.  By default,
      this directory is not removed when tests are complete, so you
      can do forensics or, for integration tests, re-run tests that
      have failed against the same directory (with the same local
      daemons setup).  So this option cleans things up.

      It also kills any local daemons associated with the directory.

  -V  Use <dir> as $TEST_VAR_DIR

      Use the specified temporary/var directory.

  -H  No headers - for running a single test with another wrapper

      This allows tests to be embedded in some other test framework
      and executed one-by-one with all the required
      environment/infrastructure.

      This replaces the old ctdb_test_env script.
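
As a combined example, something like the following should run the
"simple" tests against 5 local daemons (instead of the default 3 that
-l would give), print a summary of results and clean up afterwards:

  TEST_LOCAL_DAEMONS=5 ./tests/run_tests.sh -s -C simple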

How do the tests find remote test programs?
-------------------------------------------

If all of the cluster nodes have the CTDB git tree in the same
location as on the test client then no special action is necessary.
The simplest way of doing this is to share the tree to cluster nodes
and test clients via NFS.
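
For example, on the machine holding the git tree, a hypothetical NFS
export (the path and network below are illustrative only) might look
like this:

  # /etc/exports
  /home/user/samba    192.168.1.0/24(ro,no_subtree_check)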

If cluster nodes do not have the CTDB git tree then
CTDB_TEST_REMOTE_DIR can be set to a directory that, on each cluster
node, contains the contents of tests/scripts/ and tests/bin/.
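
For example, if those files have been copied to a hypothetical
directory /opt/ctdb-tests on every cluster node, then something like
this should work:

  CTDB_TEST_REMOTE_DIR=/opt/ctdb-tests ./tests/run_cluster_tests.sh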

In the future this will hopefully (also) be supported via a ctdb-test
package.

Running the ctdb tool under valgrind
------------------------------------

The easiest way of doing this is something like:

  VALGRIND="valgrind -q" scripts/run_tests ...

This can be used to cause all invocations of the ctdb client (and,
with local daemons, the ctdbd daemons themselves) to occur under
valgrind.

NOTE: Some libc calls seem to do weird things and perhaps cause
spurious output from ctdbd at start time.  Please read valgrind output
carefully before reporting bugs.  :-)

How is the ctdb tool invoked?
-----------------------------

$CTDB determines how to invoke the ctdb client.  If $CTDB is not
already set and $VALGRIND is set, then $CTDB is set to
"$VALGRIND ctdb".  If $CTDB is not already set and $VALGRIND is not
set, then it is simply set to "ctdb".
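
In shell terms, the behaviour described above corresponds to
something like the following sketch (illustrative only, not the
actual test harness code):

  if [ -z "$CTDB" ] ; then
      if [ -n "$VALGRIND" ] ; then
          CTDB="${VALGRIND} ctdb"
      else
          CTDB="ctdb"
      fi
  fi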