
======
Manual
======

Introduction
------------

This document provides an overview of the features provided by testtools.
Refer to the API docs (i.e. docstrings) for full details on a particular
feature.


Extensions to TestCase
----------------------

Custom exception handling
~~~~~~~~~~~~~~~~~~~~~~~~~

testtools provides a way to control how test exceptions are handled. To do
this, add a new exception to self.exception_handlers on a TestCase. For
example::

    >>> self.exception_handlers.insert(-1, (ExceptionClass, handler))

Having done this, if any of setUp, tearDown, or the test method raise
ExceptionClass, handler will be called with the test case, test result and the
raised exception.
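
For example, here is a sketch that reports a hypothetical
application-specific exception as a failure rather than an error
(``DomainError`` and the handler are invented for illustration)::

    import sys

    from testtools import TestCase

    class DomainError(Exception):
        """Hypothetical application-specific exception."""

    class DomainErrorTests(TestCase):

        def setUp(self):
            super(DomainErrorTests, self).setUp()
            def handle(case, result, exc):
                # Record a DomainError as a failure instead of an error.
                result.addFailure(case, sys.exc_info())
            self.exception_handlers.insert(-1, (DomainError, handle))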

Controlling test execution
~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to control more than just how exceptions are raised, you can
provide a custom `RunTest` to a TestCase. The `RunTest` object can change
everything about how the test executes.

To work with `testtools.TestCase`, a `RunTest` must have a factory that takes
a test and an optional list of exception handlers. Instances returned by the
factory must have a `run()` method that takes an optional `TestResult` object.

The default is `testtools.runtest.RunTest`, which calls 'setUp', the test
method and 'tearDown' in the normal, vanilla way that Python's standard
unittest does.

To specify a `RunTest` for all the tests in a `TestCase` class, do something
like this::

    class SomeTests(TestCase):
        run_tests_with = CustomRunTestFactory

To specify a `RunTest` for a specific test in a `TestCase` class, do::

    class SomeTests(TestCase):
        @run_test_with(CustomRunTestFactory, extra_arg=42, foo='whatever')
        def test_something(self):
            pass

In addition, either of these can be overridden by passing a factory to the
`TestCase` constructor with the optional 'runTest' argument.
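
As a sketch, a custom `RunTest` might subclass the default and wrap its
ordinary behaviour (this assumes the default `RunTest` keeps the test it was
constructed with under ``self.case``)::

    import time

    from testtools.runtest import RunTest

    class TimingRunTest(RunTest):
        """Hypothetical RunTest that prints how long each test took."""

        def run(self, result=None):
            start = time.time()
            try:
                return super(TimingRunTest, self).run(result)
            finally:
                print '%s took %0.3fs' % (
                    self.case.id(), time.time() - start)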

TestCase.addCleanup
~~~~~~~~~~~~~~~~~~~

addCleanup is a robust way to arrange for a cleanup function to be called
before tearDown. This is a powerful and simple alternative to putting cleanup
logic in a try/finally block or tearDown method. e.g.::

    def test_foo(self):
        foo.lock()
        self.addCleanup(foo.unlock)
        ...

Cleanups can also report multiple errors, if appropriate, by wrapping them in
a testtools.MultipleExceptions object::

    raise MultipleExceptions(exc_info1, exc_info2)
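
For example, here is a sketch of a cleanup that releases two resources and
reports every failure rather than just the first (``release`` is an invented
method on the resource objects)::

    import sys

    from testtools import MultipleExceptions

    def cleanup_resources(first, second):
        errors = []
        for resource in (first, second):
            try:
                resource.release()
            except Exception:
                errors.append(sys.exc_info())
        if errors:
            raise MultipleExceptions(*errors)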

TestCase.addOnException
~~~~~~~~~~~~~~~~~~~~~~~

addOnException adds an exception handler that will be called from the test
framework when it detects an exception from your test code. The handler is
given the exc_info for the exception, and can use this opportunity to attach
more data (via the addDetail API) or to perform other diagnostics.
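
For example, here is a sketch that attaches a log only when the test raises
(``read_log`` is an invented helper; the content objects come from
testtools.content)::

    from testtools.content import Content
    from testtools.content_type import UTF8_TEXT

    def test_foo(self):
        def attach_log(exc_info):
            # Only runs if the test raises, so the (possibly expensive)
            # log gathering is skipped for passing tests.
            self.addDetail('log', Content(
                UTF8_TEXT, lambda: [read_log().encode('utf8')]))
        self.addOnException(attach_log)
        ...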

TestCase.patch
~~~~~~~~~~~~~~

``patch`` is a convenient way to monkey-patch a Python object for the duration
of your test. It's especially useful for testing legacy code. e.g.::

    def test_foo(self):
        my_stream = StringIO()
        self.patch(sys, 'stderr', my_stream)
        run_some_code_that_prints_to_stderr()
        self.assertEqual('', my_stream.getvalue())

The call to ``patch`` above masks sys.stderr with 'my_stream' so that anything
printed to stderr will be captured in a StringIO variable that can actually be
tested. Once the test is done, the real sys.stderr is restored to its rightful
place.

TestCase.skipTest
~~~~~~~~~~~~~~~~~

``skipTest`` is a simple way to have a test stop running and be reported as a
skipped test, rather than a success, error or failure. This is an alternative
to convoluted logic during test loading, permitting later and more localized
decisions about the appropriateness of running a test. Many reasons exist to
skip a test - for instance, when a dependency is missing, when the test is
expensive and should not be run while on laptop battery power, or when the
test is testing an incomplete feature (sometimes called a TODO). If you run
your test suite with a TestResult object that is missing the ``addSkip``
method, the ``addError`` method will be invoked instead. ``skipTest`` was
previously known as ``skip``, but as Python 2.7 adds ``skipTest`` support,
the ``skip`` name is now deprecated (no warning is emitted yet - we may add
one in the future).
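
For example, here is a sketch that skips when an optional dependency is
absent (``defer`` here would come from the try_import helper described under
"General helpers" below)::

    def test_uses_twisted(self):
        if defer is None:
            self.skipTest('twisted.internet.defer is not available')
        ...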

TestCase.useFixture
~~~~~~~~~~~~~~~~~~~

``useFixture(fixture)`` calls setUp on the fixture, schedules a cleanup to
clean it up, and schedules a cleanup to attach all details held by the
fixture to the details dict of the test case. The fixture object should meet
the ``fixtures.Fixture`` protocol (version 0.3.4 or newer). This is useful
for moving code out of setUp and tearDown methods and into composable side
classes.
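
For example, a sketch using the TempDir fixture from the fixtures package::

    import os

    from fixtures import TempDir

    def test_writes_file(self):
        # The fixture's cleanup is scheduled automatically, so the
        # directory is removed when the test finishes.
        tempdir = self.useFixture(TempDir())
        path = os.path.join(tempdir.path, 'output.txt')
        ...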

New assertion methods
~~~~~~~~~~~~~~~~~~~~~

testtools adds several assertion methods:

* assertIn
* assertNotIn
* assertIs
* assertIsNot
* assertIsInstance
* assertThat
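
For example::

    def test_assertions(self):
        thing = object()
        self.assertIn('bread', ['bread', 'milk'])
        self.assertIsInstance(7, int)
        self.assertIs(thing, thing)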

Improved assertRaises
~~~~~~~~~~~~~~~~~~~~~

TestCase.assertRaises returns the caught exception. This is useful for
asserting more things about the exception than just the type::

    error = self.assertRaises(UnauthorisedError, thing.frobnicate)
    self.assertEqual('bob', error.username)
    self.assertEqual('User bob cannot frobnicate', str(error))

Note that this is incompatible with the assertRaises in unittest2/Python 2.7.
While we have no immediate plans to change it to be compatible, consider using
the new assertThat facility instead::

    self.assertThat(
        lambda: thing.frobnicate('foo', 'bar'),
        Raises(MatchesException(UnauthorisedError('bob'))))

There is also a convenience function to handle this common case::

    self.assertThat(
        lambda: thing.frobnicate('foo', 'bar'),
        raises(UnauthorisedError('bob')))

TestCase.assertThat
~~~~~~~~~~~~~~~~~~~

assertThat is a clean way to write complex assertions without tying them to
the TestCase inheritance hierarchy (and thus making them easier to reuse).
assertThat takes an object to be matched, and a matcher, and fails if the
matcher does not match the matchee.

See pydoc testtools.Matcher for the protocol that matchers need to implement.

testtools includes some matchers in testtools.matchers.
``python -c 'import testtools.matchers; print testtools.matchers.__all__'``
will list those matchers.

An example using the DocTestMatches matcher, which uses doctest's example
matching logic::

    def test_foo(self):
        self.assertThat([1,2,3,4], DocTestMatches('[1, 2, 3, 4]'))

Creation methods
~~~~~~~~~~~~~~~~

testtools.TestCase implements creation methods called ``getUniqueString`` and
``getUniqueInteger``. See pages 419-423 of *xUnit Test Patterns* by Meszaros
for a detailed discussion of creation methods.
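
For example, here is a sketch that uses them to avoid hard-coding test data
(``make_user`` is an invented factory)::

    def test_create_user(self):
        # The values are distinct for every call, so tests do not
        # collide on magic constants.
        name = self.getUniqueString()
        number = self.getUniqueInteger()
        user = make_user(name=name, uid=number)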

Test renaming
~~~~~~~~~~~~~

``testtools.clone_test_with_new_id`` is a function to copy a test case
instance to one with a new name. This is helpful for implementing test
parameterization.
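
For example, a sketch that runs one test instance under two different ids::

    from testtools import clone_test_with_new_id

    def make_variants(test):
        return [clone_test_with_new_id(test, test.id() + '(a)'),
                clone_test_with_new_id(test, test.id() + '(b)')]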

Extensions to TestResult
------------------------

TestResult.addSkip
~~~~~~~~~~~~~~~~~~

This method is called on result objects when a test skips. The
``testtools.TestResult`` class records skips in its ``skip_reasons`` instance
dict. These can be reported on in much the same way as successful tests.
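
For example, a sketch of summarising skips after a run (``suite`` stands for
any already-built test suite)::

    from testtools import TestResult

    result = TestResult()
    suite.run(result)
    for reason, tests in result.skip_reasons.items():
        print '%s: %d test(s) skipped' % (reason, len(tests))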

TestResult.time
~~~~~~~~~~~~~~~

This method controls the time used by a TestResult, permitting accurate
timing of test results gathered on different machines or in different threads.
See pydoc testtools.TestResult.time for more details.
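
For example, a sketch that feeds the result explicit timestamps (``test``
stands for any test case instance, and passing None is assumed to hand
control back to the system clock)::

    from datetime import datetime, timedelta

    from testtools import TestResult

    result = TestResult()
    now = datetime.utcnow()
    result.time(now)                    # tell the result what time it is
    result.startTest(test)
    result.time(now + timedelta(seconds=2))
    result.addSuccess(test)
    result.stopTest(test)
    result.time(None)                   # gather from the real clock again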

ThreadsafeForwardingResult
~~~~~~~~~~~~~~~~~~~~~~~~~~

A TestResult which forwards activity to another test result, but synchronises
on a semaphore to ensure that all the activity for a single test arrives in a
batch. This allows simple TestResults which do not expect concurrent test
reporting to be fed the activity from multiple test threads or processes.
Note that when you provide multiple errors for a single test, the target sees
each error as a distinct complete test.
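
For example, a sketch of wiring several worker threads to one target result
(this assumes the constructor takes the target and a shared semaphore)::

    import sys
    from threading import Semaphore

    from testtools import TextTestResult, ThreadsafeForwardingResult

    target = TextTestResult(sys.stdout)
    semaphore = Semaphore(1)
    # One forwarder per worker thread; all of them share the target.
    forwarders = [ThreadsafeForwardingResult(target, semaphore)
                  for i in range(4)]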

TextTestResult
~~~~~~~~~~~~~~

A TestResult that provides a text UI very similar to the Python standard
library UI. Key differences are that it supports the extended outcomes and
details API, and is completely encapsulated into the result object, permitting
it to be used without a 'TestRunner' object. Not all the Python 2.7 outcomes
are displayed (yet). It is also a 'quiet' result with no dots or verbose mode.
These limitations will be corrected soon.
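
Because the result is self-contained, a run can be driven without any runner.
A sketch (``suite`` stands for any test suite)::

    import sys

    from testtools import TextTestResult

    result = TextTestResult(sys.stdout)
    result.startTestRun()
    try:
        suite.run(result)
    finally:
        result.stopTestRun()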

Test Doubles
~~~~~~~~~~~~

In testtools.testresult.doubles there are three test doubles that testtools
uses for its own testing: Python26TestResult, Python27TestResult and
ExtendedTestResult. Each of these TestResult objects implements a single
variation of the TestResult API and logs activity to a list, self._events.
These are made available for the convenience of people writing their own
extensions.

startTestRun and stopTestRun
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Python 2.7 added hooks 'startTestRun' and 'stopTestRun' which are called
before and after the entire test run. 'stopTestRun' is particularly useful for
test results that wish to produce summary output.

testtools.TestResult provides empty startTestRun and stopTestRun methods, and
the default testtools runner will call these methods appropriately.
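
For example, a sketch of a result that prints a summary when the run ends::

    from testtools import TestResult

    class SummaryResult(TestResult):

        def stopTestRun(self):
            super(SummaryResult, self).stopTestRun()
            print 'Ran %d tests; success: %s' % (
                self.testsRun, self.wasSuccessful())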

Extensions to TestSuite
-----------------------

ConcurrentTestSuite
~~~~~~~~~~~~~~~~~~~

A TestSuite for parallel testing. This is used in conjunction with a helper
that runs a single suite in some parallel fashion (for instance, forking,
handing off to a subprocess, to a compute cloud, or simple threads).
ConcurrentTestSuite uses the helper to get a number of separate runnable
objects, each with a run(result) method; it runs them all in threads and uses
a ThreadsafeForwardingResult to coalesce their activity.
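
For example, a sketch that runs every test in its own thread, assuming
``iterate_tests`` from testtools.testsuite to flatten the suite (``suite``
and ``result`` stand for any suite and result)::

    from testtools import ConcurrentTestSuite
    from testtools.testsuite import iterate_tests

    def split_tests(suite):
        # One runnable per test, so every test gets its own thread.
        return list(iterate_tests(suite))

    concurrent = ConcurrentTestSuite(suite, split_tests)
    concurrent.run(result)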

Running tests
-------------

testtools provides a convenient way to run a test suite using the testtools
result object: ``python -m testtools.run testspec [testspec ...]``.

To run tests with Python 2.4, you'll have to do something like
``python2.4 /path/to/testtools/run.py testspec [testspec ...]``.

Test discovery
--------------

testtools includes a backported version of the Python 2.7 glue for using the
'discover' test discovery module. If you either have Python 2.7/3.1 or newer,
or install the 'discover' module, then you can invoke discovery::

    python -m testtools.run discover [path]

For more information see the Python 2.7 unittest documentation, or::

    python -m testtools.run --help

Twisted support
---------------

Support for running Twisted tests is very experimental right now. You
shouldn't really do it. However, if you are going to, here are some tips for
converting your Trial tests into testtools tests.

* Use the AsynchronousDeferredRunTest runner (see the sketch after this list)
* Make sure to upcall to setUp and tearDown
* Don't use setUpClass or tearDownClass
* Don't expect setting .todo, .timeout or .skip attributes to do anything
* flushLoggedErrors is not there for you. Sorry.
* assertFailure is not there for you. Even more sorry.
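
A sketch of the first two tips (this assumes AsynchronousDeferredRunTest
lives in testtools.deferredruntest)::

    from testtools import TestCase
    from testtools.deferredruntest import AsynchronousDeferredRunTest

    class TwistedTests(TestCase):

        run_tests_with = AsynchronousDeferredRunTest

        def setUp(self):
            super(TwistedTests, self).setUp()
            # ... your setup, after upcalling ...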

General helpers
---------------

Lots of the time we would like to conditionally import modules. testtools
needs to do this itself, and graciously extends the ability to its users.

Instead of::

    try:
        from twisted.internet import defer
    except ImportError:
        defer = None

You can do::

    defer = try_import('twisted.internet.defer')

Instead of::

    try:
        from StringIO import StringIO
    except ImportError:
        from io import StringIO

You can do::

    StringIO = try_imports(['StringIO.StringIO', 'io.StringIO'])