subunit: A streaming protocol for test results
Copyright (C) 2005-2009 Robert Collins <robertc@robertcollins.net>

Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
license at the user's choice. Copies of both licenses are available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licences.

Unless required by applicable law or agreed to in writing, software
distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
license you chose for the specific language governing permissions and
limitations under that license.

See the COPYING file for full details on the licensing of Subunit.

Subunit reuses iso8601 by Michael Twomey, distributed under an MIT style
licence - see python/iso8601/LICENSE for details.

Subunit
-------

Subunit is a streaming protocol for test results. The protocol is human
readable and easily generated and parsed. By design all the components of
the protocol conceptually fit into the xUnit TestCase->TestResult
interaction.

Subunit comes with command line filters to process a subunit stream and
language bindings for python, C, C++ and shell. Bindings are easy to write
for other languages.

A number of useful things can be done easily with subunit:

* Test aggregation: Tests run separately can be combined and then
  reported/displayed together. For instance, tests from different languages
  can be shown as a seamless whole.

* Test archiving: A test run may be recorded and replayed later.

* Test isolation: Tests that may crash or otherwise interact badly with each
  other can be run separately and then aggregated, rather than interfering
  with each other.

* Grid testing: subunit can act as the necessary serialisation and
  deserialisation to get test runs on distributed machines to be reported in
  real time.

Subunit supplies the following filters:

* tap2subunit - convert perl's TestAnythingProtocol to subunit.
* subunit2pyunit - convert a subunit stream to pyunit test results.
* subunit2gtk - show a subunit stream in GTK.
* subunit2junitxml - convert a subunit stream to JUnit's XML format.
* subunit-diff - compare two subunit streams.
* subunit-filter - filter out tests from a subunit stream.
* subunit-ls - list info about tests present in a subunit stream.
* subunit-stats - generate a summary of a subunit stream.
* subunit-tags - add or remove tags from a stream.

Integration with other tools
----------------------------

Subunit's language bindings act as integrations with various test runners
such as 'check', 'cppunit' and Python's 'unittest'. Beyond that, a small
amount of glue (typically a few lines) will allow Subunit to be used in more
sophisticated ways.

Python
======

Subunit has excellent Python support: most of the filters and tools are
written in python and there are facilities for using Subunit to increase
test isolation seamlessly within a test suite.

One simple way to run an existing python test suite and have it output
subunit is the module ``subunit.run``::

  $ python -m subunit.run mypackage.tests.test_suite

For more information on the Python support Subunit offers, please see
``pydoc subunit``, or the source in ``python/subunit/__init__.py``.
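Streams can also be generated programmatically. Below is a minimal sketch
assuming the Python bindings' ``TestProtocolClient``, a TestResult
implementation that writes protocol lines to a (binary) stream instead of
printing dots; treat the exact usage as illustrative rather than canonical::

  import sys
  import unittest

  from subunit import TestProtocolClient


  class TestFoo(unittest.TestCase):

      def test_foo_works(self):
          self.assertEqual(1 + 1, 2)


  if __name__ == '__main__':
      loader = unittest.TestLoader()
      suite = loader.loadTestsFromTestCase(TestFoo)
      # Running the suite with a TestProtocolClient emits
      # "test: ..." / "success: ..." protocol lines instead of dots.
      suite.run(TestProtocolClient(sys.stdout.buffer))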
C
=

Subunit has C bindings to emit the protocol, and comes with a patch for
'check' which has been nominally accepted by the 'check' developers. See
'c/README' for more details.

C++
===

The C library is includable and usable directly from C++. A TestListener for
CPPUnit is included in the Subunit distribution. See 'c++/README' for
details.

shell
=====

Similar to C, the shell bindings consist of simple functions to output
protocol elements, and a patch for adding subunit output to the 'ShUnit'
shell test runner. See 'shell/README' for details.

Filter recipes
--------------

To ignore some failing tests whose root cause is already known::

  subunit-filter --without 'AttributeError.*flavor'

The protocol
------------

Sample subunit wire contents
----------------------------

The following::

  test: test foo works
  success: test foo works.
  test: tar a file.
  failure: tar a file. [
  ..
   ].. space is eaten.
  foo.c:34 WARNING foo is not defined.
  ]
  a writeln to stdout

When run through subunit2pyunit::

  .F
  a writeln to stdout
  ========================
  FAILURE: tar a file.
  -------------------
  ..
   ].. space is eaten.
  foo.c:34 WARNING foo is not defined.
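Such a stream can equally be consumed from Python. Here is a minimal sketch
assuming the Python bindings' ``TestProtocolServer``, which parses protocol
lines and drives an ordinary TestResult; the exact calls are illustrative::

  import io
  import unittest

  from subunit import TestProtocolServer

  # The sample wire contents from above, as a byte stream.
  stream = io.BytesIO(
      b"test: test foo works\n"
      b"success: test foo works.\n"
      b"test: tar a file.\n"
      b"failure: tar a file. [\n"
      b"..\n"
      b" ].. space is eaten.\n"
      b"foo.c:34 WARNING foo is not defined.\n"
      b"]\n"
  )

  result = unittest.TestResult()
  # readFrom feeds each line to the parser, which calls
  # startTest/addSuccess/addFailure on the wrapped result.
  TestProtocolServer(result).readFrom(stream)
  print("run=%d failures=%d" % (result.testsRun, len(result.failures)))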
Subunit protocol description
============================

This description is being ported to an EBNF style. Currently it's only
partly in that style, but should be fairly clear all the same. When in
doubt, refer to the source (and ideally help fix up the description!).
Generally the protocol is line orientated and consists of either directives
and their parameters, or - when outside a DETAILS region - unexpected lines
which are not interpreted by the parser; they should be forwarded
unaltered::

  test|testing|test:|testing: test label
  success|success:|successful|successful: test label
  success|success:|successful|successful: test label DETAILS
  failure: test label
  failure: test label DETAILS
  error: test label
  error: test label DETAILS
  skip[:] test label
  skip[:] test label DETAILS
  xfail[:] test label
  xfail[:] test label DETAILS
  progress: [+|-]X
  progress: push
  progress: pop
  tags: [-]TAG ...
  time: YYYY-MM-DD HH:MM:SSZ

  DETAILS ::= BRACKETED | MULTIPART
  BRACKETED ::= '[' CR UTF8-lines ']' CR
  MULTIPART ::= '[ multipart' CR PART* ']' CR
  PART ::= PART_TYPE CR NAME CR PART_BYTES CR
  PART_TYPE ::= Content-Type: type/sub-type(;parameter=value,parameter=value)
  PART_BYTES ::= (DIGITS CR LF BYTE{DIGITS})* '0' CR LF

  unexpected output on stdout -> stdout.
  exit w/0 or last test completing -> error

Tags given outside a test are applied to all following tests. Tags given
after a test: line and before the result line for the same test apply only
to that test, and inherit the current global tags. A '-' before a tag is
used to remove tags - e.g. to prevent a global tag applying to a single
test, or to cancel a global tag.

The progress directive is used to provide progress information about a
stream so that stream consumers can provide completion estimates, progress
bars and so on. Stream generators that know how many tests will be present
in the stream should output "progress: COUNT". Stream filters that add tests
should output "progress: +COUNT", and those that remove tests should output
"progress: -COUNT". An absolute count should reset the progress indicators
in use - it indicates that two separate streams from different generators
have been trivially concatenated together, and there is no knowledge of how
many more complete streams are incoming. Smart concatenation could scan each
stream for its count and sum them, or alternatively translate absolute
counts into relative counts inline. It is recommended that outputters avoid
absolute counts unless necessary.

The push and pop directives are used to provide local regions for progress
reporting. This fits with hierarchically operating test environments - such
as those that organise tests into suites - where the top-most runner can
report on the number of suites, and each suite surrounds its output with a
(push, pop) pair. Interpreters should interpret a pop as also advancing the
progress of the restored level by one step. Encountering progress directives
between the start and end of a test pair indicates that a previous test was
interrupted and did not cleanly terminate: it should be implicitly closed
with an error (the same as when a stream ends with no closing test directive
for the most recently started test).

The time directive acts as a clock event - it sets the time for all future
events. The value should be a valid ISO8601 time.

The skip result is used to indicate a test that was found by the runner but
not fully executed due to some policy or dependency issue. This is
represented in python using the addSkip interface that testtools
(https://edge.launchpad.net/testtools) defines. When communicating with a
non-skip-aware test result, the test is reported as an error.

The xfail result is used to indicate a test that was expected to fail and
that failed in the expected manner. As this is a normal condition for such
tests, it is represented as a successful test in Python. In future, skip and
xfail results will be represented semantically in Python, but some
discussion is underway on the right way to do this.
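As an illustration of the progress semantics described above, here is a
small self-contained sketch (not part of Subunit itself) of a consumer-side
tracker: absolute counts reset the estimate, +/- adjust it, and push/pop
scope a nested region whose pop advances the outer region by one step::

  class ProgressTracker:
      """Derive completion estimates from 'progress:' directives."""

      def __init__(self):
          # One [expected, completed] pair per nested (push/pop) region.
          self._stack = [[0, 0]]

      def on_progress(self, value):
          """Handle the argument of a 'progress:' directive."""
          if value == "push":
              self._stack.append([0, 0])
          elif value == "pop":
              self._stack.pop()
              self._stack[-1][1] += 1  # a pop advances the restored level
          elif value[0] in "+-":
              self._stack[-1][0] += int(value)  # relative adjustment
          else:
              self._stack[-1] = [int(value), 0]  # absolute count: reset

      def on_test_completed(self):
          self._stack[-1][1] += 1

      def fraction_done(self):
          expected, completed = self._stack[-1]
          return completed / expected if expected else None

  tracker = ProgressTracker()
  tracker.on_progress("4")
  tracker.on_test_completed()
  print(tracker.fraction_done())  # 0.25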