Re: [PATCH v4 3/3] parallel-tests: allow each test to have multiple results

From: Ralf Wildenhues
Subject: Re: [PATCH v4 3/3] parallel-tests: allow each test to have multiple results
Date: Fri, 17 Jun 2011 08:37:59 +0200
* Stefano Lattarini wrote on Thu, Jun 16, 2011 at 10:03:59AM CEST:
> With this change, we improve the code that creates the `test-suite.log'
> global log and the console testsuite summary, so that it can
> handle multiple results per test script.  This is required in order
> to introduce the planned support for test protocols, like TAP and
> SubUnit, which can run multiple testcases per test script.
> The implementation makes use of a custom reStructuredText field
> `:am-testcase-result:'.
Why an Automake-specific name here? This is user facing, it should
be ':testcase-result:'. But 'testcase' is not a word, at least not
a good one. Please check what other test suite codes use, but I'd
just use ':test-result:' or ':result:' (if that is not ambiguous).
Other third-party code might want to make use of this too.
Actually, why not s/testcase/test/g globally in all your text.
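For instance, with the renamed field a driver would simply write
something like this into its log (just a sketch on my part, not from
the patch, which currently spells the field ':am-testcase-result:'):

```shell
# Sketch only: what a driver could append to its .log file with the
# shorter ':test-result:' field name.  Each field is followed by a
# blank line, as the HTML generation requires.
emit_result ()
{
  printf ':test-result: %s %s\n\n' "$1" "$2"
}

emit_result PASS 'subtest 1'
emit_result SKIP 'subtest 2 (feature unavailable)'
```

Third-party tools could then look for lines matching '^:test-result:'
without knowing anything about Automake.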
I haven't gone over this in detail yet, so just a few random remarks
below.
And by the way, thanks for all your work on this so far!
Cheers,
Ralf
> * lib/check.am ($(TEST_SUITE_LOG)): When processing .log files,
> recognize a testcase result report only if it is declared with
> the custom `:am-testcase-result:' reStructuredText field placed
> at the beginning of a line. Extend and add explanatory comments.
> (recheck, recheck-html): Add explanatory comments.
> * lib/pt-driver: Write an appropriate `:am-testcase-result:'
> reStructuredText field in the generated log file. Use a
> reStructuredText transition to better separate the test outcome
> report from the test registered output. Improve comments.
> * tests/test-driver-custom-xfail-tests.test: Adapt.
> * tests/parallel-tests-empty-testlogs.test: New test.
> * tests/parallel-tests-recheck-override.test: Likewise.
> * tests/parallel-tests2.test: Extend and keep more in-sync with ...
> * tests/test-driver-custom-html.test: ... this new related test.
> * tests/test-driver-custom-no-html.test: New test.
> * tests/test-driver-custom-multitest.test: Likewise.
> * tests/test-driver-custom-multitest-recheck.test: Likewise.
> * tests/test-driver-custom-multitest-recheck2.test: Likewise.
> * tests/ostp-driver: New file, used by the last four tests above.
tests/simple-driver, or maybe trivial-test-driver (as "simple driver" is
already used in the manual for something different) or something. ostp
is just confusing, and you've argued against 8+3 conventions all the
time. ;->
> * tests/Makefile.am (TESTS): Update.
> (EXTRA_DIST): Distribute `ostp-driver'.
> (test-driver-custom-multitest.log): Depend on `ostp-driver'.
> (test-driver-custom-multitest-recheck.log): Likewise.
> (test-driver-custom-multitest-recheck2.log): Likewise.
> (test-driver-custom-html.log): Likewise.
> * doc/automake.texi (API for Custom Test Drivers): Update (still
> in Texinfo comments only).
I don't like TODO items and commented-out stuff in the documentation
of finished patches, if we can help it (and here, we surely can).
> --- a/doc/automake.texi
> +++ b/doc/automake.texi
> @@ -9052,8 +9052,9 @@ if the exact interpretation of the associated semantics can change
> between a test driver and another, and even be a no-op in some drivers).
>
> @b{TODO!} @i{Options and flags that the driver must handle. Generation
> -of ``.log'' files. Console output the driver is expected to produce.
> -Support for colored output, XFAIL_TESTS, and DISABLE_HARD_ERRORS}
> +of ``.log'' files, and format they must obey. Console output the driver
@file{.log}
You should almost never need to use ``...'' in texinfo, at least not for
any entity that has a meaning in code.
> +is expected to produce. Support for colored output, XFAIL_TESTS, and
> +DISABLE_HARD_ERRORS.}
>
> @c
> @c The driver script should follow a simple protocol in order to really
> @@ -9075,6 +9076,29 @@ Support for colored output, XFAIL_TESTS, and DISABLE_HARD_ERRORS}
> @c driver can use temporary files if it needs to, only it should clean
> @c them up properly).
> @c
> +@c * The result of each testcase run by a test script/program *must*
> +@c be registered in the test log using a custom reStructuredText
> +@c field ``am-testcase-result''.  For example, if the test script
See above.
> +@c executes two test cases, one successful and one failing, and skips
> +@c another test case, the driver should end up writing the following
> +@c in the test log:
> +@c :am-testcase-result: PASS [passed testcase name or details]
> +@c :am-testcase-result: FAIL [failed testcase name or details]
> +@c :am-testcase-result: SKIP [skipped testcase name or details]
is the [...] literal or metasyntactic? Please use proper notation here.
> +@c The above lines (each of which *must* be followed by a blank line
> +@c in order for the HTML output generation to work) are used only
> +@c when generating the `test-suite.log' from the individual test
See above.
> +@c logs, and can be placed in any order and position within the logs
> +@c themselves.
> +@c
> +@c * The result of each testcase run by a test script/program *must*
> +@c be registered by the test driver in the *first* line of the test
> +@c log (FIXME: this seems too strict; maybe we could use another
> +@c custom reStructuredText directive instead?).  This line is used by
> +@c the "recheck" target.  A test will be considered failed by this
> +@c target, and thus to be re-run, if the first line in its log file
> +@c begins with either `FAIL' or `XPASS'.
> +@c
> @c * driver-specific options (AM_LOG_DRIVER_FLAGS and LOG_DRIVER_FLAGS)
> @c that get passed to the driver script by the Makefile.
> --- a/lib/am/check.am
> +++ b/lib/am/check.am
> @@ -133,12 +133,25 @@ esac;
> \
> $(AM_TESTS_ENVIRONMENT) $(TESTS_ENVIRONMENT)
>
> $(TEST_SUITE_LOG): $(TEST_LOGS)
> - @$(am__sh_e_setup); \
> - list='$(TEST_LOGS)'; \
> - results=`for f in $$list; do \
> - test -r $$f && read line < $$f && echo "$$line" \
> - || echo FAIL; \
> - done`; \
> + @$(am__sh_e_setup); \
> +## The custom reStructuredText field used to register the outcome of a test
> +## case (which is *not* the same thing as the outcome of the test script).
> +## This is for supporting test protocols that allow for more than one test
> +## case per test script.
> + rst_magic=":am-testcase-result:"; \
> +## All test logs.
> + list='$(TEST_LOGS)'; \
> +## Readable test logs.
> + list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
> +## Each unreadable test log counts as a failed test.
> + results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
> +## Extract the outcome of all the testcases from the test logs.
> + results2=''; \
> + if test -n "$$list2"; then \
> + results2=`sed -n "s/^$$rst_magic[ ]*//p" $$list2`; \
tab space (i.e., the bracket expression should contain a literal tab
followed by a space).
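To spell that out (a sketch of the portable idiom, not part of the
patch): the brackets want a literal tab plus a space, since '\t' is
not a portable escape inside sed bracket expressions.

```shell
# Sketch: strip the result field and any tab/space padding after it.
# A literal tab is stored in $tab, because '\t' is not portable
# inside sed bracket expressions.
tab=$(printf '\t')
printf ':am-testcase-result:\t PASS foo\n' |
  sed -n "s/^:am-testcase-result:[$tab ]*//p"
```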
> + fi; \
> +## Prepare the test suite summary.
> +	results=`echo "$$results1" && echo "$$results2"`; \
> 	all=`echo "$$results" | sed '/^$$/d' | wc -l | sed -e 's/^[	 ]*//'`; \
> fail=`echo "$$results" | grep -c '^FAIL'`; \
> pass=`echo "$$results" | grep -c '^PASS'`; \
> @@ -178,6 +191,7 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
> msg="$$msg($$skip tests were not run). "; \
> fi; \
> fi; \
> +## Write "global" testsuite log.
> { \
> echo "$(PACKAGE_STRING): $(subdir)/$(TEST_SUITE_LOG)" | \
> $(am__rst_title); \
> @@ -185,6 +199,16 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
> echo; \
> echo ".. contents:: :depth: 2"; \
> echo; \
> +## Here we assume that the test driver writes a proper summary for the
> +## test script on the first line. Requiring this is not a limitation,
> +## but a feature, since this way a custom test driver is allowed to decide
> +## what the outcome is in case of conflicting testcase results in a test
> +## script. For example, if a test script reports 8 successful testcases
> +## and 2 skipped testcases, some drivers might report that globally as a
> +## SKIP, while others as a PASS.
> +## FIXME: This should be documented in the automake manual. The above
> +## FIXME: explanation is indeed more appropriate for the manual than for
> +## FIXME: comments in code.
> for f in $$list; do \
> test -r $$f && read line < $$f || line=; \
> case $$line in \
> @@ -201,6 +225,7 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
> fi; \
> fi; \
> test x"$$VERBOSE" = x || $$exit || cat $(TEST_SUITE_LOG); \
> +## Emit the test summary on the console, and exit.
> $(am__tty_colors); \
> if $$exit; then \
> echo $(ECHO_N) "$$grn$(ECHO_C)"; \
> @@ -283,6 +308,10 @@ recheck recheck-html:
> list=`for f in $$list; do \
> test -f $$f || continue; \
> if test -r $$f && read line < $$f; then \
> +## Here we assume that the test driver writes a proper summary for the
> +## test script on the first line. See the comments in the rules of
> +## $(TEST_SUITE_LOG) above for why we consider this acceptable and even
> +## advisable.
> case $$line in FAIL*|XPASS*) echo $$f;; esac; \
> else echo $$f; fi; \
> done | tr '\012\015' ' '`; \
> diff --git a/lib/pt-driver b/lib/pt-driver
> index 78b6d18..b1b281f 100755
> --- a/lib/pt-driver
> +++ b/lib/pt-driver
> @@ -113,9 +113,21 @@ case $estatus:$expect_failure in
> *:yes) col=$lgn; res=XFAIL;;
> *:*) col=$red; res=FAIL ;;
> esac
> +
> +# Report outcome to console.
> echo "${col}${res}${std}: $test_name"
> -echo "$res: $test_name (exit: $estatus)" | rst_section > $logfile
> -cat $tmpfile >> $logfile
> +
> +# Now write log file.
> +{
> + echo "$res: $test_name (exit: $estatus)" | rst_section
> + echo ":am-testcase-result: $res (exit status: $estatus)"
> + # Use a reStructuredText transition to better separate the test
> + # outcome report from its registered output.
> + echo
> + printf '%s\n' '------------'
> + echo
> + cat $tmpfile
> +} > $logfile
> rm -f $tmpfile
>
> # Local Variables:
> diff --git a/tests/Makefile.am b/tests/Makefile.am
> index 36445b5..93f678a 100644
> --- a/tests/Makefile.am
> +++ b/tests/Makefile.am
> @@ -718,10 +718,16 @@ parallel-tests-unreadable-log.test \
> parallel-tests-subdir.test \
> parallel-tests-interrupt.test \
> parallel-tests-reset-term.test \
> +parallel-tests-empty-testlogs.test \
> parallel-tests-pt-driver.test \
> test-driver-custom-no-pt-driver.test \
> test-driver-custom.test \
> test-driver-custom-xfail-tests.test \
> +test-driver-custom-multitest.test \
> +test-driver-custom-multitest-recheck.test \
> +test-driver-custom-multitest-recheck2.test \
> +test-driver-custom-html.test \
> +test-driver-custom-no-html.test \
> test-driver-fail.test \
> parse.test \
> percent.test \
> @@ -1058,6 +1064,11 @@ $(parallel_tests)
>
> EXTRA_DIST += $(TESTS)
>
> +test-driver-custom-multitest.log: ostp-driver
> +test-driver-custom-multitest-recheck.log: ostp-driver
> +test-driver-custom-multitest-recheck2.log: ostp-driver
> +test-driver-custom-html.log: ostp-driver
> +EXTRA_DIST += ostp-driver
>
> # Dependencies valid for each test case.
> $(TEST_LOGS): defs defs-static aclocal-$(APIVERSION) automake-$(APIVERSION)
> diff --git a/tests/Makefile.in b/tests/Makefile.in
> index df2a818..084b26d 100644
> --- a/tests/Makefile.in
> +++ b/tests/Makefile.in
> @@ -276,7 +276,7 @@ top_builddir = @top_builddir@
> top_srcdir = @top_srcdir@
> MAINTAINERCLEANFILES = $(parallel_tests) $(instspc_tests)
> EXTRA_DIST = ChangeLog-old gen-parallel-tests instspc-tests.sh \
> - $(TESTS)
> + $(TESTS) ostp-driver
> XFAIL_TESTS = all.test auxdir2.test cond17.test gcj6.test \
> override-conditional-2.test pr8365-remake-timing.test \
> yacc-dist-nobuild-subdir.test txinfo5.test \
> @@ -972,10 +972,16 @@ parallel-tests-unreadable-log.test \
> parallel-tests-subdir.test \
> parallel-tests-interrupt.test \
> parallel-tests-reset-term.test \
> +parallel-tests-empty-testlogs.test \
> parallel-tests-pt-driver.test \
> test-driver-custom-no-pt-driver.test \
> test-driver-custom.test \
> test-driver-custom-xfail-tests.test \
> +test-driver-custom-multitest.test \
> +test-driver-custom-multitest-recheck.test \
> +test-driver-custom-multitest-recheck2.test \
> +test-driver-custom-html.test \
> +test-driver-custom-no-html.test \
> test-driver-fail.test \
> parse.test \
> percent.test \
> @@ -1360,12 +1366,16 @@ cscope cscopelist:
>
>
> $(TEST_SUITE_LOG): $(TEST_LOGS)
> - @$(am__sh_e_setup); \
> - list='$(TEST_LOGS)'; \
> - results=`for f in $$list; do \
> - test -r $$f && read line < $$f && echo "$$line" \
> - || echo FAIL; \
> - done`; \
> + @$(am__sh_e_setup); \
> + rst_magic=":am-testcase-result:"; \
> + list='$(TEST_LOGS)'; \
> + list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
> + results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
> + results2=''; \
> + if test -n "$$list2"; then \
> + results2=`sed -n "s/^$$rst_magic[ ]*//p" $$list2`; \
> + fi; \
> +	results=`echo "$$results1" && echo "$$results2"`; \
> 	all=`echo "$$results" | sed '/^$$/d' | wc -l | sed -e 's/^[	 ]*//'`; \
> fail=`echo "$$results" | grep -c '^FAIL'`; \
> pass=`echo "$$results" | grep -c '^PASS'`; \
> @@ -1718,6 +1728,11 @@ $(instspc_tests): Makefile.am
> instspc-data.log: instspc-tests.sh
> $(instspc_tests:.test=.log): instspc-tests.sh instspc-data.log
>
> +test-driver-custom-multitest.log: ostp-driver
> +test-driver-custom-multitest-recheck.log: ostp-driver
> +test-driver-custom-multitest-recheck2.log: ostp-driver
> +test-driver-custom-html.log: ostp-driver
> +
> # Dependencies valid for each test case.
> $(TEST_LOGS): defs defs-static aclocal-$(APIVERSION) automake-$(APIVERSION)
>
> diff --git a/tests/ostp-driver b/tests/ostp-driver
> new file mode 100644
> index 0000000..dced57a
> --- /dev/null
> +++ b/tests/ostp-driver
> @@ -0,0 +1,94 @@
> +#! /bin/sh
> +# Copyright (C) 2011 Free Software Foundation, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 2, or (at your option)
> +# any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# Testsuite driver for OSTP, the Outrageously Simple Test Protocol :-)
> +# The exit status of the wrapped script is ignored. Lines in its stdout
> +# and stderr beginning with `PASS', `FAIL', `XFAIL', `XPASS' or `SKIP'
> +# count as a test case result with the obviously-corresponding outcome.
> +# Every other line has no effect on the testsuite outcome.
> +# This script is used at least by the `test-driver-custom-multitest*.test'
> +# tests.
> +
> +set -u
> +
> +## Option parsing.
> +
> +test_name=INVALID.NAME
> +log_file=BAD.LOG
> +while test $# -gt 0; do
> + case $1 in
> + --test-name) test_name=$2; shift;;
> + --log-file) log_file=$2; shift;;
> + # Ignored.
> + --expect-failure) shift;;
> + --color-tests) shift;;
> + --enable-hard-errors) shift;;
> + # Explicitly terminate option list.
> + --) shift; break;;
> + # Shouldn't happen
> + *) echo "$0: invalid option/argument: '$1'" >&2; exit 2;;
> + esac
> + shift
> +done
> +
> +## Run the test script, get test cases results, display them on console.
> +
> +tmp_out=$log_file-out.tmp
> +tmp_res=$log_file-res.tmp
> +
> +"$@" 2>&1 | tee $tmp_out | (
> + i=0 st=0
> + : > $tmp_res
> + while read line; do
> + case $line in
> + PASS:*|FAIL:*|XPASS:*|XFAIL:*|SKIP:*)
> + i=`expr $i + 1`
> + result=`LC_ALL=C expr "$line" : '\([A-Z]*\):.*'`
This will be quite fork-expensive, if done in real-world code.
You might just want to use one single sed script replacing this
whole while loop.
But then again, measuring is king.
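Something along these lines, say (an untested sketch on my part; the
per-testcase console numbering would still need handling, and portable
BREs lack alternation, hence one expression per keyword):

```shell
# Sketch of the single-pass approach: one sed invocation classifies
# all result lines, instead of one `expr` fork per line of output.
printf '%s\n' 'PASS: sub 1' 'some chatter' 'FAIL: sub 2' 'SKIP: sub 3' |
  sed -n \
    -e 's/^PASS:/:am-testcase-result: PASS:/p' \
    -e 's/^FAIL:/:am-testcase-result: FAIL:/p' \
    -e 's/^XPASS:/:am-testcase-result: XPASS:/p' \
    -e 's/^XFAIL:/:am-testcase-result: XFAIL:/p' \
    -e 's/^SKIP:/:am-testcase-result: SKIP:/p'
```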
> + case $result in FAIL|XPASS) st=1;; esac
> + # Output testcase result to console.
> + echo "$result: $test_name, testcase $i"
> + # Register testcase outcome for the log file.
> + echo ":am-testcase-result: $line" >> $tmp_res
> + echo >> $tmp_res
> + ;;
> + esac
> + done
> + exit $st
> +)
> +
> +if test $? -eq 0; then
> + global_result=PASS
> +else
> + global_result=FAIL
> +fi
> +
> +## Write the log file.
> +
> +{
> + echo "$global_result: $test_name"
> + echo "$global_result: $test_name" | sed 's/./=/g'
> + echo
> + cat $tmp_res
> + echo
> + printf '%s\n' '--------------------'
echo? It copes with multiple dashes (don't believe autoconf.texi).
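A quick sanity check of that claim (sketch):

```shell
# echo copes fine with a quoted all-dashes argument in POSIX shells;
# the classic portability worry is about arguments that look like
# echo options, such as '-n', not about runs of dashes.
echo '--------------------'
printf '%s\n' '--------------------'
```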
> + echo
> + cat $tmp_out
> +} > $log_file
> +rm -f $tmp_out $tmp_res
> +
> +## And we're done.
> +
> +exit 0