

From: G. Branden Robinson
Subject: sensitivity vs. specificity in software testing (was: [PATCH] fix for groff Git regression (Savannah #64005))
Date: Thu, 6 Apr 2023 20:44:41 -0500

Hi Ralph,

At 2023-04-06T12:59:57+0100, Ralph Corderoy wrote:
> Would it be worth testing all of $output is exactly as expected?  This
> would widen what's being tested which may catch a future regression
> outside the scope of this test, e.g. with .DS/.DE.  The downside is a
> deliberate change might ripple through more tests but the fix-up
> should be straightforward and would preserve the wider testing.

I see the value in both approaches.  On the one hand, I like the idea of
detecting inadvertent changes to vertical spacing (or anything else) in
a document; on the other, I find narrowly scoped regression tests to be
advantageous.

>     output=\
>     ',,,,,,The first page is 1.,,     display,,,,,,,,,
>     ,,,                             -2-,,,The second page is 2.
>     '
>     output=$(echo "$output" | tr , \\012)

This is a good suggestion for handling blank line-happy output, of which
we have quite a bit in groff.
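As a self-contained sketch of that comparison (the $actual assignment
stands in for a real groff invocation, and the strings are invented for
illustration):

```shell
#!/bin/sh
# Expected output encoded with commas standing in for newlines, so blank
# lines survive shell quoting legibly.
expected=',,Line one.,,Line two.'
expected=$(printf '%s' "$expected" | tr , '\n')

# Stand-in for a real "groff -ms ..." run.
actual=$(printf '\n\nLine one.\n\nLine two.')

if [ "$actual" = "$expected" ]; then
    echo PASS
else
    echo FAIL
    exit 1
fi
```

Command substitution strips only trailing newlines, so the leading blank
lines encoded by the commas are preserved in both strings.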

I think maybe the best-of-both-worlds solution is to have a model
document-based automated test--perhaps one that exercises as many ms(7)
macros as possible.  That would let us retain narrowly scoped regression
tests aimed at specific bugs, which necessarily tell you something
specific when they fail, while adding the highly sensitive Rumsfeldian
"unknown unknowns" problem detection that I think your suggestion is
tuned to.
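Such a model-document test might amount to a golden-file comparison.  A
self-contained sketch (file names are invented, and `cp` stands in for
the "groff -ms" render step so the sketch runs on its own):

```shell
#!/bin/sh
# Golden-file ("model document") test sketch: render a document that
# exercises many macros, then diff the result against a checked-in
# expected file.  A deliberate formatting change updates the golden file;
# an accidental one shows up as a diff.
tmp=$(mktemp -d) || exit 99
trap 'rm -rf "$tmp"' EXIT

printf 'line 1\n\nline 2\n' > "$tmp/model.expected"

# Stand-in for: groff -ms -Tascii model.ms > "$tmp/model.rendered"
cp "$tmp/model.expected" "$tmp/model.rendered"

if diff -u "$tmp/model.expected" "$tmp/model.rendered"; then
    echo PASS
else
    echo FAIL
    exit 1
fi
```

The diff output, when the test fails, points at exactly where spacing or
layout drifted, which recovers some of the specificity a broad test
otherwise gives up.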


