
bug#57150: 29.0.50; [PATCH] Add test coverage for overlay modification hooks


From: Eli Zaretskii
Subject: bug#57150: 29.0.50; [PATCH] Add test coverage for overlay modification hooks
Date: Fri, 12 Aug 2022 21:53:23 +0300

> From: Matt Armstrong <matt@rfc20.org>
> Cc: 57150@debbugs.gnu.org
> Date: Fri, 12 Aug 2022 10:57:07 -0700
> 
> Do you have a preference between the two general approaches to
> exercising many similar test cases: data driven vs. macro driven?

I tend to like the data-driven approach better, but I have nothing
against macros, assuming they solve the issues that I described.
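
For reference, a minimal sketch of the data-driven shape under
discussion; the test name, table, and function being exercised are
illustrative and not taken from the patch:

  (require 'ert)

  ;; Each entry is (INPUT . EXPECTED); a single ert-deftest walks the
  ;; table instead of defining one test per case.
  (defconst my-example-cases
    '((a . a) (b . b) (c . c) (d . d)))

  (ert-deftest my-example-data-driven ()
    (dolist (case my-example-cases)
      (should (equal (identity (car case)) (cdr case)))))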

> If data driven tests are preferred, what do you think of using 'message'
> to log the individual test case details, as a way of knowing which part
> failed?

I have no problems with that, assuming that the text emitted by
'message' is visible when running the tests in batch mode.
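
In batch mode, text from 'message' is written straight to stderr, so a
sketch along these lines (names again illustrative) would tie a failing
'should' to the case that produced it:

  (ert-deftest my-example-data-driven-logged ()
    (dolist (case '((a . a) (b . b) (c . c) (d . d)))
      ;; Logged to stderr during a batch run, so the failing iteration
      ;; can be identified from the output.
      (message "testing case: %S" case)
      (should (equal (identity (car case)) (cdr case)))))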

> Other test frameworks have scope-based context messages.  In
> pseudocode, something like:
> 
> (ert-deftest example ()
>   (dolist (arg '(a b c d))
>     (context (arg)
>       (should (equal arg (identity-function arg))))))
> 
> Printed failures would include both the normal 'should' output, but also
> the values passed to 'context'.
> 
> I notice that ERT does have two interactive features that partially
> address this.  These commands are available in its UI:
> 
> ‘l’
>      Show the list of ‘should’ forms executed in the test
>      (‘ert-results-pop-to-should-forms-for-test-at-point’).
> 
> ‘m’
>      Show any messages that were generated (with the Lisp function
>      ‘message’) in a test or any of the code that it invoked
>      (‘ert-results-pop-to-messages-for-test-at-point’).
> 
> In simpler tests I find that 'l' is enough, since I can count the
> 'should' calls to work out the iteration of whatever loop is used by the
> test.  In more complex cases, perhaps using 'message' to display the
> context is enough?
> 
> If you think 'l' and 'm' are *not* good enough, I might agree with
> you.

I'm not sure I understand how these commands are relevant.  I'm
talking about running the tests in batch mode.  How do I make use of
those commands in that case?
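
For concreteness, a batch run is typically driven by a small Lisp file
loaded with "emacs -Q -batch"; the file names here are placeholders:

  ;; run-tests.el (placeholder name), loaded via:
  ;;   emacs -Q -batch -l run-tests.el
  (require 'ert)
  (load "my-tests.el")          ; placeholder for the actual test file
  (ert-run-tests-batch-and-exit)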

> If you think adding something like 'context' to ERT is worthwhile
> I can look at doing that.
> 
> For this patch, perhaps using 'message' is best?

Anything is fine with me, if it shows enough information to identify
the particular "should" test that failed in the code.
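
If a 'context'-style helper were added, one rough and purely
hypothetical sketch (not existing ERT API) could simply expand to a
'message' call so the values appear in batch output:

  ;; Hypothetical helper, not part of ERT: log the values of VARS with
  ;; 'message' before running BODY, so a failing 'should' inside BODY
  ;; can be traced back to the loop iteration that produced it.
  (defmacro my-test-context (vars &rest body)
    (declare (indent 1))
    `(progn
       (message "test context: %S"
                (list ,@(mapcar (lambda (v) `(cons ',v ,v)) vars)))
       ,@body))

  ;; Usage, mirroring the pseudocode earlier in the thread:
  ;;   (dolist (arg '(a b c d))
  ;;     (my-test-context (arg)
  ;;       (should (equal arg (identity arg)))))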

Thanks.