Re: [XForms] automated regression tests for XForms


From: alessandro basili
Subject: Re: [XForms] automated regression tests for XForms
Date: Sun, 27 Jan 2013 14:49:19 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130107 Thunderbird/17.0.2

On 27/01/2013 02:07, Jens Thoms Toerring wrote:
> Hi Alessandro,
> 
>    sorry for not replying earlier, too much work at the moment
> (which is also why I didn't get any further with the new
> release)...

I'd actually love to contribute more; that's why I'm proposing a
testing framework. I expect it will bring some relief once it is set
up. People willing to help could also check in test cases instead of
bug fixes, improving the quality of the whole package (don't get me
wrong, I still believe XForms is a great library).

[...]
>> What we can conceive is to go through the list of objects and build test
>> programs, which will then be used by a regular user who may have his/her
>> sequence of actions recorded. The test report (a sequence of events) can
>> then be replayed regularly for every build, guaranteeing that the new
>> fix did not break that part.
> 
> I completely agree, something like that would be a good starting
> point - alas, only for a certain subset of possible problems, at
> least as far as I can see. Not having spent more than a
> small amount of time with it, my impression is that it definitely
> could help in finding a number of problems like, for example, if
> clicking on a button doesn't result in the intended action
> anymore (in the most simple case) as it did before. So this looks
> like a very good idea. The main problem I envision is that the
> programs most people have written tend to be large and need a
> lot of other stuff, input data etc. (yours is a good example),
> so converting them to test cases that could be given to others
> can become quite a challenge. 

Indeed, I guess it would be difficult to collect users' programs and
run them as test cases. My idea was to have a set of programs which
cover as much functionality as possible while remaining themselves very
simple. In our programs, for instance, a lot of the code was there to
handle input/output data, but that is completely separate from the
XForms library.

> And many real-life programs aren't
> of a nature that you could say "If I click there, adjust that
> slider, select that item in a browser and then click on the
> "Store" button, a file gets written out that must have a
> certain, reproducible content that can be compared to a previous
> run." At least mine don't work that way, and, I'm sure, neither
> does yours (especially when all the hardware you're
> controlling isn't present - I don't think you'd allow your
> spectrometer on the ISS to be used for that kind of testing,
> would you?;-)

I understand your point, but the aim of a testing framework is not to
test the user's program; rather, it aims to prove that a button behaves
like a button, perhaps with some limitations w.r.t. real applications,
but still to a level that may increase the developers' confidence in
spotting bugs.

If we adopt black-box testing, for instance, we can check that for
every 'possible' input the output still makes sense, greatly reducing
the all-too-common case of Garbage In, Garbage Out.

This may also require some changes to the current function interfaces,
even though I'd try to minimize that as much as possible in order not to
break users' programs.

> 
> Such test case programs definitely can be written, no question
> about it. And having them would be a big step forward! As far
> as I can see (but I am shortsighted by nature and may be even
> more shortsighted here) we would need some special kind of test
> programs where a certain sequence of user inputs results in a
> well defined result that can be compared to the one obtained
> in previous runs.

Exactly! In my mind we need to go through the capabilities of the
library and start building test programs. Once they are there, we can
record test sessions with Xnee that can be played back at a later stage
(the next build) automatically (or semi-automatically).

When somebody finds a sequence that breaks the test program, they can
check it in to make it available to the developers, who then no longer
need to spend time clicking buttons at random.

When the user finds that an object does not behave properly in his
program, while it does behave correctly in the test code, I guess it
would be a good chance to change the test program as well as the
library, ending up with a release that has a more solid foundation.

> 
>> As users submit test cases as well as test reports, they can be reused
>> for the next build, without the user needing to get the new version, link
>> against it and then test his/her application with some random clicks to
>> see if 'everything' is still fine.
>>
>> Possibly the test cases should be structured in a framework such that
>> low level functionality is tested first, while higher level code can be
>> tested later. And I also believe that good test programs can also be
>> used as templates for newcomers, fostering good programming style with
>> the package.
> 
> I'm all for that. A starting point might be re-using the demo
> programs (and improving them along the way!). The problem is:
> who's going to do the work? Unfortunately, I'm in no position
> to volunteer at the moment;-)

I'll start with some 'dry runs' on the examples (that will spare me the
time of building new test code). Once that is done, I can see whether,
instead of the examples, we can use test cases with a larger coverage
of the code.

> 
>> By no means this effort should relax the amount of testing, but I guess
>> it will help developers/maintainers with a more systematic feedback on
>> their builds, increasing the quality and the amount of work people spend
>> on repetitive testing.
> 
> The other testing would definitely also have to continue since
> this test harness could only find a certain class of problems
> (at least that's what I would expect). I can't imagine how
> graphical glitches, effects like a slow-down of some programs etc.
> could be reliably detected this way.

I suspect this too. Heavy object interaction may lead to unexpected
behavior which is hard to simulate with those test cases, but the
advantage is that if some program exhibits weird behavior, that
behavior can be added to the test suite, improving the capability of
the tests.

Since I'm also fairly busy I'm not sure how quickly this can go, but as
soon as I have something I'd like to throw it out there in order to get
some feedback on the general structure.

Al

-- 
PGP Fingerprint DCBE 430E E93D 808B 45E6 7B8A F68D A276 565C 4450
PGP Public Key available at keys.gnugp.net


