
Re: [XForms] automated regression tests for XForms


From: Jens Thoms Toerring
Subject: Re: [XForms] automated regression tests for XForms
Date: Sun, 27 Jan 2013 02:07:13 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

Hi Alessandro,

   sorry for not replying earlier, too much work at the moment
(which is also why I didn't get any further with the new
release)...

On Thu, Jan 24, 2013 at 11:06:56PM +0100, alessandro basili wrote:
> Some time ago I started to wonder how a GUI toolkit like XForms
> could integrate a framework for testing purposes that would allow
> developers to apply changes and regularly perform regression tests,
> minimizing the chance that a new feature or bug-fix will show an
> 'unwanted' behavior in some other part of the code.
> 
> When I normally try to use or fix part of the code I've found myself
> building test cases, with simple interfaces and some amount of
> interaction (click here, scroll there...). What if, then, for every
> object we could build a very simple test case and 'memorize' the
> sequence of events in order to reuse it?
> 
> I've searched around and found Xnee (http://xnee.wordpress.com/) which
> is exactly this, an event recorder/replay for X11 based systems. I have
> no experience on how it works but it seems a pretty nice software.

Same here, Xnee was pointed out to me some time ago by someone,
and I got as far as installing and playing around with it for a
bit, but unfortunately I never found the time to explore any
further what can be done with it.
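For reference, Xnee also ships a command-line front end, cnee, which
makes the record/replay cycle scriptable. The sketch below only prints
the commands instead of running them (recording needs a live X display),
and the session file name is made up; the option names are taken from
cnee's documentation, so check `cnee --help` before relying on them.

```shell
#!/bin/sh
# Dry-run sketch of a record/replay cycle with cnee (Xnee's CLI).
# The commands are only echoed here; on a machine with a real X
# display you would execute them directly.  session.xns is a
# hypothetical file name.

REC="cnee --record --mouse --keyboard -o session.xns"
REP="cnee --replay -f session.xns"

# Record mouse and keyboard events into a session file:
echo "+ $REC"

# Later, against a new build of the library, replay the same events:
echo "+ $REP"
```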

> What we can conceive is to go through the list of objects and build
> test programs, which will then be used by a regular user who may have
> his/her sequence of actions recorded. The recorded session (as a
> sequence of events) can then be replayed regularly for every build,
> guaranteeing that the new fix did not break that part.

I completely agree, something like that would be a good starting
point - alas, only for a certain subset of possible problems, at
least as far as I can see. Not having spent more than a small
amount of time with it, my impression is that it definitely
could help find a number of problems - for example, if clicking
on a button no longer results in the intended action (in the
most simple case) as it did before. So this looks like a very
good idea. The main problem I envision is that the programs
most people have written tend to be large and need a lot of
other stuff, input data etc. (yours is a good example), so
converting them to test cases that could be given to others
can become quite a challenge. And many real-life programs
aren't of a nature where you could say "If I click there,
adjust that slider, select that item in a browser and then
click on the "Store" button, a file gets written out that must
have a certain, reproducible content that can be compared to a
previous run." At least mine don't work that way, and, I'm
sure, neither does yours (especially when all the hardware
you're controlling isn't present - I don't think you'd allow
your spectrometer on the ISS to be used for that kind of
testing, would you?;-)

Such test case programs definitely can be written, no question
about it. And having them would be a big step forward! As far
as I can see (but I am shortsighted by nature and may be even
more shortsighted here) we would need some special kind of test
programs where a certain sequence of user inputs results in a
well-defined result that can be compared to the one obtained
in previous runs.
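One way to make that concrete is a golden-file check: after the
recorded events have been replayed against the new build, the harness
diffs the test program's output file against the copy saved from a
known-good run. The sketch below uses made-up file names (golden.out,
actual.out) and fabricates identical files for demonstration, since
the actual replay step needs a live X display and a recorded session.

```shell
#!/bin/sh
# Golden-file check for one replayed test case.  File names are
# hypothetical; the cnee replay step is shown as a comment because
# it requires an X display and a previously recorded session.
set -e

#   cnee --replay -f session.xns    # drive the test program

# For demonstration only, fabricate the two files as identical:
printf 'slider=0.5\nbutton=pressed\n' > golden.out
printf 'slider=0.5\nbutton=pressed\n' > actual.out

# Compare the fresh output against the stored known-good copy:
if diff -u golden.out actual.out > /dev/null; then
    result="PASS"
else
    result="FAIL: output differs from the previous run"
fi
echo "$result"
```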

> As users submit test cases as well as test reports, they can be
> reused for the next build, without the user needing to get the new
> version, link against it and then test his/her application with some
> random clicks to see if 'everything' is still fine.
> 
> Possibly the test cases should be structured in a framework such that
> low-level functionality is tested first, while higher-level code can
> be tested later. And I also believe that good test programs can also
> be used as templates for newcomers, fostering good programming style
> with the package.

I'm all for that. A starting point might be re-using the demo
programs (and improving them along the way!). The problem is:
who's going to do the work? Unfortunately, I'm in no position
to volunteer at the moment;-)

> By no means should this effort relax the amount of testing, but I
> guess it will help developers/maintainers with more systematic
> feedback on their builds, increasing the quality of testing while
> reducing the amount of work people spend on repetitive testing.

The other testing would definitely also have to continue, since
this test harness could only find a certain class of problems
(at least that's what I would expect). I can't imagine how
graphical glitches, effects like a slow-down of some programs
etc. could be reliably detected this way.

                              Best regards, Jens
-- 
  \   Jens Thoms Toerring  ________      address@hidden
   \_______________________________      http://toerring.de


