Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip

From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 0/3] build configuration query tool and conditional (qemu-io)test skip
Date: Thu, 27 Jul 2017 14:41:01 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

On Wed, Jul 26, 2017 at 02:24:02PM -0400, Cleber Rosa wrote:
> On 07/26/2017 01:58 PM, Stefan Hajnoczi wrote:
> > On Tue, Jul 25, 2017 at 12:16:13PM -0400, Cleber Rosa wrote:
> >> On 07/25/2017 11:49 AM, Stefan Hajnoczi wrote:
> >>> On Fri, Jul 21, 2017 at 10:21:24AM -0400, Cleber Rosa wrote:
> >>>> On 07/21/2017 10:01 AM, Daniel P. Berrange wrote:
> >>>>> On Fri, Jul 21, 2017 at 01:33:25PM +0100, Stefan Hajnoczi wrote:
> >>>>>> On Thu, Jul 20, 2017 at 11:47:27PM -0400, Cleber Rosa wrote:
> >>>> Without the static capabilities defined, the dynamic check would be
> >>>> influenced by the run time environment.  It would really mean "qemu-io
> >>>> running on this environment (filesystem?) can do native aio".  Again,
> >>>> that's not the best type of information to depend on when writing tests.
> >>>
> >>> Can you explain this more?
> >>>
> >>> It seems logical to me that if qemu-io in this environment cannot do
> >>> aio=native then we must skip those tests.
> >>>
> >>> Stefan
> >>>
> >>
> >> OK, let's abstract a bit more.  Let's take this part of your statement:
> >>
> >>  "if qemu-io in this environment cannot do aio=native"
> >>
> >> Let's call that a feature check.  Depending on how the *feature check*
> >> is written, a negative result may hide a test failure, because it would
> >> now be skipped.
> > 
> > You are saying a pass->skip transition can hide a failure but ./check
> > tracks skipped tests.  See tests/qemu-iotests/check.log for a
> > pass/fail/skip history.
> > 
> You're not focusing on the problem here.  The problem is that a test
> that *was not* supposed to be skipped would be skipped.

As Daniel Berrange mentioned, ./configure has the same problem.  You
cannot just run it blindly because it silently disables features.

What I'm saying is that in addition to watching ./configure closely, you
also need to look at the skipped tests that ./check reports.  If you do
that then you can be sure the expected set of tests is passing.
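A minimal sketch of that second step: filter ./check output for "[not run]"
lines and report any skip that is not on an allowlist of anticipated skips.
The sample output and the empty allowlist below are invented for
illustration, not actual qemu-iotests tooling.

```shell
# Hypothetical sketch: scan ./check output for "[not run]" lines and
# report any skip that is not anticipated in this environment.

check_output=$(cat <<'EOF'
085 1s
086 2s
087         [not run] missing aio=native support
EOF
)

expected_skips=""   # tests known to be unrunnable in this environment

# Collect the test numbers of all skipped tests.
skipped=$(printf '%s\n' "$check_output" | grep '\[not run\]' | awk '{print $1}')

for t in $skipped; do
    case " $expected_skips " in
        *" $t "*) ;;                        # anticipated skip, nothing to do
        *) echo "unexpected skip: $t" ;;    # a possible pass->skip regression
    esac
done
```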

> > It is the job of the CI system to flag up pass/fail/skip transitions.
> > You're no worse off using feature tests.
> > 
> > Stefan
> > 
> What I'm trying to help us achieve here is a reliable and predictable
> way for the same test job execution to be comparable across
> environments: the individual developer workstation, CI, QA, etc.

1. Use ./configure --enable-foo options for all desired features.
2. Run ./check and verify there are no unexpected skips like this:

087         [not run] missing aio=native support

To me this seems to address the problem.
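The CI-side transition check mentioned earlier can be sketched in shell;
the test numbers and statuses below are invented examples, not real
./check results.

```shell
# Hypothetical sketch: flag pass->skip transitions between two runs of
# the same test job, as a CI system comparing environments would.

baseline="085:pass 086:pass 087:pass"   # e.g. developer workstation
current="085:pass 086:pass 087:skip"    # e.g. CI environment

changed=""
for entry in $baseline; do
    test_id=${entry%%:*}
    old=${entry#*:}
    for cur in $current; do
        [ "${cur%%:*}" = "$test_id" ] || continue
        new=${cur#*:}
        # Record any status change; pass->skip is the case that can
        # hide a real failure.
        [ "$old" = "$new" ] || changed="${changed:+$changed }$test_id:$old->$new"
    done
done
echo "transitions: $changed"
```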

I have mentioned the issues with the build flags solution: it creates a
dependency on the build environment and forces test feature checks to
duplicate build dependency logic.  This is why I think feature tests are
a cleaner solution.
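For contrast, a build-flags check would look something like this sketch:
consult a config-host.mak-style file to decide whether aio=native tests
apply.  Both the file contents and the CONFIG_LINUX_AIO variable name are
assumptions for illustration, not QEMU's actual query interface.

```shell
# Hypothetical sketch of the build-flags alternative: a test skip
# decision driven by the build configuration rather than a runtime probe.

config_host_mak=$(cat <<'EOF'
CONFIG_POSIX=y
CONFIG_LINUX_AIO=y
EOF
)

# Extract the flag's value ("y" if the build enabled native aio).
have_native_aio=$(printf '%s\n' "$config_host_mak" |
                  sed -n 's/^CONFIG_LINUX_AIO=//p')

if [ "$have_native_aio" = "y" ]; then
    echo "aio=native tests enabled"
else
    echo "skipping aio=native tests"
fi
```

Note how this duplicates the build system's dependency logic in the test
harness, which is the coupling objected to above.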

