
Re: [Qemu-devel] Who is running QEMU automated tests, and when?


From: Daniel P . Berrangé
Subject: Re: [Qemu-devel] Who is running QEMU automated tests, and when?
Date: Thu, 26 Apr 2018 15:14:00 +0100
User-agent: Mutt/1.9.2 (2017-12-15)

On Thu, Apr 26, 2018 at 04:09:55PM +0200, Thomas Huth wrote:
> On 26.04.2018 15:57, Eduardo Habkost wrote:
> > (Starting a new thread, for more visibility)
> > 
> > (This was: Re: [Qemu-devel] [RFC PATCH] tests/device-introspect: Test
> > devices with all machines, not only with "none")
> > 
> > On Thu, Apr 26, 2018 at 01:54:43PM +0200, Markus Armbruster wrote:
> [...]
> >> I don't mind having make check SPEED=slow run more extensive tests.
> >> Assuming we actually run them at least once in a while, which seems
> >> doubtful.
> > 
> > We probably don't do that, but we really must be running a more
> > extensive (and slower) test set at least once before every
> > release.
> > 
> > Maybe some people are running SPEED=slow tests, or even more
> > extensive test suites like avocado-vt once in a while, but we
> > need to know who is running them, and when.
> 
> At least I am running "make check SPEED=slow" manually from time to
> time, especially when we enter the hard freeze period.

Hmm, we could get this done by Travis. It has the concept of "cron jobs"
for scheduling builds separately from pushes.

So we could keep the current travis jobs unchanged, but add a rule to
travis.yml that uses SPEED=slow when TRAVIS_EVENT_TYPE == "cron", so
that SPEED=slow gets run once a day. We just have to be careful about
which jobs we make slow, so we don't hit the 50-minute timeout.
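As a minimal sketch of what that branching could look like in the build script invoked from travis.yml (the TRAVIS_EVENT_TYPE variable and its "cron" value are Travis's own; the helper function and the "quick" default label here are illustrative, not part of QEMU's actual CI config):

```shell
# pick_speed echoes the SPEED value to pass to "make check",
# based on how Travis triggered the build.  TRAVIS_EVENT_TYPE is
# set by Travis itself ("push", "pull_request", "api", or "cron").
pick_speed() {
    if [ "${TRAVIS_EVENT_TYPE:-push}" = "cron" ]; then
        # Scheduled nightly build: run the extended, slower test set.
        echo slow
    else
        # Normal push/PR build: keep the fast default.
        echo quick
    fi
}

# The travis.yml script step could then run:
#   make check SPEED="$(pick_speed)"
```

The point of keeping the logic in one place is that the existing per-push jobs stay untouched; only builds triggered by the daily cron schedule pay the cost of the slow tests.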


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


