Re: [RFC PATCH] tests/avocado: use new rootfs for orangepi test


From: Daniel P. Berrangé
Subject: Re: [RFC PATCH] tests/avocado: use new rootfs for orangepi test
Date: Thu, 24 Nov 2022 09:32:41 +0000
User-agent: Mutt/2.2.7 (2022-08-07)

On Thu, Nov 24, 2022 at 12:06:10AM +0100, Philippe Mathieu-Daudé wrote:
> On 23/11/22 19:49, Cédric Le Goater wrote:
> > On 11/23/22 19:13, Philippe Mathieu-Daudé wrote:
> > > On 23/11/22 15:12, Alex Bennée wrote:
> > > > Thomas Huth <thuth@redhat.com> writes:
> > > > > On 23/11/2022 12.15, Philippe Mathieu-Daudé wrote:
> > > > > > On 18/11/22 12:33, Alex Bennée wrote:
> > > > > > > The old URL wasn't stable. I suspect the current URL will only be
> > > > > > > stable for a few months, so maybe we need another strategy for
> > > > > > > hosting rootfs snapshots?
> > > > > > > 
> > > > > > > Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> > > > > > > ---
> > > > > > >    tests/avocado/boot_linux_console.py | 4 ++--
> > > > > > >    1 file changed, 2 insertions(+), 2 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
> > > > > > > index 4c9d551f47..5a2923c423 100644
> > > > > > > --- a/tests/avocado/boot_linux_console.py
> > > > > > > +++ b/tests/avocado/boot_linux_console.py
> > > > > > > @@ -793,8 +793,8 @@ def test_arm_orangepi_sd(self):
> > > > > > >          dtb_path = '/usr/lib/linux-image-current-sunxi/sun8i-h3-orangepi-pc.dtb'
> > > > > > >          dtb_path = self.extract_from_deb(deb_path, dtb_path)
> > > > > > >          rootfs_url = ('http://storage.kernelci.org/images/rootfs/buildroot/'
> > > > > > > -                      'kci-2019.02/armel/base/rootfs.ext2.xz')
> > > > > > > -        rootfs_hash = '692510cb625efda31640d1de0a8d60e26040f061'
> > > > > > > +                      'buildroot-baseline/20221116.0/armel/rootfs.ext2.xz')
> > > > > > > +        rootfs_hash = 'fae32f337c7b87547b10f42599acf109da8b6d9a'
> > > > > > If Avocado doesn't find an artifact in its local cache, it will
> > > > > > fetch it from the URL. The cache might be populated with artifacts
> > > > > > previously downloaded whose URLs are no longer valid (my case for
> > > > > > many tests). We can also add artifacts manually, see [1].
> > > > > > I'd rather keep pre-existing tests if possible, to test older
> > > > > > (kernel / user-space) images. We don't need to run all the tests
> > > > > > all the time: tests can be filtered by tags (see [2]).
> > > > > > My preference here is to refactor this test, adding both the
> > > > > > "kci-2019.02" and "baseline-20221116.0" releases. I can prepare
> > > > > > the patch if you / Thomas don't object.
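> > > > > > 
> > > > > > As a rough sketch of what I have in mind (method names and tag
> > > > > > values are illustrative, not a finished patch): inside the
> > > > > > existing BootLinuxConsole class, a shared helper takes the
> > > > > > release-specific URL and hash, and each release gets its own
> > > > > > thin, individually taggable test method:
> > > > > > 
> > > > > >     def do_test_arm_orangepi_sd(self, rootfs_url, rootfs_hash):
> > > > > >         # fetch_asset() checks the local cache first and only
> > > > > >         # downloads when the artifact is missing
> > > > > >         rootfs_path_xz = self.fetch_asset(rootfs_url,
> > > > > >                                           asset_hash=rootfs_hash)
> > > > > >         # ... remainder of the existing test body, unchanged
> > > > > > 
> > > > > >     def test_arm_orangepi_sd_kci_2019_02(self):
> > > > > >         """
> > > > > >         :avocado: tags=arch:arm
> > > > > >         :avocado: tags=machine:orangepi-pc
> > > > > >         """
> > > > > >         self.do_test_arm_orangepi_sd(
> > > > > >             'http://storage.kernelci.org/images/rootfs/buildroot/'
> > > > > >             'kci-2019.02/armel/base/rootfs.ext2.xz',
> > > > > >             '692510cb625efda31640d1de0a8d60e26040f061')
> > > > > > 
> > > > > >     def test_arm_orangepi_sd_baseline_20221116(self):
> > > > > >         """
> > > > > >         :avocado: tags=arch:arm
> > > > > >         :avocado: tags=machine:orangepi-pc
> > > > > >         """
> > > > > >         self.do_test_arm_orangepi_sd(
> > > > > >             'http://storage.kernelci.org/images/rootfs/buildroot/'
> > > > > >             'buildroot-baseline/20221116.0/armel/rootfs.ext2.xz',
> > > > > >             'fae32f337c7b87547b10f42599acf109da8b6d9a')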
> > > > > 
> > > > > IMHO we shouldn't keep tests in the upstream git repository where
> > > > > the binaries are no longer publicly available. They won't get run
> > > > > by new contributors anymore, and the binaries could also vanish
> > > > > from the disks of the people who previously downloaded them, once
> > > > > they wipe their cache or upgrade to a new installation, so the test
> > > > > code will sooner or later bitrot. But if you want to keep such
> > > > > tests around, you could stick them in a local branch instead.
> > > > 
> > > > CI/workstation splits aside, I tend to agree with Thomas here that
> > > > having tests no one else can run will lead to an accretion of broken
> > > > tests.
> > > 
> > > Following this idea, should we remove all boards for which no open
> > > source & GPL software is available? E.g.:
> > > 
> > > 40p                  IBM RS/6000 7020 (40p)
> > 
> > This machine can run Debian:
> 
> IMHO, having QEMU able to run anything an architecture can run seems far
> more interesting/helpful than restricting it to just open source
> projects.
> 
> >    qemu-system-ppc -M 40p -cpu 604 -nic user -hda ./prep.qcow2 \
> >        -cdrom ./zImage.hdd -serial mon:stdio -nographic
> >    >> =============================================================
> >    >> OpenBIOS 1.1 [Mar 7 2022 23:07]
> >    >> Configuration device id QEMU version 1 machine id 0
> >    >> CPUs: 0
> >    >> Memory: 128M
> >    >> UUID: 00000000-0000-0000-0000-000000000000
> >    >> CPU type PowerPC,604
> >    milliseconds isn't unique.
> >    Welcome to OpenBIOS v1.1 built on Mar 7 2022 23:07
> >    Trying hd:,\\:tbxi...
> >    >> Not a bootable ELF image
> >    >> switching to new context:
> >    loaded at:     04000400 04015218
> >    relocated to:  00800000 00814E18
> >    board data at: 07C9E870 07CA527C
> >    relocated to:  0080B130 00811B3C
> >    zimage at:     0400B400 0411DC98
> >    avail ram:     00400000 00800000
> >    Linux/PPC load: console=/dev/ttyS0,9600 console=tty0
> > ether=5,0x210,eth0 ether=11,0x300,eth1 ramdisk_size=8192 root=/dev/sda3
> >    Uncompressing Linux................................................done.
> >    Now booting the kernel
> >    Debian GNU/Linux 3.0 6015 ttyS0
> >    6015 login:
> > 
> > Please keep it ! :)
> > 
> > and it also boots AIX 4.4/5.1 (with 2 small patches), but that's clearly
> > not open source. It is downloadable from the net though, like many macOS
> > PPC images.
> > 
> > That said, we might have been putting too much into Avocado, and it takes
> > ages to run (when it does not hit some random Python issue).
> 
> W.r.t. "too much in Avocado", are you referring to GitLab CI?
> 
> I see the following 2 use cases with Avocado:
>  1/ Run tests locally
>  2/ Run tests on CI
> The set of tests used in 1/ and 2/ doesn't have to be the same...
> 
> 1/ is very helpful for maintainers, to run tests specific to their
> subsystems. It is also useful during refactoring that touches other
> subsystems, to run their tests before sending a patch set.
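> 
> For 1/, tag filtering already covers this; e.g. someone touching the
> Orange Pi code could run just those tests with something like (the
> path assumes the venv that "make check-avocado" creates in the build
> tree):
> 
>     tests/venv/bin/avocado run -t machine:orangepi-pc tests/avocado/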
> 
> 2/ is the "gating" testing. In retrospect, it was a mistake to start
> running Avocado on CI without any filtering of which tests to run.
> Instead of trying to explain my view here, I'd like to go back to
> Daniel's earlier proposal:
> https://lore.kernel.org/qemu-devel/20200427152036.GI1244803@redhat.com/
> 
> Per this proposal, we should only run 'Tier 1' tests on GitLab CI.
> Daniel described "Tier 1" as "[tests that] will always work."
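> 
> (Purely as a hypothetical illustration: the same tag mechanism could
> express such tiers, i.e. a test's docstring would carry a line like
> 
>     :avocado: tags=tier:1
> 
> and the CI job would then select only those tests with
> "avocado run -t tier:1".)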

The key part there is to make clear that testing does not determine
what code we accept into the QEMU tree. It merely influences what
quality level we tell users the code has. Ideally we would test
everything; realistically that's not viable, but we still want to
take the features.

>                                                              I'd like to
> amend that with "tests that run in less than 150 seconds" (or even
> less). If a test takes longer, we can run it on our workstations, but
> we shouldn't waste CI cycles on it.
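> 
> For reference, each test already declares a time budget via Avocado's
> "timeout" class attribute, e.g. boot_linux_console.py currently has:
> 
>     class BootLinuxConsole(LinuxKernelTest):
>         timeout = 90  # seconds; Avocado interrupts the test beyond this
> 
> so such a policy would simply mean capping that value.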

I don't think we need to be so aggressive on time limits for
individual tests. What matters for CI is not the individual
test time, but the overall pipeline wallclock time.

If we want our pipelines to be no longer than 45 minutes, it is
still fine to have 4 tests that run for 30 minutes each, provided
we have sufficient resources to run all 4 in parallel. Keeping
tests short is still a good thing, as it lets us run more tests
overall, but if some need extra time, that's OK.

Above all else though, the top 5 requirements for any CI
test we add are reliability, reliability, reliability,
reliability and reliability.

We can't keep spending so much time chasing broken tests.
If the person merging QEMU pull requests just carries on
ignoring tests because they're so frequently broken, the value
of having the tests at all is drastically reduced, in terms
of what they can promise us about the quality of the code
we ship.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



