Ani Sinha <ani@anisinha.ca> writes:
> On Fri, 21 Oct, 2022, 5:52 pm Ani Sinha, <ani@anisinha.ca> wrote:
>
> On Fri, 21 Oct, 2022, 5:26 pm Alex Bennée, <alex.bennee@linaro.org> wrote:
>
> Ani Sinha <ani@anisinha.ca> writes:
>
> > On Fri, Oct 21, 2022 at 3:10 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >>
> >> On Fri, Oct 21, 2022 at 10:30:09AM +0100, Alex Bennée wrote:
> >> >
> >> > Ani Sinha <ani@anisinha.ca> writes:
> >> >
> >> > > On Fri, Oct 21, 2022 at 2:02 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >> > >>
> >> > >> On Fri, Oct 21, 2022 at 05:45:15AM +0530, Ani Sinha wrote:
> >> > >> > And have multiple platform-specific branches in bits that carry fixes for those
> >> > >> > platforms so that bits can run there. The existing test can then be enhanced to
> >> > >> > pull in binaries from those branches based on the platform it is being run on.
> >> > >> >
> >> > >>
> >> > >> What a mess.
> >> > >> Who is going to be testing all these million platforms?
> >> > >
> >> > > I am not talking about branches in QEMU but branches in bits.
> >> > > If you are going to test multiple platforms, you do need to build bits
> >> > > binaries for them; there is no way around it. bits is not all
> >> > > platform-independent Python; it also contains binary executables.
> >> > >
> >> > > Currently bits is built only for the x86 platform. Other platforms are
> >> > > not tested; I doubt anyone has even tried building bits for ARM or
> >> > > MIPS.
> >> >
> >> > I'm not worried about testing bits on other targets, but we do run x86
> >> > targets on a number of hosts. The current reliance on a specially patched
> >> > build tool for a single host architecture is the problem. If we just
> >> > download the ISO, that problem goes away.
> >>
> >> 👍 what he said.
> >
> > Yes, in that case the problem is that upstream bits does not pass all
> > the tests out of the box. Hence we are taking this approach of keeping
> > some test scripts in the QEMU repo and modifying them, then generating the
> > ISO with the modified scripts. It also helps developers who want to
> > write new tests or enhance existing ones.
> > If modifications need to be made to tests, they need to be versioned.
> > We have already gone down the route of not using submodules, and I am not
> > going to open that can of worms again.
>
> We have added a mirror of biosbits to the QEMU project so there is no
> reason why we can't track changes and modifications there (we do this
> for TestFloat which is forked from the upstream SoftFloat code).
>
> The whole idea was that, say, an ACPI developer adds support for a new table in QEMU; they should then write a
> corresponding test for bits so that the same table is exercised at run time. Making those changes from a single
> repo (either directly or through a submodule) makes things a lot simpler and also keeps things in sync with each
> other. If we use separate repos for the acpi bits test, it will be another mess when it comes to developers adding
> changes and keeping things in sync.
For people that care about ACPI it shouldn't be that hard.
People who submit patches for ACPI come from all over the place, and they mostly care about the QEMU source tree, not any other repos.
Most QEMU
developers have separate repos of test cases that aren't directly
integrated into QEMU for various things (e.g. RISU, semihosting,
baremetal, kvm-unit-tests, LTP).
> Not only this. let's look at the developers workflow.
>
> (A) check out bits repo.
> (B) write new test.
> (C) build the bits iso.
> (D) get back to QEMU repo.
> (E) point the test to the new iso so that the test gets executed.
This seems like a long-winded workflow. Usually you test your binaries
before integrating them into the acceptance tests. All you need is a
script to launch qemu (either the system binary or a developer build) and
run it directly. Only once you are happy with the final ISO would you
upload it and then integrate it into check-acceptance.
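A minimal sketch of such a launch script, written in Python since that is what the avocado tests use. The binary path, machine type and memory size below are illustrative assumptions, not values from this thread:

```python
# Hypothetical quick-iteration launcher: boot a locally built bits ISO
# on a developer's qemu binary and watch the serial console, where bits
# reports its results. All defaults here are assumptions for illustration.
import subprocess

def bits_qemu_cmd(qemu="./build/qemu-system-x86_64", iso="bits.iso"):
    """Build the qemu command line for a bits smoke-test boot."""
    return [
        qemu,
        "-machine", "q35",   # assumed machine type
        "-m", "512",         # assumed guest RAM in MiB
        "-cdrom", iso,       # the freshly built bits ISO
        "-serial", "stdio",  # bits logs go to the serial port
        "-display", "none",
    ]

if __name__ == "__main__":
    # Print the command; uncomment the run() once qemu and the ISO exist.
    print(" ".join(bits_qemu_cmd()))
    # subprocess.run(bits_qemu_cmd(), check=True)
```

Iterating then becomes: rebuild the ISO, rerun the script, read the serial log.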
> (F) oops, something failed. So rinse and repeat.
> (G) once ready, send a PR for the bits repo. Update tags and figure out how GitLab CI works so that the QEMU test can
> point to it. To do that, figure out the artefact hash and other parameters.
> (H) send a patch for the QEMU repo to update the test to point to the new ISO.
>
> How complicated is that? How complicated will it be for the reviewer? Right now the developer can simply make changes
> from a single repo, run an avocado test, and check the logs for failures. Once the test is fixed, they can run it again to
> make sure everything passes. Once done, commit the test to the QEMU repo. If the test exercises a new table, we make sure
> that the commits adding the new table are already present before the test that exercises it is committed. Send a patch for
> review. The reviewer applies the patch and simply runs the avocado test from the QEMU repo. Everything is in one place. No
> back and forth between two repos. A lot like "make check".
We do indeed build tests for a lot of make check (unit, qtest, tcg), but
they build on all our host architectures and have configure machinery to
make them optional if host binaries are missing. For avocado tests we
typically use other people's binaries, so this series is a departure
from that model.
Yeah, so if you are using other people's binaries, you should not assume that they will work on all host architectures.