
From: Alex Bennée
Subject: Re: [Qemu-devel] State of QEMU CI as we enter 4.0
Date: Thu, 21 Mar 2019 09:52:27 +0000
User-agent: mu4e 1.1.0; emacs 26.1

Cleber Rosa <address@hidden> writes:

> On Thu, Mar 14, 2019 at 03:57:06PM +0000, Alex Bennée wrote:
>>
>> Hi,
>>
>> As we approach stabilisation for 4.0 I thought it would be worth doing a
>> review of the current state of CI and stimulating some discussion of
>> where it is working for us and what could be improved.
>>
>> Testing in Build System
>> =======================
>>
>> Things seem to be progressing well in this respect. More and more tests
>> have been added into the main source tree and they are only a make
>> invocation away. These include:
>>
>>   check          (includes unit, qapi-schema, qtest and decodetree)
>>   check-tcg      (now with system mode tests!)
>>   check-softfloat
>>   check-block
>>   check-acceptance
>>
>> Personally, check-acceptance is the area I've looked at the least, but
>> this seems to be the best place for "full life cycle" tests like booting
>> kernels and running stress and performance tests. I'm still a little
>> unsure how we deal with prebuilt kernels and images here though. Are
>> they basically provided by 3rd parties from their websites? Do we mirror
>> any of the artefacts we use for these tests?
>
> While it's possible to add any sort of file alongside the tests, and
> "get it"[1] from the test[2], this is certainly not desirable for
> kernels and other similarly large blobs.  The current approach is to
> use well known URLs[3] and download[4][5] those at test run time.
>
> Those are cached locally, automatically on the first run, and reused on
> subsequent executions.  The caching is helpful for development
> environments, but is usually irrelevant to CI environments, where
> you'd more often than not get a new machine (or a clean environment).
>
> For now I would, also for the sake of simplicity, keep relying on 3rd
> party websites until they prove to be unreliable.  This adds
> transparency and reproducibility well beyond what can be achieved if we
> attempt to mirror them to a QEMU sponsored/official location, IMO.

I think this is fine for "well-known" artefacts. Any distro kernel is
reproducible if you go through the appropriate steps. But we don't want
to repeat the mistakes of:

  https://wiki.qemu.org/Testing/System_Images

which is a fairly random collection of stuff. At least the Advent
Calendar images have a bit more documentation with them.
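
For reference, the fetch_asset pattern Cleber describes looks roughly
like this in an acceptance test (the URL, hash and machine type below are
placeholders rather than a real artefact, so treat it as a sketch only):

  from avocado_qemu import Test


  class BootConsole(Test):
      """
      Sketch: boot a prebuilt kernel fetched from a well-known URL.
      """

      def test_boot(self):
          # Placeholder URL and hash; the artefact is downloaded once,
          # cached locally and reused on subsequent runs.
          kernel_url = 'https://example.org/path/to/vmlinuz'
          kernel_hash = '0000000000000000000000000000000000000000'
          kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)

          self.vm.set_machine('pc')
          self.vm.set_console()
          self.vm.add_args('-kernel', kernel_path,
                           '-append', 'console=ttyS0')
          self.vm.launch()

Referencing artefacts by upstream URL plus a hash at least keeps the
provenance visible, even if we never mirror them ourselves.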

>
>>
>> One area of concern is how well this all sits with KVM (and other HW
>> accelerators) and how that gets tested. With my ARM hat on I don't
>> really see any integrated testing of kernel and QEMU changes
>> together to catch any problems as the core OS support for KVM gets
>> updated.
>>
>
> In short, I don't think there should be, at the QEMU CI level, any
> integration testing that changes both KVM and QEMU at once.
>
> But, that's me assuming that the vast majority of changes in QEMU and
> KVM can be developed and tested separately from each other.  That's in
> sharp contrast with the days in which KVM Autotest would build
> both the kernel and userspace as part of all test jobs, because of
> very frequent dependencies between them.
>
> I'd love to get feedback on this from KVM (and other HW accelerator)
> folks.
>
>> Another area I would like to improve is how we expand testing with
>> existing test suites. I'm thinking of things like LTP and kvm-unit-tests,
>> which can exercise a bunch of QEMU code but are maybe a bit too big to be
>> included in the source tree. Although given we included TestFloat (via a
>> git submodule) maybe we shouldn't dismiss that approach? Or is this
>> something that could be done via Avocado?
>>
>
> Well, there's this:
>
>   https://github.com/avocado-framework-tests/avocado-misc-tests
>
> Which contains close to 300 tests, most of them wrappers for other
> test suites, including LTP:
>
>   https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/generic/ltp.py
>
> I'm not claiming it's the perfect fit for your idea, but it sounds like
> a good starting point.

Cool - I shall have a look at that on the other side of Connect. I'd like
to make running LTP easier for non-core linux-user developers.
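
For the record, those wrappers boil down to roughly the sketch below (the
LTP release and the syscalls sub-suite are illustrative choices, not what
avocado-misc-tests does verbatim, and pointing it at qemu linux-user would
still need extra plumbing):

  import os

  from avocado import Test
  from avocado.utils import archive, build, process


  class LTPWrapper(Test):
      """
      Sketch of wrapping LTP as an Avocado test.
      """

      def setUp(self):
          # Pin a specific LTP release so results are reproducible.
          url = ('https://github.com/linux-test-project/ltp/archive/'
                 '20190115.tar.gz')
          tarball = self.fetch_asset(url)
          archive.extract(tarball, self.workdir)
          src = os.path.join(self.workdir, 'ltp-20190115')
          self.install_dir = os.path.join(self.workdir, 'ltp-install')
          os.chdir(src)
          build.make(src, extra_args='autotools')
          process.run('./configure --prefix=%s' % self.install_dir,
                      shell=True)
          build.make(src)
          build.make(src, extra_args='install')

      def test_syscalls(self):
          os.chdir(self.install_dir)
          # Run only the syscalls sub-suite; ignore_status so failures are
          # reported through the test API rather than as a crash.
          result = process.run('./runltp -f syscalls', ignore_status=True,
                               sudo=True, shell=True)
          if result.exit_status != 0:
              self.fail('LTP reported failures, see the job log for details')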

>
>> Generally though I think we are doing pretty well at increasing our test
>> coverage while making the tests more directly available to developers
>> without having to rely on someone's personal collection of random
>> binaries.
>>
>
> +1.
>
>> I wanted to know if we should encode this somewhere in our developer
>> documentation:
>>
>>   There is a strong preference for new QEMU tests to be integrated with
>>   the build system. Developers should be able to (build and) run the new
>>   tests locally directly from make.
>>
>> ?
>>
>
> There should definitely be, where reasonable, a similar experience for
> running the different types of tests.  Right now, the build system (make
> targets) is clearly the common place, so +1.
>
> - Cleber.
>
> [1] - 
> https://avocado-framework.readthedocs.io/en/69.0/api/core/avocado.core.html#avocado.core.test.TestData.get_data
> [2] - 
> https://avocado-framework.readthedocs.io/en/69.0/WritingTests.html#accessing-test-data-files
> [3] - 
> https://github.com/clebergnu/qemu/blob/sent/target_arch_v5/tests/acceptance/boot_linux_console.py#L68
> [4] - 
> https://github.com/clebergnu/qemu/blob/sent/target_arch_v5/tests/acceptance/boot_linux_console.py#L92
> [5] - 
> https://avocado-framework.readthedocs.io/en/69.0/WritingTests.html#fetching-asset-files


--
Alex Bennée


