qemu-devel

Re: "make check-acceptance" takes way too long


From: Philippe Mathieu-Daudé
Subject: Re: "make check-acceptance" takes way too long
Date: Fri, 30 Jul 2021 17:41:12 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 7/30/21 5:12 PM, Peter Maydell wrote:
> "make check-acceptance" takes way way too long. I just did a run
> on an arm-and-aarch64-targets-only debug build and it took over
> half an hour, and this despite it skipping or cancelling 26 out
> of 58 tests!
> 
> I think that ~10 minutes runtime is reasonable. 30 is not;
> ideally no individual test would take more than a minute or so.
> 
> Here's the output showing where the time went. The first two tests take
> more than 10 minutes *each*. I think a good start would be to find
> a way of testing what they're testing that is less heavyweight.

IIRC, at the KVM Forum BoF we suggested a test shouldn't take more than
60 sec. That turned out to be borderline for some tests, so we talked
about allowing 90-120 sec, with anything longer to be discussed and
documented.

However, this was never documented or enforced.

This seems to match my memory:

$ git grep 'timeout =' tests/acceptance/
tests/acceptance/avocado_qemu/__init__.py:440:    timeout = 900
tests/acceptance/boot_linux_console.py:99:    timeout = 90
tests/acceptance/boot_xen.py:26:    timeout = 90
tests/acceptance/linux_initrd.py:27:    timeout = 300
tests/acceptance/linux_ssh_mips_malta.py:26:    timeout = 150 # Not for 'configure --enable-debug --enable-debug-tcg'
tests/acceptance/machine_arm_canona1100.py:18:    timeout = 90
tests/acceptance/machine_arm_integratorcp.py:34:    timeout = 90
tests/acceptance/machine_arm_n8x0.py:20:    timeout = 90
tests/acceptance/machine_avr6.py:25:    timeout = 5
tests/acceptance/machine_m68k_nextcube.py:30:    timeout = 15
tests/acceptance/machine_microblaze.py:14:    timeout = 90
tests/acceptance/machine_mips_fuloong2e.py:18:    timeout = 60
tests/acceptance/machine_mips_loongson3v.py:18:    timeout = 60
tests/acceptance/machine_mips_malta.py:38:    timeout = 30
tests/acceptance/machine_ppc.py:14:    timeout = 90
tests/acceptance/machine_rx_gdbsim.py:22:    timeout = 30
tests/acceptance/machine_s390_ccw_virtio.py:24:    timeout = 120
tests/acceptance/machine_sparc64_sun4u.py:20:    timeout = 90
tests/acceptance/machine_sparc_leon3.py:15:    timeout = 60
tests/acceptance/migration.py:27:    timeout = 10
tests/acceptance/ppc_prep_40p.py:18:    timeout = 60
tests/acceptance/replay_kernel.py:34:    timeout = 120
tests/acceptance/replay_kernel.py:357:    timeout = 180
tests/acceptance/reverse_debugging.py:33:    timeout = 10
tests/acceptance/tcg_plugins.py:24:    timeout = 120
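If we do agree on a ceiling, one way it could be enforced (a hypothetical
check, nothing like this exists in the tree today) is a small script that
scans declared timeouts like the grep output above and flags anything over
the agreed limit:

```python
import re

# Hypothetical ceiling from the BoF discussion (90-120 sec).
TIMEOUT_CEILING = 120

def over_limit(grep_output, ceiling=TIMEOUT_CEILING):
    """Return (file, timeout) pairs whose declared timeout exceeds ceiling.

    Expects lines shaped like the "git grep 'timeout ='" output above,
    e.g. "tests/acceptance/linux_initrd.py:27:    timeout = 300".
    """
    hits = []
    for line in grep_output.splitlines():
        m = re.match(r'([^:]+):\d+:\s*timeout = (\d+)', line)
        if m:
            path, value = m.group(1), int(m.group(2))
            if value > ceiling:
                hits.append((path, value))
    return hits

# Small sample taken from the listing above:
sample = """\
tests/acceptance/linux_initrd.py:27:    timeout = 300
tests/acceptance/boot_xen.py:26:    timeout = 90
tests/acceptance/replay_kernel.py:357:    timeout = 180
"""
print(over_limit(sample))
```

Something along these lines could run in CI so that a new test exceeding
the limit would at least require an explicit, documented exception.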

> 
>  (01/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv2:
> PASS (629.74 s)
>  (02/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_tcg_gicv3:
> PASS (628.75 s)
>  (03/58) tests/acceptance/boot_linux.py:BootLinuxAarch64.test_virt_kvm:
> CANCEL: kvm accelerator does not seem to be available (1.18 s)

We could restrict these to one of the project's runners (x86 probably)
with something like:

  @skipUnless(os.getenv('X86_64_RUNNER_AVAILABLE'), '...')
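As a sketch of how that guard could look in a test class — using the
stdlib unittest.skipUnless here so the snippet stays self-contained (the
avocado decorator behaves the same way); the test body and the exact
semantics of X86_64_RUNNER_AVAILABLE are illustrative only:

```python
import os
import unittest

# Make the outcome deterministic for this sketch: with the variable
# unset, the test below is reported as skipped, not failed.
os.environ.pop('X86_64_RUNNER_AVAILABLE', None)

class BootLinuxAarch64(unittest.TestCase):
    # Gate the heavyweight boot test on a dedicated runner advertising
    # itself through the X86_64_RUNNER_AVAILABLE environment variable.
    @unittest.skipUnless(os.getenv('X86_64_RUNNER_AVAILABLE'),
                         'requires the dedicated x86-64 project runner')
    def test_virt_tcg_gicv2(self):
        pass  # the ~10 minute boot test body is elided here

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(BootLinuxAarch64).run(result)
print(result.testsRun, len(result.skipped))
```

The nice property is that developers without the variable set get a SKIP
rather than a multi-minute test run or a spurious failure.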

>  (15/58) 
> tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi:
> PASS (4.86 s)
>  (16/58) 
> tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_initrd:
> PASS (39.85 s)
>  (17/58) 
> tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_sd:
> PASS (53.57 s)
>  (18/58) 
> tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_bionic_20_08:
> SKIP: storage limited
>  (19/58) 
> tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi_uboot_netbsd9:
> SKIP: storage limited

I've been thinking about restricting them to my sdmmc tree, but if I
don't send pull requests I won't run them, and won't catch others
introducing regressions. They do respect the 60 sec limit.

We could restrict some jobs to a maintainer's fork namespace, track the
mainstream master branch, and run the pipelines either when master is
updated or on a schedule
(https://docs.gitlab.com/ee/ci/pipelines/schedules.html).
But then, if the maintainer becomes busy / idle / inactive, we similarly
won't catch regressions in mainstream.
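For the scheduled variant, a job could be gated on the pipeline source in
.gitlab-ci.yml; a minimal sketch, where the job name and script are
hypothetical but the rules: syntax and the "schedule" pipeline source are
standard GitLab CI:

```yaml
# Hypothetical job: run the heavyweight acceptance tests only from a
# scheduled pipeline (configured under CI/CD -> Schedules on the fork),
# not on every push.
acceptance-heavy:
  script:
    - make check-acceptance
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

That keeps the per-push pipelines fast while still exercising the slow
tests regularly, at the cost of the staleness problem mentioned above.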

Anyway, Daniel already studied this problem and sent an RFC, but it was
ignored:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg761087.html

Maybe worth continuing the discussion there?


