
From: Daniel P. Berrangé
Subject: Re: Proposal for a regular upstream performance testing
Date: Thu, 26 Nov 2020 09:43:38 +0000
User-agent: Mutt/1.14.6 (2020-07-11)

On Thu, Nov 26, 2020 at 09:10:14AM +0100, Lukáš Doktor wrote:
> How
> ===
> This is a tough question. Ideally this should be a standalone service that
> would only notify the author of the patch that caused the change with a
> bunch of useful data so they can either address the issue or just be aware
> of this change and mark it as expected.

We need to distinguish between the service that co-ordinates and reports
the testing, vs the service which runs the tests.

For the service which runs the tests, it is critical that it be a standalone
bare metal machine running nothing else, to ensure reproducibility of
results, as you say.

For the service which co-ordinates and reports test results, we ideally want
it to be integrated into our primary CI dashboard, which is GitLab CI at
this time.

> Ideally the community should have a way to also issue their custom builds
> in order to verify their patches so they can debug and address issues
> better than just commit to qemu-master.

Allowing community builds certainly adds an extra dimension of complexity
to the problem, since you need some kind of permissions control: you can't
allow any arbitrary user on the web to trigger jobs running arbitrary code,
as that is a significant security risk to your infra.

I think I'd just suggest providing a mechanism for the user to easily spin
up performance test jobs on their own hardware. This could be as simple
as providing a docker container recipe that users can deploy on some
arbitrary machine of their choosing that contains the test rig. All they
should need to do is provide a git ref, and then launching the container and
running the jobs should be a single command. They can simply run the tests
twice, with and without the patch series in question.
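As a sketch of what such a recipe could look like (the base image, package
list, and harness script name are all hypothetical illustrations, not an
existing QEMU project artifact):

```dockerfile
# Hypothetical recipe for a self-contained perf test rig.
# The base image, packages, and run-perf-tests.sh harness are
# assumptions for illustration only.
FROM fedora:33

# Build dependencies for QEMU plus the test harness
RUN dnf install -y git gcc make ninja-build python3 \
    glib2-devel pixman-devel

# A hypothetical harness that clones QEMU, builds the given
# git ref, and runs the perf suite against the result
COPY run-perf-tests.sh /usr/local/bin/run-perf-tests
ENTRYPOINT ["/usr/local/bin/run-perf-tests"]
```

A contributor could then build the image once and run it twice, passing a
git ref with and without their series applied, and diff the two result
sets locally.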

> The problem with those is that we can not simply use travis/gitlab/...
> machines for running those tests, because we are measuring in-guest
> actual performance.

As mentioned above - distinguish between the CI framework, and the
actual test runner.

> Solution 3
> ----------
> You name it. I bet there are many other ways to perform system-wide
> performance testing.

IMHO ideally we should use GitLab CI as the dashboard for triggering
the tests and reporting results back.  We should not use the GitLab
shared runners though, for the reasons you describe of course. Instead,
register our own dedicated bare metal machine to run the perf jobs.
Cleber has already done some work in this area to provide custom
runners for some of the integration testing work. Red Hat is providing
the hardware for those runners, but I don't know what spare we have
available, if any, that could be dedicated for the performance
regression tests.
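Wiring in a dedicated machine is mostly a matter of registering it with a
distinguishing runner tag and matching that tag in the job definition; the
tag, job name, and script below are made up for illustration:

```yaml
# Hypothetical .gitlab-ci.yml fragment: run perf jobs only on a
# privately registered bare metal runner, selected via its tag.
perf-regression:
  stage: test
  tags:
    - qemu-perf-baremetal   # tag assigned when registering the runner
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # e.g. nightly, not per-push
  script:
    - ./run-perf-tests.sh "$CI_COMMIT_SHA"
  artifacts:
    paths:
      - perf-results/
```

Scheduling the job (rather than running it per-push) keeps the single
machine from becoming a bottleneck while still catching regressions
within a day.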

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
