
From: Stefan Hajnoczi
Subject: Re: [GSoC/Outreachy QEMU project proposal] Measure and Analyze QEMU Performance
Date: Wed, 22 Jan 2020 11:28:18 +0000

On Tue, Jan 21, 2020 at 03:07:53PM +0100, Aleksandar Markovic wrote:
> On Mon, Jan 20, 2020 at 3:51 PM Stefan Hajnoczi <address@hidden> wrote:
> >
> > On Sat, Jan 18, 2020 at 03:08:37PM +0100, Aleksandar Markovic wrote:
> > > 3) The community will be given all devised performance measurement 
> > > methods in the form of easily reproducible step-by-step setup and 
> > > execution procedures.
> >
> > Tracking performance is a good idea and something that has not been done
> > upstream yet.
> Thanks for the interest, Stefan!
> >  A few questions:
> >
> >  * Will benchmarks be run automatically (e.g. nightly or weekly) on
> >    someone's hardware or does every TCG architecture maintainer need to
> >    run them manually for themselves?
> If the community wants it, definitely yes. Once the methodology is
> developed, it should be straightforward to set up nightly and/or
> weekly benchmarks - that could include sending mails with reports to
> the entire list, or just to individuals or subgroups. The recipient
> choice is just a matter of having decent criteria about the
> appropriateness of the information in the message (e.g. not flooding
> the list with data most people are not really interested in).
> Linux-user tests are typically very quick, so nightly runs are quite
> feasible - on someone's hardware, of course, and consistently always
> on the same hardware, if possible. If it makes sense, one could set
> up multiple test beds with a variety of hardware configurations.
> System-mode tests, I know, are much more difficult to automate, and,
> on top of that, there is a greater risk of hangs/crashes. Also,
> considering the number of machines we support, those tests could
> consume much more time - perhaps even one day would not be
> sufficient, given many machines and boot/shutdown variants. For these
> reasons, weekly executions would perhaps be more appropriate for
> them, and, given the greater complexity, expectations for system-mode
> performance tests should be kept quite low for now.
> >  * Where will the benchmark result history be stored?
> >
> If emailing is set up, the results could be reconstructed from the
> emails. But, yes, it would be better if the result history were kept
> somewhere on an internet-connected file server.
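The nightly linux-user benchmark loop discussed above could be sketched roughly as follows. This is only an illustration, not any existing QEMU tooling: the `SUITE` entries (binary names like `qemu-mips` and `./hello-mips`) are hypothetical placeholders, and a real harness would also pin the host hardware and record results to the file server mentioned above.

```python
import statistics
import subprocess
import time

def benchmark(cmd, runs=5):
    """Run cmd repeatedly; return the median wall-clock time in seconds.

    Using the median rather than the mean damps outliers caused by
    scheduler noise, which matters when results from different nights
    are compared against each other on shared hardware.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical suite: label -> linux-user invocation (placeholder names).
SUITE = {
    "mips-hello": ["qemu-mips", "./hello-mips"],
}

def run_suite(suite):
    """Return {label: median_seconds} for every entry in the suite."""
    return {label: benchmark(cmd) for label, cmd in suite.items()}
```

A report step could then diff each night's `run_suite()` output against the stored history and only mail the list when a regression exceeds some threshold, addressing the flooding concern raised above.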

Thanks.  I don't want to overcomplicate this project.  The main thing is
to identify the stakeholders (TCG target maintainers?) and make sure
they are happy.

