qemu-devel


From: Aleksandar Markovic
Subject: Re: [GSoC/Outreachy QEMU project proposal] Measure and Analyze QEMU Performance
Date: Tue, 21 Jan 2020 15:37:57 +0100

On Tue, Jan 21, 2020 at 9:08 AM Lukáš Doktor <address@hidden> wrote:
>
> > On 18. 01. 20 at 15:08, Aleksandar Markovic wrote:
> > Hi, everybody.
> >
> > I am going to propose several ideas for QEMU participation in 
> > GSoC/Outreachy in the next few days. This is the first one. Please feel free 
> > to give honest feedback.
> >
> > Yours,
> > Aleksandar
> >
>
> Hello Aleksandar,
>
> Sounds like a good plan; I'd like to be involved as well.
>

Sure, I am glad to hear this.

> Why? At Red Hat I'm exploring a way to monitor qemu performance. At this 
> point it's x86_64 whole-system only, but it should be flexible enough to work 
> on various setups. The good news is we're in the process of upstreaming our 
> setup, so it might actually serve for part II of your proposal. It's not 
> ready yet as it contains many ugly and downstream parts, but I'm replacing 
> the custom modules with Ansible and removing the internal parts, as having 
> it upstream is a high priority at this point. Our motivation is to allow 
> public upstream testing (again, starting with x86, but more will hopefully 
> come).
>
> Your proposal is fairly generic, and I'm wondering which way it will turn. I 
> like part I; it might catch low-level changes and should lower the 
> variability of results. In part II I'm a bit scared of how the scope will 
> grow (based on what I saw in my experiment). You have the host, host kernel, 
> host system, qemu, guest kernel, guest system and then the tested app, which 
> can produce a great deal of jitter. Additionally, qemu contains many features 
> that need to be exercised: various disk formats, block devices, various 
> passthrough options, ... as well as host/guest tuning settings. It's going to 
> be hard not to get lost in the depths and to deliver something useful yet 
> extendable for the future...
>

My first impression is that your work and this proposal are complementary
rather than largely overlapping.

Yes, I am quite aware of the problem of data explosion, and I am already
exploring different ways of dealing with it.

Also, a student realistically can't do an awful lot of difficult work in
3 or 4 months, so I plan to focus on simplicity; the community could then
develop something more complex, if needed.

> Anyway, please keep me in the loop, and good luck with steering this in the 
> right direction...
>

Definitely, and thanks!

Best regards,
Aleksandar

> Regards,
> Lukáš
>
> >
> >
> > *Measure and Analyze Performance of
> > QEMU User and System Mode Emulation*
> >
> >
> > _/PLANNED ACTIVITIES/_
> >
> > PART I: (user mode)
> >
> >    a) select around a dozen test programs (resembling components of the 
> > SPEC benchmark, but they must be open source, and preferably 
> > license-compatible with QEMU); the test programs should be distributed as 
> > follows: 4-5 FPU CPU-intensive, 4-5 non-FPU CPU-intensive, 1-2 I/O-intensive;
> >    b) measure execution time and other performance data in user mode across 
> > all platforms for ToT (a harness sketch follows this list):
> >        - try to improve performance if there is an obvious bottleneck (but 
> > this is unlikely);
> >        - develop tests that will protect against performance regressions in 
> > the future.
> >    c) measure execution time in user mode for selected platforms across all 
> > QEMU versions from the last 5 years:
> >        - confirm performance improvements and/or detect performance 
> > degradations.
> >    d) summarize all results in a comprehensive form, including 
> > graphics/data visualization.
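> >
> >    (For illustration of b) and c): a minimal Python sketch of a possible 
> > timing harness; the qemu binary path and benchmark names are placeholders, 
> > not part of this proposal:)
> >
> >     import statistics
> >     import subprocess
> >     import time
> >
> >     QEMU = "/usr/local/bin/qemu-mips"   # placeholder qemu-<target> build
> >     BENCHMARKS = ["./fpu_bench", "./int_bench", "./io_bench"]  # placeholders
> >     RUNS = 5
> >
> >     for bench in BENCHMARKS:
> >         samples = []
> >         for _ in range(RUNS):
> >             # Time one run of the benchmark under user-mode emulation.
> >             start = time.monotonic()
> >             subprocess.run([QEMU, bench], check=True,
> >                            stdout=subprocess.DEVNULL,
> >                            stderr=subprocess.DEVNULL)
> >             samples.append(time.monotonic() - start)
> >         print(f"{bench}: mean {statistics.mean(samples):.3f}s, "
> >               f"stdev {statistics.stdev(samples):.3f}s over {RUNS} runs")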
> >
> > PART II: (system mode)
> >
> >    a) measure execution time and other performance data for the 
> > boot/shutdown cycle of selected machines for ToT (a timing sketch follows 
> > this list):
> >        - try to improve performance if there is an obvious bottleneck;
> >        - develop tests that will protect against performance regressions in 
> > the future.
> >    b) summarize all results in a comprehensive form.
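> >
> >    (For illustration of a): one possible way to time a boot/shutdown cycle 
> > is to boot a guest kernel that powers itself off and measure wall-clock 
> > time until QEMU exits; the machine type and image paths below are 
> > placeholders, not part of this proposal:)
> >
> >     import subprocess
> >     import time
> >
> >     CMD = [
> >         "qemu-system-arm",
> >         "-machine", "virt",          # placeholder machine type
> >         "-kernel", "./zImage",       # placeholder kernel that powers off
> >         "-initrd", "./initrd.img",   # placeholder initrd
> >         "-append", "console=ttyAMA0",
> >         "-no-reboot",                # exit instead of rebooting
> >         "-nographic",
> >     ]
> >
> >     # Wall-clock time of the full boot/shutdown cycle.
> >     start = time.monotonic()
> >     subprocess.run(CMD, check=True, timeout=300)
> >     print(f"boot/shutdown cycle: {time.monotonic() - start:.2f}s")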
> >
> >
> > /_DELIVERABLES_/
> >
> > 1) Each target maintainer will be given a list of the top 25 functions in 
> > terms of spent host time for each benchmark described in the previous 
> > section (see the profiling sketch below). Additional information and 
> > observations will also be provided where judged useful and relevant.
> >
> > 2) Each machine maintainer (for machines that have a successful 
> > boot/shutdown cycle) will be given a list of the top 25 functions in terms 
> > of spent host time during the boot/shutdown cycle. Additional information 
> > and observations will also be provided where judged useful and relevant.
> >
> > 3) The community will be given all devised performance measurement methods 
> > in the form of easily reproducible step-by-step setup and execution 
> > procedures.
> >
> > (deliverables 1) and 2) will, of course, be published for everybody; 
> > maintainers are simply singled out as the main recipients and 
> > decision-makers on possible next action items)
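> >
> > (For illustration of how the top-25 lists in 1) and 2) could be produced: 
> > a sketch that samples host time with Linux perf and keeps the first 25 
> > symbol rows of the report, which perf sorts by overhead; the benchmark 
> > path is a placeholder:)
> >
> >     import subprocess
> >
> >     # Sample host-side time while the emulator runs a benchmark.
> >     subprocess.run(["perf", "record", "-o", "perf.data", "--",
> >                     "qemu-x86_64", "./bench"], check=True)
> >
> >     # 'perf report --stdio' prints per-symbol overhead; '#' lines are
> >     # headers, so skip them and keep the top 25 remaining rows.
> >     report = subprocess.run(
> >         ["perf", "report", "--stdio", "-i", "perf.data"],
> >         check=True, capture_output=True, text=True).stdout
> >     rows = [line for line in report.splitlines()
> >             if line.strip() and not line.lstrip().startswith("#")]
> >     for line in rows[:25]:
> >         print(line)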
> >
> > Deliverables will be distributed over a wide time interval (in other words, 
> > they will not be presented just at the end of the project, but gradually 
> > during project execution).
> >
> >
> > /Mentor:/ Aleksandar Markovic (myself) (but I am perfectly fine if somebody 
> > else is interested in mentoring the project)
> >
> > /Student:/ open
> >
> >
> > That would be all; feel free to ask for additional info and/or 
> > clarification.
> >
>
>


