
From: Aleksandar Markovic
Subject: Re: [REPORT] [GSoC - TCG Continuous Benchmarking] [#2] Dissecting QEMU Into Three Main Parts
Date: Tue, 30 Jun 2020 10:58:18 +0200

On Tue, 30 Jun 2020 at 09:19, Ahmed Karaman
<ahmedkhaledkaraman@gmail.com> wrote:
>
> On Tue, Jun 30, 2020 at 6:34 AM Lukáš Doktor <ldoktor@redhat.com> wrote:
> >
> > On 29. 06. 20 at 12:25, Ahmed Karaman wrote:
> > > Hi,
> > >
> > > The second report of the TCG Continuous Benchmarking series builds
> > > upon the QEMU performance metrics calculated in the previous report.
> > > This report presents a method to dissect the number of instructions
> > > executed by a QEMU invocation into three main phases:
> > > - Code Generation
> > > - JIT Execution
> > > - Helpers Execution
> > > It devises a Python script that automates this process.
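(To illustrate the idea for anyone skimming the thread: the kind of
bucketing such a script performs could look roughly like the sketch
below. This is not Ahmed's actual script; the name-based heuristics in
it are only assumptions made for the sake of the example, and the real
classification is described in the report itself.)

#!/usr/bin/env python3
# Illustration only, not the script from the report. Assumes a file
# holding the per-function instruction counts printed by
# callgrind_annotate, with lines such as
# "1,234,567 (12.3%)  translate.c:gen_intermediate_code",
# and buckets them using naive name-based heuristics.

import re
import sys

HELPER_RE = re.compile(r"\bhelper_")                # helper execution
CODEGEN_RE = re.compile(r"\b(tcg_|gen_|translat)")  # code generation (assumed prefixes)

# "<count> [(<percent>)] <file>:<function>"
LINE_RE = re.compile(r"^\s*([\d,]+)\s+(?:\([\d.]+%\)\s+)?(\S+)")

totals = {"code generation": 0, "helpers": 0, "jit and other": 0}

with open(sys.argv[1]) as annotate_output:
    for line in annotate_output:
        match = LINE_RE.match(line)
        if not match:
            continue
        count = int(match.group(1).replace(",", ""))
        location = match.group(2)
        if HELPER_RE.search(location):
            totals["helpers"] += count
        elif CODEGEN_RE.search(location):
            totals["code generation"] += count
        else:
            totals["jit and other"] += count

grand_total = sum(totals.values()) or 1
for phase, count in totals.items():
    print("{:16s} {:>15,d}  ({:.2%})".format(phase, count,
                                              count / grand_total))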
> > >
> > > After that, the report presents an experiment for comparing the
> > > output of running the script on 17 different targets. Many conclusions
> > > can be drawn from the results and two of them are discussed in the
> > > analysis section.
> > >
> > > Report link:
> > > https://ahmedkrmn.github.io/TCG-Continuous-Benchmarking/Dissecting-QEMU-Into-Three-Main-Parts/
> > >
> > > Previous reports:
> > > Report 1 - Measuring Basic Performance Metrics of QEMU:
> > > https://lists.gnu.org/archive/html/qemu-devel/2020-06/msg06692.html
> > >
> > > Best regards,
> > > Ahmed Karaman
> >
> > Hello Ahmed,
> >
> > very nice reading, both reports so far. One thing that could be better
> > displayed is the system you used to generate this. It would come in handy
> > especially later when you move from examples to actual reports. I think
> > it'd make sense to add a section with a clear definition of the machine as
> > well as the operating system, QEMU version and possibly other deps (like
> > compiler, flags, ...). For this report something like:
> >
> > architecture: x86_64
> > cpu_codename: Kaby Lake
> > cpu: i7-8650U
> > ram: 32GB DDR4
> > os: Fedora 32
> > qemu: 470dd165d152ff7ceac61c7b71c2b89220b3aad7
> > compiler: gcc-10.1.1-1.fc32.x86_64
> > flags: 
> > --target-list="x86_64-softmmu,ppc64-softmmu,aarch64-softmmu,s390x-softmmu,riscv64-softmmu"
> >  --disable-werror --disable-sparse --enable-sdl --enable-kvm  
> > --enable-vhost-net --enable-vhost-net --enable-attr  --enable-kvm  
> > --enable-fdt   --enable-vnc --enable-seccomp 
> > --block-drv-rw-whitelist="vmdk,null-aio,quorum,null-co,blkverify,file,nbd,raw,blkdebug,host_device,qed,nbd,iscsi,gluster,rbd,qcow2,throttle,copy-on-read"
> >  --python=/usr/bin/python3 --enable-linux-io-uring
> >
> > would do. Maybe it'd even be a good idea to create a script to report this
> > basic set of information and add it after each of the perf scripts so
> > people don't forget to double-check the conditions, but others might
> > disagree so take this only as a suggestion.
> >
> > Regards,
> > Lukáš
> >
> > PS: Automated CPU codenames, host OSes and such could be tricky, but one
> > can use other libraries or just a best-effort approach with a fallback to
> > "unknown", letting people fill it in manually or add their branch to your
> > script.
> >
> >
> Thanks Mr. Lukáš, I'm really glad you found both reports interesting.
>
> Both reports are based on QEMU version 5.0.0. This wasn't mentioned in
> the reports, so thanks for the reminder. I'll add a short note about
> that.
>
> The QEMU build used is a very basic GCC build (created by just running
> ../configure in the build directory without any flags).
>
> Regarding the detailed machine information (CPU, RAM, etc.), the two
> reports introduce some concepts and methodologies that will produce
> consistent results regardless of the machine they are executed on. So I
> think it's unnecessary to mention the detailed system information used
> in the reports for now.
>

Ahmed, I don't entirely agree with you on this topic.

I think you treated Mr. Lukáš's comments in an overly lax way.

Yes, the results will be stable (within a small fraction of a percent)
on a particular system (which is proved in the "Stability Experiment"
section of Report 1). That is great! Although it sounds elementary,
this is not easy to achieve, so I am glad you did it.

However, we know that the results for hosts of different architectures
will be different - we expect that.

A 32-bit Intel host will also most likely produce significantly
different results than 64-bit Intel hosts. By the way, 64-bit targets
in QEMU linux-user mode are not supported on 32-bit hosts (nothing
stops the user from starting the corresponding instances of QEMU on a
32-bit host, but the results are unpredictable).

Let's focus now on Intel 64-bit hosts only. Richard, can you perhaps
enlighten us on whether QEMU (from the point of view of TCG target)
behaves differently on different Intel 64-bit hosts, and to what
degree?

I currently work remotely, but once I am physically back at my office
I will have a variety of hosts available at the company, and would be
happy to do the comparison between them wrt what you presented in
Report 2.

In conclusion, I think a basic description of your test bed is missing
from your reports. And, for the final reports (which we call "nightly
reports"), a detailed system description, as Mr. Lukáš outlined, is,
also in my opinion, necessary.
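
Something along the lines of the sketch below (very rough, Linux-only,
best-effort; the QEMU source path in it is just a placeholder) could be
run after each of the perf scripts to record the test bed, falling back
to "unknown" for anything it cannot detect automatically, as Mr. Lukáš
suggested:

#!/usr/bin/env python3
# Best-effort sketch of a test bed reporting helper, not a finished
# tool. Anything that cannot be detected automatically is reported as
# "unknown". Uses Linux-only sources (/proc/cpuinfo, /proc/meminfo);
# QEMU_SRC is just an example path and has to be adjusted.

import platform
import subprocess

QEMU_SRC = "/path/to/qemu"  # placeholder, adjust to the local checkout


def best_effort(func):
    """Call func(), returning "unknown" instead of raising."""
    try:
        result = func()
        return result if result else "unknown"
    except Exception:
        return "unknown"


def cpu_model():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()


def ram_total():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal"):
                kib = int(line.split()[1])
                return "%.1f GiB" % (kib / 1024.0 / 1024.0)


def qemu_commit():
    return subprocess.check_output(
        ["git", "-C", QEMU_SRC, "rev-parse", "HEAD"], text=True).strip()


def compiler_version():
    return subprocess.check_output(
        ["gcc", "--version"], text=True).splitlines()[0]


info = {
    "architecture": best_effort(platform.machine),
    "cpu": best_effort(cpu_model),
    "cpu_codename": "unknown",  # hard to detect, meant to be filled in manually
    "ram": best_effort(ram_total),
    "os": best_effort(platform.platform),
    "qemu": best_effort(qemu_commit),
    "compiler": best_effort(compiler_version),
}

for key, value in info.items():
    print("{}: {}".format(key, value))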

Thanks, Mr. Lukáš, for bringing this to our attention!

Yours,
Aleksandar




> Best regards,
> Ahmed Karaman


