Re: [RFC 0/8] virtio: Improve boot time of virtio-scsi-pci and virtio-blk-pci


From: Greg Kurz
Subject: Re: [RFC 0/8] virtio: Improve boot time of virtio-scsi-pci and virtio-blk-pci
Date: Thu, 25 Mar 2021 21:51:34 +0100

On Thu, 25 Mar 2021 17:43:10 +0000
Stefan Hajnoczi <stefanha@redhat.com> wrote:

> On Thu, Mar 25, 2021 at 01:05:16PM -0400, Michael S. Tsirkin wrote:
> > On Thu, Mar 25, 2021 at 04:07:27PM +0100, Greg Kurz wrote:
> > > Now that virtio-scsi-pci and virtio-blk-pci map 1 virtqueue per vCPU,
> > > a serious slowdown may be observed on setups with a large enough number
> > > of vCPUs.
> > > 
> > > Example with a pseries guest on a two-socket POWER9 system (128 HW
> > > threads):
> > > 
> > > vCPUs     virtio-scsi     virtio-blk
> > > 1         0m20.922s       0m21.346s
> > > 2         0m21.230s       0m20.350s
> > > 4         0m21.761s       0m20.997s
> > > 8         0m22.770s       0m20.051s
> > > 16        0m22.038s       0m19.994s
> > > 32        0m22.928s       0m20.803s
> > > 64        0m26.583s       0m22.953s
> > > 128       0m41.273s       0m32.333s
> > > 256       2m4.727s        1m16.924s
> > > 384       6m5.563s        3m26.186s
> > > 
> > > Both perf and gprof indicate that QEMU is hogging CPUs when setting up
> > > the ioeventfds:
> > > 
> > >  67.88%  swapper         [kernel.kallsyms]  [k] power_pmu_enable
> > >   9.47%  qemu-kvm        [kernel.kallsyms]  [k] smp_call_function_single
> > >   8.64%  qemu-kvm        [kernel.kallsyms]  [k] power_pmu_enable
> > > =>2.79%  qemu-kvm        qemu-kvm           [.] memory_region_ioeventfd_before
> > > =>2.12%  qemu-kvm        qemu-kvm           [.] address_space_update_ioeventfds
> > >   0.56%  kworker/8:0-mm  [kernel.kallsyms]  [k] smp_call_function_single
> > > 
> > > address_space_update_ioeventfds() is called when committing an MR
> > > transaction, i.e. for each ioeventfd with the current code base,
> > > and it internally loops over all ioeventfds:
> > > 
> > > static void address_space_update_ioeventfds(AddressSpace *as)
> > > {
> > > [...]
> > >     FOR_EACH_FLAT_RANGE(fr, view) {
> > >         for (i = 0; i < fr->mr->ioeventfd_nb; ++i) {
> > > 
> > > This means that the setup of ioeventfds for these devices has
> > > quadratic time complexity.
> > > 
> > > This series introduces generic APIs to allow batch creation and deletion
> > > of ioeventfds, and converts virtio-blk and virtio-scsi to use them. This
> > > greatly improves the numbers:
> > > 
> > > vCPUs     virtio-scsi     virtio-blk
> > > 1         0m21.271s       0m22.076s
> > > 2         0m20.912s       0m19.716s
> > > 4         0m20.508s       0m19.310s
> > > 8         0m21.374s       0m20.273s
> > > 16        0m21.559s       0m21.374s
> > > 32        0m22.532s       0m21.271s
> > > 64        0m26.550s       0m22.007s
> > > 128       0m29.115s       0m27.446s
> > > 256       0m44.752s       0m41.004s
> > > 384       1m2.884s        0m58.023s
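
For illustration only, a minimal sketch of the batching idea, assuming QEMU's
existing memory_region_transaction_begin()/commit() API and the
virtio_bus_set_host_notifier() helper; the generic batch APIs the series
actually introduces may look different:

/* Sketch: register all host notifiers inside one memory region
 * transaction so address_space_update_ioeventfds() runs once for the
 * whole device instead of once per virtqueue. */
#include "qemu/osdep.h"
#include "exec/memory.h"
#include "hw/virtio/virtio-bus.h"

static int batch_set_host_notifiers(VirtioBusState *bus, int nvqs)
{
    int i, err = 0;

    memory_region_transaction_begin();
    for (i = 0; i < nvqs; i++) {
        err = virtio_bus_set_host_notifier(bus, i, true);
        if (err < 0) {
            break;
        }
    }
    /* One commit => one flatview rebuild and one ioeventfd update pass,
     * regardless of how many virtqueues were registered above. */
    memory_region_transaction_commit();

    return err;
}

With N virtqueues, this turns N commits (each scanning all ioeventfds
registered so far, hence roughly O(N^2) work) into a single commit.
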
> > > 
> > > The series deliberately spans multiple subsystems for easier review
> > > and experimentation. It also includes some preliminary fixes along
> > > the way. It is thus posted as an RFC for now, but if the general
> > > idea is acceptable, I guess a non-RFC could be posted and the feature
> > > maybe extended to some other devices that might suffer from similar
> > > scaling issues, e.g. vhost-scsi-pci, vhost-user-scsi-pci
> > > and vhost-user-blk-pci, even if I haven't checked.
> > > 
> > > This should fix https://bugzilla.redhat.com/show_bug.cgi?id=1927108
> > > which reported the issue for virtio-scsi-pci.
> > 
> > 
> > Series looks OK from a quick look ...
> > 
> > This is a regression, isn't it?
> > So I guess we'll need that in 6.0, or revert the # of vqs
> > change for now ...
> 
> Commit 9445e1e15e66c19e42bea942ba810db28052cd05 ("virtio-blk-pci:
> default num_queues to -smp N") was already released in QEMU 5.2.0. It is
> not a QEMU 6.0 regression.
> 

Oh you're right, I've just checked and QEMU 5.2.0 has the same problem.

> I'll review this series on Monday.
> 

Thanks!

> Stefan
