Re: [Qemu-devel] [RFC v9 00/27] virtio: virtio-blk data plane


From: Khoa Huynh
Subject: Re: [Qemu-devel] [RFC v9 00/27] virtio: virtio-blk data plane
Date: Wed, 18 Jul 2012 11:41:19 -0500

Michael S. Tsirkin wrote on 07/18/2012 10:43:23 AM:

> From: "Michael S. Tsirkin" <address@hidden>

> To: Stefan Hajnoczi <address@hidden>,
> Cc: Kevin Wolf <address@hidden>, Anthony Liguori/Austin/address@hidden,
> address@hidden, address@hidden, Khoa Huynh/Austin/
> address@hidden, Paolo Bonzini <address@hidden>, Asias He <address@hidden>

> Date: 07/18/2012 10:45 AM
> Subject: Re: [Qemu-devel] [RFC v9 00/27] virtio: virtio-blk data plane
> Sent by: address@hidden
>
> On Wed, Jul 18, 2012 at 04:07:27PM +0100, Stefan Hajnoczi wrote:
> > This series implements a dedicated thread for virtio-blk processing using
> > Linux AIO for raw image files only.  It is based on qemu-kvm.git a0bc8c3 and
> > somewhat old, but I wanted to share it on the list since it has been
> > mentioned on mailing lists and IRC recently.
> >
> > These patches can be used for benchmarking and discussion about how to
> > improve block performance.  Paolo Bonzini has also worked in this area and
> > might want to share his patches.
> >
> > The basic approach is:
> > 1. Each virtio-blk device has a thread dedicated to handling ioeventfd
> >    signalling when the guest kicks the virtqueue.
> > 2. Requests are processed without going through the QEMU block layer using
> >    Linux AIO directly.
> > 3. Completion interrupts are injected via ioctl from the dedicated thread.
> >
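Just to make the flow above concrete for anyone skimming the thread, here is a
purely illustrative sketch of what such a per-device data-plane thread can look
like: it blocks on the ioeventfd the guest kicks, submits requests with Linux
AIO, and reaps completions in the same thread.  All names and the structure are
made up for illustration; this is not the actual patch code.

#include <libaio.h>
#include <poll.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define MAX_EVENTS 128

/* Purely illustrative: one dedicated thread per virtio-blk device,
 * following steps 1-3 above.  Names are made up. */
static void *dataplane_thread(void *opaque)
{
    int kick_fd = *(int *)opaque;   /* ioeventfd signalled when the guest kicks the virtqueue */
    int done_fd = eventfd(0, 0);    /* signalled by the kernel when a Linux AIO request completes */
    io_context_t ctx = 0;
    struct io_event events[MAX_EVENTS];
    struct pollfd fds[2] = {
        { .fd = kick_fd, .events = POLLIN },
        { .fd = done_fd, .events = POLLIN },
    };
    uint64_t cnt;

    io_setup(MAX_EVENTS, &ctx);     /* private AIO context, no QEMU block layer involved */

    for (;;) {
        poll(fds, 2, -1);

        if (fds[0].revents & POLLIN) {
            read(kick_fd, &cnt, sizeof(cnt));
            /* Step 2: pop requests off the vring, build iocbs with
             * io_prep_preadv()/io_prep_pwritev(), tie completions to
             * done_fd with io_set_eventfd(), then io_submit() them
             * (vring parsing omitted in this sketch). */
        }

        if (fds[1].revents & POLLIN) {
            read(done_fd, &cnt, sizeof(cnt));
            int n = io_getevents(ctx, 0, MAX_EVENTS, events, NULL);
            for (int i = 0; i < n; i++) {
                /* Step 3: complete the request in the vring and inject the
                 * completion interrupt from this same thread via an ioctl. */
            }
        }
    }
    return NULL;
}
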
> > The series also contains request merging as a bdrv_aio_multiwrite()
> > equivalent.  This was only to get a comparison against the QEMU block layer
> > and I would drop it for other types of analysis.
> >
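To give an idea of what "mergeable" means here (again just a sketch with
made-up types, not the iosched code from the series): requests that are
contiguous on disk can be coalesced into one larger submission, similar in
spirit to bdrv_aio_multiwrite().

#include <stdbool.h>
#include <stdint.h>
#include <sys/uio.h>

/* Illustrative only: the types and names below are invented. */
struct blk_request {
    uint64_t sector;            /* starting sector (512-byte units) */
    unsigned int nb_sectors;    /* request length in sectors */
    struct iovec *iov;
    int iovcnt;
};

/* Two requests can be merged if the second starts exactly where the
 * first one ends, so a single larger AIO request covers both. */
static bool requests_mergeable(const struct blk_request *a,
                               const struct blk_request *b)
{
    return a->sector + a->nb_sectors == b->sector;
}
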
> > The effect of this series is that O_DIRECT Linux AIO on raw files can bypass
> > the QEMU global mutex and block layer.  This means higher performance.
>
> Do you have any numbers at all?


Yes, we do have a lot of data for this data-plane patch set.  I can send you
detailed charts if you like, but generally, we run into a performance bottleneck
with the existing QEMU due to the QEMU global mutex and thus could only get
to about 140,000 IOPS for a single guest (at least on my setup).  With this
data-plane patch set, we bypass this bottleneck and have been able to achieve
more than 600,000 IOPS for a single guest, and an aggregate 1.33 million IOPS
with 4 guests on a single host.

Just for reference, VMware has claimed that they could get 300,000 IOPS for a
single VM and 1 million IOPS with 6 VMs on a single vSphere 5.0 host.  So we
definitely need something like this for KVM to be competitive with VMware and
other hypervisors.  Of course, this would also help satisfy the high I/O rate
requirements for BigData and other data-intensive applications or benchmarks
running on KVM.

Thanks,
-Khoa

>
> > A cleaned up version of this approach could be added to QEMU as a raw
> > O_DIRECT Linux AIO fast path.  Image file formats, protocols, and other
> > block layer features are not supported by virtio-blk-data-plane.
> >
> > Git repo:
> > http://repo.or.cz/w/qemu-kvm/stefanha.git/shortlog/refs/heads/virtio-blk-data-plane
> >
> > Stefan Hajnoczi (27):
> >   virtio-blk: Remove virtqueue request handling code
> >   virtio-blk: Set up host notifier for data plane
> >   virtio-blk: Data plane thread event loop
> >   virtio-blk: Map vring
> >   virtio-blk: Do cheapest possible memory mapping
> >   virtio-blk: Take PCI memory range into account
> >   virtio-blk: Put dataplane code into its own directory
> >   virtio-blk: Read requests from the vring
> >   virtio-blk: Add Linux AIO queue
> >   virtio-blk: Stop data plane thread cleanly
> >   virtio-blk: Indirect vring and flush support
> >   virtio-blk: Add workaround for BUG_ON() dependency in virtio_ring.h
> >   virtio-blk: Increase max requests for indirect vring
> >   virtio-blk: Use pthreads instead of qemu-thread
> >   notifier: Add a function to set the notifier
> >   virtio-blk: Kick data plane thread using event notifier set
> >   virtio-blk: Use guest notifier to raise interrupts
> >   virtio-blk: Call ioctl() directly instead of irqfd
> >   virtio-blk: Disable guest->host notifies while processing vring
> >   virtio-blk: Add ioscheduler to detect mergable requests
> >   virtio-blk: Add basic request merging
> >   virtio-blk: Fix request merging
> >   virtio-blk: Stub out SCSI commands
> >   virtio-blk: fix incorrect length
> >   msix: fix irqchip breakage in msix_try_notify_from_thread()
> >   msix: use upstream kvm_irqchip_set_irq()
> >   virtio-blk: add EVENT_IDX support to dataplane
> >
> >  event_notifier.c          |    7 +
> >  event_notifier.h          |    1 +
> >  hw/dataplane/event-poll.h |  116 +++++++
> >  hw/dataplane/ioq.h        |  128 ++++++++
> >  hw/dataplane/iosched.h    |   97 ++++++
> >  hw/dataplane/vring.h      |  334 ++++++++++++++++++++
> >  hw/msix.c                 |   15 +
> >  hw/msix.h                 |    1 +
> >  hw/virtio-blk.c           |  753 +++++++++++++++++++++------------------------
> >  hw/virtio-pci.c           |    8 +
> >  hw/virtio.c               |    9 +
> >  hw/virtio.h               |    3 +
> >  12 files changed, 1074 insertions(+), 398 deletions(-)
> >  create mode 100644 hw/dataplane/event-poll.h
> >  create mode 100644 hw/dataplane/ioq.h
> >  create mode 100644 hw/dataplane/iosched.h
> >  create mode 100644 hw/dataplane/vring.h
> >
> > --
> > 1.7.10.4
>

