Re: [Qemu-devel] Dataplane and vhost-blk
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Dataplane and vhost-blk
Date: Tue, 5 Mar 2013 16:59:05 +0100
On Tue, Mar 5, 2013 at 3:18 PM, Benoît Canet <address@hidden> wrote:
> I am looking for a way to help improve qemu block performance.
>
> APICv is a work in progress, and the two options with public code are vhost-*
> and virtio-blk-dataplane.
>
> The approaches seem very similar (bypassing the qemu global lock), with each
> dedicating a thread to every emulated virtio block device.
>
> vhost-* is in kernel while dataplane is in qemu.
Yes, they take a similar approach. The main difference is using a
vhost kernel thread versus a QEMU userspace thread.
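For context, in the QEMU 1.4 era the userspace dataplane thread was enabled via an experimental device property. A sketch of the invocation (file names and IDs are illustrative; the exact restrictions varied by version):

```shell
# Illustrative invocation -- disk.img and drive IDs are placeholders.
# x-data-plane=on was the experimental property that gave the
# virtio-blk device its own dedicated I/O thread; it came with
# restrictions (raw-format images, no I/O throttling, etc.).
qemu-system-x86_64 \
  -drive if=none,id=drive0,file=disk.img,format=raw,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on
```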
> Performance seems similar.
>
> Dataplane seems to be a demonstrator that will be replaced by an evolution of
> the qemu block layer made thread-friendly, and vhost-blk is not upstream yet.
>
> This leaves me with the following questions:
>
> Do dataplane and vhost-blk serve the same purpose (speed), despite both being
> pushed by the same company (Red Hat)?
Both approaches tackle high IOPS scalability. Both approaches were
prototyped over a period of 1 or 2 years. They are not associated
with just one contributor or company - vhost_blk and virtio-blk data
plane were pushed along by various folks as time went on. vhost_blk
had at least two independent implementations :).
Since they were relatively long-term efforts, the overlap or
duplication was actually good. It allowed comparisons and both
approaches benefitted from competition.
> What is the best path I can take to help improve qemu block performance ?
You need to set a more specific goal. Some questions to get started:
* Which workloads do you care about and what are their
characteristics (sequential or random I/O, queue depth)?
* Do you care about 1 vcpu guests or 4+ vcpu guests? (SMP scalability)
* Are you using an image format?
Once you have decided what needs to be improved it should be easier to
figure out what to work on.
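One way to pin down the workload characteristics above is a quick fio run inside the guest. A sketch, assuming a virtio-blk disk at /dev/vdb (the device path and parameter values are placeholders, not recommendations):

```shell
# Illustrative fio run: random 4k reads at queue depth 32 for 30s
# against a virtio-blk device, bypassing the page cache.
fio --name=randread-qd32 --filename=/dev/vdb --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
    --runtime=30 --time_based
```

Varying --rw, --bs, and --iodepth across runs maps out the sequential/random and queue-depth axes mentioned above.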
I haven't run latency tracing on the full stack recently. The goal is
to match host latency, but we have an overhead due to virtio-blk and
QEMU block I/O. Many changes have been made, like the introduction of
coroutines, since I posted measurements on the KVM wiki
(http://www.linux-kvm.org/page/Virtio/Block/Latency). Perhaps this is
an area you care about?
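As a rough way to quantify the virtio-blk/QEMU overhead described above, one can compare median latencies measured on the host and inside the guest. A minimal sketch; the sample numbers are made up for illustration, not real measurements:

```python
def percentile(samples, p):
    """Return the p-th percentile (0-100) of a list of latency samples,
    using linear interpolation between adjacent sorted values."""
    s = sorted(samples)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def overhead(host_us, guest_us):
    """Extra microseconds at the median: guest latency minus host latency."""
    return percentile(guest_us, 50) - percentile(host_us, 50)

# Made-up sample latencies in microseconds, purely illustrative:
host = [100, 105, 98, 110, 102]
guest = [140, 150, 138, 160, 145]
print(overhead(host, guest))  # median guest latency minus median host latency
```

Comparing percentiles rather than means keeps a few outlier requests from dominating the comparison.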
Stefan