Re: [Qemu-devel] Queries on dataplane mechanism
From: Gaurav Sharma
Subject: Re: [Qemu-devel] Queries on dataplane mechanism
Date: Tue, 28 Jun 2016 14:36:41 +0530
Hi Stefan,
I am working on moving PCI devices to the dataplane architecture.
Do you know of any reasons why this has not been tried before?
Regards,
On Fri, Jun 24, 2016 at 3:45 PM, Stefan Hajnoczi <address@hidden> wrote:
> On Thu, Jun 23, 2016 at 08:56:34PM +0530, Gaurav Sharma wrote:
> > Hi,
> > I am trying to explore how the dataplane mechanism works in QEMU. I
> > understand the behavior of the QEMU big lock. Can someone clarify the
> > following w.r.t. dataplane:
> >
> > 1. Currently only virtio-blk-pci and virtio-scsi-pci have dataplane
> > enabled?
>
> Yes.
>
> > 2. From QEMU 2.1.0, dataplane is enabled by default.
>
> No. "Enabled by default" would mean that existing QEMU command lines
> enable dataplane. This is not the case. You have to explicitly define
> an iothread object and then associate a virtio-blk/virtio-scsi device
> with it.
>
> > I specify the
> > following options on the command line to enable it:
> > -enable-kvm -drive if=none,id=drive1,file=file_name -object
> > iothread,id=iothread2 -device
> > virtio-blk-pci,id=drv0,drive=drive1,iothread=iothread2
> > Is the above syntax correct?
>
> Yes.
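> For reference, since dataplane's benefit shows up with multiple disks,
> the same syntax extends to one iothread per device. A sketch only; the
> IDs, image file names, and machine options here are illustrative:
>
> ```shell
> # Two disks, each served by its own iothread (its own event loop).
> # Adjust paths and IDs for your setup.
> qemu-system-x86_64 -enable-kvm \
>     -object iothread,id=iothread1 \
>     -object iothread,id=iothread2 \
>     -drive if=none,id=drive1,file=disk1.img \
>     -drive if=none,id=drive2,file=disk2.img \
>     -device virtio-blk-pci,id=blk1,drive=drive1,iothread=iothread1 \
>     -device virtio-blk-pci,id=blk2,drive=drive2,iothread=iothread2
> ```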
>
> > 3. What is the best possible scenario to test dataplane? Currently, I
> > have a test setup wherein I have two different devices [dev1 and dev2].
> > If I process a write to dev1 which I made blocking by putting in a sleep
> > statement, will I be able to process a write on dev2? My understanding is
> > that since dataplane uses a separate event loop, I should be
> > able to process the write on dev2. Is this correct?
>
> Dataplane improves scalability for high IOPS workloads when there are
> multiple disks.
>
> You do not need to modify any code in order to benchmark dataplane. Run
> fio inside an SMP 4 guest with 4 disks (you can use the host Linux
> kernel's null_blk driver) and you should find that QEMU without
> dataplane has lower IOPS. The difference should become clear around 4
> or 8 vCPUs/disks.
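> One way to set that benchmark up (a sketch under stated assumptions:
> the module parameter and fio options are standard, but device names
> like /dev/nullb0 and /dev/vdb will vary, and root is required on the
> host):
>
> ```shell
> # On the host: create 4 RAM-backed null block devices so disk latency
> # does not dominate the measurement.
> modprobe null_blk nr_devices=4
>
> # Pass each /dev/nullb* through as its own virtio-blk device with a
> # dedicated iothread (repeat the -object/-drive/-device triple for
> # nullb1..nullb3):
> #   -object iothread,id=iothread0 \
> #   -drive if=none,id=drive0,file=/dev/nullb0,format=raw,cache=none \
> #   -device virtio-blk-pci,drive=drive0,iothread=iothread0
>
> # In the guest: run fio against each virtio disk (here /dev/vdb).
> fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
>     --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 \
>     --time_based --group_reporting
> ```
>
> Compare the IOPS fio reports with and without the iothread= property on
> the -device options.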
>
> Stefan
>