qemu-block

From: Nir Soffer
Subject: Re: Libvirt driver iothread property for virtio-scsi disks
Date: Wed, 4 Nov 2020 20:00:00 +0200

On Wed, Nov 4, 2020 at 6:42 PM Sergio Lopez <slp@redhat.com> wrote:
>
> On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> > The docs[1] say:
> >
> > - The optional iothread attribute assigns the disk to an IOThread as
> >   defined by the range for the domain iothreads value. Multiple disks
> >   may be assigned to the same IOThread and are numbered from 1 to the
> >   domain iothreads value. Available for a disk device target configured
> >   to use "virtio" bus and "pci" or "ccw" address types. Since 1.2.8
> >   (QEMU 2.1)
> >
> > Does it mean that virtio-scsi disks do not use iothreads?
>
> virtio-scsi disks can use iothreads, but they are configured on the
> SCSI controller, not on the disk itself. All disks attached to the
> same controller will share the same iothread, but you can also attach
> multiple controllers.
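
To make the contrast with the quoted docs concrete, a minimal sketch of
both styles (the file paths and iothread numbers below are illustrative,
not taken from this thread):

    <!-- virtio-blk: the iothread attribute goes on the disk's driver
         element; assumes <iothreads>2</iothreads> at the domain level -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' iothread='1'/>
      <source file='/var/lib/libvirt/images/blk.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- virtio-scsi: the iothread goes on the controller; every disk
         attached to this controller shares it -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='2'/>
    </controller>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/scsi.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>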

Thanks. I found that we do use this in oVirt:

    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <alias name='ua-6f070142-1dbe-4be3-90c6-1a2274a2f8a0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>

However, the VMs in this setup are not created by oVirt but manually
using libvirt. I'll make sure we configure the controller in the same way.
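
For the controller's iothread='1' to resolve, the domain also needs an
iothread pool defined at the top level, and, as Sergio notes, disks can
be spread across several iothreads by adding more controllers. A sketch
with illustrative index values and a hypothetical image path:

    <!-- at the domain level (direct child of <domain>) -->
    <iothreads>2</iothreads>

    <!-- inside <devices>: two controllers, each on its own iothread -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
    </controller>
    <controller type='scsi' index='1' model='virtio-scsi'>
      <driver iothread='2'/>
    </controller>

    <!-- a disk is pinned to the second controller via controller='1'
         in its address element -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/data.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>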

> > I'm experiencing horrible performance using nested VMs (up to 2 levels of
> > nesting) when accessing NFS storage running on one of the VMs. The NFS
> > server is using a SCSI disk.
> >
> > My theory is:
> > - Writing to NFS server is very slow (too much nesting, slow disk)
> > - Not using iothreads (because we don't use virtio?)
> > - Guest CPU is blocked by slow I/O
>
> I would rule out the lack of iothreads as the culprit. They do improve
> performance, but without them performance should still be quite
> decent. Something else is probably causing the trouble.
>
> I would do a step-by-step analysis, testing the NFS performance from
> outside the VM first, and then working upwards from there.

Makes sense, thanks.



