Re: [Qemu-devel] [RFC]QEMU disk I/O limits
From: Sasha Levin
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
Date: Thu, 02 Jun 2011 10:15:02 +0300
On Thu, 2011-06-02 at 14:29 +0800, Zhi Yong Wu wrote:
> On Thu, Jun 02, 2011 at 09:17:06AM +0300, Sasha Levin wrote:
> >
> >Hi,
> >
> >On Mon, 2011-05-30 at 13:09 +0800, Zhi Yong Wu wrote:
> >> Hello, all,
> >>
> >> I have prepared to work on a feature called "Disk I/O limits" for the
> >> qemu-kvm project.
> >> This feature will enable the user to cap the amount of disk I/O performed
> >> by a VM. This is important when storage resources are shared among
> >> multiple VMs: if some VMs do excessive disk I/O, they will hurt the
> >> performance of the other VMs.
> >>
> >> More detail is available here:
> >> http://wiki.qemu.org/Features/DiskIOLimits
> >>
> >> 1.) Why we need per-drive disk I/O limits
> >> On Linux, the cgroup blkio-controller already supports I/O throttling
> >> on block devices. However, there is no single mechanism for disk I/O
> >> throttling across all underlying storage types (image file, LVM, NFS,
> >> Ceph), and for some types there is no way to throttle at all.
> >>
> >> The disk I/O limits feature introduces QEMU block layer I/O limits,
> >> together with command-line and QMP interfaces for configuring them. This
> >> allows I/O limits to be imposed across all underlying storage types
> >> through a single interface.
> >>
> >> 2.) How disk I/O limits will be implemented
> >> The QEMU block layer will introduce a per-drive disk I/O request queue
> >> for those disks whose "disk I/O limits" feature is enabled. It can
> >> control disk I/O limits individually for each disk when multiple disks
> >> are attached to a VM, enabling use cases like unlimited local disk
> >> access combined with shared storage access under limits.
> >> In a multiple-I/O-threads scenario, when an application in a VM issues
> >> a block I/O request, the request will be intercepted by the QEMU block
> >> layer, which will calculate the drive's current I/O rate and determine
> >> whether it has gone beyond its limits. If so, the I/O request will be
> >> placed on the per-drive queue; otherwise it will be serviced.
> >>
> >> 3.) How users enable and use it
> >> The QEMU -drive option will be extended so that disk I/O limits can be
> >> specified on the command line, e.g. -drive [iops=xxx,][throughput=xxx]
> >> or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]. When such an
> >> argument is specified, the "disk I/O limits" feature is enabled for
> >> that drive.
> >> The feature will also provide users with the ability to change
> >> per-drive disk I/O limits at runtime using QMP commands.
> >
> >I'm wondering if you've considered adding a 'burst' parameter -
> >something which will not limit (or limit less) the io ops or the
> >throughput for the first 'x' ms in a given time window.
> Currently no. Could you let us know in what scenario it would make sense?
My assumption is that most guests are not doing constant disk I/O.
Instead, the operations are usually short and small-scale (a relatively
small number of bytes accessed).
For example: multiple-table DB lookups, serving a website, file servers.
Basically, if I need to do a DB lookup that reads 50MB of data from a
disk which is limited to 10MB/s, I'd rather let it burst for 1 second
and complete the lookup faster instead of having it read data for 5
seconds.
If the guest then starts running multiple lookups one after the other,
that's when I would like to limit it.
> Regards,
>
> Zhiyong Wu
> >
> >> Regards,
> >>
> >> Zhiyong Wu
> >>
> >
> >--
> >
> >Sasha.
> >
--
Sasha.