From: Chunqiang Tang
Subject: Re: [Qemu-devel] [PATCH 1/3] FVD: Added support for 'qemu-img update'
Date: Mon, 31 Jan 2011 09:49:50 -0500

> After thinking about it more, qemu-img update does also serve a
> purpose.  Sometimes it is necessary to set options on many images in
> bulk or from provisioning scripts instead of at runtime.
> 
> I guess my main fear of qemu-img update is that it adds a new
> interface that only FVD exploits so far.  If it never catches on with
> other formats then we have this special feature that must be
> maintained but is rarely used.  I'd hold off this patch until code
> that can make use of it has been merged into qemu.git.

I am fine with holding it off. Actually, 'qemu-img rebase' is already a 
special type of 'qemu-img update'. Without a general update interface, 
every parameter change would have to use a special interface like 
'rebase'.
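
For example, the existing command

  qemu-img rebase -b new-backing.qcow2 image.qcow2

changes exactly one per-image parameter (the backing file) through a 
dedicated subcommand. A generic 'qemu-img update' could cover other 
per-image settings, such as FVD's prefetching parameters, through one 
interface instead of adding a new subcommand for each of them. (This is 
only an illustration; the exact option syntax of 'update' is whatever 
the patch defines.)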

> There's a lot of room for studying the behavior and making
> improvements.  Coming up with throttling strategies that make the
> prefetch I/O an "idle task" only when there's bandwidth available is
> difficult because the problem is more complex than just one greedy
> QEMU process.  In a cloud environment there will be many physical
> hosts, each with multiple VMs, on a shared network and no single QEMU
> process has global knowledge.  It's more like TCP where you need to
> try seeing how much data the connection can carry, fall back on packet
> loss, and then gradually try again.  But I'm not sure we have a
> feedback mechanism to say "you're doing too much prefetching".

Your observation on the similarity to TCP is incisive. To my knowledge, 
the two papers below are the most relevant work on congestion control for 
VM storage. I evaluated the approach in [1] and found it not always 
reliable; some text copied from my paper follows the sketch below and 
explains why. Basically, since a storage system has no notion of packet 
loss, there are two ways of detecting congestion: 1) making decisions 
based on increased latency, or 2) making decisions based on reduced 
throughput. Paper [1] uses latency, which I found unreliable in some 
cases. The current implementation in FVD is based on throughput, which 
tends to be more robust. But as you said, this is still a problem with a 
lot of room for future study.
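
To make the throughput-based idea concrete, here is a minimal sketch of 
an AIMD-style throttle (this is not FVD's actual code; the names, 
constants, and thresholds are only illustrative): back off 
multiplicatively when the measured prefetch throughput drops well below 
the uncontended baseline, and probe additively otherwise.

/* Sketch of throughput-based prefetch throttling (AIMD-style).
 * Not the actual FVD code; names and constants are illustrative. */
#include <stdio.h>

typedef struct PrefetchThrottle {
    double rate_limit;      /* current prefetch rate limit, bytes/sec */
    double base_throughput; /* throughput observed when storage was idle */
    double min_rate;        /* never throttle below this */
    double max_rate;        /* upper bound on the prefetch rate */
} PrefetchThrottle;

/* Called once per measurement window with the throughput achieved by the
 * prefetch reads in that window.  If it falls well below what the device
 * delivered when uncontended, assume other I/O is competing and back off
 * multiplicatively; otherwise probe for more bandwidth additively, much
 * like TCP congestion avoidance. */
static void throttle_update(PrefetchThrottle *t, double measured_tput)
{
    if (measured_tput < 0.5 * t->base_throughput) {
        t->rate_limit /= 2.0;                /* multiplicative decrease */
        if (t->rate_limit < t->min_rate) {
            t->rate_limit = t->min_rate;
        }
    } else {
        t->rate_limit += 1024.0 * 1024.0;    /* additive increase: +1 MB/s */
        if (t->rate_limit > t->max_rate) {
            t->rate_limit = t->max_rate;
        }
    }
}

int main(void)
{
    PrefetchThrottle t = {
        .rate_limit      = 8.0 * 1024 * 1024,
        .base_throughput = 100.0 * 1024 * 1024,
        .min_rate        = 1.0 * 1024 * 1024,
        .max_rate        = 100.0 * 1024 * 1024,
    };
    /* Simulated throughput samples: uncontended at first, then contended. */
    double samples[] = { 90e6, 95e6, 30e6, 20e6, 80e6 };
    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        throttle_update(&t, samples[i]);
        printf("sample %.0f MB/s -> limit %.1f MB/s\n",
               samples[i] / 1e6, t.rate_limit / 1e6);
    }
    return 0;
}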

===========
"[1] also performs adaptive prefetching. It halves the prefetch rate if a
certain “percentage” of recent requests experience a high latency. Our
experiments show that it is hard to set a proper “percentage” to reliably
detect contention. Because storage servers and disk controllers perform
read-ahead in large chunks for sequential reads, a very large percentage
(e.g., 90%) of a VM's prefetching reads hit in read-ahead caches and
experience a low latency. When a storage server becomes busy, the
“percentage” of requests that hit in read-ahead caches may change little,
but the response time of those cache-miss requests may increase
dramatically. In other words, this “percentage” does not correlate well
with the achieved disk I/O throughput."
===========
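
For contrast, the latency-"percentage" detector described above boils 
down to something like the following sketch (again illustrative, not the 
code from [1]): count how many recent requests exceeded a latency 
threshold and halve the rate when that fraction is too high. Because most 
prefetch reads hit server-side read-ahead caches and stay fast, that 
fraction can remain below the threshold even under heavy contention, 
which is why the signal is weak.

/* Illustrative sketch of a latency-"percentage" detector; not the code
 * from [1].  Returns the new prefetch rate limit in bytes/sec. */
double latency_based_update(double rate_limit, double min_rate,
                            const double *latencies_ms, int n,
                            double high_latency_ms, double bad_fraction)
{
    int slow = 0;
    for (int i = 0; i < n; i++) {
        if (latencies_ms[i] > high_latency_ms) {
            slow++;
        }
    }
    /* Reads that hit read-ahead caches stay fast, so slow/n can stay
     * below bad_fraction even when the few cache-miss requests become
     * very slow -- the detector then never backs off. */
    if (n > 0 && (double)slow / n > bad_fraction) {
        rate_limit /= 2.0;
        if (rate_limit < min_rate) {
            rate_limit = min_rate;
        }
    }
    return rate_limit;
}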

Paper [2] below, from VMware, is informative but cannot be adopted by us 
directly, as its problem domain is different. I previously wrote a paper 
on general congestion control, 
https://sites.google.com/site/tangchq/papers/NCI-USENIX09.pdf?attredirects=0 
, attended a conference together with one of the authors of [2], and had 
some discussions there. It is an interesting paper.

[1] http://suif.stanford.edu/papers/nsdi05.pdf
[2] http://www.usenix.org/events/fast09/tech/full_papers/gulati/gulati.pdf

Regards,
ChunQiang (CQ) Tang
Homepage: http://www.research.ibm.com/people/c/ctang
