qemu-devel

Re: [PATCH] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds


From: Peter Xu
Subject: Re: [PATCH] hmp: Add "calc_dirty_rate" and "info dirty_rate" cmds
Date: Wed, 9 Jun 2021 14:57:03 -0400

On Tue, Jun 08, 2021 at 08:36:23PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > On Tue, Jun 08, 2021 at 07:49:56PM +0100, Dr. David Alan Gilbert wrote:
> > > * Peter Xu (peterx@redhat.com) wrote:
> > > > These two commands were missed when adding the QMP sister commands.
> > > > Add them, so developers can play with them more easily.
> > > > 
> > > > Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > > Cc: Juan Quintela <quintela@redhat.com>
> > > > Cc: Leonardo Bras Soares Passos <lsoaresp@redhat.com>
> > > > Cc: Chuan Zheng <zhengchuan@huawei.com>
> > > > Cc: huangy81@chinatelecom.cn
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > 
> > > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > > 
> > > > ---
> > > > PS: I really doubt whether this is working as expected... I ran one
> > > > 200MB/s workload inside, and what I measured was 20MB/s with the
> > > > current algorithm...  Sampling 512 pages out of 1G of memory is not
> > > > wise enough, I guess, especially since it assumes the dirty workload
> > > > is spread across all of memory, which is normally not the case.
> > > 
> > > What size of address space did you dirty - was it 20MB?
> > 
> > IIRC it was either 200M or 500M, based on a 1G small VM.
> 
> What was your sample time ?

10 seconds; I used the same sample time for the runs below:

https://lore.kernel.org/qemu-devel/YMEFqfYZVhsinNN+@t490s/

A large sample time does indeed lower the measured dirty rate, as the same
page can be written multiple times by the guest yet counted as a single
dirtied page on the host (while each write counts separately in the guest's
dirty workload).

This effect should also apply if we further extend calc_dirty_rate with
KVM_GET_DIRTY_LOG in the future as a 3rd method besides the dirty ring.

From that pov, it's easier for the dirty ring to be more "accurate" (I don't
know whether it's suitable to call it accurate; it just traps cases like
writing to the same page multiple times within a period more easily), as the
ring size is normally very limited (e.g. 4096 pages per vcpu), so even if the
guest workload writes the same page twice, as long as there's a ring collect
between the two writes, they'll be counted twice too (each collect
re-protects the pages).
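That counting behavior can be sketched as a toy model too (hypothetical, not the KVM implementation; `harvest_every` stands in for whenever a ring collect actually happens): a harvest between two writes to the same page re-protects it, so the second write traps again and is counted separately.

```python
# Toy model (not QEMU/KVM code): dirty-ring-style counting. If the
# ring is harvested (pages collected and write-protected again)
# between two writes to the same page, the page is counted twice,
# catching repeated writes that one long sample period would merge.

def count_dirty(write_trace, harvest_every):
    """write_trace: sequence of page numbers written, in order.
    harvest_every: harvest (collect + reprotect) after this many writes."""
    counted = 0
    dirty_since_harvest = set()
    for i, page in enumerate(write_trace, start=1):
        if page not in dirty_since_harvest:
            dirty_since_harvest.add(page)   # write traps, page enters the ring
            counted += 1
        if i % harvest_every == 0:
            dirty_since_harvest.clear()     # reprotect: next write traps again
    return counted

trace = [7, 7, 7, 7]                              # same page written 4 times
print(count_dirty(trace, harvest_every=100))      # no harvest in between -> 1
print(count_dirty(trace, harvest_every=1))        # harvest after each write -> 4
```

The more often the ring fills and gets collected, the closer the count tracks the guest's actual write activity rather than its working-set size.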

-- 
Peter Xu



