Re: [Qemu-devel] [PATCH v6 2/2] s390: do not call memory_region_allocate_system_memory() multiple times


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v6 2/2] s390: do not call memory_region_allocate_system_memory() multiple times
Date: Wed, 18 Sep 2019 08:16:21 +0800
User-agent: Mutt/1.11.4 (2019-03-13)

On Tue, Sep 17, 2019 at 03:42:12PM +0200, Igor Mammedov wrote:
> On Tue, 17 Sep 2019 16:44:42 +0800
> Peter Xu <address@hidden> wrote:
> 
> > On Mon, Sep 16, 2019 at 09:23:47AM -0400, Igor Mammedov wrote:
> > > PS:
> > > I don't have access to a suitable system to test it.  
> > 
> > Hmm, I feel like it would be good to have a series like this at
> > least smoke tested somehow...
> > 
> > How about manually setting up a very small max memslot size and
> > testing it on x86?  IMHO we should test with
> > KVMState.manual_dirty_log_protect both on and off, to make sure we
> > cover the log_clear() code path.  And of course to trigger those
> > paths we probably need migrations, but I believe local migrations
> > would be enough.
> 
> I did smoke test it (a Fedora boot loop) [*] on an s390 host with a
> hacked 1G max section. I guess I could hack x86 and do the same for an
> x86 guest. Anyway, suggestions on how to test it better are welcome.
> 
> *) I don't have much faith in the tests we have, though, as they didn't
>    explode with the broken v5 in my case. Hence CCing those who are
>    more familiar with the migration parts.
> 
>    I used a config with 2 memslots split at 1GB, with offline migration
>    like this:
> 
>    $ qemu-system-s390x -M s390-ccw-virtio -m 2048 -cpu max -smp 2 \
>         -M accel=kvm --nographic --hda fedora.qcow2 \
>         -serial unix:/tmp/s,server,nowait -monitor stdio
>      (monitor) stop
>      (monitor) migrate "exec: cat > savefile"
>      (monitor) q
>    $ qemu-system-s390x -M s390-ccw-virtio -m 2048 -cpu max -smp 2 \
>         -M accel=kvm --nographic --hda fedora.qcow2 \
>         -serial unix:/tmp/s,server,nowait \
>         -incoming "exec: cat savefile"

Yeah, this looks good already. A better test could be (AFAICS):

  1) as mentioned, enable KVMState.manual_dirty_log_protect to test
     the log_clear path by running on a host kernel new enough (Linux
     5.2+); then it'll be on by default, otherwise the default is
     off.  If uncertain, we can enable some trace points, like
     trace_kvm_clear_dirty_log, to make sure those code paths are
     triggered (see the sketch after this list).

  2) more aggressive dirtying.  This can be done by:

    - running a mem-dirtying workload inside the guest.  I normally
      use [1] ("mig_mon mm_dirty 1024 500" will dirty 1024MB of mem at
      a 500MB/s dirty rate), but any tool would work

    - turning down the migration bandwidth using "migrate_set_speed"
      so the migration is even harder to converge; then the dirty bit
      path is tortured more.  Otherwise a local full-speed migration
      normally completes super fast because it's pure mem moves.
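
For 1), just as a sketch (assuming the "log" trace backend is compiled
in and the event name matches the trace_kvm_clear_dirty_log helper
above), the trace point can be switched on from the command line:

  $ qemu-system-s390x ... -trace enable=kvm_clear_dirty_log

and kvm_clear_dirty_log lines should then show up on stderr while the
migration is running; no output there would mean the clear path wasn't
exercised.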

Though with 2) above I'd suggest using unix sockets or tcp, otherwise
the dumped file could be super big (hopefully not eating all of the
laptop's disk!).
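
For 2), a rough sketch of the unix-socket variant (the socket path
/tmp/mig.sock and the 100m cap are just arbitrary examples): start the
destination first so it listens on the socket,

  $ qemu-system-s390x <same options as above> -incoming unix:/tmp/mig.sock

then throttle the bandwidth on the source monitor and kick off the
migration:

  (monitor) migrate_set_speed 100m
  (monitor) migrate -d unix:/tmp/mig.sock
  (monitor) info migrate

With the dirty workload running and the bandwidth capped below the
dirty rate, the migration should stay in the iterative phase for a
while, which is exactly when the dirty log / log_clear paths get
stressed.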

[1] https://github.com/xzpeter/clibs/blob/master/bsd/mig_mon/mig_mon.c

Regards,

-- 
Peter Xu


