
Re: [Qemu-devel] [RFC 0/7] Rework vhost memory region updates


From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC 0/7] Rework vhost memory region updates
Date: Thu, 30 Nov 2017 13:58:20 +0100

On Thu, 30 Nov 2017 12:47:20 +0000
"Dr. David Alan Gilbert" <address@hidden> wrote:

> * Igor Mammedov (address@hidden) wrote:
> > On Thu, 30 Nov 2017 12:08:06 +0000
> > "Dr. David Alan Gilbert" <address@hidden> wrote:
> >   
> > > * Igor Mammedov (address@hidden) wrote:  
> > > > On Wed, 29 Nov 2017 18:50:19 +0000
> > > > "Dr. David Alan Gilbert (git)" <address@hidden> wrote:
> > > >     
> > > > > From: "Dr. David Alan Gilbert" <address@hidden>
> > > > > 
> > > > > Hi,
> > > > >   This is an experimental set, coming out of a discussion
> > > > > with Igor, that reworks the way the vhost code handles
> > > > > changes in physical address space layout.
> > > > Thanks for looking into it.
> > > > 
> > > >      
> > > > > Instead of updating and trying to merge sections of address
> > > > > space on each add/remove callback, we wait until the commit
> > > > > phase, walk the FlatView of memory, and rebuild an ordered
> > > > > list from scratch.
> > > > > We then compare the new list against the old one to trigger
> > > > > updates.
> > > > > 
> > > > > Note: this is only very lightly tested so far; I'm just
> > > > > trying to see if it's the right shape.
> > > > > 
> > > > > Igor, is this what you were intending?    
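
A minimal sketch of the commit-time rebuild being described
(MemSection, SectionList and append_section are illustrative
names, not the actual QEMU types or the address_space_iterate()
added in patch 1):

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t gpa;    /* guest-physical start address */
    uint64_t size;
} MemSection;

typedef struct {
    MemSection *sections;
    size_t n;
} SectionList;

/* Called once per flat section; the walk visits sections in
   address order, so the list comes out already sorted and can be
   diffed directly against the previous one. (Error handling
   elided.) */
static void append_section(SectionList *l, uint64_t gpa, uint64_t size)
{
    l->sections = realloc(l->sections, (l->n + 1) * sizeof(*l->sections));
    l->sections[l->n].gpa = gpa;
    l->sections[l->n].size = size;
    l->n++;
}
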
> > > > 
> > > > I was thinking about a somewhat less intrusive approach,
> > > > where vhost_region_add/del are modified to maintain an
> > > > array of mem_sections sorted by GPA, vhost_dev::mem is dropped
> > > > altogether, and the vhost_memory_region array is built/used/freed
> > > > on every vhost_commit().
> > > > Maintaining the sorted array should roughly cost us O(2 log n)
> > > > per update if binary search is used.
> > > > 
> > > > However, I like your idea with the iterator even more, as it has
> > > > the potential to make it even faster, O(n), if we get rid of the
> > > > quadratic and relatively complex vhost_update_compare_list().
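
A minimal sketch of that sorted-array alternative, reusing the
illustrative MemSection from the sketch above (not the actual
vhost structures; assumes the array has spare capacity):

#include <string.h>

/* Index of the first entry with gpa >= key (lower bound); O(log n). */
static size_t section_lower_bound(const MemSection *arr, size_t n,
                                  uint64_t key)
{
    size_t lo = 0, hi = n;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (arr[mid].gpa < key) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    return lo;
}

/* Insert while keeping the array sorted by GPA: binary search for
   the slot, then shift the tail up (note the memmove itself is
   linear, which the O(2 log n) figure leaves out). */
static void section_insert(MemSection *arr, size_t *n, MemSection s)
{
    size_t i = section_lower_bound(arr, *n, s.gpa);

    memmove(&arr[i + 1], &arr[i], (*n - i) * sizeof(arr[0]));
    arr[i] = s;
    (*n)++;
}
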
> > > 
> > > Note that vhost_update_compare_list is complex, but it is O(n):
> > > it has nested loops, but the inner loop only moves forward and
> > > oldi never gets reset back to zero.
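
Roughly that shape, as a sketch (illustrative, not the actual
vhost_update_compare_list(), again reusing MemSection from above):

/* Nested loops over the two sorted arrays, but oldi only ever
   advances, so the total work is O(n_old + n_new), not quadratic. */
static void compare_lists(const MemSection *old_arr, size_t n_old,
                          const MemSection *new_arr, size_t n_new)
{
    size_t oldi = 0;

    for (size_t newi = 0; newi < n_new; newi++) {
        /* Old entries that end before this new entry starts have
           no counterpart any more: those regions were removed. */
        while (oldi < n_old &&
               old_arr[oldi].gpa + old_arr[oldi].size <=
                   new_arr[newi].gpa) {
            oldi++;
        }
        if (oldi < n_old &&
            old_arr[oldi].gpa == new_arr[newi].gpa &&
            old_arr[oldi].size == new_arr[newi].size) {
            oldi++;    /* identical region, nothing to flag */
        } else {
            /* new_arr[newi] was added or resized: flag an update. */
        }
    }
}
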
> > I overlooked that while skimming through the patches.
> > 
> > Anyway,
> > why isn't memcmp(old_arr, new_arr) sufficient
> > to detect a change in the memory map?
> 
> It tells you that you've got a change, but doesn't give
> the start/end of the range that's changed, and those
> are used by vhost_commit to limit the work of
> vhost_verify_ring_mappings.
Isn't the memmap list sorted, and aren't
dev->mem_changed_[start|end]_addr the lowest/highest
addresses of the whole map?

If so, wouldn't taking the values directly from
the first/last entries of the array be sufficient?
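
I.e. something like this sketch (illustrative names, reusing the
MemSection type from the sketch above, not the actual vhost fields):

/* With a sorted array, the whole-map bounds are just the endpoints. */
static void map_bounds(const MemSection *arr, size_t n,
                       uint64_t *start, uint64_t *end)
{
    if (n == 0) {
        *start = *end = 0;
        return;
    }
    *start = arr[0].gpa;                            /* lowest address  */
    *end = arr[n - 1].gpa + arr[n - 1].size - 1;    /* highest address */
}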



> 
> Dave
> 
> > >   
> > > > Pls, see comments on individual patches.    
> > > 
> > > Thanks; I have fixed a couple of bugs since I posted, so I'm
> > > more interested in the structure/shape.  Any good ideas on how
> > > to test it are welcome.
> > > 
> > > Dave
> > >   
> > > >     
> > > > > Dave
> > > > > 
> > > > > Dr. David Alan Gilbert (7):
> > > > >   memory: address_space_iterate
> > > > >   vhost: Move log_dirty check
> > > > >   vhost: New memory update functions
> > > > >   vhost: update_mem_cb implementation
> > > > >   vhost: Compare new and old memory lists
> > > > >   vhost: Copy updated region data into device state
> > > > >   vhost: Remove vhost_set_memory and children
> > > > > 
> > > > >  hw/virtio/trace-events |   8 +
> > > > >  hw/virtio/vhost.c      | 424 ++++++++++++++++++++++---------------------------
> > > > >  include/exec/memory.h  |  23 +++
> > > > >  memory.c               |  22 +++
> > > > >  4 files changed, 241 insertions(+), 236 deletions(-)
> > > > >     
> > > >     
> > > --
> > > Dr. David Alan Gilbert / address@hidden / Manchester, UK  
> >   
> --
> Dr. David Alan Gilbert / address@hidden / Manchester, UK



