qemu-devel

Re: [PATCH v2 0/3] exclude hyperv synic sections from vhost


From: Michael S. Tsirkin
Subject: Re: [PATCH v2 0/3] exclude hyperv synic sections from vhost
Date: Tue, 14 Jan 2020 02:17:07 -0500

On Mon, Jan 13, 2020 at 06:58:30PM +0000, Dr. David Alan Gilbert wrote:
> * Paolo Bonzini (address@hidden) wrote:
> > On 13/01/20 18:36, Dr. David Alan Gilbert (git) wrote:
> > > 
> > > Hyperv's synic (that we emulate) is a feature that allows the guest
> > > to place some magic (4k) pages of RAM anywhere it likes in GPA.
> > > This confuses vhost's RAM section merging when these pages
> > > land over the top of hugepages.
> > 
> > Can you explain what the confusion is like?  The memory API should just
> > tell vhost to treat it as three sections (RAM before synIC, synIC
> > region, RAM after synIC) and it's not clear to me why postcopy breaks
> > either.
> 
> There are two separate problems:
>   a) For vhost-user there's a limited size for the 'mem table' message
>      containing the regions to send; it's small, so an attempt is
>      made to coalesce regions that all refer to the same underlying
>      RAMBlock.  If something splits a region up, you use more slots.
>      (That's why the coalescing code was originally there.)
> 
>   b) With postcopy + vhost-user life gets more complex because of
>      userfault.  We require that the vhost-user client can mmap the
>      memory areas on host page granularity (i.e. hugepage granularity
>      if it's hugepage backed).  To do that we tweak the aggregation code
>      to align the blocks to page size boundaries and then perform
>      aggregation - as long as nothing else important gets in the way
>      we're OK.
>      In this case the guest is programming synic to land at the 512k
>      boundary (as 16 separate 4k pages next to each other).  So we end
>      up with RAM at 0-512k (stretched to 0-2MB by alignment), then the
>      synic pages (512k, 512k+4k, ...), then RAM again at 640k - and
>      when we try to align that chunk we error out, because we realise
>      the synic mapping is in the way and we can't merge the 640k RAM
>      chunk with the aligned 0-2MB base chunk.
> 
> Note the reported failure here is kernel vhost, not vhost-user;
> so actually it probably doesn't need the alignment,

Yeah, vhost in the kernel just does copy from/to user. No alignment
requirements.

> and vhost-user would
> probably filter out the synic mappings anyway, since they don't
> have an fd (vhost_user_mem_section_filter).  But the alignment
> code always runs.
> 
> Dave
> 
> > Paolo
> > 
> > > Since they're not normal RAM, and they shouldn't have vhost DMAing
> > > into them, exclude them from the vhost set.
> > 
> --
> Dr. David Alan Gilbert / address@hidden / Manchester, UK



