From: Bharata B Rao
Subject: Re: [Qemu-devel] [RFC PATCH v0] spapr: Abort when hash table size requirement isn't met
Date: Tue, 28 Jul 2015 11:03:58 +0530
User-agent: Mutt/1.5.23 (2014-03-12)

Any views on this?

On Thu, Jul 16, 2015 at 12:25:01PM +0530, Bharata B Rao wrote:
> On Wed, Jul 15, 2015 at 03:27:13PM +0530, Bharata B Rao wrote:
> > [This patch addresses an issue which is not prominently seen in mainline,
> > but seen frequently only in David's spapr-next branch. Though it is possible
> > to see this issue with mainline too, the current version of the patch
> > is intended for David's tree.]
> > 
> > QEMU requests hash table allocation through the KVM_PPC_ALLOCATE_HTAB ioctl,
> > passing the desired size as a hint via the htab_shift value. Sometimes the
> > host can't meet the hinted size requirement and returns a lower
> > value for htab_shift.
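> > 
> > For reference, the KVM interaction looks roughly like this (a sketch in
> > the spirit of QEMU's kvmppc_reset_htab(), not the exact code; error
> > handling trimmed):
> > 
> >     uint32_t shift = htab_shift;   /* hint, e.g. derived from maxram */
> >     int ret = kvm_vm_ioctl(kvm_state, KVM_PPC_ALLOCATE_HTAB, &shift);
> >     if (ret < 0) {
> >         return ret;                /* allocation failed outright */
> >     }
> >     if (shift < htab_shift) {
> >         /* the host granted less than we asked for -- the case this
> >          * patch is concerned with */
> >     }
> >     return shift;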
> > 
> > This was fine until recently, when the hash table size depended on the
> > guest RAM size. To support memory hotplug, the hash table size was recently
> > changed to depend on the maxram size instead. Since maxram size is typically
> > much higher than RAM size, the chance of the host failing to meet the size
> > requirement has increased (see the sizing sketch after the list below).
> > This causes two problems:
> > 
> > - When memory hotplug is supported, we will not be able to grow up to
> >   maxram if the host wasn't able to satisfy the hash table size for the
> >   full maxram range.
> 
> This is a recoverable condition: the hotplug request can be failed gracefully.
> 
> > - During migration, we can end up with different htab_shift values (and
> >   hence different hash table sizes) at the source and target, which causes
> >   the migration to fail.
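> > 
> > The hinted shift itself comes from a sizing loop roughly like the
> > following (a sketch of spapr's rule of about one hash table byte per
> > 128 bytes of memory; maxram_size stands for the machine's maxram value):
> > 
> >     shift = 18;                        /* minimum architected size */
> >     while (shift <= 46 &&
> >            (1ULL << (shift + 7)) < maxram_size) {
> >         shift++;                       /* HTAB ~ memory / 128 */
> >     }
> >     htab_shift = shift;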
> 
> One possible way to solve this is to change (reduce) the maxram_size
> based on the negotiated value of htab_shift and use the changed value
> of maxram_size at the target during migration. However, AFAIK there is
> currently no way to communicate the changed maxram_size back to libvirt,
> so this solution may not be feasible.
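> 
> For illustration only, the clamp would look something like this (a sketch
> that inverts the same 1/128 sizing ratio; negotiated_shift is a hypothetical
> name for the value KVM handed back):
> 
>     uint64_t capped_maxram = 1ULL << (negotiated_shift + 7);
>     if (capped_maxram < machine->maxram_size) {
>         machine->maxram_size = capped_maxram;  /* no way to tell libvirt */
>     }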
> 
> So the question is whether to allow the guest to boot with a reduced
> hash table size and fail migration (this is the current behaviour)
> 
> or
> 
> as done in this patch, prevent the VM from booting altogether.
> 
> I am leaning towards the former. Thoughts?
> 
> Regards,
> Bharata.



