
Re: [Qemu-devel] Ballooning on TPS!=HPS hosts


From: Amit Shah
Subject: Re: [Qemu-devel] Ballooning on TPS!=HPS hosts
Date: Fri, 1 Apr 2016 16:22:34 +0530

CC'ing virtualization list.

On (Thu) 31 Mar 2016 [19:00:24], Dr. David Alan Gilbert wrote:
> Hi,
>   I was reading the balloon code and am confused as to how/if ballooning
> works on hosts where the host page size is larger than the
> target page size.
> 
> static void balloon_page(void *addr, int deflate)
> {
> #if defined(__linux__)
>     if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
>                                          kvm_has_sync_mmu())) {
>         qemu_madvise(addr, TARGET_PAGE_SIZE,
>                 deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
>     }
> #endif
> }
> 
> The virtio-balloon code only touches guest memory through balloon_page,
> and an madvise DONTNEED should fail if you try to do it on
> a size smaller than the host page size.  So does ballooning work on
> Power/ARM?
> 
> Am I misunderstanding this?

I think you're right.  I guess no one has tested this scenario yet.
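
For context: madvise(2) requires addr to be a multiple of the system page
size, so on a host running with 64 KiB pages (as some Power/ARM kernels do)
a DONTNEED on a 4 KiB target page will usually hit an unaligned address and
fail with EINVAL.  A minimal standalone sketch of that failure mode (not
QEMU code, just an illustration; the mapping size and the 4 KiB offset are
arbitrary):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long psize = sysconf(_SC_PAGESIZE);      /* host page size */
        size_t len = 16 * psize;
        void *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) {
            return 1;
        }

        /* Mimic balloon_page(): advise a 4 KiB chunk at a 4 KiB offset,
         * regardless of the host page size. */
        void *addr = (char *)base + 4096;
        if (madvise(addr, 4096, MADV_DONTNEED) < 0) {
            /* Expected when psize > 4096: addr is not page-aligned. */
            printf("madvise failed: %s (host page size %ld)\n",
                   strerror(errno), psize);
        } else {
            printf("madvise succeeded (host page size %ld)\n", psize);
        }
        munmap(base, len);
        return 0;
    }

On a 4 KiB-page x86 host this succeeds; the interesting case is a host where
sysconf(_SC_PAGESIZE) returns something larger.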

> Of course looking at the above we won't actually generate an error since
> we don't check the return of qemu_madvise.

... at least we can deflate the balloon if the madvise fails, so
the guest can reuse the pages it has given us.
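
For illustration, a rough sketch of what checking the return could look
like (error_report() and the wording here are just illustrative; whether to
report, retry, or force a deflate on failure is a separate question):

    static void balloon_page(void *addr, int deflate)
    {
    #if defined(__linux__)
        if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
                                             kvm_has_sync_mmu())) {
            int ret = qemu_madvise(addr, TARGET_PAGE_SIZE,
                    deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
            if (ret) {
                /* e.g. EINVAL when addr isn't aligned to the host page size */
                error_report("virtio-balloon: madvise failed: %s",
                             strerror(errno));
            }
        }
    #endif
    }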

> We have three sizes:
>     a) host page size
>     b) target page size
>     c) VIRTIO_BALLOON_PFN_SHIFT
> 
>  c == 12 (4k) for everyone
>  
> 
>     1) I think the virtio-balloon code needs to coalesce adjacent requests
>        and call balloon_page on whole chunks at once, passing a length
>        (sketched below).
>     2) why does balloon_page use TARGET_PAGE_SIZE? Ignoring anything else,
>        shouldn't it be 1 << VIRTIO_BALLOON_PFN_SHIFT?
>     3) I'm guessing the guest kernel doesn't know the host page size, so
>        how can it know what chunk size the balloon should work in?
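
On point 1, a rough sketch of what coalescing could look like: accumulate
runs of adjacent 4 KiB pages (1 << VIRTIO_BALLOON_PFN_SHIFT granularity, per
point 2) as they come off the virtqueue, and issue one madvise per run,
flushing the final run once the queue is drained.  The BalloonRun type and
the helper names are hypothetical, not from the QEMU tree:

    /* Hypothetical run-tracking state; not actual QEMU code. */
    typedef struct BalloonRun {
        void *start;    /* host address of the first page in the run */
        size_t len;     /* bytes accumulated so far */
    } BalloonRun;

    #define BALLOON_PAGE_SIZE (1 << VIRTIO_BALLOON_PFN_SHIFT)

    /* Flush a pending run with a single madvise covering the whole chunk. */
    static void balloon_flush_run(BalloonRun *run, int deflate)
    {
        if (run->len) {
            qemu_madvise(run->start, run->len,
                         deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
            run->len = 0;
        }
    }

    /* Called for each PFN popped from the virtqueue. */
    static void balloon_queue_page(BalloonRun *run, void *addr, int deflate)
    {
        if (run->len && (char *)run->start + run->len == (char *)addr) {
            run->len += BALLOON_PAGE_SIZE;      /* adjacent: extend the run */
        } else {
            balloon_flush_run(run, deflate);    /* gap: flush, start anew */
            run->start = addr;
            run->len = BALLOON_PAGE_SIZE;
        }
    }

That only turns into host-page-sized madvise calls when the guest happens to
balloon contiguous ranges, though, which is really your point 3.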

Thanks,

                Amit


