qemu-devel

Re: [PATCH 0/3] Hyper-V Dynamic Memory Protocol driver (hv-balloon)


From: David Hildenbrand
Subject: Re: [PATCH 0/3] Hyper-V Dynamic Memory Protocol driver (hv-balloon)
Date: Tue, 22 Sep 2020 09:26:33 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.11.0

On 22.09.20 00:22, Maciej S. Szmigiero wrote:
> Hi David,
> 
> Thank you for your comments.
> 
> First, I want to underline that this driver targets Windows guests,
> where the ability to modify and adapt the guest memory management
> code is extremely limited.

Yeah, I know the pain.

[...]

> 
> The idea is to use virtual DIMM sticks for hot-adding extra memory at
> runtime, while using ballooning for runtime adjustment of the guest
> memory size within the current maximum.
> 
> When the guest is rebooted the virtual DIMMs configuration is adjusted
> by the software controlling QEMU (some are removed and / or some are
> added) to give the guest the same effective memory size as it had before
> the reboot.

Okay, so while "the ACPI DIMM slot limit does not apply", the KVM memory
slot limit (currently) applies, resulting in exactly the same behavior.

The only (conceptual) difference I am able to spot is the notification
to the user on reboot, so that the guest memory layout can be adjusted
(which I consider very ugly, but it's the same when mixing ballooning
and DIMMs, which is why that's usually never done).
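As an aside, the reboot-time reconciliation described in the quoted scheme is easy to sketch. The helper name, DIMM granularity, and sizes below are invented for illustration and are not part of any QEMU interface:

```python
# Sketch: at reboot, the software controlling QEMU recomputes the
# virtual-DIMM set so that boot size + hot-plugged memory equals the
# effective size the guest had before the reboot.
def dimms_for_effective_size(boot_mb, effective_mb, dimm_mb=1024):
    """Number of virtual DIMMs to plug after a reboot (hypothetical helper)."""
    extra = max(0, effective_mb - boot_mb)
    # Round up so the guest never comes back smaller than it was.
    return -(-extra // dimm_mb)

# Guest booted with 4 GiB and was grown to 10.5 GiB before rebooting:
assert dimms_for_effective_size(4096, 10752) == 7
# Effective size at or below boot size: no DIMMs are plugged at all.
assert dimms_for_effective_size(10240, 5120) == 0
```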

[...]

> 
> So, yes, it will be a problem if the user expands their running guest
> ~256 times, each time making it even bigger than previously, without
> rebooting it even once, but this does seem to be an edge use case.

IIRC, that's exactly what dynamic memory under Windows does in automatic
mode, no? Monitor the guests, distribute memory accordingly - usually in
smaller steps. But I am no expert on Hyper-V.

> 
> In the future it would be better to automatically turn the current
> effective guest size into its boot memory size when the VM restarts
> (the VM will then have no virtual DIMMs inserted after a reboot), but
> doing this requires quite a few changes to QEMU, which is why it isn't
> there yet.

That will most probably never happen, as reshuffling the layout of your
boot memory (especially with NUMA) within QEMU can break live migration
in various ways.

If you already notify the user on a reboot, the user can just kill the
VM and start it with an adjusted boot memory size. Yeah, that's ugly,
but so is the whole "adjust DIMM/balloon configuration during a reboot
from outside QEMU".

BTW, how would you handle: Start guest with 10G. Inflate balloon to 5G.
Reboot. There are no virtual DIMMs to adjust.
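Presumably the only knob left in that scenario is the balloon itself: the controlling software would have to let the guest come back up with its full 10G and then inflate again, e.g. via QEMU's QMP `balloon` command (which takes the target guest size in bytes). A minimal sketch of building that request, with the actual QMP socket transport left out:

```python
import json

def qmp_balloon_cmd(target_mb):
    """Build the QMP 'balloon' command for a target guest size in MiB.

    QMP expects the value in bytes; actually sending this over the QMP
    socket is outside the scope of the sketch."""
    return json.dumps({"execute": "balloon",
                       "arguments": {"value": target_mb * 1024 * 1024}})

# Shrink the freshly rebooted 10 GiB guest back to 5 GiB:
assert json.loads(qmp_balloon_cmd(5120))["arguments"]["value"] == 5 * 1024**3
```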

> 
> The above is basically how Hyper-V hypervisor handles its memory size
> changes and it seems to be as close to having a transparently resizable
> guest as reasonably possible.

"having a transparently resizable _Windows_ guest right now" :)

> 
> 
>> Or avoid VMA limits when wanting to grow a VM big in very tiny steps over
>> time (e.g., adding 64MB at a time).
> 
> Not sure if you are talking about VMA limits inside the host or the guest.

Host. One virtual DIMM corresponds to one VMA. But the KVM memory slot
limit already applies before that, so it doesn't matter.
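Back-of-the-envelope, using the ~256-slot figure from the parent mail (the real number depends on the kernel's KVM memory slot limit and on how many slots other devices already consume):

```python
# One virtual DIMM = one KVM memory slot = one host VMA. Growing a guest
# in tiny steps therefore burns through the slot budget quickly. The
# 256-slot budget is taken from the parent mail, not a specific kernel.
def max_growth_mb(slot_budget=256, step_mb=64):
    return slot_budget * step_mb

assert max_growth_mb() == 16384  # just 16 GiB of headroom in 64 MiB steps
```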

[...]

>> I assume these numbers apply with Windows guests only. IIRC Linux
>> hv_balloon does not support page migration/compaction, while
>> virtio-balloon does. So you might end up with quite some fragmented
>> memory with hv_balloon in Linux guests - of course, usually only in
>> corner cases.
> 
> As I previously mentioned, this driver targets mainly Windows guests.

... and you cannot enforce that people will only use it with Windows
guests :)

[...]

> Windows will generally leave some memory free when processing balloon
> requests, although the precise amount varies from a few hundred MB to
> 1+ GB.
> 
> Usually it runs stable even with these few hundred MBs of free memory
> remaining, but I have seen occasional crashes at shutdown time in this
> case (probably something critical failing to initialize due to the
> system running out of memory).
> 
> While the above command was just a quick example, I personally think
> it is the guest that should be enforcing a balloon floor, since it is
> the guest that knows its internal memory requirements, not the host.

Even the guest has no idea about the (future) working set size. That's a
known problem.

There are always cases where the calculation is wrong, and if the
monitoring process isn't fast enough to react and adjust the guest size,
things will end up badly in your guest. Just like the reboot case you
mentioned, where the VM crashes.

[...]

>>>
>>> Future directions:
>>> * Allow sharing the ballooning QEMU interface between hv-balloon and
>>>   virtio-balloon drivers.
>>>   Currently, only one of them can be added to the VM at the same time.
>>
>> Yeah, that makes sense. Only one at a time.
> 
> Having only one *active* at a time makes sense; however, it ultimately
> would be nice to be able to have them both inserted into a VM:
> one for Windows guests and one for Linux ones.
> Even though only one would obviously be active at the same time.

I don't think that's the right way forward - that should be configured
when the VM is started.

Personal opinion: I can understand the motivation to implement
hypervisor-specific devices to better support closed-source operating
systems. But I doubt we want to introduce+support ten different
proprietary devices based on proprietary standards doing roughly the
same thing just because closed-source operating systems are too lazy to
support open standards properly.

-- 
Thanks,

David / dhildenb



