Re: [PATCH][RESEND v3 3/3] Add a Hyper-V Dynamic Memory Protocol driver
From: Daniel P. Berrangé
Subject: Re: [PATCH][RESEND v3 3/3] Add a Hyper-V Dynamic Memory Protocol driver (hv-balloon)
Date: Tue, 28 Feb 2023 17:34:56 +0000
User-agent: Mutt/2.2.9 (2022-11-12)
On Fri, Feb 24, 2023 at 10:41:16PM +0100, Maciej S. Szmigiero wrote:
> Hot-adding additional memory is done by creating a new memory backend (for
> example by executing HMP command
> "object_add memory-backend-ram,id=mem1,size=4G"), then executing a new
> "hv-balloon-add-memory" QMP command, providing the id of that memory
> backend as the "id" parameter.
[snip]
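The hot-add flow quoted above can be made concrete as the pair of QMP commands the mgmt app would send. A minimal sketch (the `object-add` message is the QMP equivalent of the HMP example; `hv-balloon-add-memory` is the command proposed in this patch; the helper names are my own):

```python
# Sketch of the QMP messages for the hot-add flow described above.
# The helpers only build the command dictionaries; a real mgmt app
# would send them over the QMP socket.
import json


def object_add_memory_backend(backend_id: str, size_bytes: int) -> dict:
    # QMP form of the HMP "object_add memory-backend-ram,id=mem1,size=4G"
    return {
        "execute": "object-add",
        "arguments": {
            "qom-type": "memory-backend-ram",
            "id": backend_id,
            "size": size_bytes,
        },
    }


def hv_balloon_add_memory(backend_id: str) -> dict:
    # The new command from this patch: hand the backend to the
    # hv-balloon driver for guest hot-add.
    return {"execute": "hv-balloon-add-memory",
            "arguments": {"id": backend_id}}


# Hot-add a 4 GiB backend, then insert it via the DM protocol.
commands = [
    object_add_memory_backend("mem1", 4 * 1024 ** 3),
    hv_balloon_add_memory("mem1"),
]
for cmd in commands:
    print(json.dumps(cmd))
```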
> After a VM reboot each previously hot-added memory backend gets released.
> A "HV_BALLOON_MEMORY_BACKEND_UNUSED" QMP event is emitted in this case so
> the software controlling QEMU knows that it either needs to delete that
> memory backend (if no longer needed) or re-insert it.
IIUC you're saying that the 'hv-balloon-add-memory' command needs
to be re-run after a guest reset? If so, I feel that is a rather
undesirable job to punt over to the mgmt app. The 'reset' event can
be missed if the mgmt app happened to be restarting and reconnecting
to an existing running QMP console.
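To spell out what gets punted to the mgmt app: on every
HV_BALLOON_MEMORY_BACKEND_UNUSED event it has to decide between deleting
the backend and re-inserting it. A hypothetical sketch of that decision
(the event shape is from the QAPI schema later in this patch;
`object-del` is the standard QMP command; the policy set is invented):

```python
# Hypothetical sketch of a mgmt app reacting to the
# HV_BALLOON_MEMORY_BACKEND_UNUSED event after a guest reboot:
# per the commit message, each released backend must either be
# deleted or re-inserted by the software controlling QEMU.

def handle_backend_unused(event: dict, still_wanted: set) -> dict:
    """Return the follow-up QMP command for one UNUSED event.

    'still_wanted' holds the backend ids the app wants the guest to
    keep; anything else gets its backend object freed.
    """
    backend_id = event["data"]["id"]
    if backend_id in still_wanted:
        # Re-insert: re-run the hot-add command for this backend.
        return {"execute": "hv-balloon-add-memory",
                "arguments": {"id": backend_id}}
    # No longer needed: delete the backend object.
    return {"execute": "object-del", "arguments": {"id": backend_id}}


event = {"event": "HV_BALLOON_MEMORY_BACKEND_UNUSED",
         "data": {"id": "mb1"}}
print(handle_backend_unused(event, {"mb1"}))
```

The fragility Daniel points at is visible here: this only works if the
app actually sees the event, which it may not across its own restart.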
> In the future, the guest boot memory size might be changed on reboot
> instead, taking into account the effective size that VM had before that
> reboot (much like Hyper-V does).
Is that difficult to do right now? It isn't too nice to make the
mgmt apps implement the workaround now if we're going to make it
redundant later.
> The above design results in much better ballooning performance than when
> using virtio-balloon with the same guest: 230 GB / minute with this driver
> versus 70 GB / minute with virtio-balloon.
[snip]
> The unballoon operation is also pretty much instantaneous:
> thanks to the merging of the ballooned out page ranges 200 GB of memory can
> be returned to the guest in about 1 second.
> With virtio-balloon this operation takes about 2.5 minutes.
That's pretty impressive!
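The merging of ballooned-out page ranges mentioned above is presumably
what makes the unballoon path so cheap: adjacent ranges collapse into
one entry, so returning memory is one operation per merged range rather
than one per page. A toy illustration of the idea (a sorted list here;
the driver itself keeps the ranges in a GTree, per the configure note
below):

```python
# Sketch of range merging: inserting each ballooned-out page into a
# set of disjoint (start, count) ranges, coalescing any ranges that
# touch or overlap. Illustration only, not the driver's actual code.

def insert_merged(ranges: list, start: int, count: int) -> list:
    """Insert [start, start+count) into a sorted list of disjoint
    (start, count) ranges, merging ranges that touch or overlap."""
    end = start + count
    out = []
    for r_start, r_count in ranges:
        r_end = r_start + r_count
        if r_end < start or r_start > end:
            out.append((r_start, r_count))   # disjoint: keep as-is
        else:
            start = min(start, r_start)      # absorb into new range
            end = max(end, r_end)
    out.append((start, end - start))
    return sorted(out)


ranges = []
for page in range(1000):          # 1000 contiguous pages...
    ranges = insert_merged(ranges, page, 1)
print(ranges)                     # ...collapse to a single range
```

Unballooning then walks a handful of merged ranges instead of millions
of individual pages, which is consistent with the ~1 second figure.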
> These tests were done against a Windows Server 2019 guest running on a
> Xeon E5-2699, after dirtying the whole memory inside guest before each
> balloon operation.
> Since the required GTree operations aren't present in every Glib version
> a check for them was added to "configure" script, together with new
> "--enable-hv-balloon" and "--disable-hv-balloon" arguments.
> If these GTree operations are missing in the system's Glib version this
> driver will be skipped during QEMU build.
Funnily enough there's a patch posted recently that imports the glib
GTree impl into QEMU, calling it QTree. This was to work around a problem
with GSlice not being async-signal-safe, but if we take that patch then
you wouldn't need to skip the build; you could rely on this in-tree copy
instead.
https://lists.gnu.org/archive/html/qemu-devel/2023-02/msg01225.html
> diff --git a/qapi/machine.json b/qapi/machine.json
> index b9228a5e46..04ff95337a 100644
> --- a/qapi/machine.json
> +++ b/qapi/machine.json
> @@ -1104,6 +1104,74 @@
> { 'event': 'BALLOON_CHANGE',
> 'data': { 'actual': 'int' } }
>
> +##
> +# @hv-balloon-add-memory:
> +#
> +# Hot-add memory backend via Hyper-V Dynamic Memory Protocol.
> +#
> +# @id: the name of the memory backend object to hot-add
> +#
> +# Returns: Nothing on success
> +# Error if there's no guest connected with hot-add capability,
> +# @id is not a valid memory backend or it's already in use.
> +#
> +# Since: TBD
> +#
> +# Example:
> +#
> +# -> { "execute": "hv-balloon-add-memory", "arguments": { "id": "mb1" } }
> +# <- { "return": {} }
> +#
> +##
> +{ 'command': 'hv-balloon-add-memory', 'data': {'id': 'str'} }
> +
> +##
> +# @HV_BALLOON_STATUS_REPORT:
> +#
> +# Emitted when the hv-balloon driver receives a "STATUS" message from
> +# the guest.
> +#
> +# @commited: the amount of memory in use inside the guest plus the amount
> +# of the memory unusable inside the guest (ballooned out,
> +# offline, etc.)
> +#
> +# @available: the amount of the memory inside the guest available for new
> +# allocations ("free")
> +#
> +# Since: TBD
> +#
> +# Example:
> +#
> +# <- { "event": "HV_BALLOON_STATUS_REPORT",
> +# "data": { "commited": 816640000, "available": 3333054464 },
> +# "timestamp": { "seconds": 1600295492, "microseconds": 661044 } }
> +#
> +##
> +{ 'event': 'HV_BALLOON_STATUS_REPORT',
> + 'data': { 'commited': 'size', 'available': 'size' } }
> +
> +##
> +# @HV_BALLOON_MEMORY_BACKEND_UNUSED:
> +#
> +# Emitted when the hv-balloon driver marks a memory backend object
> +# unused so it can now be removed, if required.
> +#
> +# This can happen because the VM was restarted.
> +#
> +# @id: the memory backend object id
> +#
> +# Since: TBD
> +#
> +# Example:
> +#
> +# <- { "event": "HV_BALLOON_MEMORY_BACKEND_UNUSED",
> +# "data": { "id": "mb1" },
> +# "timestamp": { "seconds": 1600295492, "microseconds": 661044 } }
> +#
> +##
> +{ 'event': 'HV_BALLOON_MEMORY_BACKEND_UNUSED',
> + 'data': { 'id': 'str' } }
There is a reply from Igor about possibility of sharing code with
virtio-mem. I also wonder if there's any scope for sharing with
the virtio-balloon driver too, in terms of the QAPI schema.
I've not looked closely enough to say if it's possible or not, so
if not practical, no worries.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|