On Fri, Mar 01, 2019 at 06:33:28PM +0100, Igor Mammedov wrote:
On Fri, 1 Mar 2019 15:49:47 +0000
Daniel P. Berrangé <address@hidden> wrote:
On Fri, Mar 01, 2019 at 04:42:15PM +0100, Igor Mammedov wrote:
The parameter allows configuring a fake NUMA topology, where the guest
VM simulates NUMA but does not actually get any performance benefit
from it. The same or better results can be achieved using the 'memdev'
parameter. In light of that, any VM that uses NUMA for its benefits
should use 'memdev'. To allow transitioning initial RAM to the
device-based model, deprecate the 'mem' parameter, as its ad-hoc
partitioning of the initial RAM MemoryRegion can't be translated to a
memdev-based backend transparently to users and in a
migration-compatible manner.
That will also allow us to clean up the NUMA code a bit, leaving only
the 'memdev' implementation in place, plus the several boards that use
node_mem to generate an FDT/ACPI description from it.
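As a concrete illustration of the two syntaxes being discussed (a
sketch only; the sizes and backend ids are made up), the same two-node
guest can be started either way:

```shell
# Legacy syntax: ad-hoc partitioning of the initial RAM region.
qemu-system-x86_64 -m 4G \
    -numa node,nodeid=0,mem=2G \
    -numa node,nodeid=1,mem=2G

# memdev syntax: one explicit memory backend object per NUMA node.
qemu-system-x86_64 -m 4G \
    -object memory-backend-ram,id=ram0,size=2G \
    -object memory-backend-ram,id=ram1,size=2G \
    -numa node,nodeid=0,memdev=ram0 \
    -numa node,nodeid=1,memdev=ram1
```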
Can you confirm that the 'mem' and 'memdev' parameters to -numa
are 100% live migration compatible in both directions? Libvirt
would need this to be the case in order to use the 'memdev' syntax
instead.
Unfortunately they are not migration compatible in either direction;
if it were possible to translate one into the other, I'd have aliased
'mem' to 'memdev' without any deprecation. The former sends only one
MemoryRegion to the target, while the latter sends several (one per
memdev).
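That difference can be seen from the monitor: the HMP command
'info ramblock' lists the RAM blocks the migration stream is built
from. With 'mem=' the initial RAM shows up as a single block, while
with per-node memdevs each backend appears as its own block. A rough
way to compare (an illustrative sketch; flags and sizes are examples,
not taken from this thread):

```shell
# Start a paused throwaway guest with the legacy syntax,
# dump its RAM block list, then quit.
printf 'info ramblock\nquit\n' | qemu-system-x86_64 -m 4G \
    -numa node,nodeid=0,mem=2G -numa node,nodeid=1,mem=2G \
    -S -display none -monitor stdio

# Repeat with the memdev syntax and compare the block lists.
printf 'info ramblock\nquit\n' | qemu-system-x86_64 -m 4G \
    -object memory-backend-ram,id=ram0,size=2G \
    -object memory-backend-ram,id=ram1,size=2G \
    -numa node,nodeid=0,memdev=ram0 \
    -numa node,nodeid=1,memdev=ram1 \
    -S -display none -monitor stdio
```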
If we can't migrate from one to the other, then we cannot deprecate
the existing 'mem' syntax. Even if libvirt were to provide a config
option to let apps opt in to the new syntax, we need to be able to
support live migration of existing running VMs indefinitely. Effectively
this means we need to keep 'mem' support forever, or at least for such
a long time that it effectively means forever.