Re: [Qemu-devel] [PATCH 2/2] Add monitor command mem-nodes


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 2/2] Add monitor command mem-nodes
Date: Thu, 13 Jun 2013 20:05:34 -0500
User-agent: Notmuch/0.15.2+77~g661dcf8 (http://notmuchmail.org) Emacs/23.3.1 (x86_64-pc-linux-gnu)

Paolo Bonzini <address@hidden> writes:

> On 13/06/2013 08:50, Eduardo Habkost wrote:
>> I believe an interface based on guest physical memory addresses is more
>> flexible (and even simpler!) than one that only allows binding of whole
>> virtual NUMA nodes.
>
> And "-numa node" is already one, what about just adding "mem-path=/foo"
> or "host_node=NN" suboptions?  Then "-mem-path /foo" would be a shortcut
> for "-numa node,mem-path=/foo".
>
> I even had patches to convert -numa to QemuOpts; I can dig them out if
> you're interested.

Ack.  This is a very reasonable thing to add.
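
For illustration only, the combined syntax might look something like this
(mem-path= and host_node= are just the proposed suboption names, not
existing options, and the node layout is made up):

    -numa node,nodeid=0,cpus=0-1,mem=2048,mem-path=/hugetlbfs/node0
    -numa node,nodeid=1,cpus=2-3,mem=2048,host_node=1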

Regards,

Anthony Liguori

>
> Paolo
>
>> (And I still don't understand why you are exposing QEMU virtual memory
>> addresses in the new command, if they are useless).
>> 
>> 
>>>>
>>>>
>>>>>>  * The correspondence between guest physical address ranges and ranges
>>>>>>    inside the mapped files (so external tools could set the policy on
>>>>>>    those files instead of requiring QEMU to set it directly)
>>>>>>
>>>>>> I understand that your use case may require additional information and
>>>>>> additional interfaces. But if we provide the information above, we will
>>>>>> allow external components to set the policy on the hugetlbfs files before
>>>>>> we add the new interfaces required for your use case.
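
For example, assuming numactl's tmpfs/hugetlbfs file support, an external
tool could bind such a file before QEMU maps it, along these lines (the
path, length, and node number are made up):

    numactl --membind=1 --file /hugetlbfs/guest-ram --length=2G --touch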
>>>>>
>>>>> But file-backed memory is not good for a host that runs many
>>>>> virtual machines; in that situation, we can't use anonymous THP yet.
>>>>
>>>> I don't understand what you mean here. What prevents someone from using
>>>> file-backed memory with multiple virtual machines?
>>>
>>> Whereas if we use hugetlbfs-backed memory, we have to know how many
>>> virtual machines there are and how much memory each VM will use, and
>>> then reserve those pages for them. We even have to reserve extra pages
>>> for external tools (numactl) to set memory policies on, and the
>>> reservation itself has its own memory policies. It's very hard to
>>> control all of this to get the placement we want.
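
To make the reservation issue concrete: with hugetlbfs the pages have to
be reserved up front, either globally or per host node, e.g. (the counts
are made up):

    echo 4096 > /proc/sys/vm/nr_hugepages
    echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages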
>> 
>> Well, it's hard because we don't even have the tools to help with that yet.
>> 
>> Anyway, I understand that you want to make it work with THP as well. But
>> if THP works with tmpfs (does it?), people could then use exactly the
>> same file-based mechanisms with tmpfs and keep THP working.
>> 
>> (Right now I am doing some experiments to understand how the system
>> behaves when using numactl on hugetlbfs and tmpfs, before and after the
>> files are mapped.)
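
One way to observe the resulting placement, for what it's worth, is the
QEMU process's numa_maps once the files are mapped (the pid here is
hypothetical):

    grep -i huge /proc/12345/numa_maps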
>> 
>> 
>>>>
>>>>>
>>>>> And as I mentioned, the cross-NUMA-node access performance regression
>>>>> is caused by PCI passthrough. It's a long-standing bug; we should
>>>>> backport the host memory pinning patch to older QEMU versions to
>>>>> resolve this performance problem, too.
>>>>
>>>> If it's a regression, what's the last version of QEMU where the bug
>>>> wasn't present?
>>>>
>>>
>>> As QEMU doesn't support host memory binding, I think the problem has
>>> been present since we started supporting guest NUMA, and PCI
>>> passthrough made it even worse.
>> 
>> If the problem was always present, it is not a regression, is it?
>> 



