From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH V4 00/10] Add support for binding guest numa nodes to host numa nodes
Date: Thu, 04 Jul 2013 21:49:51 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 04/07/2013 11:53, Wanlong Gao wrote:
> As you know, QEMU can't direct its memory allocation at the moment, which
> may cause cross-node access performance regressions in the guest.
> Worse, if PCI passthrough is used, the directly attached device does DMA
> transfers between the device and the QEMU process, and all of the guest's
> pages are pinned by get_user_pages():
> 
> KVM_ASSIGN_PCI_DEVICE ioctl
>   kvm_vm_ioctl_assign_device()
>     =>kvm_assign_device()
>       => kvm_iommu_map_memslots()
>         => kvm_iommu_map_pages()
>            => kvm_pin_pages()
> 
> So, with a directly attached device, every guest page's reference count
> is raised by one, so page migration cannot work, and neither can AutoNUMA.
> 
> So, we should set the guest nodes' memory allocation policy before the
> pages are actually mapped (a sketch of the underlying mbind(2) call
> follows below).
> 
> With this patch set, we are able to set the memory policy of guest
> nodes like the following:
> 
>  -numa node,nodeid=0,mem=1024,cpus=0,mem-policy=membind,mem-hostnode=0-1
>  -numa node,nodeid=1,mem=1024,cpus=1,mem-policy=interleave,mem-hostnode=1
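
As background to the quoted point about ordering: both the old and the
proposed syntax ultimately translate into the kernel's mbind(2) call, which
has to run before the guest RAM pages are faulted in or pinned. Below is a
minimal standalone sketch, assuming an illustrative 1 GiB region bound to
host nodes 0-1; this is not QEMU's actual allocation path:

    #include <numaif.h>       /* mbind(), MPOL_BIND -- link with -lnuma */
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t ram_size = 1UL << 30;              /* illustrative: 1 GiB */

        /* Reserve the guest RAM block without touching any page yet. */
        void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Apply the policy while the region is still unpopulated: once
         * get_user_pages() has pinned the pages for DMA, they can no
         * longer migrate to the nodes chosen here. */
        unsigned long nodemask = (1UL << 0) | (1UL << 1); /* hostnode=0-1 */
        if (mbind(ram, ram_size, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) < 0) {
            perror("mbind");
            return 1;
        }

        /* Only now is it safe to touch, and thereby place, the memory. */
        ((volatile char *)ram)[0] = 0;
        return 0;
    }

MPOL_INTERLEAVE with the appropriate node mask would correspond to the
interleave policy in the same way.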

Did you see my suggestion to use something like this instead:

    -numa node,nodeid=0,cpus=0 -numa node,nodeid=1,cpus=1 \
    -numa mem,nodeid=0,size=1G,policy=membind,hostnode=0-1 \
    -numa mem,nodeid=1,size=2G,policy=interleave,hostnode=1

With an eye to when we'll support memory hotplug, I think this is better.
It is not hard to implement using the OptsVisitor; see commit
14aa0c2de045a6c2fcfadf38c04434fd15909455 for an example of a complex
schema described with the OptsVisitor.
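
Very roughly, the parsing side could look like the sketch below.
NumaMemOptions and visit_type_NumaMemOptions() are hypothetical names for
the QAPI type such a patch would define (the visitor function would be
generated from the schema); opts_visitor_new(), opts_get_visitor() and
opts_visitor_cleanup() are the OptsVisitor entry points of this era, though
the visitor API has changed in later QEMU versions:

    /* Hypothetical sketch, not the actual patch: parse one
     * "-numa mem,..." option group through the OptsVisitor. */
    #include "qemu/option.h"
    #include "qapi/opts-visitor.h"
    #include "qapi-visit.h"   /* would declare visit_type_NumaMemOptions() */

    static NumaMemOptions *numa_mem_parse(QemuOpts *opts, Error **errp)
    {
        OptsVisitor *ov = opts_visitor_new(opts);
        NumaMemOptions *mem = NULL;

        /* Fills in mem->nodeid, mem->size, mem->policy and
         * mem->hostnode from the comma-separated option string,
         * type-checked against the QAPI schema. */
        visit_type_NumaMemOptions(opts_get_visitor(ov), &mem, NULL, errp);
        opts_visitor_cleanup(ov);
        return mem;
    }

The caller would then apply mem->policy and mem->hostnode with mbind(),
along the lines of the earlier sketch.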

Paolo


