From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH] Fix phys memory client - pass guest physical address not region offset
Date: Fri, 29 Apr 2011 17:45:47 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2011-04-29 17:38, Alex Williamson wrote:
> On Fri, 2011-04-29 at 17:29 +0200, Jan Kiszka wrote:
>> On 2011-04-29 17:06, Michael S. Tsirkin wrote:
>>> On Thu, Apr 28, 2011 at 09:15:23PM -0600, Alex Williamson wrote:
>>>> When we're trying to get a newly registered phys memory client updated
>>>> with the current page mappings, we end up passing the region offset
>>>> (a ram_addr_t) as the start address rather than the actual guest
>>>> physical memory address (target_phys_addr_t).  If your guest has less
>>>> than 3.5G of memory, these are coincidentally the same thing.  If
>>
>> I think this broke even with < 3.5G as phys_offset also encodes the
>> memory type while region_offset does not. So everything became RAM
>> this way, and no MMIO was announced.
>>
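
A minimal sketch of what "phys_offset also encodes the memory type"
means, assuming the exec.c conventions of this era (the constants and
the helper name here are illustrative, not copied from the source): the
low TARGET_PAGE_BITS bits of phys_offset select the I/O handler, with 0
meaning plain RAM, so passing region_offset instead loses that
distinction.

#include <stdint.h>

typedef uint64_t ram_addr_t;

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(((ram_addr_t)1 << TARGET_PAGE_BITS) - 1))

/* Illustrative type indices packed into the low bits. */
#define IO_MEM_RAM        0
#define IO_MEM_ROM        1
#define IO_MEM_UNASSIGNED 2

/* RAM iff no I/O handler index is packed into the low bits. */
static int phys_offset_is_ram(ram_addr_t phys_offset)
{
    return (phys_offset & ~TARGET_PAGE_MASK) == IO_MEM_RAM;
}
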
>>>> there's more, the region offset for the memory above 4G starts over
>>>> at 0, so the set_memory client will overwrite its lower memory entries.
>>>>
>>>> Instead, keep track of the guest physical address as we're walking the
>>>> tables and pass that to the set_memory client.
>>>>
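
A standalone illustration of the aliasing described above (made-up
names and a made-up 5G layout, not the actual patch): both
registrations carry region_offset 0, so the buggy walk reports the same
start address twice, while passing the guest physical address keeps the
two ranges distinct.

#include <inttypes.h>
#include <stdio.h>

typedef uint64_t target_phys_addr_t;
typedef uint64_t ram_addr_t;

struct region {
    target_phys_addr_t guest_phys; /* where the guest sees the RAM   */
    ram_addr_t region_offset;      /* restarts at 0 per registration */
    uint64_t size;
};

static void set_memory(const char *label, uint64_t start, uint64_t size)
{
    printf("%s: start=0x%010" PRIx64 " size=0x%" PRIx64 "\n",
           label, start, size);
}

int main(void)
{
    /* 5G guest: 3.5G below the PCI hole, 1.5G above 4G. */
    struct region r[] = {
        { 0x000000000ULL, 0, 0xe0000000ULL },
        { 0x100000000ULL, 0, 0x60000000ULL },
    };

    for (int i = 0; i < 2; i++) {
        /* buggy: both starts are 0, the >4G range aliases low memory */
        set_memory("buggy", r[i].region_offset, r[i].size);
        /* fixed: the guest physical address keeps the ranges distinct */
        set_memory("fixed", r[i].guest_phys, r[i].size);
    }
    return 0;
}
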
>>>> Signed-off-by: Alex Williamson <address@hidden>
>>>
>>> Acked-by: Michael S. Tsirkin <address@hidden>
>>>
>>> Given all this, can you tell how much time it takes
>>> to hotplug a device with, say, a 40G RAM guest?
>>
>> Why not collect pages of identical types and report them as one chunk
>> once the type changes?
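
A rough sketch of that coalescing loop (standalone C with made-up
names; the real walk recurses through the l1/l2 page tables rather
than a flat array): remember where the current run of identically-typed
pages began and only call the client when the type changes, so
contiguous RAM is reported as one chunk instead of one callback per
page.

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

typedef void (*set_memory_fn)(uint64_t start, uint64_t size, int type);

static void walk_coalesced(const int *page_type, size_t npages,
                           set_memory_fn cb)
{
    size_t run_start = 0;

    for (size_t i = 1; i <= npages; i++) {
        /* Flush the run at a type change or at the end of the table. */
        if (i == npages || page_type[i] != page_type[run_start]) {
            cb((uint64_t)run_start * PAGE_SIZE,
               (uint64_t)(i - run_start) * PAGE_SIZE,
               page_type[run_start]);
            run_start = i;
        }
    }
}

static void print_chunk(uint64_t start, uint64_t size, int type)
{
    printf("start=0x%" PRIx64 " size=0x%" PRIx64 " type=%d\n",
           start, size, type);
}

int main(void)
{
    int types[] = { 0, 0, 0, 1, 1, 0 }; /* RAM, RAM, RAM, MMIO, MMIO, RAM */
    walk_coalesced(types, 6, print_chunk); /* three chunks, not six calls */
    return 0;
}
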
> 
> Good idea, I'll see if I can code that up.  I don't have a terribly
> large system to test with, but with an 8G guest, it's surprisingly not
> very noticeable.  For vfio, I intend to only have one memory client, so
> adding additional devices won't have to rescan everything.  The memory
> overhead of keeping the list that the memory client creates is probably
> also low enough that it isn't worthwhile to tear it all down if all the
> devices are removed.  Thanks,

What other clients register late? Do they need to know the whole memory
layout?

This full page table walk is likely a latency killer as it happens under
the global lock. Ugly.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux


