qemu-devel

Re: [Qemu-devel] directory hierarchy


From: Blue Swirl
Subject: Re: [Qemu-devel] directory hierarchy
Date: Sun, 23 Sep 2012 16:07:00 +0000

On Sun, Sep 23, 2012 at 8:25 AM, Avi Kivity <address@hidden> wrote:
> On 09/22/2012 04:15 PM, Blue Swirl wrote:
>> >
>> >> This could have nice cleanup effects, though, and would for example
>> >> let a generic 'info vmtree' discover VA->PA mappings for any target,
>> >> instead of the current per-target MMU table walkers.
>> >
>> > How?  That's in a hardware defined format that's completely invisible to
>> > the memory API.
>>
>> It's invisible now, but target-specific code could grab the mappings
>> and feed them to the memory API. The memory API would then see each
>> CPU's virtual memory as an address space that maps onto the physical
>> memory address space.
>>
>> For RAM-backed MMU tables, as on x86 and Sparc32, writes to the page
>> table memory would need to be tracked much like self-modifying code
>> (SMC). For TLBs held only inside the MMU, this would not be needed.
>>
>> Again, if this degraded performance, it would not be worthwhile. I'd
>> expect VA->PA mappings to change at least at the rate of context
>> switches plus page faults plus mmap/exec activity, so this could
>> amount to thousands of changes per second per CPU.
>>
>> In theory KVM could use the memory API as a CPU-type-agnostic way to
>> exchange this information. I'd expect the KVM exit rate to be nowhere
>> near as high, though, and in many cases the mapping information would
>> not need to be exchanged at all, so it would not improve performance
>> there either.
>>

Perhaps I was not very clear, but this was just theoretical.
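
To make the theoretical part a bit more concrete, the idea could be
sketched roughly as below. This is purely illustrative: the
cpu_virtual_* names do not exist in QEMU, the signatures follow the
present-day memory API, and the hard part (actually trapping writes to
RAM-backed page tables) is only hinted at in the comments.

/*
 * Hypothetical sketch only: target MMU code publishes each VA->PA
 * mapping it learns about (e.g. from its tlb_fill path) as an alias
 * into system memory, so that the per-CPU virtual address space
 * becomes visible to the memory API.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"

typedef struct CPUVirtualView {
    MemoryRegion root;   /* container spanning the CPU's virtual space */
    AddressSpace as;     /* what a generic 'info vmtree' would walk    */
} CPUVirtualView;

static void cpu_virtual_view_init(CPUVirtualView *v)
{
    memory_region_init(&v->root, NULL, "cpu-virtual", UINT64_MAX);
    address_space_init(&v->as, &v->root, "cpu-virtual");
}

/* Called by target code whenever the MMU installs a translation. */
static void cpu_virtual_view_map(CPUVirtualView *v, MemoryRegion *sysmem,
                                 uint64_t va, uint64_t pa, uint64_t size)
{
    MemoryRegion *page = g_new0(MemoryRegion, 1);

    memory_region_init_alias(page, NULL, "va-page", sysmem, pa, size);
    memory_region_add_subregion(&v->root, va, page);
}

/* Called on TLB flush, or when a write to a RAM-backed page table is
 * caught (much like self-modifying-code detection, not shown here). */
static void cpu_virtual_view_unmap(CPUVirtualView *v, MemoryRegion *page)
{
    memory_region_del_subregion(&v->root, page);
    object_unparent(OBJECT(page));
}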

>
> First, the memory API does not operate at that level.  It handles (guest
> physical) -> (host virtual | io callback) translations.  These are
> (guest virtual) -> (guest physical) translations.

I don't see why the memory API could not also be used for GVA->GPA
translation, if we ignore performance for the sake of discussion.
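
With a per-CPU view like the one sketched above, a target-independent
'info vmtree' could then resolve a guest virtual address with plain
memory API calls. Again just a hypothetical sketch: memory_region_find()
and memory_region_name() are real API, the rest continues the earlier
illustration.

/* Resolve one guest-virtual address against the per-CPU view. */
static void dump_va_mapping(CPUVirtualView *v, uint64_t va)
{
    MemoryRegionSection sec = memory_region_find(&v->root, va, 1);

    if (!sec.mr) {
        printf("0x%016" PRIx64 ": unmapped\n", va);
        return;
    }
    printf("0x%016" PRIx64 " -> %s + 0x%" PRIx64 "\n", va,
           memory_region_name(sec.mr),
           (uint64_t)sec.offset_within_region);
    memory_region_unref(sec.mr);
}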

> Second, the memory API is machine-wide and designed for coarse maps.
> Processor memory maps are per-cpu and page-grained.  (the memory API
> actually needs to efficiently support page-grained maps (for iommus) and
> per-cpu maps (smm), but that's another story).
>
> Third, we know from the pre-npt/ept days that tracking all mappings
> destroys performance.  It's much better to do this on demand.

Yes, performance reasons kill this idea. It would still be beautiful.

>
> --
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.
>


