qemu-devel

Re: [Qemu-devel] [RFC] Memory API


From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] Memory API
Date: Wed, 18 May 2011 19:14:13 +0300
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.1.10-1.fc14 Lightning/1.0b3pre Thunderbird/3.1.10

On 05/18/2011 07:00 PM, Jan Kiszka wrote:
On 2011-05-18 17:42, Avi Kivity wrote:
>  On 05/18/2011 06:36 PM, Jan Kiszka wrote:
>>>
>>>   We need to head for the more hardware-like approach.  What happens when
>>>   you program overlapping BARs?  I imagine the result is
>>>   implementation-defined, but ends up with one region decoded in
>>>   preference to the other.  There is simply no way to reject an
>>>   overlapping mapping.
>>
>>  But there is also no simple way to allow them. At least not without
>>  exposing control over their ordering AND allowing managing code (e.g.
>>  of the PCI bridge or the chipset) that controls registrations to hook in.
>
>  What about memory_region_add_subregion(..., int priority) as I suggested
>  in another message?

That's fine, but it also requires a change in how, or rather where,
devices register their regions.

Lost you - please elaborate.

>
>  Regarding bridges, every registration request flows through them so they
>  already have full control.

Not everything is PCI; we also have ISA, for example. If we were able to
route such requests through a hierarchy of abstract bridges as well, even
better.

Yes, it's a tree of nested MemoryRegions.
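The priority-based nesting discussed above might be sketched roughly as follows. This is an illustrative toy, not the eventual QEMU implementation: the struct layout, the fixed-size subregion array, and the decode() helper are all assumptions; only the memory_region_add_subregion(..., priority) signature comes from the thread.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy MemoryRegion: a node in the tree of nested regions.  A subregion
 * registered with a higher priority is decoded in preference where two
 * subregions overlap. */
typedef struct MemoryRegion MemoryRegion;

struct MemoryRegion {
    uint64_t addr;                  /* offset within the parent region */
    uint64_t size;
    int priority;                   /* higher value wins on overlap    */
    MemoryRegion *subregions[16];   /* fixed size for the sketch only  */
    size_t nsub;
};

/* Insert keeping subregions sorted by descending priority, so a linear
 * decode walk naturally picks the winner for an overlapping address. */
static void memory_region_add_subregion(MemoryRegion *parent,
                                        uint64_t addr,
                                        MemoryRegion *sub,
                                        int priority)
{
    sub->addr = addr;
    sub->priority = priority;
    size_t i = parent->nsub++;
    while (i > 0 && parent->subregions[i - 1]->priority < priority) {
        parent->subregions[i] = parent->subregions[i - 1];
        i--;
    }
    parent->subregions[i] = sub;
}

/* Decode one level: the first match in priority order wins. */
static MemoryRegion *decode(MemoryRegion *parent, uint64_t addr)
{
    for (size_t i = 0; i < parent->nsub; i++) {
        MemoryRegion *s = parent->subregions[i];
        if (addr >= s->addr && addr < s->addr + s->size) {
            return s;
        }
    }
    return NULL;
}
```

With this ordering, overlapping BARs need never be rejected: the registration always succeeds, and the priority merely determines which region the decode prefers, mirroring the "hardware-like" behavior argued for above.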

>  We'll definitely have a flattened view (phys_desc is such a flattened
>  view, hopefully we'll have a better one).

phys_desc is not exportable. If we try (and we do from time to time...),
we end up with more slots than clients like kvm will ever be able to handle.

If we coalesce adjacent phys_descs, we end up with a minimal representation. Of course, that's not the most efficient implementation (a tree walk is better).
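The coalescing step might look like the following sketch. The PhysRange type and the integer "backing" id are stand-ins invented for illustration; a real phys_desc entry carries more state.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for a phys_desc-style entry: a contiguous guest-physical
 * range mapped to some backing (represented here by an opaque id). */
typedef struct {
    uint64_t start;
    uint64_t size;
    int backing;
} PhysRange;

/* In-place coalesce of an array sorted by start address: merge each
 * entry into its predecessor when they are adjacent and share the same
 * backing.  Returns the new length of the minimal representation. */
static size_t coalesce(PhysRange *r, size_t n)
{
    if (n == 0) {
        return 0;
    }
    size_t out = 0;
    for (size_t i = 1; i < n; i++) {
        if (r[i].backing == r[out].backing &&
            r[i].start == r[out].start + r[out].size) {
            r[out].size += r[i].size;   /* merge into previous range */
        } else {
            r[++out] = r[i];            /* start a new range         */
        }
    }
    return out + 1;
}
```

Two adjacent pages with the same backing collapse into one range, which is why the coalesced list stays small enough for clients like kvm even when phys_desc itself is page-granular.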

>
>  We can basically run a tree walk on each change, emitting ranges in
>  order and sending them to PhysMemClients.

I'm specifically thinking of fully trackable slot updates. The clients
should not have to maintain the flat layout. They should just receive
updates in the form of slot X added/modified/removed. Right now, this
magic is duplicated across the clients, and that is very bad.

Slots don't have any meaning. You can have a RAM region which is overlaid by a smaller mmio region -> the RAM slot is split into two.

We should just send clients a list of ranges, and they can associate them with slots themselves.
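The slot split described above can be made concrete with a toy flattening walk: one RAM region overlaid by a smaller, higher-priority MMIO region comes out as three ordered ranges, with the RAM portion split in two. The function and types are illustrative assumptions; a real walk would recurse over the whole region tree.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of the flat view handed to a PhysMemClient-style consumer. */
typedef struct {
    uint64_t start;
    uint64_t size;
    const char *what;   /* "ram" or "mmio" in this sketch */
} Range;

/* Flatten a single RAM region with one overlapping MMIO overlay into
 * ordered, non-overlapping ranges.  Assumes the overlay lies strictly
 * inside the RAM region. */
static size_t flatten(uint64_t ram_start, uint64_t ram_size,
                      uint64_t mmio_start, uint64_t mmio_size,
                      Range out[3])
{
    size_t n = 0;
    if (mmio_start > ram_start) {
        out[n++] = (Range){ ram_start, mmio_start - ram_start, "ram" };
    }
    out[n++] = (Range){ mmio_start, mmio_size, "mmio" };

    uint64_t mmio_end = mmio_start + mmio_size;
    uint64_t ram_end  = ram_start + ram_size;
    if (ram_end > mmio_end) {
        out[n++] = (Range){ mmio_end, ram_end - mmio_end, "ram" };
    }
    return n;
}
```

A client receiving this list is free to map the two RAM fragments onto one slot or two; the point is that the flat range list, not the slot numbering, is the stable interface.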

Given that not only memory clients need that view, but that every TLB miss
(in TCG mode) requires identifying the effective slot as well, it might
be worth preparing a runtime structure at registration time that
supports this efficiently - but this time without wasting memory.

Yes. Won't be easy though. Perhaps a perfect hash table for small regions and a sorted-by-size array for large regions.
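That two-tier idea might look something like the sketch below: page-sized regions go into an open-addressing hash keyed by page number, larger regions into an array kept sorted by descending size so big, frequently hit regions are checked first. Every structure and name here is an assumption for illustration, not QEMU code.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define HASH_SIZE  64   /* toy size; a real table would be sized to load */

typedef struct {
    uint64_t start;
    uint64_t size;      /* size == 0 marks an empty hash slot */
    int id;
} Region;

typedef struct {
    Region small[HASH_SIZE];  /* one-page regions, open addressing      */
    Region large[16];         /* multi-page regions, sorted by size     */
    size_t nlarge;
} MemMap;

static void memmap_add(MemMap *m, Region r)
{
    if (r.size <= (1u << PAGE_SHIFT)) {
        size_t h = (r.start >> PAGE_SHIFT) % HASH_SIZE;
        while (m->small[h].size) {        /* linear probing */
            h = (h + 1) % HASH_SIZE;
        }
        m->small[h] = r;
    } else {
        /* insert keeping the array sorted by descending size */
        size_t i = m->nlarge++;
        while (i > 0 && m->large[i - 1].size < r.size) {
            m->large[i] = m->large[i - 1];
            i--;
        }
        m->large[i] = r;
    }
}

static int memmap_lookup(const MemMap *m, uint64_t addr)
{
    size_t h = (addr >> PAGE_SHIFT) % HASH_SIZE;
    for (size_t probe = 0; probe < HASH_SIZE && m->small[h].size;
         probe++, h = (h + 1) % HASH_SIZE) {
        const Region *r = &m->small[h];
        if (addr >= r->start && addr < r->start + r->size) {
            return r->id;
        }
    }
    for (size_t i = 0; i < m->nlarge; i++) {
        const Region *r = &m->large[i];
        if (addr >= r->start && addr < r->start + r->size) {
            return r->id;
        }
    }
    return -1;  /* unassigned */
}
```

The attraction for the TLB-miss path is that the common case (a small device region) resolves in O(1), while the fallback scan only covers the handful of large RAM regions.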

--
error compiling committee.c: too many arguments to function
