From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RESEND PATCH v8 1/4] apic: map APIC's MMIO region at each CPU's address space
Date: Thu, 25 Jun 2015 19:39:40 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0


On 25/06/2015 19:32, Peter Maydell wrote:
> On 25 June 2015 at 18:27, Paolo Bonzini <address@hidden> wrote:
>> On 25/06/2015 19:08, Andreas Färber wrote:
>>> And is installing a separate address space per CPU for KVM difficult due
>>> to kernel limitations, or is this just a few lines of QEMU code that Zhu
>>> or someone would need to write? :)
>>
>> It's basically impossible.  Even though support for multiple address
>> spaces is going to be in Linux 4.2, there are going to be just two: SMM
>> and not SMM.  You don't really want to do O(#cpus) stuff in KVM, where
>> the number of CPUs can be 200 or more.
> 
> Can you explain what the issue is here? Shouldn't it just be a matter
> of kvm_cpu_exec() doing a dispatch to cpu->as rather than calling
> address_space_rw() ?  (Making it do that was one of the things on my
> todo list for ARM at some point.)

One example of the problem is that different CPU address spaces can have
MMIO in different places.  These MMIO areas can hide RAM depending on
where they're placed and their relative priorities.  If they do, KVM
cannot really assume that a single set of page tables is okay to
convert gpa->hpa for all guest CPUs.
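
Concretely, here is an illustrative sketch (not code from this series; names,
addresses and sizes are invented, and the memory-API signatures are assumed to
be the current ones) of how a higher-priority MMIO region hides the RAM
underneath it in whichever address space it is mapped into:

/*
 * Illustrative sketch only: an MMIO region layered over RAM with a higher
 * priority hides that RAM in this address space.  A CPU whose address
 * space lacks the region still sees plain RAM there, so the gpa->hpa
 * layout differs between the two CPUs.
 */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "exec/memory.h"

static void example_map(MemoryRegion *root, Object *owner,
                        const MemoryRegionOps *mmio_ops, void *opaque)
{
    MemoryRegion *ram = g_new0(MemoryRegion, 1);
    MemoryRegion *mmio = g_new0(MemoryRegion, 1);

    /* 1 MiB of RAM at address 0, added at the default priority (0). */
    memory_region_init_ram(ram, owner, "example-ram", 0x100000, &error_fatal);
    memory_region_add_subregion(root, 0, ram);

    /*
     * A 4 KiB MMIO window dropped on top of the RAM with priority 1: in
     * this address space, accesses to 0xf000..0xffff now reach the device
     * and the RAM page behind it becomes unreachable.
     */
    memory_region_init_io(mmio, owner, mmio_ops, opaque, "example-mmio",
                          0x1000);
    memory_region_add_subregion_overlap(root, 0xf000, mmio, 1);
}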

If you can tie this to CPU state (e.g. in or out of system management
mode), you only get a small, constant number of such address spaces.
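
For reference, a hedged sketch of what the kernel side of that looks like:
with KVM_CAP_MULTI_ADDRESS_SPACE, bits 16-31 of the slot number passed to
KVM_SET_USER_MEMORY_REGION select the address space, so userspace registers a
small fixed set of layouts (normal and SMM on x86) rather than one per vCPU.
The addresses and sizes below are illustrative only:

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

static int set_smram_slot(int vm_fd, void *host_ptr)
{
    struct kvm_userspace_memory_region region;

    memset(&region, 0, sizeof(region));
    region.slot = (1u << 16) | 0;          /* slot 0 in address space 1 (SMM) */
    region.guest_phys_addr = 0xa0000;      /* legacy SMRAM window */
    region.memory_size = 0x20000;          /* 128 KiB */
    region.userspace_addr = (uintptr_t)host_ptr;

    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}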

See http://thread.gmane.org/gmane.comp.emulators.qemu/345230 for the
QEMU part of the multiple-address-space support.
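
As a rough sketch of that QEMU side (function names and signatures as in
today's tree, so treat the exact calls as an assumption relative to the series
linked above): each CPU gets its own AddressSpace rooted at a per-CPU
MemoryRegion, and MMIO is dispatched through it instead of the global
address_space_memory.

#include "qemu/osdep.h"
#include "hw/core/cpu.h"
#include "exec/memory.h"

/* At realize time: give the vCPU a private view of system memory, with
 * anything per-CPU (e.g. the APIC page) layered on top of cpu_root. */
static void example_init_cpu_as(CPUState *cs, MemoryRegion *cpu_root)
{
    cpu_address_space_init(cs, 0, "cpu-memory", cpu_root);
}

/* On an MMIO access: route it through this CPU's address space, so
 * per-CPU regions are visible only to the CPU that owns them. */
static void example_handle_mmio(CPUState *cs, hwaddr addr,
                                void *data, int len, bool is_write)
{
    address_space_rw(cpu_get_address_space(cs, 0), addr,
                     MEMTXATTRS_UNSPECIFIED, data, len, is_write);
}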

Paolo

> I'm happy to assume that RAM is shared by all CPUs I guess.
> 
>> TCG is okay because the #cpus is not really going to be more than 4-ish.
> 
> Well, it might be more than that in future...
> 
> -- PMM
> 


