
[Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options


From: Anthony Liguori
Subject: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Date: Tue, 02 Mar 2010 08:07:23 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-4.fc12 Lightning/1.0pre Thunderbird/3.0

On 02/28/2010 04:39 PM, Paul Brook wrote:
>> I'm sympathetic to your arguments though.  As qemu is today, the above
>> is definitely the right thing to do.  But ram is always ram and ram
>> always has a fixed (albeit non-linear) mapping within a guest.
> I think this assumption is unsafe. There are machines where RAM mappings can
> change. It's not uncommon for a chip select (i.e. physical memory address
> region) to be switchable to several different sources, one of which may be
> RAM.  I'm pretty sure this functionality is present (but not actually
> implemented) on some of the current qemu targets.

But I presume this is more about switching a DIMM to point at a different region in memory. It's a rare event, similar to memory hotplug.

Either way, if there are platforms where we don't treat ram with the new ram api, that's okay.

> I agree that changing RAM mappings under an active DMA is a fairly suspect
> thing to do. However I think we need to avoid caching mappings between
> separate DMA transactions, i.e. when the guest can know that no DMA will
> occur, and safely remap things.

One thing I like about a new ram api is that it gives us a stronger interface than we have today: right now there is no strong guarantee that mappings won't change during a DMA transaction.

With a new api, cpu_physical_memory_map() changes semantics. It only returns pointers for static ram mappings. Everything else is bounced which guarantees that an address can't change during DMA.
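The map-or-bounce rule can be sketched in a few lines. This is a minimal illustration, not QEMU code: the names `phys_map_or_bounce` and `dma_mapping` are invented here, and the "static RAM" check is reduced to a single fixed range to show the shape of the semantics (direct pointer for static ram, bounce buffer for everything else).

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch of the proposed semantics: only statically
 * registered RAM is handed out as a direct pointer; everything else
 * is bounced, so the address backing a DMA can never change mid-flight.
 * All names here are hypothetical, not QEMU's. */

#define STATIC_RAM_SIZE 0x1000

static uint8_t static_ram[STATIC_RAM_SIZE];
static uint8_t bounce_buf[256];

typedef struct {
    void *ptr;
    int bounced;   /* 1 if a bounce buffer was handed out */
} dma_mapping;

static dma_mapping phys_map_or_bounce(uint64_t addr, uint64_t len)
{
    dma_mapping m;
    if (addr < STATIC_RAM_SIZE && addr + len <= STATIC_RAM_SIZE) {
        /* Static RAM mapping: a direct pointer is safe for DMA. */
        m.ptr = &static_ram[addr];
        m.bounced = 0;
    } else {
        /* MMIO or remappable region: copy through a bounce buffer. */
        m.ptr = bounce_buf;
        m.bounced = 1;
    }
    return m;
}
```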

> I'm also of the opinion that virtio devices should behave the same as any
> other device. i.e. if you put a virtio-net-pci device on a PCI bus behind an
> IOMMU, then it should see the same address space as any other PCI device in
> that location.  Apart from anything else, failure to do this breaks nested
> virtualization.  While qemu doesn't currently implement an IOMMU, the DMA
> interfaces have been designed to allow it.

Yes, I've been working on that. virtio is a bit more complicated than a normal PCI device because it can sit on top of two different buses, so it needs an additional layer of abstraction to deal with this.
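One shape that extra layer could take is a per-transport translate hook: virtio does its DMA through whatever hook the bus supplies, so behind an IOMMU it sees the translated address space like any other PCI device. This is a hypothetical sketch; `virtio_dma_ops`, `identity_translate`, and the fixed-offset "IOMMU" are all invented here (a real IOMMU would walk page tables).

```c
#include <stdint.h>

/* Hypothetical indirection layer: the transport supplies the
 * device-address-to-physical-address translation, so virtio itself
 * never assumes a flat view of guest memory. */
typedef struct {
    uint64_t (*translate)(void *opaque, uint64_t devaddr);
    void *opaque;
} virtio_dma_ops;

/* Transport with no IOMMU: device addresses are physical addresses. */
static uint64_t identity_translate(void *opaque, uint64_t devaddr)
{
    (void)opaque;
    return devaddr;
}

/* PCI behind a toy IOMMU: a fixed offset stands in for a real
 * page-table walk, just to show the call path. */
static uint64_t iommu_translate(void *opaque, uint64_t devaddr)
{
    return devaddr + *(const uint64_t *)opaque;
}

static uint64_t virtio_dma_translate(const virtio_dma_ops *ops,
                                     uint64_t devaddr)
{
    return ops->translate(ops->opaque, devaddr);
}
```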

>> void cpu_ram_add(target_phys_addr_t start, ram_addr_t size);
> We need to support aliased memory regions. For example the ARM RealView boards
> expose the first 256M RAM at both address 0x0 and 0x70000000. It's also common
> for systems to create aliases by ignoring certain address bits. e.g. each SIMM
> slot is allocated a fixed 256M region. Populating that slot with a 128M stick
> will cause the contents to be aliased in both the top and bottom halves of
> that region.

Okay, I'd prefer to add an explicit aliasing API. That gives us more information to work with.
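To make the shape of that concrete, here is a minimal sketch: `cpu_ram_add` is the call proposed above, while `cpu_ram_alias` and `resolve` are hypothetical names invented here. An alias records which primary region backs it, so resolving an address through the alias yields the same (region, offset) pair as the primary mapping, e.g. the RealView 0x0/0x70000000 case.

```c
#include <stdint.h>

#define MAX_REGIONS 8

typedef struct {
    uint64_t start, size;
    int backing;        /* index of the backing RAM region, -1 if primary */
} ram_region;

static ram_region regions[MAX_REGIONS];
static int nregions;

/* Register primary RAM (mirrors the proposed cpu_ram_add). */
static int cpu_ram_add(uint64_t start, uint64_t size)
{
    regions[nregions] = (ram_region){ start, size, -1 };
    return nregions++;
}

/* Hypothetical explicit alias: same backing storage, new address range. */
static int cpu_ram_alias(uint64_t start, uint64_t size, int backing)
{
    regions[nregions] = (ram_region){ start, size, backing };
    return nregions++;
}

/* Resolve a physical address to (primary region index, offset). */
static int resolve(uint64_t addr, uint64_t *offset)
{
    for (int i = 0; i < nregions; i++) {
        if (addr >= regions[i].start &&
            addr - regions[i].start < regions[i].size) {
            *offset = addr - regions[i].start;
            return regions[i].backing >= 0 ? regions[i].backing : i;
        }
    }
    return -1;          /* unmapped */
}
```

Because the alias carries an explicit back-reference instead of being a second independent RAM registration, dirty tracking and migration only ever see one copy of the data.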

Regards,

Anthony Liguori

> Paul
