Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options


From: Marcelo Tosatti
Subject: Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Date: Tue, 2 Mar 2010 15:00:25 -0300
User-agent: Mutt/1.5.20 (2009-08-17)

On Tue, Mar 02, 2010 at 10:56:48AM -0600, Anthony Liguori wrote:
> On 03/02/2010 10:12 AM, Marcelo Tosatti wrote:
> >On Sun, Feb 28, 2010 at 02:57:56PM -0600, Anthony Liguori wrote:
> >>On 02/28/2010 11:19 AM, Michael S. Tsirkin wrote:
> >>>>Both have security implications so I think it's important that they
> >>>>be addressed. Otherwise, I'm pretty happy with how things are.
> >>>Care suggesting some solutions?
> >>The obvious thing to do would be to use the memory notifier in vhost
> >>to keep track of whenever something remaps the ring's memory region
> >>and if that happens, issue an ioctl to vhost to change the location
> >>of the ring.  Also, you would need to merge the vhost slot
> >>management code with the KVM slot management code.
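
For reference, that notifier approach would look roughly like the sketch
below. The notifier hook itself and the vhost_dev fields are illustrative
only; the one real piece is the VHOST_SET_VRING_ADDR ioctl and struct
vhost_vring_addr from linux/vhost.h.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Sketch: called by a (hypothetical) memory notifier whenever a
     * guest-physical range is remapped to a new userspace address. */
    static void vhost_region_remapped(struct vhost_dev *dev,
                                      uint64_t gpa, uint64_t size,
                                      void *new_hva)
    {
        /* Only react if the remapped range covers the ring. */
        if (gpa <= dev->ring_gpa &&
            dev->ring_gpa + dev->ring_size <= gpa + size) {
            struct vhost_vring_addr addr = {
                .index          = dev->vq_index,
                .desc_user_addr = (uint64_t)(uintptr_t)new_hva +
                                  (dev->ring_gpa - gpa),
                /* .avail_user_addr / .used_user_addr recomputed likewise */
            };
            ioctl(dev->vhost_fd, VHOST_SET_VRING_ADDR, &addr);
        }
    }
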
> >There are no security implications as long as vhost uses the qemu
> >process mappings.
> 
> There potentially are within a guest.  If a guest can trigger a qemu
> bug that results in qemu writing to a different location than what
> the guest told it to write, malicious software may use this to
> escalate its privileges within the guest.
> 
> >>cpu_ram_add() never gets called with overlapping regions.  We'll
> >>modify cpu_register_physical_memory() to ensure that a ram mapping
> >>is never changed after initial registration.
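
Stated as an invariant, that check would be roughly (helper names here
are hypothetical, for illustration):

    /* On re-registration of a guest-physical range that is already RAM,
     * the backing ram_addr must not change. */
    assert(!range_is_registered_ram(start_addr, size) ||
           registered_ram_offset(start_addr) == phys_offset);
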
> >What is the difference between your proposal and
> >cpu_physical_memory_map?
> 
> cpu_physical_memory_map() has the following semantics:
> 
> - it always returns a transient mapping
> - it may (transparently) bounce
> - it may fail to bounce, caller must deal
> 
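
Roughly how a device has to drive that API today; the retry and split
handling shown is the caller's obligation (only the map/unmap calls are
the real exec.c interface):

    #include <string.h>
    #include "cpu-common.h"   /* cpu_physical_memory_map()/unmap() */

    static int dma_write(target_phys_addr_t gpa, const void *buf,
                         target_phys_addr_t size)
    {
        target_phys_addr_t len = size;
        void *p = cpu_physical_memory_map(gpa, &len, 1 /* is_write */);

        if (!p) {
            return -1;              /* bounce buffer busy: retry later */
        }
        if (len < size) {
            /* Partial (possibly bounced) mapping: caller must split. */
            cpu_physical_memory_unmap(p, len, 1, 0);
            return -1;
        }
        memcpy(p, buf, size);
        cpu_physical_memory_unmap(p, len, 1, size /* access_len */);
        return 0;
    }
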
> The new function I'm proposing has the following semantics:

What exactly is the purpose of the new function?

> - it always returns a persistent mapping

For hotplug? What exactly do you mean by persistent?

> - it never bounces
> - it will only fail if the mapping isn't ram
> 
> A caller can use the new function to implement an optimization to
> force the device to only work with real ram.

To bypass the address translation in exec.c? 
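
If that is the intent, I would expect signatures along these lines
(entirely hypothetical, using the "cpu_ram_map" name discussed below):

    /* Returns a pointer into guest RAM that stays valid until unmapped,
     * or NULL if [addr, addr+size) is not plain RAM.  Never bounces. */
    void *cpu_ram_map(target_phys_addr_t addr, target_phys_addr_t size);
    void cpu_ram_unmap(void *ptr, target_phys_addr_t size);

so that virtio could pin its rings once at initialization and refuse to
start the device if the mapping fails.
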

>   IOW, this is something we can use in virtio, but very little else.
> cpu_physical_memory_map can be used in more circumstances.

That does not make much sense to me. The qdev <-> memory map binding seems
more important. Following your security enhancement drive, you could for
example check whether the region can actually be mapped by the device and
deny it otherwise, or apply whatever host-side memory protection tricks
you'd like.

And "cpu_ram_map" seems like the memory is accessed through cpu context, 
while it is really always device context.

> >What i'd like to see is binding between cpu_physical_memory_map and qdev
> >devices, so that you can use different host memory mappings for device
> >context and for CPU context (and provide the possibility for, say, map
> >a certain memory region as read-only).
> 
> We really want per-bus mappings.  At the lowest level, we'll have
> sysbus_memory_map() but we'll also have pci_memory_map(),
> virtio_memory_map(), etc.
>
> Nothing should ever call cpu_physical_memory_map() directly.

Yep.
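
A per-bus shim could then be quite thin; a sketch (the translation hook
is hypothetical, only cpu_physical_memory_map() is real):

    /* Each bus gets a chance to translate the address (e.g. through an
     * IOMMU) or to deny the mapping before the global layer is used. */
    void *pci_memory_map(PCIDevice *dev, pcibus_t addr,
                         target_phys_addr_t *plen, int is_write)
    {
        target_phys_addr_t phys;

        if (!pci_translate_addr(dev, addr, &phys)) {  /* hypothetical */
            return NULL;                              /* access denied */
        }
        return cpu_physical_memory_map(phys, plen, is_write);
    }
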

> 
> Regards,
> 
> Anthony Liguori
> 



