Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Date: Tue, 2 Mar 2010 19:00:25 +0200
User-agent: Mutt/1.5.19 (2009-01-05)

On Tue, Mar 02, 2010 at 10:56:48AM -0600, Anthony Liguori wrote:
> On 03/02/2010 10:12 AM, Marcelo Tosatti wrote:
>> On Sun, Feb 28, 2010 at 02:57:56PM -0600, Anthony Liguori wrote:
>>    
>>> On 02/28/2010 11:19 AM, Michael S. Tsirkin wrote:
>>>      
>>>>> Both have security implications so I think it's important that they
>>>>> be addressed. Otherwise, I'm pretty happy with how things are.
>>>>>          
>>>> Care suggesting some solutions?
>>>>        
>>> The obvious thing to do would be to use the memory notifier in vhost
>>> to keep track of whenever something remaps the ring's memory region
>>> and if that happens, issue an ioctl to vhost to change the location
>>> of the ring.  Also, you would need to merge the vhost slot
>>> management code with the KVM slot management code.
>>>      
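[A minimal sketch of that idea, assuming a notifier hook that fires when the
ring's backing memory is remapped. VHOST_SET_VRING_ADDR and struct
vhost_vring_addr are the real vhost ioctl and struct from <linux/vhost.h>;
the surrounding function and its caller are illustrative only:]

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>  /* VHOST_SET_VRING_ADDR, struct vhost_vring_addr */

    /* Hypothetical callback, run from a memory notifier after qemu has
     * remapped the region backing virtqueue 'idx': tell vhost where the
     * ring now lives in the qemu process address space. */
    static int vhost_ring_remapped(int vhostfd, unsigned int idx,
                                   void *desc, void *avail, void *used)
    {
        struct vhost_vring_addr addr = {
            .index           = idx,
            .desc_user_addr  = (uint64_t)(unsigned long)desc,
            .avail_user_addr = (uint64_t)(unsigned long)avail,
            .used_user_addr  = (uint64_t)(unsigned long)used,
        };

        return ioctl(vhostfd, VHOST_SET_VRING_ADDR, &addr);
    }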
>> There are no security implications as long as vhost uses the qemu
>> process mappings.
>>    
>
> There potentially are within a guest.  If a guest can trigger a qemu bug
> that results in qemu writing to a different location than the one the
> guest specified, malicious software may use this to escalate its
> privileges within the guest.

If malicious software has access to hardware that does DMA,
the game is likely over :)

>>> cpu_ram_add() never gets called with overlapping regions.  We'll
>>> modify cpu_register_physical_memory() to ensure that a ram mapping
>>> is never changed after initial registration.
>>>      
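[A sketch of the invariant being proposed, with invented names — the
RamMapping record and ram_mapping_find() are illustrative; only the
never-changed-after-registration check matters:]

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical record of an established ram registration. */
    typedef struct RamMapping {
        uint64_t gpa;       /* guest-physical start */
        uint64_t size;
        void    *host_ptr;  /* where it lives in the qemu process */
    } RamMapping;

    /* Hypothetical lookup of an existing registration overlapping
     * [gpa, gpa + size); returns NULL if there is none. */
    RamMapping *ram_mapping_find(uint64_t gpa, uint64_t size);

    void ram_mapping_register(uint64_t gpa, uint64_t size, void *host_ptr)
    {
        RamMapping *old = ram_mapping_find(gpa, size);

        /* Once registered, a ram mapping may never change: a second
         * registration of the same range must match exactly. */
        if (old && (old->gpa != gpa || old->size != size ||
                    old->host_ptr != host_ptr)) {
            fprintf(stderr, "qemu: ram mapping at 0x%" PRIx64
                    " changed after initial registration\n", gpa);
            abort();
        }
        /* ... otherwise record the mapping as usual ... */
    }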
>> What is the difference between your proposal and
>> cpu_physical_memory_map?
>>    
>
> cpu_physical_memory_map() has the following semantics:
>
> - it always returns a transient mapping
> - it may (transparently) bounce
> - it may fail to bounce, caller must deal
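[For reference, the usage pattern those semantics force on a caller; the
signatures are the cpu_physical_memory_map()/unmap() pair from exec.c of
this era, and the fragment assumes a QEMU context with guest_addr, data
and size in scope:]

    target_phys_addr_t len = size;
    void *p = cpu_physical_memory_map(guest_addr, &len, 1 /* is_write */);

    if (p && len == size) {
        memcpy(p, data, size);                       /* may be a bounce buffer */
        cpu_physical_memory_unmap(p, len, 1, size);  /* flushes a bounce back */
    } else {
        if (p) {
            cpu_physical_memory_unmap(p, len, 1, 0); /* short map, nothing written */
        }
        /* map failed or was partial: fall back, e.g. to
         * cpu_physical_memory_rw() */
    }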
>
> The new function I'm proposing has the following semantics:
>
> - it always returns a persistent mapping
> - it never bounces
> - it will only fail if the mapping isn't ram
>
> A caller can use the new function to implement an optimization to force  
> the device to only work with real ram.  IOW, this is something we can  
> use in virtio, but very little else.  cpu_physical_memory_map can be  
> used in more circumstances.
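[No such function existed when this was written; a sketch of the contract
being proposed, with an invented name — qemu_ram_map_persistent() and the
virtio fragment are illustrative only:]

    /* Returns a host pointer valid for the life of the mapping -- no
     * map/unmap dance per access, and never a bounce buffer.  Returns
     * NULL iff [addr, addr + size) is not ordinary ram. */
    void *qemu_ram_map_persistent(target_phys_addr_t addr, size_t size);

    /* virtio could then pin its rings once at device setup: */
    vq->desc = qemu_ram_map_persistent(desc_gpa, desc_size);
    if (!vq->desc) {
        /* not ram: refuse to start the device instead of bouncing */
    }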
>
>> What I'd like to see is a binding between cpu_physical_memory_map and
>> qdev devices, so that you can use different host memory mappings for
>> device context and for CPU context (and provide the possibility to,
>> say, map a certain memory region as read-only).
>>    
>
> We really want per-bus mappings.  At the lowest level, we'll have  
> sysbus_memory_map() but we'll also have pci_memory_map(),  
> virtio_memory_map(), etc.
>
> Nothing should ever call cpu_physical_memory_map() directly.
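[A sketch of that layering — all of these wrappers are hypothetical; the
point is only that a device asks its bus, each bus can apply its own
translation, and the system bus alone touches the low-level primitive:]

    /* Only the system bus calls cpu_physical_memory_map() directly. */
    void *sysbus_memory_map(target_phys_addr_t addr,
                            target_phys_addr_t *plen, int is_write)
    {
        return cpu_physical_memory_map(addr, plen, is_write);
    }

    /* A PCI device maps through its bus, which is where a per-bus
     * translation step (say, a PCI IOMMU) would slot in. */
    void *pci_memory_map(PCIDevice *dev, pcibus_t addr,
                         target_phys_addr_t *plen, int is_write)
    {
        target_phys_addr_t xlat = addr;  /* apply bus translation here */
        return sysbus_memory_map(xlat, plen, is_write);
    }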
>
> Regards,
>
> Anthony Liguori