From: Ketan Nilangekar
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Thu, 24 Nov 2016 05:44:37 +0000
User-agent: Microsoft-MacOutlook/0.0.0.160109





On 11/24/16, 4:07 AM, "Paolo Bonzini" <address@hidden> wrote:

>
>
>On 23/11/2016 23:09, ashish mittal wrote:
>> On the topic of protocol security -
>> 
>> Would it be enough for the first patch to implement only
>> authentication and not encryption?
>
>Yes, of course.  However, as we introduce more and more QEMU-specific
>characteristics to a protocol that is already QEMU-specific (it doesn't
>do failover, etc.), I am still not sure of the actual benefit of using
>libqnio versus having an NBD server or FUSE driver.
>
>You have already mentioned performance, but the design has changed so
>much that I think one of the two things has to change: either failover
>moves back to QEMU and there is no (closed source) translator running on
>the node, or the translator needs to speak a well-known and
>already-supported protocol.

IMO the design has not changed; the implementation has changed significantly. I
would propose that we keep the resiliency/failover code out of the QEMU driver
and implement it entirely in libqnio, as planned, in a subsequent revision. The
VxHS server does not need to understand or handle failover at all.
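
To make this concrete, below is a rough sketch (made-up names, not actual
libqnio source) of the kind of driver-transparent failover we have in mind: on
an I/O error the library rotates to the next known server endpoint and retries,
so neither the QEMU driver nor the VxHS server is involved:

    /* failover_sketch.c - illustrative only, not libqnio code. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_RETRIES 3

    typedef int (*send_fn)(const char *endpoint, const char *req);

    /* Known server endpoints; a real client would discover these. */
    static const char *endpoints[] = { "10.0.0.1:9999", "10.0.0.2:9999" };
    static const int n_endpoints = 2;
    static int active;  /* index of the endpoint currently in use */

    static int submit_with_failover(send_fn send_req, const char *req)
    {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            if (send_req(endpoints[active], req) == 0) {
                return 0;  /* success; the caller never saw the retry */
            }
            active = (active + 1) % n_endpoints;  /* fail over */
        }
        return -1;  /* all endpoints exhausted; surface the error */
    }

    /* Stub transport that fails on the first endpoint, for demonstration. */
    static int fake_send(const char *endpoint, const char *req)
    {
        fprintf(stderr, "sending \"%s\" via %s\n", req, endpoint);
        return strcmp(endpoint, endpoints[0]) == 0 ? -1 : 0;
    }

    int main(void)
    {
        return submit_with_failover(fake_send, "READ vdisk-1 off=0 len=4096");
    }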

Today libqnio gives us significantly better performance than any NBD or FUSE
implementation; we know because we have prototyped with both. Significant
improvements to libqnio are also in the pipeline, which will use cross memory
attach calls to further boost performance. Of course, a big reason for the
performance is also the HyperScale storage backend, but we believe this method
of IO tapping/redirecting can be leveraged by other solutions as well.
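
(For reference, "cross memory attach" is the Linux process_vm_readv()/
process_vm_writev() syscall pair. A minimal standalone illustration of reading
a buffer out of another process's address space is below; the target pid and
buffer address are placeholders that a real control channel would supply, and
the call needs ptrace-level access to the target, e.g. CAP_SYS_PTRACE or a
same-uid process:)

    /* cma_read.c - copy a buffer out of another process's address space
     * with a single syscall, avoiding an intermediate socket copy. */
    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static ssize_t read_remote(pid_t pid, void *remote_addr,
                               void *local_buf, size_t len)
    {
        struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
        return process_vm_readv(pid, &local, 1, &remote, 1, 0);
    }

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <pid> <hex-addr>\n", argv[0]);
            return 1;
        }
        pid_t pid  = (pid_t)atoi(argv[1]);
        void *addr = (void *)(uintptr_t)strtoull(argv[2], NULL, 16);
        char buf[4096];

        ssize_t n = read_remote(pid, addr, buf, sizeof(buf));
        if (n < 0) {
            perror("process_vm_readv");
            return 1;
        }
        printf("copied %zd bytes from pid %d\n", n, (int)pid);
        return 0;
    }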

Ketan

>
>Paolo
>
>> On Wed, Nov 23, 2016 at 12:25 AM, Ketan Nilangekar
>> <address@hidden> wrote:
>>> +Nitin Jerath from Veritas.
>>>
>>>
>>>
>>>
>>> On 11/18/16, 7:06 PM, "Daniel P. Berrange" <address@hidden> wrote:
>>>
>>>> On Fri, Nov 18, 2016 at 01:25:43PM +0000, Ketan Nilangekar wrote:
>>>>>
>>>>>
>>>>>> On Nov 18, 2016, at 5:25 PM, Daniel P. Berrange <address@hidden> wrote:
>>>>>>
>>>>>>> On Fri, Nov 18, 2016 at 11:36:02AM +0000, Ketan Nilangekar wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On 11/18/16, 3:32 PM, "Stefan Hajnoczi" <address@hidden> wrote:
>>>>>>>>
>>>>>>>>> On Fri, Nov 18, 2016 at 02:26:21AM -0500, Jeff Cody wrote:
>>>>>>>>> * Daniel pointed out that there is no authentication method for
>>>>>>>>>   talking to a remote server.  This seems a bit scary.  Maybe all
>>>>>>>>>   that is needed here is some clarification of the security scheme
>>>>>>>>>   for authentication?  My impression from above is that you are
>>>>>>>>>   relying on the networks being private to provide some sort of
>>>>>>>>>   implicit authentication, though, and this seems fragile (and
>>>>>>>>>   doesn't protect against a compromised guest or other process on
>>>>>>>>>   the server, for one).
>>>>>>>>
>>>>>>>> Exactly, from the QEMU trust model you must assume that QEMU has been
>>>>>>>> compromised by the guest.  The escaped guest can connect to the VxHS
>>>>>>>> server since it controls the QEMU process.
>>>>>>>>
>>>>>>>> An escaped guest must not have access to other guests' volumes.
>>>>>>>> Therefore authentication is necessary.
>>>>>>>
>>>>>>> Just so I am clear on this, how will such an escaped guest get to know
>>>>>>> the other guests' vdisk IDs?
>>>>>>
>>>>>> There can be multiple approaches depending on the deployment scenario.
>>>>>> At the very simplest it could directly read the IDs out of the libvirt
>>>>>> XML files in /var/run/libvirt. Or it can run "ps" to list other running
>>>>>> QEMU processes and see the vdisk IDs in the command line args of those
>>>>>> processes. Or the mgmt app may be creating vdisk IDs based on some
>>>>>> particular scheme, and the attacker may have info about this which lets
>>>>>> them determine likely IDs.  Or the QEMU may have previously been
>>>>>> permitted to use the disk and remembered the ID for use later, after
>>>>>> access to the disk has been removed.
>>>>>>
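
As a minimal illustration of the "ps" approach described above (assuming a
Linux host where QEMU command lines carry vxhs vdisk IDs), any process running
with the hypervisor user's privileges could do:

    /* cmdline_scan.c - list QEMU command lines that mention vxhs, which
     * is where vdisk IDs would appear as command-line arguments. */
    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DIR *proc = opendir("/proc");
        struct dirent *de;
        if (!proc) { perror("/proc"); return 1; }

        while ((de = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;  /* only numeric entries are pids */

            char path[64], buf[8192];
            snprintf(path, sizeof(path), "/proc/%s/cmdline", de->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            size_t n = fread(buf, 1, sizeof(buf) - 1, f);
            fclose(f);

            /* argv entries are NUL-separated; make them printable. */
            for (size_t i = 0; i < n; i++)
                if (buf[i] == '\0') buf[i] = ' ';
            buf[n] = '\0';

            if (strstr(buf, "qemu") && strstr(buf, "vxhs"))
                printf("pid %s: %s\n", de->d_name, buf);
        }
        closedir(proc);
        return 0;
    }
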
>>>>>
>>>>> Are we talking about a compromised guest here or a compromised
>>>>> hypervisor? How will a compromised guest read the XML file or list
>>>>> running QEMU processes?
>>>>
>>>> Compromised QEMU process, aka hypervisor userspace
>>>>
>>>>
>>>> Regards,
>>>> Daniel
>>>> --
>>>> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
>>>> |: http://libvirt.org              -o-             http://virt-manager.org :|
>>>> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
