From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH V4 04/13] hw/9pfs: File system helper process for qemu 9p proxy FS
Date: Mon, 12 Dec 2011 15:56:23 +0000

On Mon, Dec 12, 2011 at 3:21 PM, Aneesh Kumar K.V
<address@hidden> wrote:
> On Mon, 12 Dec 2011 12:08:33 +0000, Stefan Hajnoczi <address@hidden> wrote:
>> On Fri, Dec 09, 2011 at 10:12:17PM +0530, M. Mohan Kumar wrote:
>> > On Friday, December 09, 2011 12:01:14 AM Stefan Hajnoczi wrote:
>> > > On Mon, Dec 05, 2011 at 09:48:41PM +0530, M. Mohan Kumar wrote:
>> > > > +static int read_request(int sockfd, struct iovec *iovec, ProxyHeader *header)
>> > > > +{
>> > > > +    int retval;
>> > > > +
>> > > > +    /*
>> > > > +     * read the request header.
>> > > > +     */
>> > > > +    iovec->iov_len = 0;
>> > > > +    retval = socket_read(sockfd, iovec->iov_base, PROXY_HDR_SZ);
>> > > > +    if (retval < 0) {
>> > > > +        return retval;
>> > > > +    }
>> > > > +    iovec->iov_len = PROXY_HDR_SZ;
>> > > > +    retval = proxy_unmarshal(iovec, 0, "dd", &header->type, &header->size);
>> > > > +    if (retval < 0) {
>> > > > +        return retval;
>> > > > +    }
>> > > > +    /*
>> > > > +     * We can't process message.size > PROXY_MAX_IO_SZ, read the complete
>> > > > +     * message from the socket and ignore it. This ensures that
>> > > > +     * we can correctly handle the next request. We also return
>> > > > +     * ENOBUFS as error to indicate we ran out of buffer space.
>> > > > +     */
>> > > > +    if (header->size > PROXY_MAX_IO_SZ) {
>> > > > +        int count, size;
>> > > > +        size = header->size;
>> > > > +        while (size > 0) {
>> > > > +            count = MIN(PROXY_MAX_IO_SZ, size);
>> > > > +            count = socket_read(sockfd, iovec->iov_base + PROXY_HDR_SZ, count);
>> > > > +            if (count < 0) {
>> > > > +                return count;
>> > > > +            }
>> > > > +            size -= count;
>> > > > +        }
>> > >
>> > > I'm not sure recovery attempts are worthwhile here.  The client is
>> > > buggy, perhaps just refuse further work.
>> >
>> > But what's the issue in trying to recover in this case?
>>
>> This recovery procedure is not robust because it does not always work.
>> In fact it only works in the case where the header->size field was
>> out-of-range but accurate.  That's not a likely case since the QEMU-side
>> code that you are writing should handle this.
>>
>> If the nature of the invalid request is different, say a broken or
>> malicious client that does not send a valid header->size, then we're
>> stuck in this special-case recovery trying to gobble bytes and we never
>> log an error.
>>
>> A real recovery would be something like disconnecting and
>> re-establishing the connection between QEMU and the helper.  This would
>> allow us to get back to a clean state in all cases.
>>
>
> Since we don't keep any state in the proxy helper, returning ENOBUFS
> should be similar to the above, right? One of the reasons to try to
> recover as much as possible is to make sure the guest can umount the
> file system properly. That is, if we hit these error conditions due to a
> bug in the proxy FS driver in QEMU, we want to make sure we return some
> valid error, which will at least enable the guest/client to do an umount.

When the helper detects something outside the protocol specification
it needs to terminate the connection.  The protocol has no reliable
way to skip the junk coming over the socket so we can't process the
"next" message.

The flip side to "try to recover as much as possible" is "damage as
little as possible".  We don't want to misinterpret requests on this
broken connection and corrupt the user's data.

I'm happy with any scheme as long as it handles all error cases.  The
problem with the -ENOBUFS case is that it's pretty artificial
(unlikely to happen) and doesn't handle cases where header->size is
inaccurate.
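
To make that concrete, here is the skim loop from the quoted patch
again, with the two ways an inaccurate header->size defeats it noted in
comments (only the comments are added; the -ENOBUFS return is summarized
from the patch's own comment):

    if (header->size > PROXY_MAX_IO_SZ) {
        int count, size;
        size = header->size;
        while (size > 0) {
            count = MIN(PROXY_MAX_IO_SZ, size);
            /*
             * Failure case 1: the client sent fewer payload bytes than
             * header->size claims, so this read eventually blocks forever
             * waiting for data that never arrives.
             *
             * Failure case 2: the client has already sent its next request;
             * its header and payload are silently consumed here as if they
             * were part of the oversized message, and framing is lost.
             */
            count = socket_read(sockfd, iovec->iov_base + PROXY_HDR_SZ, count);
            if (count < 0) {
                return count;
            }
            size -= count;
        }
        /* ... then return -ENOBUFS, per the patch's comment above ... */
    }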

Stefan


