
From: Daniel P. Berrangé
Subject: Re: [PATCH-for-5.0] qga-posix: Avoid crashing process when failing to allocate memory
Date: Mon, 30 Mar 2020 17:47:24 +0100
User-agent: Mutt/1.13.3 (2020-01-12)

On Mon, Mar 30, 2020 at 05:38:05PM +0100, Dr. David Alan Gilbert wrote:
> * Philippe Mathieu-Daudé (address@hidden) wrote:
> > Cc'ing the people who responded in the thread you quoted.
> > 
> > On 3/30/20 4:11 PM, Markus Armbruster wrote:
> > > Philippe Mathieu-Daudé <address@hidden> writes:
> > > ---
> > >   qga/commands-posix.c | 8 +++++++-
> > >   1 file changed, 7 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/qga/commands-posix.c b/qga/commands-posix.c
> > > index 93474ff770..8f127788e6 100644
> > > --- a/qga/commands-posix.c
> > > +++ b/qga/commands-posix.c
> > > @@ -493,7 +493,13 @@ struct GuestFileRead *qmp_guest_file_read(int64_t handle, bool has_count,
> > >           gfh->state = RW_STATE_NEW;
> > >       }
> > >
> > > -    buf = g_malloc0(count+1);
> > > +    buf = g_try_malloc0(count + 1);
> > > +    if (!buf) {
> > > +        error_setg(errp,
> > > +                   "failed to allocate sufficient memory "
> > > +                   "to complete the requested service");
> > > +        return NULL;
> > > +    }
> > >       read_count = fread(buf, 1, count, fh);
> > >       if (ferror(fh)) {
> > >           error_setg_errno(errp, errno, "failed to read file");
> > >
> > 
> > > > On 3/25/20 7:19 AM, Dietmar Maurer wrote:
> > > > > but error_setg() also calls malloc, so this does not help at all?
> > > > 
> > > > IIUC the problem, you can send a QMP command asking to read, let's say,
> > > > 3GB of a file, and QEMU crashes. But this doesn't mean the heap is
> > > > empty; there are probably a few bytes still available, enough to
> > > > respond with an error message.
> > > 
> > > We've discussed how to handle out-of-memory conditions many times.
> > > Here's one instance:
> > > 
> > >      Subject: When it's okay to treat OOM as fatal?
> > >      Message-ID: <address@hidden>
> > >      
> > > https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03212.html
> > > 
> > > No improvement since then; there's no guidance on when to check for OOM.
> > > Actual code tends to check only "large" allocations (for subjective
> > > values of "large").
> > > 
> > > I reiterate my opinion that whatever OOM handling we have is too
> > > unreliable to be worth much, since it can only help when (1) allocations
> > > actually fail (they generally don't[*]), and (2) the allocation that
> > > fails is actually handled (they generally aren't), and (3) the handling
> > > actually works (we don't test OOM, so it generally doesn't).
> > > 
> > > 
> > > [*] Linux overcommits memory, which means malloc() pretty much always
> > > succeeds, but when you try to use "too much" of the memory you
> > > supposedly allocated, a lethal signal is coming your way.  Read the
> > > thread I quoted for examples.
> > 
> > So this patch follows Stefan's reasoning:
> > https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03525.html
> > 
> >   My thinking has been to use g_new() for small QEMU-internal structures
> >   and g_try_new() for large amounts of memory allocated in response to
> >   untrusted inputs.  (Untrusted inputs must never be used for unbounded
> >   allocation sizes but those bounded sizes can still be large.)
> > 
> > In either case (g_malloc/g_try_malloc) we have a denial of service
> > (https://www.openwall.com/lists/oss-security/2018/10/17/4) and the service
> > is restarted.
> > 
> > Daniel suggests such behavior should be caught by an external firewall or
> > guard (either on the process or on the network). This seems out of scope
> > for QEMU and hard to fix.
> > 
> > So, can we improve something? Or should we leave this code as it is?
> 
> I'll agree with Stefan's description; we should use 'try' for anything
> 'large' (badly defined) or user controlled.
> 
> So I think this should switch to having the try.

IMHO it is pointless to use try here as it's not really solving the
problem. This is just the first of several mallocs in the process
of running this command. Consider that the size is large, but still small
enough for the first g_try_malloc() to succeed. Soon after, GLib is
going to g_malloc an even larger block of ((count / 3 + 1) * 4 + 1) bytes
in order to base64 encode this data, and that can push us over into OOM
again.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
