Re: [PULL 4/6] usb/redir: avoid dynamic stack allocation (CVE-2021-3527)


From: Gerd Hoffmann
Subject: Re: [PULL 4/6] usb/redir: avoid dynamic stack allocation (CVE-2021-3527)
Date: Wed, 5 May 2021 17:50:58 +0200

On Wed, May 05, 2021 at 03:29:10PM +0200, Remy Noel wrote:
> On Wed, May 05, 2021 at 03:07:14PM +0200, Gerd Hoffmann wrote:
> > [...]
> > diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
> > index 17f06f34179a..6a75b0dc4ab2 100644
> > --- a/hw/usb/redirect.c
> > +++ b/hw/usb/redirect.c
> > @@ -620,7 +620,7 @@ static void usbredir_handle_iso_data(USBRedirDevice *dev, USBPacket *p,
> >                 .endpoint = ep,
> >                 .length = p->iov.size
> >             };
> > -            uint8_t buf[p->iov.size];
> > +            g_autofree uint8_t *buf = g_malloc(p->iov.size);
> >             /* No id, we look at the ep when receiving a status back */
> >             usb_packet_copy(p, buf, p->iov.size);
> >             usbredirparser_send_iso_packet(dev->parser, 0, &iso_packet,
> > @@ -818,7 +818,7 @@ static void usbredir_handle_bulk_data(USBRedirDevice *dev, USBPacket *p,
> >         usbredirparser_send_bulk_packet(dev->parser, p->id,
> >                                         &bulk_packet, NULL, 0);
> >     } else {
> > -        uint8_t buf[size];
> > +        g_autofree uint8_t *buf = g_malloc(size);
> >         usb_packet_copy(p, buf, size);
> 
> Won't this allow us to malloc and then write an arbitrary amount of heap
> memory, allowing a guest driver to abort qemu?

Crashing qemu is not as easy as with stack allocation, but yes, the
unbounded allocation is still there.  We need to figure out some way to
limit this in xhci without breaking things.
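
For illustration only, a rough and untested sketch of such a cap in
usbredir_handle_bulk_data() could look like this (the limit value is
made up; the real one would have to be picked so it doesn't reject
legitimate xhci transfers):

    /* hypothetical cap, not taken from any spec */
    #define USBREDIR_MAX_BULK_XFER_SIZE (1 * 1024 * 1024)

        if (size > USBREDIR_MAX_BULK_XFER_SIZE) {
            p->status = USB_RET_BABBLE;  /* reject over-length transfers */
            return;
        }
        g_autofree uint8_t *buf = g_malloc(size);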

Or maybe use g_try_malloc() and catch allocation failures.
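
Roughly (untested sketch):

        g_autofree uint8_t *buf = g_try_malloc(size);
        if (buf == NULL) {
            /* allocation failed; give up on this packet */
            p->status = USB_RET_STALL;
            return;
        }
        usb_packet_copy(p, buf, size);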

Alternatively we could add a usbredirparser_send_bulk_packet_iov()
function to usbredir so we can just pass on the iov and avoid the
temporary buffer in the first place.
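
Something along these lines, i.e. a hypothetical signature mirroring
the existing usbredirparser_send_bulk_packet() but taking an iovec
(nothing like this exists in usbredir today):

    void usbredirparser_send_bulk_packet_iov(struct usbredirparser *parser,
                                             uint64_t id,
                                             struct usb_redir_bulk_packet_header *bulk_packet,
                                             const struct iovec *iov,
                                             int iov_cnt);

qemu could then pass p->iov.iov / p->iov.niov straight through instead
of copying into a temporary buffer.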

Not sure yet what the best way forward is.
Comments (and other ideas) are welcome.

take care,
  Gerd



