
Re: [PATCH] virtio-net: fix for heap-buffer-overflow


From: Jason Wang
Subject: Re: [PATCH] virtio-net: fix for heap-buffer-overflow
Date: Thu, 10 Nov 2022 17:18:23 +0800

On Thu, Nov 10, 2022 at 4:28 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Run shell script:
>
>     cat << EOF | valgrind qemu-system-i386 -display none -machine accel=qtest, -m \
>     512M -M q35 -nodefaults -device virtio-net,netdev=net0 -netdev \
>     user,id=net0 -qtest stdio
>     outl 0xcf8 0x80000810
>     outl 0xcfc 0xc000
>     outl 0xcf8 0x80000804
>     outl 0xcfc 0x01
>     outl 0xc00d 0x0200
>     outl 0xcf8 0x80000890
>     outb 0xcfc 0x4
>     outl 0xcf8 0x80000889
>     outl 0xcfc 0x1c000000
>     outl 0xcf8 0x80000893
>     outw 0xcfc 0x100
>     EOF
>
> Got:
>     ==68666== Invalid read of size 8
>     ==68666==    at 0x688536: virtio_net_queue_enable (virtio-net.c:575)
>     ==68666==    by 0x6E31AE: memory_region_write_accessor (memory.c:492)
>     ==68666==    by 0x6E098D: access_with_adjusted_size (memory.c:554)
>     ==68666==    by 0x6E4DB3: memory_region_dispatch_write (memory.c:1521)
>     ==68666==    by 0x6E31AE: memory_region_write_accessor (memory.c:492)
>     ==68666==    by 0x6E098D: access_with_adjusted_size (memory.c:554)
>     ==68666==    by 0x6E4DB3: memory_region_dispatch_write (memory.c:1521)
>     ==68666==    by 0x6EBCD3: flatview_write_continue (physmem.c:2820)
>     ==68666==    by 0x6EBFBF: flatview_write (physmem.c:2862)
>     ==68666==    by 0x6EF5E7: address_space_write (physmem.c:2958)
>     ==68666==    by 0x6DFDEC: cpu_outw (ioport.c:70)
>     ==68666==    by 0x6F6DF0: qtest_process_command (qtest.c:480)
>     ==68666==  Address 0x29087fe8 is 24 bytes after a block of size 416 in arena "client"
>
> This was reported by Alexander Bulekov:
> https://gitlab.com/qemu-project/qemu/-/issues/1309
>
> Here, queue_index is the index of the cvq, but the cvq does not have a
> corresponding NetClientState,

This is not necessarily true for some backends like vhost-vDPA.

> so an overflow occurs.

Note that this is guest-triggerable, so any index below VIRTIO_QUEUE_MAX
but greater than or equal to the cvq index could hit this.
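
For illustration, a minimal standalone sketch of the index math (vq2q()
mirrors virtio-net; the max_queue_pairs value is made up): the cvq index
and everything above it resolve past the max_queue_pairs subqueues that
n->nic actually backs.

    #include <stdio.h>

    #define VIRTIO_QUEUE_MAX 1024

    /* virtio-net maps a virtqueue index to a NetClientState index:
     * an rx/tx pair shares one subqueue, so vq 0 and vq 1 -> subqueue 0. */
    static int vq2q(int queue_index)
    {
        return queue_index / 2;
    }

    int main(void)
    {
        int max_queue_pairs = 1;           /* n->nic backs this many subqueues */
        int cvq = max_queue_pairs * 2;     /* cvq sits after all rx/tx pairs */

        /* Any guest-supplied index in [cvq, VIRTIO_QUEUE_MAX) maps to a
         * subqueue outside the valid range 0..max_queue_pairs - 1. */
        for (int vq = cvq; vq < cvq + 3 && vq < VIRTIO_QUEUE_MAX; vq++) {
            printf("vq %d -> subqueue %d (valid: 0..%d)\n",
                   vq, vq2q(vq), max_queue_pairs - 1);
        }
        return 0;
    }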

>
> I add a check here to ignore an illegal queue_index and the cvq queue_index.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  hw/net/virtio-net.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 975bbc22f9..88f25709d6 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -549,7 +549,14 @@ static RxFilterInfo *virtio_net_query_rxfilter(NetClientState *nc)
>  static void virtio_net_queue_reset(VirtIODevice *vdev, uint32_t queue_index)

If we require the VirtioDeviceClass to validate the index, let's document
that requirement there. Or we can let the transport validate this.
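
For the transport option, a rough sketch of the idea (placement, helper
name and the exact check are assumptions, not a concrete patch):

    /* Sketch only: validate the guest-supplied index once in the generic
     * virtio layer, so every queue_enable implementation can rely on it. */
    void virtio_queue_enable(VirtIODevice *vdev, uint32_t queue_index)
    {
        VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);

        if (queue_index >= VIRTIO_QUEUE_MAX) {
            /* Guest-controlled value; ignore out-of-range indices. */
            return;
        }

        if (k->queue_enable) {
            k->queue_enable(vdev, queue_index);
        }
    }

The cvq check itself would still live in virtio-net, since only the
device knows which indices have a NetClientState behind them.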

Thanks

>  {
>      VirtIONet *n = VIRTIO_NET(vdev);
> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(queue_index));
> +    NetClientState *nc;
> +
> +    /* validate queue_index and skip for cvq */
> +    if (queue_index >= n->max_queue_pairs * 2) {
> +        return;
> +    }
> +
> +    nc = qemu_get_subqueue(n->nic, vq2q(queue_index));
>
>      if (!nc->peer) {
>          return;
> @@ -566,9 +573,16 @@ static void virtio_net_queue_reset(VirtIODevice *vdev, uint32_t queue_index)
>  static void virtio_net_queue_enable(VirtIODevice *vdev, uint32_t queue_index)
>  {
>      VirtIONet *n = VIRTIO_NET(vdev);
> -    NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(queue_index));
> +    NetClientState *nc;
>      int r;
>
> +    /* validate queue_index and skip for cvq */
> +    if (queue_index >= n->max_queue_pairs * 2) {
> +        return;
> +    }
> +
> +    nc = qemu_get_subqueue(n->nic, vq2q(queue_index));
> +
>      if (!nc->peer || !vdev->vhost_started) {
>          return;
>      }
> --
> 2.32.0.3.g01195cf9f
>
