Re: [Question] fuzz: double-fetches in a memory region map session


From: Alexander Bulekov
Subject: Re: [Question] fuzz: double-fetches in a memory region map session
Date: Fri, 13 Aug 2021 06:50:12 -0400

On 210813 0349, Li Qiuhao wrote:
> Hi Alex,
> 
> Recently I was reading the DMA call-back functions in the fuzzer. It seems
> fuzz_dma_read_cb() is inserted into flatview_read_continue() and
> address_space_map() to make the host read changed content between different
> DMA actions.
> 
> My question is about address_space_map() -- How do we emulate double-fetch
> bugs in the same map/unmap session? For example:
> 

Hi Qiuhao,
Right now we don't. One strategy would be to use mprotect. When the code
fetches the data the first time, we get a SEGV; in the handler we
unprotect the page, write a pattern, and enable single-stepping. Then,
after the single step, we re-protect the page and disable
single-stepping.
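
In case it helps, here is roughly what that would look like in plain
user space (just a sketch, not QEMU code: segv_handler(), trap_handler()
and fill_pattern() are made-up names, and the trap-flag handling assumes
x86-64 Linux, where TF is bit 0x100 of REG_EFL; error checking omitted):

/*
 * Sketch of the mprotect + single-step idea for catching re-reads of a
 * "DMA" page. The protected page stands in for the mapped region.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <ucontext.h>
#include <unistd.h>

static void *protected_page;   /* stands in for the mapped DMA region */
static size_t page_size;

static void fill_pattern(void *p, size_t len)
{
    static uint8_t seed;
    memset(p, seed++, len);    /* placeholder: a fresh pattern per fault */
}

/* First fetch hits the protected page: unprotect, fill, single-step. */
static void segv_handler(int sig, siginfo_t *info, void *ctx)
{
    ucontext_t *uc = ctx;
    mprotect(protected_page, page_size, PROT_READ | PROT_WRITE);
    fill_pattern(protected_page, page_size);
    uc->uc_mcontext.gregs[REG_EFL] |= 0x100;   /* set the x86 trap flag */
}

/* After the faulting instruction retires: re-protect, stop stepping. */
static void trap_handler(int sig, siginfo_t *info, void *ctx)
{
    ucontext_t *uc = ctx;
    mprotect(protected_page, page_size, PROT_NONE);
    uc->uc_mcontext.gregs[REG_EFL] &= ~0x100;  /* clear the trap flag */
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    protected_page = mmap(NULL, page_size, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = { .sa_flags = SA_SIGINFO };
    sigemptyset(&sa.sa_mask);
    sa.sa_sigaction = segv_handler;
    sigaction(SIGSEGV, &sa, NULL);
    sa.sa_sigaction = trap_handler;
    sigaction(SIGTRAP, &sa, NULL);

    /* Each access faults, gets a fresh pattern, and is re-protected
     * right after, so two reads of the same field can see different
     * values, which is exactly the double-fetch window we want. */
    uint64_t first  = *(volatile uint64_t *)protected_page;
    uint64_t second = *(volatile uint64_t *)protected_page;
    return first == second;    /* the two reads normally differ here */
}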

On OSS-Fuzz, we disabled double-fetch detection for now, as we did not
want reproducers for normal bugs to inadvertently contain
double-fetches. To make the double-fetch detection useful for
developers, we probably need to limit the double-fetch capability to
only fill the DMA regions twice, rather than 10 or 20 times. Then, in
the report, we could give the call-stacks (from the SEGV handler or the
dma_read hook) of the exact locations in the code that read from the
same address twice.
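
The reporting part could be as simple as remembering the call-stack of
the first read of each address and dumping both stacks when the same
address is read again. Again only a sketch: record_dma_read() and its
fixed-size table are hypothetical, and it uses glibc backtrace() rather
than whatever the fuzzer would actually hook into:

/*
 * Sketch of a hook that would be called from the SEGV handler or the
 * dma_read callback. It prints both call stacks when an address is
 * read a second time.
 */
#include <execinfo.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_READS  1024
#define MAX_FRAMES 16

typedef struct {
    uint64_t addr;
    void *stack[MAX_FRAMES];
    int depth;
} dma_read_rec;

static dma_read_rec reads[MAX_READS];
static int nreads;

static void record_dma_read(uint64_t addr)
{
    for (int i = 0; i < nreads; i++) {
        if (reads[i].addr == addr) {
            void *now[MAX_FRAMES];
            fprintf(stderr, "double fetch at DMA addr 0x%" PRIx64 "\n", addr);
            fprintf(stderr, "first read:\n");
            backtrace_symbols_fd(reads[i].stack, reads[i].depth, 2);
            fprintf(stderr, "second read:\n");
            backtrace_symbols_fd(now, backtrace(now, MAX_FRAMES), 2);
            return;
        }
    }
    if (nreads < MAX_READS) {
        reads[nreads].addr = addr;
        reads[nreads].depth = backtrace(reads[nreads].stack, MAX_FRAMES);
        nreads++;
    }
}

int main(void)
{
    record_dma_read(0x1000);   /* first read: stack recorded */
    record_dma_read(0x1000);   /* second read: reported as a double fetch */
    return 0;
}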

Thanks for your interest in this!
-Alex

> 
>   FOO *guest_foo = (FOO *) address_space_map(as, ...);

// mprotect in address_space_map hook   

// SEGV on the read. Un-mprotect, fill with pattern
>   uint64_t size = guest_foo->size;    // first fetch

// Single Step. Re-mprotect (or you could just immediately fill with a
// new pattern)

>   if (size > limit)
>     goto error;
>   
>   /* time window */
>   

// SEGV
>   memcpy(dest, src, guest_foo->size); // double-fetch ?
>   
>   error:
>   address_space_unmap(as, guest_foo, ...)
> 
> 
> Thanks,
>   Qiuhao Li


