Re: [Qemu-devel] [PATCH] exec: check 'bounce.in_use' flag before using buffer


From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH] exec: check 'bounce.in_use' flag before using buffer
Date: Thu, 28 Jan 2016 15:30:14 +0000

On 28 January 2016 at 15:15, P J P <address@hidden> wrote:
> From: Prasad J Pandit <address@hidden>
>
> When IDE AHCI emulation uses the Frame Information Structures (FIS)
> engine for data transfer, the mapped FIS buffer address is stored
> in the static 'bounce.buffer'. This is freed when the FIS entry is
> unmapped. If multiple FIS entries are created, it leads to a
> use-after-free error. Check the 'bounce.in_use' flag to avoid it.
>
> Reported-by: Zuozhi fzz <address@hidden>
> Signed-off-by: Prasad J Pandit <address@hidden>
> ---
>  exec.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/exec.c b/exec.c
> index 8718a75..ccc5715 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2922,7 +2922,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
>          memory_region_unref(mr);
>          return;
>      }
> -    if (is_write) {
> +    if (bounce.in_use && is_write) {
>          address_space_write(as, bounce.addr, MEMTXATTRS_UNSPECIFIED,
>                              bounce.buffer, access_len);
>      }

This doesn't look right to me. The bounce buffer gets used
if address_space_map() is called on something which isn't
simple guest RAM. In this case address_space_map() will
set bounce.in_use to true and return bounce.buffer as the
mapped address. Then when the buffer is unmapped again,
address_space_unmap() will finish using the bounce buffer
and set bounce.in_use to false. You can only ever have one
user of the bounce buffer at a time because address_space_map()
will return NULL if it would need to use the bounce buffer
but somebody else owns it.
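
To make that ownership handshake concrete, here is a toy, self-contained
model of the flow described above. This is illustrative only, not QEMU's
exec.c: the names bounce_map()/bounce_unmap(), the fixed-size buffer and
the guest_mem array are assumptions for the example, but the single-owner
pattern is the one address_space_map()/address_space_unmap() follow.

    /* Toy model of the bounce-buffer handshake -- not QEMU code.
     * One static buffer; map() hands out ownership, unmap() writes
     * back and releases it.  A second map() while the buffer is
     * owned fails, so in_use can never legitimately be false when
     * unmap() sees the bounce buffer address. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BOUNCE_LEN 512

    static struct {
        uint8_t buffer[BOUNCE_LEN];
        size_t  addr;        /* "guest" address the mapping covers */
        bool    in_use;      /* true while one caller owns the buffer */
    } bounce;

    static uint8_t guest_mem[4096];   /* stand-in for non-direct memory */

    /* Map a region that cannot be accessed directly: use the bounce buffer. */
    static void *bounce_map(size_t addr, size_t len)
    {
        if (bounce.in_use || len > BOUNCE_LEN) {
            return NULL;               /* somebody else owns the buffer */
        }
        bounce.in_use = true;
        bounce.addr = addr;
        memcpy(bounce.buffer, &guest_mem[addr], len);   /* read side */
        return bounce.buffer;
    }

    /* Unmap: write back the caller's changes and release ownership. */
    static void bounce_unmap(void *buffer, size_t len, bool is_write)
    {
        if (buffer != bounce.buffer) {
            return;                    /* direct-RAM path, nothing to do */
        }
        if (is_write) {
            memcpy(&guest_mem[bounce.addr], bounce.buffer, len);
        }
        bounce.in_use = false;         /* released only after write-back */
    }

    int main(void)
    {
        void *p1 = bounce_map(0x100, 64);
        void *p2 = bounce_map(0x200, 64);   /* fails: buffer is owned */
        printf("first map: %p, second map: %p\n", p1, p2);
        memset(p1, 0xab, 64);               /* "device" writes to buffer */
        bounce_unmap(p1, 64, true);         /* write back, then release */
        printf("in_use after unmap: %d\n", bounce.in_use);
        return 0;
    }

In this model, as in the description above, the only way unmap() can see
the bounce buffer address while in_use is false is if something else has
already released or corrupted the state.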

So if we get into address_space_unmap() with a buffer
value of bounce.buffer but bounce.in_use is false then
something has already gone wrong. We need to figure out
what that is.

thanks
-- PMM


