From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH] scsi: esp: check TI buffer index before read/write
Date: Tue, 31 May 2016 08:58:33 +0100

On 31 May 2016 at 06:20, P J P <address@hidden> wrote:
>   Hello Peter,
>
> +-- On Mon, 30 May 2016, Peter Maydell wrote --+
> | > +            } else if (s->ti_rptr < TI_BUFSZ) {
> | >                  s->rregs[ESP_FIFO] = s->ti_buf[s->ti_rptr++];
> | > +            } else {
> | > +                trace_esp_error_fifo_overrun();
> |
> | Isn't this an underrun, not an overrun?
>
>   OOB read occurs when 's->ti_rptr' reaches or exceeds 'TI_BUFSZ'.

So? This is a FIFO *read*, and the trace message is for
the case when the FIFO is too full, which can't happen
for a read.
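
To spell out the distinction, here is a minimal sketch (not the esp.c
code itself; the FifoSketch type and helper names below are made up for
illustration): a write into a full FIFO is an overrun, a read from an
empty FIFO is an underrun, and the read path can never hit the overrun
case.

#include <stdint.h>

#define TI_BUFSZ 16              /* illustrative; see hw/scsi/esp.h */

typedef struct {
    uint8_t buf[TI_BUFSZ];
    int rptr, wptr, count;       /* count = bytes currently in the FIFO */
} FifoSketch;

/* Read path: the only possible error is an *underrun* (FIFO empty). */
static uint8_t fifo_sketch_read(FifoSketch *s)
{
    uint8_t val;

    if (s->count == 0) {
        return 0;                /* underrun: nothing to read */
    }
    val = s->buf[s->rptr];
    s->rptr = (s->rptr + 1) % TI_BUFSZ;
    s->count--;
    return val;
}

/* Write path: the only possible error is an *overrun* (FIFO full). */
static void fifo_sketch_write(FifoSketch *s, uint8_t val)
{
    if (s->count == TI_BUFSZ) {
        return;                  /* overrun: no space left */
    }
    s->buf[s->wptr] = val;
    s->wptr = (s->wptr + 1) % TI_BUFSZ;
    s->count++;
}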

> | In any case, something weird seems to be going on here:
> | it looks like the amount of data in the fifo should be
> | constrained by ti_size (which we're already checking), so
> | when can we get into a situation where ti_rptr can
> | get beyond the buffer size? It seems to me that we should
> | fix whatever that underlying bug is, and then have an
> | assert() on ti_rptr here.
> |
> | > -        } else if (s->ti_size == TI_BUFSZ - 1) {
> | > +        } else if (s->ti_wptr == TI_BUFSZ - 1) {
> | >              trace_esp_error_fifo_overrun();
> |
> | Similarly, this looks odd -- the ti_size check should be
> | sufficient if the rest of the code is correctly managing
> | the ti_size/ti_wptr/ti_rptr values.
>
>   Both issues occur because the guest can control the value of 's->ti_size' by
> alternating between 'esp_reg_write(, ESP_FIFO, )' & 'esp_reg_read(, ESP_FIFO)'
> calls. One increases 's->ti_size' and the other decreases it.

OK, so we need to fix these code paths so that they keep
all of the read, write and size values in sync (or just
rewrite the code so it isn't tracking three values, since
any two of the three determine the third).
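
For what it's worth, the second option could look roughly like this
(a sketch only, not a proposed patch; the FifoSketch2 type and helper
names are made up, and TI_BUFSZ is as in the sketch earlier in this
mail):

#include <assert.h>
#include <stdint.h>

typedef struct {
    uint8_t buf[TI_BUFSZ];
    int rptr, count;             /* wptr is derived, so it cannot drift */
} FifoSketch2;

/* Derive the write index from the two tracked values. */
static inline int fifo_sketch2_wptr(const FifoSketch2 *s)
{
    return (s->rptr + s->count) % TI_BUFSZ;
}

static void fifo_sketch2_push(FifoSketch2 *s, uint8_t val)
{
    assert(s->count < TI_BUFSZ);     /* caller has already handled overrun */
    s->buf[fifo_sketch2_wptr(s)] = val;
    s->count++;
}

static uint8_t fifo_sketch2_pop(FifoSketch2 *s)
{
    uint8_t val;

    assert(s->count > 0);            /* caller has already handled underrun */
    val = s->buf[s->rptr];
    s->rptr = (s->rptr + 1) % TI_BUFSZ;
    s->count--;
    return val;
}

With only two values stored, a guest alternating FIFO reads and writes
can no longer push the indices out of step with the size.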

thanks
-- PMM
