

From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH for-2.4 06/12] etsec: Flush queue when rx buffer is consumed
Date: Wed, 15 Jul 2015 13:10:57 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0


On 07/14/2015 05:48 PM, Fam Zheng wrote:
> On Tue, 07/14 17:33, Jason Wang wrote:
>>
>> On 07/14/2015 03:53 PM, Fam Zheng wrote:
>>> The BH will be scheduled when etsec->rx_buffer_len becomes 0, which
>>> is the condition for queuing.
>>>
>>> Signed-off-by: Fam Zheng <address@hidden>
>> I suggest squashing this into patch 05/12 to avoid breaking bisection.
> 05/12 didn't change the logic, it just moved the semantics from returning 0 in
> .can_receive() to .receive(). So I think it's OK to keep them split to make
> reviewing easier.

Ok.

>
>> And can we do this without a BH? Otherwise, we may need to stop and
>> restart the BH across VM stop and start?
> A BH doesn't hurt across VM stop and restart (we just get a superfluous flush),

The problem is that qemu_flush_queued_packets() does not check the runstate,
so it may call .receive(), which may modify guest state (DMA or registers).

> otherwise the call stack could go very deep with a long queue, something like:
>
>     etsec_receive
>       etsec_rx_ring_write
>         etsec_walk_rx_ring
>           etsec_flush_rx_queue
>             qemu_flush_queued_packets
>               ...
>                 etsec_receive
>                   etsec_rx_ring_write
>                     etsec_walk_rx_ring
>                       etsec_flush_rx_queue
>                         qemu_flush_queued_packets
>                           /* loop */
>                             ...

I get the point, thanks.

>>> ---
>>>  hw/net/fsl_etsec/etsec.c | 9 +++++++++
>>>  hw/net/fsl_etsec/etsec.h | 2 ++
>>>  hw/net/fsl_etsec/rings.c | 1 +
>>>  3 files changed, 12 insertions(+)
>>>
>>> diff --git a/hw/net/fsl_etsec/etsec.c b/hw/net/fsl_etsec/etsec.c
>>> index f5170ae..fcde9b4 100644
>>> --- a/hw/net/fsl_etsec/etsec.c
>>> +++ b/hw/net/fsl_etsec/etsec.c
>>> @@ -366,6 +366,13 @@ static NetClientInfo net_etsec_info = {
>>>      .link_status_changed = etsec_set_link_status,
>>>  };
>>>  
>>> +static void etsec_flush_rx_queue(void *opaque)
>>> +{
>>> +    eTSEC *etsec = opaque;
>>> +
>>> +    qemu_flush_queued_packets(qemu_get_queue(etsec->nic));
>>> +}
>>> +
>>>  static void etsec_realize(DeviceState *dev, Error **errp)
>>>  {
>>>      eTSEC        *etsec = ETSEC_COMMON(dev);
>>> @@ -378,6 +385,8 @@ static void etsec_realize(DeviceState *dev, Error **errp)
>>>      etsec->bh     = qemu_bh_new(etsec_timer_hit, etsec);
>>>      etsec->ptimer = ptimer_init(etsec->bh);
>>>      ptimer_set_freq(etsec->ptimer, 100);
>>> +
>>> +    etsec->flush_bh = qemu_bh_new(etsec_flush_rx_queue, etsec);
>>>  }
>>>  
>>>  static void etsec_instance_init(Object *obj)
>>> diff --git a/hw/net/fsl_etsec/etsec.h b/hw/net/fsl_etsec/etsec.h
>>> index fc41773..05bb7f8 100644
>>> --- a/hw/net/fsl_etsec/etsec.h
>>> +++ b/hw/net/fsl_etsec/etsec.h
>>> @@ -144,6 +144,8 @@ typedef struct eTSEC {
>>>      QEMUBH *bh;
>>>      struct ptimer_state *ptimer;
>>>  
>>> +    QEMUBH *flush_bh;
>>> +
>>>  } eTSEC;
>>>  
>>>  #define TYPE_ETSEC_COMMON "eTSEC"
>>> diff --git a/hw/net/fsl_etsec/rings.c b/hw/net/fsl_etsec/rings.c
>>> index a11280b..7aae93e 100644
>>> --- a/hw/net/fsl_etsec/rings.c
>>> +++ b/hw/net/fsl_etsec/rings.c
>>> @@ -646,6 +646,7 @@ void etsec_walk_rx_ring(eTSEC *etsec, int ring_nbr)
>>>      } else {
>>>          etsec->rx_buffer_len = 0;
>>>          etsec->rx_buffer     = NULL;
>>> +        qemu_bh_schedule(etsec->flush_bh);
>>>      }
>>>  
>>>      RING_DEBUG("eTSEC End of ring_write: remaining_data:%zu\n", remaining_data);
