
[lwip-users] during Broadcast Storm pbuf_alloc returns Zero, release pbuf?

From: fred
Subject: [lwip-users] during Broadcast Storm pbuf_alloc returns Zero, release pbuf?
Date: Fri, 5 Apr 2013 02:54:40 -0700 (PDT)

Hi Guys

I have a problem with my Ethernet device using lwIP. I am running it under eCos
with the latest lwIP driver.
When I trigger a broadcast storm I get the following failure:
in eth_drv.c, the eth_drv_recv() function can no longer allocate
any pbuf, and my Ethernet device stops responding.

I can see that receive interrupts still arrive, but I cannot process them
because the ownership bits of my buffer-queue descriptors are never reset.
That reset would normally happen in the (sc->funs->recv)(sc, sg_list, sg_len)
call inside my driver's recv() function. But because pbuf_alloc() fails, my
driver's recv() function is never reached and eth_drv_recv() simply returns
without doing anything.

static void
eth_drv_recv(struct eth_drv_sc *sc, int total_len)
{
  struct eth_drv_sg sg_list[MAX_ETH_DRV_SG];
  struct netif *netif = &sc->sc_arpcom.ac_if;
  struct pbuf *p, *q;
  int sg_len = 0;

  if ((total_len > MAX_ETH_MSG) || (total_len < 0)) {
    total_len = MAX_ETH_MSG;
  }

  p = pbuf_alloc(PBUF_RAW, total_len, PBUF_POOL);
  if (p == NULL) {
    LWIP_DEBUGF(0, ("ecosif_input: low_level_input returned NULL\n"));
    return;  /* allocation failed: the driver's recv() is never called */
  }

  /* Build a scatter-gather list from the pbuf chain. */
  for (q = p; q != NULL; q = q->next) {
    sg_list[sg_len].buf = (CYG_ADDRESS) q->payload;
    sg_list[sg_len++].len = q->len;
  }

  (sc->funs->recv) (sc, sg_list, sg_len);
  ecosif_input(netif, p);
}

How can I handle this problem? Can I flush all the lwIP buffers? I can't
raise the pbuf count or size because my device does not have enough memory
for it. Is there any way to tell the stack to reset its buffers, or
something like that?
I have activated the SYS_ARCH_PROTECT defines in the lwIP stack for critical
sections, so it is thread-safe.
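For reference, the pool that pbuf_alloc(PBUF_RAW, ..., PBUF_POOL) draws from is sized at compile time in lwipopts.h. If even a little RAM can be spared, more small pool buffers often survive a storm better than fewer large ones, since each broadcast frame is small. The values below are purely illustrative, not recommendations:

```c
/* lwipopts.h -- illustrative values, tune to your RAM budget */
#define PBUF_POOL_SIZE     16   /* number of buffers in PBUF_POOL      */
#define PBUF_POOL_BUFSIZE  256  /* size of each pool buffer, in bytes  */
```

With PBUF_POOL a chained allocation spans several buffers, so shrinking PBUF_POOL_BUFSIZE while raising PBUF_POOL_SIZE keeps large frames working at the cost of longer pbuf chains.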

Best regards 

Sent from the lwip-users mailing list archive at Nabble.com.
