
Re: [lwip-users] Zero Copy Ethernet interface


From: Andrew Dennison
Subject: Re: [lwip-users] Zero Copy Ethernet interface
Date: Thu, 20 Sep 2007 10:17:51 +1000

On 9/20/07, Jonathan Larmour <address@hidden> wrote:
> Paul Black wrote:
> > I'm trying to get to grips with lwip and one of the things I'm looking
> > at is how data moves between the stack and the ethernet interface.
> >
> > For input:
> > The examples ethernetif.c has a function called low_level_input which
> > copies data from somewhere into a chain of pbufs that are then passed up.
> >
> > I'm thinking that I can do something like the following:
> >  - Allocate several pbufs in advance for incoming packets: can I
> >    allocate them as single (unchained) pbufs? Otherwise I would need
> >    to dechain them. How do I allocate a pbuf of maximum size, or find
> >    the maximum space available in a single pbuf?
> >  - When the packet comes in, find out how many pbufs were used - I'm
> >    guessing I can then chain them together again with pbuf_cat()?
> >  - Pass this to wherever.
>
> Yes. I've implemented zero copy receives in a way similar to this (although
> I had to subvert the pbuf API, and fiddle the struct pbuf contents myself).
>   It's true you could preallocate your pool of pbufs each with the full MTU
> size, but I went with chains of pbufs of a smaller (but still fixed) size,
> as my hardware could cope with that; thus using far less space for the
> (very frequent) smaller packets.
>
> My hardware[1] uses a circular list of buffer descriptors, so I also did
> the equivalent of pbuf_cat myself in the driver too. This also meant that
> when a packet is received, I get another pbuf from the pool and put its
> pbuf payload pointer in the hardware's buffer descriptor, thus ensuring the
> hardware buffers remain full of packet buffers.
>
> Personally I did this with some modifications imposing extra constraints on
> the pbuf pool, which I did using a new override macro, along with a hook
> which I added to PBUF_POOL_FAST_FREE. This is because of both the buffer
> fiddling and extra alignment and positioning constraints on the payload.
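The refill scheme Jifl describes can be sketched as follows. This is a self-contained toy model, not his actual driver: struct pbuf and the pool allocator are simplified stand-ins for lwIP's real ones, and the buffer-descriptor layout (struct bd, ring) is hypothetical hardware.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 8
#define BUF_SIZE  256   /* smaller than the MTU, as in Jifl's setup */
#define RING_SIZE 4

/* Simplified stand-in for lwIP's struct pbuf (real pbufs carry
 * ref counts, flags and tot_len). */
struct pbuf {
    struct pbuf *next;
    void *payload;
    uint16_t len;
    uint8_t data[BUF_SIZE];
};

static struct pbuf pool[POOL_SIZE];
static struct pbuf *free_list;

static void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].payload = pool[i].data;
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

static struct pbuf *pool_alloc(void) {
    struct pbuf *p = free_list;
    if (p) { free_list = p->next; p->next = NULL; }
    return p;
}

/* One RX buffer descriptor: the MAC DMAs into bd.payload. */
struct bd { void *payload; struct pbuf *owner; };
static struct bd ring[RING_SIZE];

static void ring_init(void) {
    for (int i = 0; i < RING_SIZE; i++) {
        struct pbuf *p = pool_alloc();
        ring[i].owner = p;
        ring[i].payload = p->payload;
    }
}

/* On packet arrival at slot i: hand the filled pbuf up the stack and
 * immediately install a fresh pbuf's payload pointer in the
 * descriptor, so the hardware ring never runs dry. */
static struct pbuf *rx_take_and_refill(int i, uint16_t len) {
    struct pbuf *filled = ring[i].owner;
    filled->len = len;
    struct pbuf *fresh = pool_alloc();
    ring[i].owner = fresh;
    ring[i].payload = fresh ? fresh->payload : NULL;
    return filled;
}
```

A frame spanning several descriptors would then be stitched back together (the equivalent of pbuf_cat) before being passed to netif->input().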

Sounds like a nice way to do this. I implemented zero copy in a simple
way in the driver I just wrote:

input_thread_loop:
    pbuf_alloc() 1514 bytes
    pass pbuf to driver; block waiting for a packet, then DMA from device
    pbuf_realloc() <- trim to actual length
    netif->input()
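One pass of that input loop, fleshed out as a self-contained sketch: the pbuf_alloc()/pbuf_realloc() stand-ins only mimic the relevant semantics (allocate at full frame size, shrink-only trim), and driver_rx() is a hypothetical blocking driver call, not part of lwIP.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for lwIP's struct pbuf. */
struct pbuf { void *payload; uint16_t len; uint16_t tot_len; };

static struct pbuf *mock_pbuf_alloc(uint16_t len) {
    struct pbuf *p = malloc(sizeof *p + len);
    p->payload = p + 1;            /* data follows the header */
    p->len = p->tot_len = len;
    return p;
}

/* Shrink-only, like pbuf_realloc(): trim trailing space once the
 * driver reports the actual frame length. */
static void mock_pbuf_realloc(struct pbuf *p, uint16_t new_len) {
    if (new_len < p->tot_len) { p->len = new_len; p->tot_len = new_len; }
}

/* Hypothetical blocking driver call: DMA a frame into p->payload and
 * return the received length. */
static uint16_t driver_rx(struct pbuf *p) {
    const char frame[] = "example-frame";
    memcpy(p->payload, frame, sizeof frame);
    return (uint16_t)sizeof frame;
}

/* Allocate at the maximum Ethernet frame size, block for a packet,
 * trim, then the result would go to netif->input(). */
static struct pbuf *input_once(void) {
    struct pbuf *p = mock_pbuf_alloc(1514);
    uint16_t got = driver_rx(p);
    mock_pbuf_realloc(p, got);     /* trim to actual length */
    return p;
}
```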

low_level_output:
    pass pbuf to driver; block waiting for space in device, then DMA to device
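The output side can be modelled the same way. This toy version assumes a device with an internal packet buffer, as the post describes; the names (dev_fifo, dev_drain) are illustrative, and in a real driver the "wait for space" step would block on an OS semaphore signalled by the TX-done interrupt rather than drain synchronously.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for lwIP's struct pbuf. */
struct pbuf { void *payload; uint16_t tot_len; };

#define FIFO_SIZE 2048
static uint8_t dev_fifo[FIFO_SIZE];   /* device's internal packet buffer */
static size_t  dev_used;

/* Stand-in for the device transmitting and freeing its buffer. */
static void dev_drain(void) { dev_used = 0; }

static int low_level_output(struct pbuf *p) {
    if (dev_used + p->tot_len > FIFO_SIZE)
        dev_drain();                  /* stand-in for blocking on space */
    /* "DMA" the payload straight from the pbuf: no intermediate copy
     * into a driver-owned buffer. */
    memcpy(dev_fifo + dev_used, p->payload, p->tot_len);
    dev_used += p->tot_len;
    return 0;                         /* ERR_OK in lwIP terms */
}
```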

Note that this relies on some simple OS services and a device with
internal packet buffers. It doesn't suit on-chip MACs as nicely as
Jifl's method, but it does use the existing API, which had some appeal
for my first experiment with lwIP.



