
Re: [lwip-users] lwip whole performance down


From: Bill Auerbach
Subject: Re: [lwip-users] lwip whole performance down
Date: Wed, 16 May 2012 12:25:09 -0400

Vincent,

 

In my experience, most performance issues have been in my driver or hardware platform, not in lwIP.  I don’t use a real OS so I can’t speak to its added overhead.

 

I do note you changed PBUF_POOL_BUFSIZE.  I believe that if it is smaller than the default, received packets get stored in chained pbufs, which is a little less efficient than one pbuf per packet.

 

Bill

 

From: address@hidden [mailto:address@hidden On Behalf Of vincent cui
Sent: Wednesday, May 16, 2012 1:49 AM
To: Mailing list for lwIP users
Subject: Re: [lwip-users] lwip whole performance down

 

Hi :

 

I enabled LWIP_STATS to dump lwIP's statistics when the performance goes down, but no errors seem to be detected …

Could anyone help take a look?

 

ETHARP
         xmit: 2
         recv: 761
         fw: 0
         drop: 0
         chkerr: 0
         lenerr: 0
         memerr: 0
         rterr: 0
         proterr: 0
         opterr: 0
         err: 0
         cachehit: 1750

IP
         xmit: 1753
         recv: 1971
         fw: 0
         drop: 0
         chkerr: 0
         lenerr: 0
         memerr: 0
         rterr: 0
         proterr: 0
         opterr: 0
         err: 0
         cachehit: 0

TCP
         xmit: 1
         recv: 1757
         fw: 0
         drop: 0
         chkerr: 0
         lenerr: 0
         memerr: 0
         rterr: 0
         proterr: 0
         opterr: 0
         err: 0
         cachehit: 0

MEM HEAP
         avail: 12288
         used: 1676
         max: 1760
         err: 0

MEM RAW_PCB
         avail: 4
         used: 0
         max: 0
         err: 0

MEM UDP_PCB
         avail: 6
         used: 1
         max: 2
         err: 0

MEM TCP_PCB
         avail: 16
         used: 1
         max: 2
         err: 0

MEM TCP_PCB_LISTEN
         avail: 6
         used: 4
         max: 4
         err: 0

MEM TCP_SEG
         avail: 32
         used: 0
         max: 1
         err: 0

MEM NETBUF
         avail: 2
         used: 0
         max: 0
         err: 0

MEM NETCONN
         avail: 8
         used: 2
         max: 2
         err: 0

MEM TCPIP_MSG_API
         avail: 8
         used: 0
         max: 0
         err: 0

MEM TCPIP_MSG_INPKT
         avail: 8
         used: 1
         max: 1
         err: 0

MEM SYS_TIMEOUT
         avail: 10
         used: 6
         max: 6
         err: 0

MEM SNMP_ROOTNODE
         avail: 30
         used: 0
         max: 0
         err: 0

MEM SNMP_NODE
         avail: 50
         used: 0
         max: 0
         err: 0

MEM SNMP_VARBIND
         avail: 2
         used: 0
         max: 0
         err: 0

MEM SNMP_VALUE
         avail: 3
         used: 0
         max: 0
         err: 0

MEM PBUF_REF/ROM
         avail: 16
         used: 0
         max: 0
         err: 0

MEM PBUF_POOL
         avail: 10
         used: 2
         max: 2
         err: 0

 

From: address@hidden [mailto:address@hidden On Behalf Of vincent cui
Sent: May 15, 2012 21:31
To: Mailing list for lwIP users
Subject: [lwip-users] lwip whole performance down

 

Hi:

 

I am working with lwIP 1.4.0 and FreeRTOS v7.1.1 to set up a web server using the socket API, and I found that the sending speed drops.

To measure the actual sending speed, I wrote a simple test application; it can send at up to 8 Mb/s, which is high enough for me.

I notice that the test application sets up its connection only once, whereas the web server closes the connection after sending all the data out. I don't know whether that could cause the performance drop. I have extended the send buffers as far as I can. They are the following:

 

#define TCP_MSS                 (1500 - 40)   /* TCP_MSS = (Ethernet MTU - IP header size - TCP header size) */

 

/* TCP sender buffer space (bytes). */

#define TCP_SND_BUF             (8*TCP_MSS)

 

/*  TCP_SND_QUEUELEN: TCP sender buffer space (pbufs). This must be at least

  as much as (2 * TCP_SND_BUF/TCP_MSS) for things to work. */

 

#define TCP_SND_QUEUELEN        (4* TCP_SND_BUF/TCP_MSS)

 

/* TCP receive window. */

#define TCP_WND                 (4*TCP_MSS)

 

/* ---------- Pbuf options ---------- */

/* PBUF_POOL_SIZE: the number of buffers in the pbuf pool. */

#define PBUF_POOL_SIZE          10

 

/* PBUF_POOL_BUFSIZE: the size of each pbuf in the pbuf pool. */

#define PBUF_POOL_BUFSIZE       1024

 

/* MEMP_NUM_TCP_SEG: the number of simultaneously queued TCP

   segments. */

#define MEMP_NUM_TCP_SEG        64

 

/* MEMP_NUM_TCP_PCB: the number of simultaneously active TCP

   connections. */

#define MEMP_NUM_TCP_PCB        32

 

/* MEM_SIZE: the size of the heap memory. If the application will send

a lot of data that needs to be copied, this should be set high. */

#define MEM_SIZE                (12*1024)

 

/* MEMP_NUM_PBUF: the number of memp struct pbufs. If the application

   sends a lot of data out of ROM (or other static memory), this

   should be set high. */

#define MEMP_NUM_PBUF           64

 

Those values are big enough for me… so why don't I get better performance?

 

vincent