Re: [Nmh-workers] Time to branch 1.6?


From: Mikhail
Subject: Re: [Nmh-workers] Time to branch 1.6?
Date: Thu, 10 Apr 2014 18:12:24 +0400

>I see some people are starting to push some last-minute bug fixes; thanks,
>everyone!  So, let me ask ... is there anything else people want to cram
>into 1.6?  My personal feature list is complete; does anyone have anything
>else they want to get into 1.6?  Any other bug fixes, minor cleanup, etc etc?
>
>If no one has anything else, I have some time this afternoon and I could start
>the release cycle and get 1.6-RC1 out the door.

With the latest git I have lost the ability to view some email messages
(I attach an example [message 19]; it was taken from a public FreeBSD
mailing list).

The error is:
mhshow: unable to convert character set to , continuing...
part       text/plain                5224

The same thing happens with emails from the Apple Mail client, but the
error is slightly different (attached as message 36, again from a public
FreeBSD mailing list):

mhshow: unable to convert character setof part  to 1, continuing...
part 1     text/plain                1102

I've seen such errors before with git, but at least the content was
displayed. 1.5 shows these messages fine, just with <92> instead of a
proper apostrophe.
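
For reference, <92> is the WINDOWS-1252 right single quotation mark,
which becomes U+2019 once converted to UTF-8. A minimal sketch of the
kind of iconv(3) conversion involved (the charset names and buffer size
are illustrative, not nmh's actual code; on systems using GNU libiconv,
link with -liconv):

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char in[] = "don\x92t";       /* 0x92: WINDOWS-1252 apostrophe */
    char out[32];
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof(out) - 1;

    /* If the source charset name were empty or unknown (as the mhshow
     * error message suggests), iconv_open() would return (iconv_t)-1. */
    iconv_t cd = iconv_open("UTF-8", "WINDOWS-1252");
    if (cd == (iconv_t)-1) {
        perror("iconv_open");
        return 1;
    }
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
        perror("iconv");
    *outp = '\0';
    printf("%s\n", out);          /* prints don't, with U+2019 */
    iconv_close(cd);
    return 0;
}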

My locale:
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_ALL=
--- Begin Message ---
Subject: Re: Some gruesome moments with performance of FreeBSD at over 20K interfaces
Date: Thu, 10 Apr 2014 09:17:19 +0200
From experience with a large number of interfaces and configuring them:

It's not that the kernel cannot handle it; the problem is that you call
generic utilities to do the job. E.g., to set up an IP on an interface,
ifconfig first has to fetch the whole list of interfaces to determine
whether that interface exists, plus other checks.
This is what slows the whole thing down.

In pfSense, by using custom utilities, the time for configuring 8K
interfaces went from around 30 minutes to mere seconds, or about a minute.
It has been a long time since I tested such scenarios; if you can
generate a config (XML format) with all the information for pfSense, I
can take a look to see what the bottleneck is there.
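
A minimal sketch of the direct approach Ermal describes, using FreeBSD's
SIOCAIFADDR ioctl so the kernel itself reports ENXIO for a missing
interface, instead of the tool enumerating every interface first (the
interface name and addresses are illustrative, not pfSense's actual
code):

#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct ifaliasreq ifra;
    struct sockaddr_in *sin;
    int s;

    memset(&ifra, 0, sizeof(ifra));
    strlcpy(ifra.ifra_name, "ngeth0", sizeof(ifra.ifra_name));

    sin = (struct sockaddr_in *)&ifra.ifra_addr;
    sin->sin_len = sizeof(*sin);
    sin->sin_family = AF_INET;
    inet_pton(AF_INET, "172.18.0.1", &sin->sin_addr);

    sin = (struct sockaddr_in *)&ifra.ifra_mask;
    sin->sin_len = sizeof(*sin);
    sin->sin_family = AF_INET;
    inet_pton(AF_INET, "255.255.255.255", &sin->sin_addr);

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
        err(1, "socket");
    /* One syscall per address; no "list all interfaces" round trip. */
    if (ioctl(s, SIOCAIFADDR, &ifra) == -1)
        err(1, "SIOCAIFADDR ngeth0");
    close(s);
    return 0;
}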



On Wed, Apr 9, 2014 at 11:14 PM, Vladislav Prodan <address@hidden> wrote:

> Dear Colleagues!
>
> I had a task, using FreeBSD 10.0-STABLE:
> 1) Receive 20-30 Q-in-Q VLANs (IEEE 802.1ad), each carrying 2k-4k vlans
> (IEEE 802.1Q), for a total of ~60K vlans.
> 2) Assign ipv4 and ipv6 addresses to every vlan interface, define routes
> to the ipv4 and ipv6 addresses on the other side of each vlan (ip
> unnumbered), and also assign an ipv6 network of /64 size via the ipv6
> address on the other side of the vlan.
> 3) Perform routing from the world to all of these ipv4/ipv6 addresses and
> ipv6 networks inside the ~60K vlans.
>
>
>
> To accomplish the 1st task I have no alternative to using Netgraph.
> I noticed incorrect behavior of ngctl(8) after the addition of the 560th
> vlan (bin/187835).
> Then the speed of adding 4k, 8k, 12k vlans was damnably slow:
>         10 minutes for the first 4k vlans
>         18 minutes for the first 5k vlans
>         28 minutes for the first 6k vlans
>         52 minutes for the first 8k vlans
> Then I added 4k more vlans:
>         20 minutes - 9500 vlans
>         33 minutes - 10500 vlans
>         58 minutes - 12k vlans
>
> In total, the cumulative times to add 4k, 8k, and 12k vlans were
> 10m/52m/110m.
> It's hard to imagine how much time would be needed to add ~60K vlans :(
> The process was accelerated a little by shutting off the devd, bsnmpd,
> and ntpd services, but that revealed other problems and limitations.
>
> For example,
> a) The ntpd service refuses to start with 12K interfaces:
> ntpd[2195]: Too many sockets in use, FD_SETSIZE 16384 exceeded
> As a reminder, in /usr/src/sys/sys/select.h and
> /usr/include/sys/select.h the FD_SETSIZE value is only 1024U.
>
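
For context, the 1024 in those headers is only a default: select(2) on
FreeBSD lets a program define a larger FD_SETSIZE before including
<sys/select.h>, which is presumably how ntpd got to 16384. A minimal
sketch of the override (32768 is an arbitrary illustrative value):

/* Must come before any header that pulls in <sys/select.h>. */
#define FD_SETSIZE 32768

#include <sys/select.h>
#include <stdio.h>

int main(void)
{
    fd_set fds;           /* now sized for 32768 descriptors */

    FD_ZERO(&fds);
    FD_SET(20000, &fds);  /* would corrupt memory with the 1024 default */
    printf("FD_SETSIZE=%d, fd_set is %zu bytes\n",
           FD_SETSIZE, sizeof(fds));
    return 0;
}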
> b) The bsnmpd service started with 12K interfaces, but immediately
> loaded the CPU at 80-100%:
>
> last pid: 64011;  load averages:  1.00,  0.97,  0.90    up 0+05:25:39  21:26:36
> 58 processes:  3 running, 54 sleeping, 1 waiting
> CPU: 68.2% user,  0.0% nice, 30.6% system,  1.2% interrupt,  0.0% idle
> Mem: 125M Active, 66M Inact, 435M Wired, 200K Cache, 525M Free
> ARC: 66M Total, 28M MFU, 36M MRU, 16K Anon, 614K Header, 2035K Other
> Swap: 1024M Total, 1024M Free
>
>   PID USERNAME    THR PRI NICE   SIZE    RES STATE    TIME    WCPU COMMAND
> 63863 root          1  96    0   136M   119M RUN     35:31  79.98% bsnmpd
> ...
>
> c) The field widths in the output of netstat(1) - netstat -inW - are
> insufficient (bin/188153).
>
> d) When netstat is pointed at an interface, it's impossible to tell
> which ipv4/ipv6 networks are being shown:
>
> # netstat -I ngeth123.223 -nW
> Name      Mtu Network       Address              Ipkts Ierrs Idrop  Opkts Oerrs  Coll
> ngeth12  1500 <Link#8187>   08:00:27:cd:9b:8e        0     0     0      1     5     0
> ngeth12     - 172.18.206.13 172.18.206.139           0     -     -      0     -     -
> ngeth12     - fe80::a00:27f fe80::a00:27ff:fe        0     -     -      1     -     -
> ngeth12     - 2001:570:28:1 2001:570:28:140::        0     -     -      0     -     -
>
> e) Very slow output from the arp command:
> # ngctl list | grep ngeth | wc -l
>    12003
> # ifconfig -a | egrep -e 'inet ' | wc -l
>    12007
> # time /usr/sbin/arp -na > /dev/null
> 150.661u 551.002s 11:53.71 98.3%        20+172k 1+0io 0pf+0w
>
>
> More info at
> http://freebsd.1045724.n5.nabble.com/arp-8-performance-use-if-nameindex-instead-of-if-indextoname-td5898205.html
>
> After applying the patch, the speed became acceptable:
>
> # time /usr/sbin/arp -na > /dev/null
> 0.114u 0.090s 0:00.14 142.8%    20+170k 0+0io 0pf+0w
>
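
The patch referenced above swaps arp's per-entry if_indextoname() calls
for a single if_nameindex() snapshot of the interface name table. A
minimal sketch of that look-up-once idea (illustrative, not the actual
patch):

#include <net/if.h>
#include <stdio.h>

int main(void)
{
    /* Fetch every (index, name) pair once, instead of calling
     * if_indextoname() once per ARP entry. */
    struct if_nameindex *ifni = if_nameindex();

    if (ifni == NULL) {
        perror("if_nameindex");
        return 1;
    }
    for (struct if_nameindex *p = ifni;
         p->if_index != 0 && p->if_name != NULL; p++)
        printf("%u: %s\n", p->if_index, p->if_name);
    if_freenameindex(ifni);
    return 0;
}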
> I suspect that the throughput of the standard network stack will be too
> low to accomplish the 3rd task, routing for ~60K vlans.
> I have no idea how to use netmap(4) in this situation :(
> Please help me accomplish the assigned task.
>
> P.S.
> A Linux colleague is setting up the same task and bragging:
> on Debian, in a test (kernel 3.13), 80K vlans came up in 20 minutes,
> using 3 GB of RAM. Deleting these vlans also took 20 minutes.
>
> --
> Vladislav V. Prodan
> System & Network Administrator
> http://support.od.ua
> +380 67 4584408, +380 99 4060508
> VVP88-RIPE
> _______________________________________________
> address@hidden mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "address@hidden"




-- 
Ermal
_______________________________________________
address@hidden mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "address@hidden"

--- End Message ---
--- Begin Message ---
Subject: Re: http://heartbleed.com/
Date: Thu, 10 Apr 2014 13:33:47 +0300
On 8.4.2014, at 17.05, Dirk Engling <address@hidden> wrote:

> On 08.04.14 15:45, Mike Tancsa wrote:
> 
>>    I am trying to understand the implications of this bug in the
>> context of a vulnerable client connecting to a server that does not
>> have this extension, e.g. a client app linked against 1.xx that's
>> vulnerable talking to a server that is running something from RELENG_8
>> in the base (0.9.8.x).  Is the server still at risk?  Will the client
>> still bleed information?
> 
> If the adversary is in control of the network and can MITM the
> connection, then yes. The client leaks random chunks of up to 64k of
> memory, and that is for each heartbeat request the server sends.
> 
>  erdgeist
> 
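
For context on the mechanism described above: the bug was that OpenSSL's
heartbeat handler echoed back an attacker-supplied payload length without
checking it against the actual record size. A self-contained simulation
of the flaw and the fix (simplified record layout, not OpenSSL's actual
code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simulated heartbeat record: 1-byte type, 2-byte payload length,
 * then the payload itself. */
static void process_heartbeat(const uint8_t *rec, size_t rec_len)
{
    const uint8_t *p = rec;
    uint8_t hbtype = *p++;
    uint16_t payload = (uint16_t)((p[0] << 8) | p[1]); /* attacker-controlled */
    p += 2;

    if (hbtype != 1)    /* 1 = heartbeat request */
        return;

    /* The fix: reject records whose claimed payload exceeds what was
     * actually received. Without this check, the memcpy() below would
     * read up to 64KB of adjacent heap memory into the response. */
    if ((size_t)3 + payload > rec_len) {
        puts("malformed heartbeat: claimed payload exceeds record");
        return;
    }

    uint8_t *resp = malloc((size_t)3 + payload);
    if (resp == NULL)
        return;
    resp[0] = 2;                  /* 2 = heartbeat response */
    resp[1] = (uint8_t)(payload >> 8);
    resp[2] = (uint8_t)(payload & 0xff);
    memcpy(resp + 3, p, payload); /* safe only because of the check */
    printf("echoed %u payload bytes\n", payload);
    free(resp);
}

int main(void)
{
    /* Header claiming a 16384-byte payload, with no payload attached. */
    uint8_t evil[3] = { 1, 0x40, 0x00 };
    process_heartbeat(evil, sizeof(evil));
    return 0;
}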

Going back to this original report of the vulnerability: has it been 
established with certainty that the attacker would first need MITM capability 
to exploit the vulnerability? I’m asking this because MITM capability is not 
something that just any attacker has. Also, if this is true, then it can be 
argued that the severity of this vulnerability has been greatly exaggerated.

-Kimmo

Attachment: signature.asc
Description: Message signed with OpenPGP using GPGMail


--- End Message ---
