 
 From:  Charles Trevor <ct dot lists at qgsltd dot co dot uk>
 To:  m0n0wall at lists dot m0n0 dot ch
 Subject:  Hardware sizing
 Date:  Wed, 15 Nov 2006 21:41:44 +0000
Hi All,

I posted a little while ago asking what hardware would be required to 
allow a throughput of around 100mbit between interfaces. I got some 
interesting answers and have gone away and done some testing and 
experimentation and thought people might be interested.

My test environment has been two laptops, one on either side of the 
firewall: a 1.6 GHz Centrino running CentOS 4 and a 2 GHz P4 running 
FreeBSD 6.1, with the throughput testing carried out by Iperf running 
all default settings (http://dast.nlanr.net/Projects/Iperf/). The 
Centrino has a 1 Gbit NIC, the P4 has a 100 Mbit NIC, and all are 
connected together with Cat5e through Netgear FS116 and FS105 100 Mbit 
switches. Neither laptop has broken much of a sweat in any test, with 
load averages around 0.5.
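
For anyone wanting to reproduce this, the runs below are just iperf's 
stock TCP mode; a minimal sketch (the server IP is from my lab 
addressing):

```shell
# On the receiving laptop: listen on iperf's default TCP port 5001
iperf -s

# On the sending laptop: 60-second TCP test, all defaults (16 KB window)
iperf -c 192.168.11.10 -t 60
```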

In testing, my WRAPs running m0n0 1.22 are not able to reach the 35 Mbit 
Chris B has been able to achieve 
(http://lists.soekris.com/pipermail/soekris-tech/2005-April/008125.html).
With no traffic shaping and with device polling I am able to achieve 
approx 18 Mbit at best, with a CPU load of 0-3%. Without device polling 
I get an average of around 16 Mbit and a CPU load of 100%.
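
(For the curious: m0n0wall flips polling on from its GUI, but the 
underlying FreeBSD knobs, per polling(4), are sysctls along these 
lines -- a sketch, not values I have tuned beyond the defaults:)

```shell
# Turn device polling on globally (for drivers built with DEVICE_POLLING)
sysctl kern.polling.enable=1

# Fraction of CPU time reserved for userland while polling (default 50);
# polling frequency itself is tied to the kernel HZ setting
sysctl kern.polling.user_frac=50
```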

I also tested a two-year-old HP ML110 G1 with a 2.6 GHz Celeron, also 
running m0n0 1.22 from CF. This has two 2-port Intel Pro/1000 MT Server 
adapters using the em driver and one onboard 1 Gbit adapter using bge. 
Identical throughput (93 Mbit) is achieved with and without device 
polling, with CPU usage being 0-1% with polling enabled and 25-30% 
without, and no difference is observed whether two ports on the same em 
card or a port on each em card are used. If the bge card and one em 
port are used, throughput is again around 92.8-93 Mbit and CPU usage is 
8-15% with device polling and 25-35% without.

Testing laptop to laptop via the switches also gives me 93.2 Mbit, so I 
think the bottleneck is either the switches or the laptop's 100 Mbit 
fxp adapter. I don't have gigabit switches in the lab to play with, so 
I can't push the hardware any harder, but from observation it would 
seem the ML110 will push several hundred Mbit quite easily with the 
Intel [...] worth of network cards added!


Any observations gratefully received. If the WRAPs can be tweaked to 
gain better throughput I'd love to hear any tips.

Charlie

**WRAP**
## Device polling enabled
[root@ctlaptop charlest]# iperf -c 192.168.11.10 -t 60
------------------------------------------------------------
Client connecting to 192.168.11.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.201 port 39900 connected with 192.168.11.10 port 5001
[  3]  0.0-60.0 sec    133 MBytes  18.6 Mbits/sec

## No device polling
------------------------------------------------------------
Client connecting to 192.168.11.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.201 port 42928 connected with 192.168.11.10 port 5001
[  3]  0.0-60.0 sec    115 MBytes  16.1 Mbits/sec

**ML110**
[root@ctlaptop charlest]# iperf -c 192.168.30.10 -t 60
------------------------------------------------------------
Client connecting to 192.168.30.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.201 port 45176 connected with 192.168.30.10 port 5001
[  3]  0.0-60.0 sec    666 MBytes  93.2 Mbits/sec

**Laptop to Laptop**
[root@ctlaptop charlest]# iperf -c 192.168.10.190 -t 60
------------------------------------------------------------
Client connecting to 192.168.10.190, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.10.201 port 46145 connected with 192.168.10.190 port 5001
[  3]  0.0-60.0 sec    667 MBytes  93.2 Mbits/sec