 
 From:  Manuel Kasper <mk at neon1 dot net>
 To:  Lee Sharp <leesharp at hal dash pc dot org>
 Cc:  m0n0wall at lists dot m0n0 dot ch
 Subject:  Re: [m0n0wall] The "buffer bloat" issue
 Date:  Sat, 8 Jan 2011 20:27:37 +0100
On 08.01.2011, at 19:24, Lee Sharp wrote:

> Cool.  For the record, I was not seeing any problems with m0n0wall.  I just wanted to know what
> the behavior was.  The fact that the traffic shaper has a 50 packet queue length was something I
> should have noticed. :)  However, the traffic shaper does not come into play on any internal
> traffic (LAN to Opt1 for example), correct?

It doesn't come into play by default, but you can enable it on any interface if you choose. I don't
think it will provide any improvement though; if your LAN and OPT1 interfaces have the same link
speed and your CPU load isn't too high, there will be no reason for m0n0wall to queue any packets
between these interfaces (not taking into account other concurrent traffic flows, of course).

Personally, I wouldn't worry about any of this on "internal" interfaces unless you're frequently
running them near their maximum throughput. It makes more sense to focus your attention on the
queue size in your Internet upstream path, and perhaps on WLAN APs.

> What is the queue length there?

I'm not sure; the output queue size may even depend on the particular type of NIC/driver in use.
I'd have to go digging deep in the kernel source code to find out. :) The IP input queue size can
be adjusted using the net.inet.ip.intr_queue_maxlen sysctl (default is 50), but if that queue often
becomes full, it would be time to upgrade your m0n0wall's hardware anyway.
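
If you want to poke at this yourself, a quick sketch along these lines should work on a
FreeBSD-based box like m0n0wall; it reads the queue limit with sysctlbyname(3). I'm also querying
net.inet.ip.intr_queue_drops as the matching drop counter, but that name is an assumption on my
part (the maxlen one is the sysctl mentioned above), so treat it as untested:

/*
 * Untested sketch for a FreeBSD-based system: print the IP input queue
 * limit and (assumed) drop counter via sysctlbyname(3).
 * Build: cc -o ipq ipq.c
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>
#include <stdlib.h>

static int
read_int_sysctl(const char *name)
{
    int value;
    size_t len = sizeof(value);

    /* Fetch an integer-valued sysctl; bail out if the OID doesn't exist. */
    if (sysctlbyname(name, &value, &len, NULL, 0) != 0) {
        perror(name);
        exit(1);
    }
    return (value);
}

int
main(void)
{
    /* Queue limit mentioned above (default 50). */
    printf("net.inet.ip.intr_queue_maxlen = %d\n",
        read_int_sysctl("net.inet.ip.intr_queue_maxlen"));
    /* Assumed drop counter for that queue. */
    printf("net.inet.ip.intr_queue_drops  = %d\n",
        read_int_sysctl("net.inet.ip.intr_queue_drops"));
    return (0);
}

Raising the limit would just be another sysctlbyname() call with a non-NULL newp (or the sysctl
command line tool), but as said above, if that queue is routinely full, better hardware is the
more sensible fix.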

- Manuel