 From:  Dan Dill <dandill at gmail dot com>
 To:  kudzu at tenebras dot com
 Cc:  m0n0wall at lists dot m0n0 dot ch
 Subject:  Re: [m0n0wall] Traffic Shaping with WFQ
 Date:  Tue, 16 Jun 2009 13:25:45 -0700
Thanks for that link, that's helpful.  The page says:

"If you want all machines to share evenly a single link, you should use
instead:

    ipfw add queue 1 ip from any to any
    ipfw queue 1 config weight 5 pipe 2 mask dst-ip 0x000000ff
    ipfw pipe 2 config bw 300Kbit/s"

That's what I think I need.  I would only have a single queue, but I want to
ensure that bandwidth is shared equally among the different connections within
that queue (rather than letting one connection use the entire link at the
expense of the others).  A single queue seems to accomplish that (via WFQ
within the queue), whereas sending traffic directly to the pipe does not,
according to that page.

Is that right?
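If it helps to see the whole recipe in one place, here is how I read it
(this assumes FreeBSD ipfw with dummynet; the bandwidth and the rule
number 100 are just illustrative):

```shell
# One pipe caps the total link at 300 Kbit/s.
ipfw pipe 2 config bw 300Kbit/s

# A single queue whose dst-ip mask spawns one dynamic queue per destination
# host (last octet of the address); every dynamic queue gets the same
# weight, so WFQ shares the pipe evenly among hosts.
ipfw queue 1 config weight 5 pipe 2 mask dst-ip 0x000000ff

# Send all IP traffic through the queue instead of straight into the pipe.
ipfw add 100 queue 1 ip from any to any
```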

On Tue, Jun 16, 2009 at 12:24 PM, Michael Sierchio <kudzu at tenebras dot com>wrote:

> Dan Dill wrote:
> > My question is: if I have a rule that just dumps traffic directly into
> > the pipe, is fair queuing used for that traffic?  Or, to use fair
> > queuing, should I create a single 'default' queue and then put traffic
> > into it?
> >
> > Another way to phrase this would be: what is the default scheduling
> > method for traffic going directly from a rule to a pipe?  The scheduling
> > method used with just a single queue and pipe would also be helpful
> > information.
> >
> > Below are screenshots of my setup.  Thanks in advance.
> This is old, but a good intro to the concepts:
>        http://info.iet.unipi.it/~luigi/ip_dummynet/
> There is a default queue for a pipe, but no way to use WF2Q+ without
> explicitly defining queues with weights.  Having a single queue does not
> seem particularly useful to me, since the purpose is to assign different
> weights to traffic sharing the same pipe.
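For contrast, the multi-queue case described above (different weights
sharing one pipe) might look something like this; the bandwidth, weights,
ports, and rule numbers are purely illustrative:

```shell
# Hypothetical example: two traffic classes sharing one 1 Mbit/s pipe with
# a 3:1 weight ratio (weights are relative shares, not percentages).
ipfw pipe 1 config bw 1Mbit/s
ipfw queue 10 config weight 30 pipe 1    # interactive class
ipfw queue 20 config weight 10 pipe 1    # bulk class

# Classify traffic into the queues by destination port.
ipfw add 100 queue 10 ip from any to any 22   # ssh -> interactive queue
ipfw add 200 queue 20 ip from any to any 80   # http -> bulk queue
```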
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: m0n0wall dash unsubscribe at lists dot m0n0 dot ch
> For additional commands, e-mail: m0n0wall dash help at lists dot m0n0 dot ch