 From:  Didier Lebrun <dl at quartier dash rural dot org>
 To:  m0n0wall at lists dot m0n0 dot ch
 Subject:  RE: [m0n0wall] Re: m0n0 - traffic_shaper
 Date:  Thu, 29 Apr 2004 00:49:19 +0200
I confirm that it is sensible to do bandwidth management on inbound
traffic. I actually do this with a FreeBSD box used as a gateway between a
bidirectional satellite link and a wireless + wired local network, and it
works without any need to drop packets.

Say the purpose is to enforce fair sharing of the satellite up and down
links between all LAN clients, which is quite a common type of problem.

I have 2 pipes associated with the sat up and down links, without any
fixed bandwidth limit (*) and a queue size based on the link's TCP window
size (queue size = link bandwidth * max RTT):

(*): during some tests, I tried setting a fixed bandwidth on the up and
down pipes and found that it resulted in much higher latency (+230 ms on
the round trip!). It is not useful anyway!

"""
# NB: kernel compiled with IPFW2
pipe_up_size="24Kbytes" # 128 kbps link with 1500 ms RTT max
pipe_down_size="96Kbytes" # 512 kbps link with 1500 ms RTT max

${fwcmd} add set 15 pipe 1 all from { $inet or me } to any out xmit ${oif}
${fwcmd} add set 16 pipe 2 all from any to { $inet or me } in recv ${oif}
${fwcmd} pipe 1 config queue ${pipe_up_size}
${fwcmd} pipe 1 config queue ${pipe_down_size}
"""

Each client host gets a separate queue, using the netmask syntax (the
0x000000ff mask below yields one dynamic queue per host address in the
/24), with the same queue size as the pipe:

"""
queue_up_size="24Kbytes" # same as pipe
queue_down_size="96Kbytes" # same as pipe
user_up_weight=10 # could be based on some calculations like user's 
bandwidth consumption history
user_down_weight=10 # could be based on some calculations too, same as up 
or different

${fwcmd} add set 17 queue 1 all from ${inet} to any out xmit ${oif}
${fwcmd} add set 18 queue 2 all from any to ${inet} in recv ${oif}
${fwcmd} queue 1 config pipe 1 mask src-ip 0x000000ff weight 
${user_up_weight} queue ${queue_up_size}
${fwcmd} queue 2 config pipe 2 mask dst-ip 0x000000ff weight 
${user_down_weight} queue ${queue_down_size}
"""

That's all! Since packets are taken from the queues in a round-robin
fashion, the traffic is shared evenly between the users, and nobody can
keep the others from getting their share by saturating the pipe. Since
each queue is sized slightly above the global TCP window size, the TCP
flow is naturally regulated between its endpoints without any need to
drop packets (which would be a pity after such a long trip!). We run such
a config with 20 hosts sharing the satellite link bandwidth, and it has
been satisfactory for quite a while.

One can make it a bit more complicated by adding more queues dedicated to
specific traffic. For example, I have a higher-priority queue for DNS
traffic between the gateway's DNS server (dnsmasq) and the provider's
remote DNS, and some lower-priority queues for downloads, CVSup and such.
The principle stays the same.
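
Purely as an illustration (the weight, queue size and resolver address
below are hypothetical, not my actual values), such a DNS queue could look
like this; note that the classification rule has to come before the
catch-all "queue 1" rule above, since ipfw stops at the first matching
rule:

"""
# Hypothetical sketch: a higher-weight queue for DNS on the uplink pipe.
ns_ip="192.0.2.53"   # placeholder for the provider's resolver address
${fwcmd} queue 3 config pipe 1 weight 90 queue 4Kbytes
${fwcmd} add set 17 queue 3 udp from me to ${ns_ip} 53 out xmit ${oif}
"""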

The only problem I see for doing that with m0n0wall is related to the 
choice of ipfilter + ipnat, since it always comes first (before 
IPFW/DUMMYNET), thus preventing from using the "mask src-ip 0x000000ff" 
syntax for outbound traffic. My gateway is based in IPFW + NATD, so I don't 
have this problem. On the other hand, ipfilter + ipnat has some advantages, 
in terms of speed and dynamic firewall rules.

Didier


At 12:04 28/04/2004 -0700, Don Hoffman wrote:
>Adam, what you say is generally true.  However, conceptually it is still
>possible to do bandwidth management on inbound data.  Basically, one
>measures the rate of all the input TCP streams.  As they approach the
>target threshold, you start to randomly discard packets (Google for Random
>Early Discard (RED)).  This causes TCP to reduce its rate accordingly.  By
>discarding "early" (i.e., before you actually hit the limit where the
>offered rate is greater than the bottleneck link), you avoid the queue
>build-up described below.  (To a first approximation, this is what Jose is
>doing.)
>
>If you throttle the bandwidth to some number less than the downstream
>rate of your access link, then this virtual queue will become the
>bottleneck rather than the output queues on your upstream router.  RED has
>the nice property of keeping queue occupancy (and hence latency) lower
>than other discard strategies.
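>
>In dummynet terms, a minimal sketch of that idea (the numbers and the
>interface name are illustrative guesses, not a tested config) is to cap
>inbound TCP slightly below the link rate and let RED drop early:
>
>"""
># Hypothetical: 512 kbit/s downlink capped at ~90%, with RED parameters
># w_q/min_th/max_th/max_p (textbook-ish values, untuned).
>ipfw pipe 2 config bw 460Kbit/s queue 30 red 0.002/5/15/0.1
>ipfw add pipe 2 tcp from any to any in recv fxp0
>"""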
>
>So, in effect, you tell the source to reduce its rate by throwing away a
>packet.  Note that in TCP it is not the upstream router that directly
>controls the rate at which data is sent, but rather the TCP source.  The
>routers signal congestion to the source by discarding packets (or by
>setting the congestion bit if ECN is used, which is rare).
>
>Non-adaptive UDP streams complicate things, but they are an issue no
>matter what.  Still, if you make the target incoming aggregate TCP traffic
>*significantly* less than the incoming link speed, you should be able to
>"reserve" some of that bandwidth for the more or less constant-rate VoIP
>RTP/UDP traffic.  The trick is how to modulate that behavior.  My current
>thinking is to use some sort of hierarchical bandwidth management scheme
>(e.g., CBQ), but I need to think it through more.  (E.g., monitor the rate
>of incoming UDP traffic; as it increases, decrease the target threshold
>for TCP traffic.)  It's been some years since I worked on this sort of
>thing...
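>
>As a back-of-the-envelope sketch of that feedback loop (the rule number,
>threshold and rates are all made up), one could count inbound UDP bytes
>with an ipfw rule and periodically re-cap the TCP pipe from a shell loop:
>
>"""
># Assumes a counter rule exists, e.g.:
>#   ipfw add 1000 count udp from any to any in recv fxp0
>while sleep 10; do
>    udp_bytes=$(ipfw show 1000 | awk '{print $3}')  # bytes since last zero
>    if [ "${udp_bytes:-0}" -gt 125000 ]; then       # >~100 kbit/s of UDP
>        ipfw pipe 2 config bw 300Kbit/s             # leave headroom for VoIP
>    else
>        ipfw pipe 2 config bw 460Kbit/s
>    fi
>    ipfw zero 1000                                  # reset the byte counter
>done
>"""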
>
>As soon as I get an environment set up (and time :-)) to start hacking
>m0n0wall, I want to try a few experiments on the above.
>
>Don
>
>
>-----Original Message-----
>From: Adam Nellemann [mailto:adam at nellemann dot nu]
>Sent: Monday, April 26, 2004 12:56 AM
>To: Jose Iadicicco
>Cc: m0n0wall at lists dot m0n0 dot ch
>Subject: Re: [m0n0wall] Re: m0n0 - traffic_shaper
>
>
>Hi Jose,
>
>Here's the theory behind why inbound shaping doesn't work (or at least
>doesn't work well).  If anyone has knowledge to the contrary, please post:
>
>= = =
>
>Since there are no built-in "traffic control messages" (or similar) in
>TCP/IP, there is no way for m0n0wall to "tell" the device(s) on the
>"other side" of your WAN link (such as the router at your ISP's end of
>your ADSL line) to stop sending packets (or to send them at a slower
>rate).
>
>For this reason, limiting your inbound traffic will only cause packets
>to be queued on your m0n0wall box. While this may make you (on a LAN
>box) see the shaped traffic limits being upheld, those queued packets
>have still arrived through your ADSL line at the same rate (bandwidth)
>at which they were sent from the server(s), regardless of the settings
>in your m0n0wall traffic shaper, and have thus "hogged" whatever portion
>of your inbound bandwidth they happened to need.
>
>Of course, eventually, since the ACKs for these (queued) packets
>don't arrive (or arrive much delayed), the sending server(s) may stop
>transmitting more packets, but it is difficult to say whether this has
>the effect you were looking for with the inbound traffic shaping, and it
>will certainly depend on the protocol(s) and the server(s) in
>question. In some cases this may have little to no effect, in others
>it may cause inbound traffic to become "lumpy", and (perhaps) in some
>it may do just what you want (i.e. cause traffic to be slowed down to
>the limit you have set in the shaper).
>
>Since the latter effect (delaying of ACK packets) can be achieved with
>proper outbound shaping (of small packets with the ACK flag set), I
>would think that is the better solution, rather than queueing a lot of
>traffic on m0n0wall (especially if you have ANY kind of RAM or CPU
>limitation on the m0n0wall box!)
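>
>In raw ipfw/dummynet terms, a rough, untested sketch of such outbound
>shaping (weights, queue sizes, interface name and the "iplen 40" match
>are all assumptions; iplen 40 only catches ACKs without TCP options):
>
>"""
># Hypothetical: favour bare TCP ACKs on the uplink pipe so inbound
># transfers keep flowing while the uplink is saturated.
>ipfw queue 10 config pipe 1 weight 90 queue 8Kbytes   # bare ACKs
>ipfw queue 11 config pipe 1 weight 10 queue 24Kbytes  # everything else
>ipfw add queue 10 tcp from any to any out xmit fxp0 tcpflags ack iplen 40
>ipfw add queue 11 ip from any to any out xmit fxp0
>"""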
>
>= = =
>
>As mentioned, if you (or anyone) have good arguments or knowledge that
>contradicts the above, I would much like to know about it (and so, I
>should think, would the rest of the mailing list!)
>
>
>Regards,
>
>Adam.
>
>
>Jose Iadicicco wrote:
> > Hi all you guys! Why do you say the traffic shaper doesn't work for
> > inbound traffic? I use it for inbound and outbound traffic and it
> > works fine! I limited the downstream to 64 kbit/s and the upstream to
> > 32 kbit/s and it works okay too. I don't understand you guys; can you
> > explain what you are talking about? My internet connection is 512
> > kbit/s downstream and 128 kbit/s upstream (ADSL) and I have 12
> > computers of different people using Kazaa, eMule, etc. on each
> > computer, without any per-computer control, but it has been working
> > okay since I configured the traffic shaper 4 months ago.
> >
> > Greetings to all you friends!!!!
> >
> > Jose
> >
> >
> > Wallberg wrote:
> >
> >>>> Hi, I read traffic_shaper.txt and I have one question.
> >>>> "For asymmetric WAN connections, it is important to ensure
> >>>> that all pipes and queues exist in pairs, one for each
> >>>> direction, and that these have the appropriate bandwidth
> >>>> specifications." Do I really have to have pipes/queues for
> >>>> inbound traffic? Would it not be sufficient to have
> >>>> rules/pipes/queues only on my outbound traffic, since that is
> >>>> the only traffic I can really control?!
> >>>>
> >>>> /Dennis
> >>
> >> Hi Dennis,
> >>
> >> You are of course right, there is (usually) no need to have any
> >> pipes/queues/rules for inbound traffic. Indeed, this will often
> >> make things worse!
> >>
> >> I was rather new to traffic shaping when I wrote the text, and
> >> did so in a hurry due to "popular demand". At the time I thought
> >> I'd soon get around to writing another, more thorough version,
> >> as well as possibly doing the traffic shaper chapter for the
> >> m0n0wall documentation project. Unfortunately I've been very busy
> >> ever since, and have thus had no time to do these things :(
> >>
> >> Since then, I have myself removed all the inbound rules from my
> >> own m0n0wall configuration. I guess I should have posted a notice
> >> about this.
> >>
> >> Sorry for the inconvenience this may have caused!
> >>
> >> Hopefully I will soon be able to, at least, post an updated
> >> version of traffic_shaper.txt, with corrections such as the
> >> above, but right now I'm still too busy.
> >>
> >> Adam.
> >>
> >
> > ===== The essential goal of running is to test the limits of the
> > human will...
> >
>

--
Didier Lebrun
Le bourg - 81140 - Vaour (France)

mailto:dl at vaour dot net (MIME, ISO latin 1)
http://didier.quartier-rural.org/