The m0n0wall Traffic Shaper (as far as I can grok it)
All my knowledge about the traffic shaper in m0n0wall is based on the few bits of information I've
found on the net, read on the mailing list or grokked myself by trial-and-error or simple deduction.
I (being the party or parties denoted as "me") can't be held responsible for ANYTHING whatsoever,
including but not limited to my own actions as well as those of others. Furthermore, in the event of
an earthquake, civil unrest or severe birth defects, blah, blah, liability, blah, blah, legal
action, blah, blah, blah... [insert more mumbo-jumbo here]. We (being the entity referred to as "I")
retain the right to change or amend this legal notice (being part of the document you are
currently reading) at any time without informing anyone about anything. We furthermore retain the
right to be drunk or otherwise intoxicated at any given time, as well as to generally behave stupidly.
Any bugs will always be considered "user error", and thus cannot be attributed to anything herein,
aside from where Microsoft(tm) products are mentioned, in which case the bug will always be
considered as originating from the product in question.
Feel free to distribute this document as much as you like (although, you really should wait for the
real m0n0wall manual!)
As it is (perhaps?) possible (if not very likely) to break your firewall configuration (at least to
the extent of making the webGUI inaccessible from your current host) by misconfiguring the traffic
shaper, I strongly recommend that you make a local backup of your config file and ensure you have
some means of restoring it in case you suddenly can't access the webGUI. In all fairness it should
be said that I've never succeeded in making the traffic shaper do anything that prevented me from
accessing the webGUI myself, or otherwise broke anything in m0n0wall, with the possible
exception of slowing down my network a bit during some of my (mis)configuration attempts!
Like when making firewall rules, having a good set of aliases, including some network aliases for
the various subnets and IP pools used, will make it much easier to maintain the shaper configuration
(and, in some cases, allow you to use a single rule instead of several, by clever use of network
aliases to denote a contiguous group of IPs on a subnet boundary. See my examples below for how this
can be achieved.)
There are (obviously) three parts to the shaper: Rules, queues and pipes.
The rules are the "brain" of the shaper; they work much like the firewall rules, except that instead
of passing or blocking they select where the packets go: either directly to a pipe, or through a
queue which in turn passes them on to its associated pipe.
Queues, the "shaper police", are wedged between the rules and the pipes; they hold packets in an
orderly fashion until the associated pipe is ready to accept more data, while also sharing the
bandwidth of the pipe according to the different queue "Weights" (if any).
Pipes are the bottom layer of the shaper, the "plumbing" if you will pardon the pun; they simply
limit the maximum bandwidth used by, as well as optionally introducing a fixed delay for, all
packets passing through them.
Note: Since the traffic shaper introduces a number of extra "layers" in the way m0n0wall processes its
packets, a slightly higher latency might be a result of enabling the shaper. In addition to this,
the various limitations on pipes and queues will further reduce bandwidth and/or increase latencies.
This is to be expected, and will in most cases be more than made up for by the increased throughput
achieved with proper traffic shaping.
I find the logical way to tackle the three parts of the shaper is with a bottom-up approach,
starting with the pipes, going on to the queues and finishing with the rules. This goes for
understanding the shaper, as well as for building your shaper configuration, thus I have chosen
this order for my description of the three parts below.
When creating a pipe, the most important property you specify is its "Bandwidth" (or more precisely,
its "Bandwidth limit"). This determines the maximum "flowrate" of traffic going through the pipe. If
more traffic is trying to get through, the pipe will start holding back the packets (exactly how it
does this I don't know, except if the packets are in a queue, in which case they will simply stay
there until the pipe is ready for more traffic. I must assume that each pipe has its own "internal"
queue, in case packets are sent directly to it?)
Rules of thumb regarding the use of bandwidth:
- Ensure that the total bandwidth of all pipes (including the expected number of "virtual" pipes)
NEVER exceeds your actual (real-world) bandwidth (preferably as tested by some reliable speed-test,
rather than calculated from the stated bandwidth of your ISP. If it is not possible to test
reliably, subtract at least 10% from the stated bandwidth.)
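To make that rule of thumb concrete, here is a tiny Python sketch of the "subtract at least 10%" calculation (the helper name and numbers are my own, nothing to do with m0n0wall itself):

```python
# Hypothetical helper: derive a safe shaper bandwidth from an ISP's rated
# speed when no reliable speed test is available.
def safe_bandwidth(rated_kbits, margin=0.10):
    """Subtract a safety margin (at least 10%) from the rated bandwidth."""
    return int(rated_kbits * (1.0 - margin))

# A nominal 1024/256 Kbit/s ADSL line:
down = safe_bandwidth(1024)  # 921 Kbit/s
up = safe_bandwidth(256)     # 230 Kbit/s
```

Erring further on the low side costs a little throughput but keeps the shaper, rather than your modem or ISP, in control of the queue.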
Aside from the bandwidth, a pipe can have a "Delay", simulating a high-latency connection. While I'm
not sure about how this is implemented, I'm guessing that only one packet is sent through the pipe
during each interval, which would mean that if 10 ms is specified, only 100 packets will get through
each second. However, it might instead be that each packet is simply delayed for the specified
interval, in which case any number of packets can go through each second, but only after waiting for
the specified number of milliseconds.
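The difference between the two possible implementations can be shown with a little arithmetic (remember, both interpretations are pure speculation on my part):

```python
# Two guesses at what a 10 ms pipe delay could mean:
delay_ms = 10

# Interpretation 1: one packet per interval -> a hard cap on packet rate.
max_packets_per_sec = 1000 // delay_ms   # 100 packets/s, regardless of size

# Interpretation 2: every packet is held for delay_ms, but any number may
# be "in flight" -> throughput is unchanged, only latency rises.
added_latency_ms = delay_ms              # each packet arrives 10 ms later
```

Under interpretation 1 the delay doubles as a throughput limit; under interpretation 2 it is purely a latency knob.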
Rules of thumb regarding the use of delay:
- Don't! Unless you really want to severely limit the speed of certain packets, and even so this is
probably mainly effective in the outbound direction, as delaying inbound packets when they reach
your firewall will typically just cause a backlog at the ISP, preventing the shaper from working.
Finally the pipe can have a "Mask", used to determine when to create "virtual pipes". The way this
works is that for each host IP matching the mask (either source or destination), a virtual pipe is
created. This can be used for splitting the available bandwidth statically between hosts. For
instance, a pipe accepting outbound traffic (as determined by some rules) without any mask, will
cause the local hosts to share the bandwidth of the pipe for all outbound traffic going through this
pipe (total bandwidth = bandwidth of pipe). If source is specified for the mask, each local host
will get their own (virtual) pipe and thus each get the specified bandwidth (total bandwidth =
bandwidth of pipe x local hosts). Of course one could also specify a destination mask, in which case
the (outbound) pipe would create a virtual pipe for each destination host on the WAN (probably not a
useful configuration in most cases).
The last example above shows how important it is to get the mask right, if using masks at all.
Remember that the "direction" of traffic is determined not only by the "Direction" of the rules, but
also by which interface the rules apply to (i.e. the same packet could first be inbound on LAN, then
outbound on WAN), so when moving rules from one interface to another, you will typically need to
swap the mask (if specified).
Rules of thumb regarding the use of mask:
- Typically used only when you want a static split of bandwidth between a number of (source or
destination) hosts.
- Depending on which interface your users come from, you will typically want to set up the masks in
such a way as to always select on either remote or local hosts (assuming what you want is to give
each user a certain fraction of your total bandwidth).
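To illustrate how I imagine the masking works, here is a hypothetical Python sketch (illustrative only; the helper, field names and addresses are all made up, and the real shaper code obviously looks nothing like this):

```python
# Rough model of "virtual pipes": one pipe state per distinct masked host
# address, each getting the full configured bandwidth.
def virtual_pipes(packets, mask_field, bandwidth_kbits):
    """Group packets by the masked field; each group gets its own pipe."""
    pipes = {}
    for pkt in packets:
        key = pkt[mask_field]  # 'src' or 'dst', depending on the mask
        pipes.setdefault(key, {'bw': bandwidth_kbits, 'packets': []})
        pipes[key]['packets'].append(pkt)
    return pipes

# Three outbound packets from two local hosts through a source-masked pipe:
traffic = [{'src': '10.0.0.1', 'dst': '1.2.3.4'},
           {'src': '10.0.0.2', 'dst': '1.2.3.4'},
           {'src': '10.0.0.1', 'dst': '5.6.7.8'}]
pipes = virtual_pipes(traffic, 'src', 64)
# Two virtual pipes (one per local host), 64 Kbit/s each.
```

With `'dst'` as the mask field instead, the same traffic would yield one pipe per remote host, which is exactly the swap you need when moving rules between interfaces.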
Some pipe examples (relating to the pipe and rule examples further below):
(No: Bandwidth, Delay, Mask, Description)
1: 232 Kbit/s, [blank], [blank], "<= ADSL Full"
2: 992 Kbit/s, [blank], [blank], "=> ADSL Full"
These two pipes are what I use for all my WAN traffic (which goes through a 1024/256 Kbit/s ADSL
modem). Note that my ISP has a very "friendly" definition of the speed provided (I suspect they aim
to provide the stated speeds AFTER any overhead has been disregarded); normally you would probably
need to lower the bandwidths even further!
3: 64 Kbit/s, 2 ms, Source, "<= ADSL Limited"
4: 256 Kbit/s, [blank], Destination, "=> ADSL Limited"
I use these pipes for my WLAN "guests"; they are masked so each local host will get its own pipe
(i.e. static bandwidth sharing). Notice how I use source for the outbound pipe and destination for
the inbound; this is because I want a virtual pipe created for each local host. Since my rules will
be working on the WAN interface, the local host will be the source for outbound and the destination
for inbound packets (if the rules were to be on another interface, or if I wanted to create virtual
pipes for each remote host, the masks would have to be swapped.)
As an experiment, I've assigned a small delay to the outbound pipe, in an attempt at limiting the
number of requests for inbound traffic (which I guess will only work as intended if I'm right about
how the delay is implemented above!)
5: 2048 Kbit/s, [blank], [blank], "<=> Intranet"
This last pipe is used for intranet traffic, which in my case consists mainly of jobs to my print
server. Since I'm using a 802.11b wireless LAN, I'm using this pipe to limit the local traffic so as
to keep some bandwidth reserved for packets to or from the WAN.
While I guess a queue has an effect merely by existing (and being used, of course), they really come
into their own when you assign a different "Weight" to a number of queues using the same pipe (at
least I think the latter is a requirement?) In this case the queues are prioritised in such a
manner that the queue with the highest weight gets more of the pipe's bandwidth (while always
ensuring that all queues get their share, even if a queue with a higher weight has packets waiting,
thus the use of "weight" rather than "priority").
Rules of thumb regarding the use of weight:
- Remember these are relative "ratios", that is, a weight-20 queue should get twice the bandwidth of
a weight-10 queue, but since this number isn't strictly a priority, even if the 20 queue has
a large number of packets waiting, the 10 queue should still get its third of the bandwidth.
- Don't expect the ratio to be precisely reflected in the traffic flow. Experimentation is the only
way I've found that will tell you what you get with different weights.
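The weight arithmetic can be sketched in Python like this (a naive model of the ratio idea, not whatever scheduling algorithm the shaper actually uses; the queue names and numbers echo my examples below):

```python
# A queue's share of a pipe is its weight divided by the sum of the
# weights of all queues on that pipe (among queues that have traffic).
def queue_shares(weights, pipe_kbits):
    """Map each queue name to its nominal bandwidth share in Kbit/s."""
    total = sum(weights.values())
    return {name: pipe_kbits * w / total for name, w in weights.items()}

# My three outbound queues on the 232 Kbit/s pipe:
shares = queue_shares({'high': 96, 'medium': 32, 'low': 2}, 232)
# high gets 96/130 of the pipe, medium 32/130, low 2/130.
```

Note that when the high-priority queue is idle, the others divide its share between them, so "low" is only starved down to its ratio, never to zero.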
Like pipes, queues can have a mask. The same logic as for pipes, applies to how "virtual" queues are
created based on this mask. While there might be other sensible possibilities, it is my experience
that a queue should typically have the same mask setting as its associated pipe, and in many cases
the queue should have no mask, even if the pipe has one. I assume the only reason for giving a queue
a mask would be to give each host a separate queue as well as a separate pipe, in a static
bandwidth splitting scenario, making traffic for each host entirely independent of traffic from the
other hosts.
Rules of thumb regarding the use of mask:
- (see pipes).
- Ensure that the mask of a queue (if any) matches that of the pipe (if any) it is using.
Note: Since it is possible to specify a rule that sends packets directly to the pipe, bypassing the
queues altogether, there is effectively always an extra "high priority" queue available. I guess
caution should be exercised when sending packets directly to a pipe, as too much traffic in this
manner will probably put the queues on permanent hold? It should, however, be an efficient way to
get certain small, low volume packets (such as ACK, SYN and DNS) through the shaper as quickly as
possible.
Some queue examples (relating to the pipe and rule examples further above and below):
(Num: Pipe, Weight, Mask, Description)
1: 1, 96, [blank], "<= High"
2: 1, 32, [blank], "<= Medium"
3: 1, 2, [blank], "<= Low"
These are my three outbound queues; they all use the same pipe but with different weights for
prioritising different traffic (i.e. dynamic bandwidth sharing).
4: 2, 96, [blank], "=> High"
5: 2, 32, [blank], "=> Medium"
6: 2, 2, [blank], "=> Low"
These are the three inbound queues, as above but for the other direction.
7: 3, 4, [blank], "<= Limited"
8: 4, 4, [blank], "=> Limited"
These are the queues used by my "guests", going to the two limited and masked pipes. While I'm not
sure the weight will matter, since these queues go to separate pipes, I've made sure to give them
appropriate values just in case. Also, you would typically specify a source mask for queue 7 and a
destination mask for queue 8, since the related pipes are masked this way, but as I have so few
"guests", which I mainly want to prevent from stealing too much bandwidth altogether, I've chosen
to use a global queue instead of separate virtual ones, minimizing the strain on m0n0wall.
As mentioned the shaper rules are a bit like those used for the firewall, specifically the
interface, protocol, source, destination and port ranges are the same. In addition to this, you can
also specify a direction (relative to the chosen interface), a packet size (or size range) and a
number of TCP flags that must or mustn't be present in the packet, all used to further refine the
match criteria. Finally each rule has a "Target", which can either be a pipe or a queue, to which
any matching packets are passed. Like with firewall rules, the order of the shaper rules is
relevant, as a packet will be checked against the rules from the top down, and the first one
matching will be used to decide which pipe or queue the packet is sent to. I guess the rule list
could have been separated into separate lists for each interface like the firewall rules, but this
is not the case (currently anyway, I don't know if this is something Manuel plans to do in a future
version).
Making these rules is much like making rules for the firewall, with some exceptions: The interface
is typically WAN, as it is mainly the connection to the outside net that needs to be shaped (although
there are scenarios where inter-LAN or LAN<->OPTx packets will need shaping as well; one example
could be to limit the LAN<->DMZ traffic, in order to ensure free bandwidth for the WAN<->DMZ
traffic, or advanced "double shaping" combining static and dynamic shaping on two interfaces). Also
a shaper rule has a "Direction" (in, out or any), which is used to split traffic between inbound and
outbound pipes and queues, either to ensure proper masking of these, or because of asymmetrical WAN
bandwidths. Finally you can optionally specify a packet size (or size range) and a number of flags
that need to be on or off in the packet. The flags are used to narrow the matching criteria even
further depending on certain flags being on or off, and are mainly used for special cases, like ACK
or SYN packets. Size limitation can be used when bypassing the queues, to prevent large ACK packets
(ie. combined ACK and data packets) from filling the pipe and thus blocking the queues, or in other
cases where packets below or above a given size need to be handled separately etc.
Rules of thumb regarding shaper rules:
- Unless you have special needs and/or know what you are doing, always make your shaper rules work
on the WAN interface.
- Make sure that the source and destination of a given rule correspond to both its direction, as well
as the mask(s) on the queue and/or pipe specified for the rule.
Some rule examples (relating to the pipe and queue examples further above):
(IF, Dir, Proto, Src/port(s), Dest/port(s), Length, Flags, Target, Description)
LAN, any, any, IntraNet/any, IntraNet/any, [blank], [don't care], Pipe 5, "LAN <=> WLAN"
WLAN, any, any, IntraNet/any, IntraNet/any, [blank], [don't care], Pipe 5, "WLAN <=> LAN"
These rules are for limiting my LAN <=> WLAN traffic (mainly to ensure that some wireless bandwidth
is reserved for traffic to the WAN interface). Notice that I have not used a queue for this
traffic. As I assume there will seldom be a backlog / congestion on the intranet, I might as well
save m0n0wall the trouble (not that my current platform lacks RAM or CPU power, but for Soekris
users this might be of some importance!) Also there isn't anything else going through these
particular pipes, making the use of a queue somewhat (if not entirely) moot. (As previously
mentioned, I have yet to completely figure out the difference between going through a queue and
going directly to the pipe, in cases where only one queue is present for the pipe. Thus I really
can't say if this is the right way to do it or not?)
"IntraNet" is my alias for the x.x.32.0/23 subnet encompassing the WLAN (x.x.32.0/24) and LAN
(x.x.33.0/24) subnets. This is a way to avoid having a similar set of rules for each subnet (the
same trick can be used for DHCP pools or other contiguous IP ranges, if made to fit with a subnet
size and boundary). WLAN is my OPT1 interface, used as wireless AP in m0n0wall.
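The subnet trick can be checked with Python's ipaddress module (the 192.168 prefix below is made up to stand in for the anonymized x.x addresses; substitute your own):

```python
import ipaddress

# Two adjacent /24 subnets collapse into one /23, so a single "IntraNet"
# alias (and a single rule) covers both.
wlan = ipaddress.ip_network('192.168.32.0/24')
lan = ipaddress.ip_network('192.168.33.0/24')
intranet = ipaddress.ip_network('192.168.32.0/23')

assert wlan.subnet_of(intranet)
assert lan.subnet_of(intranet)
```

This only works when the ranges line up on a power-of-two boundary; x.x.33.0/24 and x.x.34.0/24, for instance, have no common /23 supernet.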
WAN, out, TCP, PrioNet/any, */any, 0-128, ACK set, Pipe 1, "<= ACK"
WAN, in, TCP, */any, PrioNet/any, 0-128, ACK set, Pipe 2, "=> ACK"
These rules are used to ensure swift transfer of small ACKs in both directions (notice that these
are sent straight to the pipe instead of through a queue). I have a similar set of rules for SYNs
(with SYN flag instead of ACK, and size = 0-512), and for DNS (no flags, size = 0-512,
source/destination port = 53, with an extra set of rules for UDP, since DNS normally uses UDP but
falls back to TCP for large responses, so both should be covered). These rules are all limited to
my own hosts.
"PrioNet" is an alias for a small subnet (x.x.32.128/30), encompassing the (static) IPs assigned to
my hosts on the WLAN (the same trick as with the "IntraNet" alias above, used here to avoid having
to specify all the rules, relating to the "priority" hosts, four times each!)
WAN, out, TCP, PrioNet/25, any/*, [blank], [don't care], Queue 1, "<= SMTP"
WAN, in, TCP, */any, PrioNet/143, [blank], [don't care], Queue 4, "=> IMAP"
These rules are for mail traffic (SMTP out and IMAP in, in this case) to and from my own hosts. This
traffic is passed to the high priority queue, to ensure mail will get preference over any other kind
of traffic (with the exception of the ACK, SYN and DNS packets). I have similar rules for HTTPS
(port = 443 in both directions).
WAN, out, TCP, myhost/p2pp, any/*, [blank], [don't care], Queue 3, "<= P2P (clients)"
WAN, in, TCP, */any, myhost/p2pp, [blank], [don't care], Queue 6, "=> P2P (clients)"
These rules are for my peer-to-peer program (the actual port numbers as well as the name of my p2p
program have been removed to protect the "innocent"). I pass this traffic to the low priority queue,
to ensure that anything else gets preference over the p2p traffic. Doing it this way, with a
low-weight queue going to the same pipe as my other traffic instead of using a separate pipe, allows
my p2p program to use all available bandwidth, while forcing it to release a certain fraction of it
when needed for other traffic (making it possible to have a decent web-browsing experience without
limiting my p2p more than necessary.)
WAN, out, any, PrioNet/any, */any, [blank], [don't care], Queue 2, "<= Other Prio traffic"
WAN, in, any, */any, PrioNet/any, [blank], [don't care], Queue 5, "=> Other Prio traffic"
These rules are "catch alls" for any traffic not matched by the rules above, but going to or from
the priority hosts. This is passed to the medium priority queue, giving it preference over p2p but
below the high priority stuff in the rules above.
WAN, out, any, */any, */any, [blank], [don't care], Queue 7, "<= Other (guest) traffic"
WAN, in, any, */any, */any, [blank], [don't care], Queue 8, "=> Other (guest) traffic"
These rules match anything that isn't already caught by the other rules, assuming this will be
traffic from guest hosts to the WAN. This traffic is passed to the limited queues, which in turn
pass it to the limited and masked pipes (which, as mentioned, will provide a static bandwidth
allocation for each of these hosts).
A packet's way through the traffic shaper:
IF(in) => [matching rule] => [queue] => [pipe] => IF(out)
IF(in) => [matching rule] => [pipe] => IF(out)
IF(in) => [no match!] => IF(out)
Note that the last possibility will effectively bypass the traffic shaper (while still adding the
overhead of checking the packet(s) against all shaper rules). It is therefore important to ensure
that all possible packets will be matched by at least one rule. Unlike the firewall, the shaper has
no default "catch all" rule (except, as indicated above, sending the packet on to its destination
interface). Even if some traffic doesn't need to be shaped, as long as it passes through an
interface that is otherwise being shaped, it is wise to make a rule for it (and perhaps a pipe
and/or a queue to send it through as well). This way there is no chance for this "unshaped" traffic
to flood the interface and prevent the shaper from operating as intended.
While I have mentioned it before, it is very important to understand the issue about not using too
high bandwidth(s) in the shaper and/or allowing traffic to "slip through" the shaper rules. There is
no magic built into the shaper, it will only be able to perform its tasks if it remains the single
point of limitation along the path from source to destination. This means you need to prevent your
WAN equipment (both the modem or whatever on your side and whatever equipment is on your ISP's side)
from queuing your traffic. If this happens, the shaper will no longer have any control over the
queue, and is thus effectively "out of the game".
This is why you need to ensure that the total bandwidth through all pipes (remembering to include
any "virtual" ones created by masking) taken together, is safely below your actual bandwidth.
Knowing the precise number of virtual pipes or queues created can be difficult, not to say
impossible, especially in larger networks with a lot of DHCP hosts or scenarios with a lot of hosts
on the WAN side. In such cases a fair guess (preferably erring on the slightly larger side) will
have to be used, possibly in conjunction with "double shaping", as in: Shaping the total traffic,
with a single pipe limiting the maximum bandwidth on one interface, then shaping once again on the
other interface, this time with masked pipes/queues making the static bandwidth split. This way,
even if [number of hosts] times [bandwidth allocated for each host] exceeds your total bandwidth,
the total traffic shaping prevents this from flooding any queues not controlled by m0n0wall's shaper.
(I hasten to say that I've not tried such "double shaping" myself, thus this is purely conjecture on
my part.)
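For the oversubscription check itself, a little Python helps (a made-up helper of my own; the numbers echo the pipe examples above):

```python
# Sum every pipe's bandwidth, counting a masked pipe once per expected
# virtual pipe (i.e. per host), then compare with the measured line speed.
def total_pipe_bandwidth(pipes):
    """pipes: list of (bandwidth_kbits, expected_virtual_pipes) tuples."""
    return sum(bw * hosts for bw, hosts in pipes)

# Outbound example: a 232 Kbit/s unmasked pipe is fine on its own, but
# adding a 64 Kbit/s masked pipe with 4 expected guests oversubscribes
# a 256 Kbit/s uplink.
total = total_pipe_bandwidth([(232, 1), (64, 4)])  # 488 Kbit/s
oversubscribed = total > 256                       # True
```

When the total comes out above the line speed, either lower the per-host bandwidth or fall back on the "double shaping" approach described above.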
For asymmetric WAN connections, it is important to ensure that all pipes and queues exist in pairs,
one for each direction, and that these have the appropriate bandwidth specifications. Additionally
the rules must enforce these directions. This is necessary to prevent traffic in one direction from
getting into the queue(s) or pipe(s) for the other direction, which might have entirely different
bandwidth limits.
References and Links
None (so far).
Any questions, comments, corrections, suggestions or additional information will be more than
welcome, and should be addressed to: adam at nellemann dot nu