On Wed, 12 Oct 2005, Manuel Kasper wrote:
> could become. Your (constructive) comments, suggestions and opinions
> are very welcome, and I expect they'll be numerous as well. Please
> post them to the m0n0wall-dev mailing list only (I'm just CC'ing this
> to the main list to draw people's attention to this discussion).
Here are my thoughts:
Embedded platforms (e.g., Soekris & WRAP) should remain the primary target,
regardless of what surveys say. Even if lots of people want to run
m0n0wall on boxes where the CPU lifetime after a fan failure is measured in
milliseconds, those who are astute enough to use more reliable hardware
without moving parts shouldn't be neglected, any more than Mac, Linux, and
BSD desktop users should be neglected just because 95% of the desktops run
Windows.
The "fanless" requirement imposes a serious upper limit on CPU power which
is *not* significantly alleviated by "Moore's law" evolution (especially
with the x86 architecture). Thus it's not reasonable to allow slothful
code and assume that the performance problems can be covered up by throwing
faster hardware at it. And of course, something which runs well on slower
hardware will run really well on faster hardware, while something that runs
adequately on faster hardware may be unusable on slower hardware.
This is not to suggest that the *capabilities* should be limited, especially
if moving away from the "pure MFS" approach solves the RAM footprint problem,
but rather just that efficiency should be an important consideration.
On the "Pure MFS" Approach:
It's been stated that an MFS-based system is a requirement for having a
system that can tolerate "disorderly shutdowns". This is false. First of
all, *no* filesystem has a problem with this if the mount is read-only. If
the disk is never written, there's nothing to clean up. And even
read/write mounts *may* take this in stride, but it depends on the
filesystem particulars and may involve extra cleanup on the next mount if
the dismount isn't "clean". But m0n0wall currently makes very little use
of writeable persistent storage, anyway.
M0n0wall wasn't the first system to store a compressed MFS in flash. In
the early days of flash, it tended to be small, making the compression
important just to get things to fit. And an "unhacked" BSD or Linux system
tends to mix up a lot of read-only and read/write use, so that pointing the
whole mess at RAM is a quick and dirty way to deal with the limited
erase/write lifetime of flash (e.g., I know of a case where someone did a
fairly stock Linux install to flash, and it was dead within days, but that
didn't even do obvious things like mounting noatime). Note that this is
*not* an issue for reads - there's no lifetime issue with *reading* flash.
Today, CF is available (at reasonable cost) in sizes pretty much as big as
one would conceivably need for this sort of application, while RAM on the
embedded platforms remains fixed and limited. Thus, doing something that
increases the RAM footprint in order to decrease the CF footprint is
totally inappropriate on today's hardware. A combination of MFS for
temporary storage and read-only CF for most files would eliminate the
RAM-based constraints on what can be included.
To fully exploit this, the firmware update would have to switch from being
fully buffered to being streamed. If implemented as in the current system,
that would greatly increase the possibility of an error that renders the
system unbootable. But a simple brute-force fix for that is to have two
copies of the partition, active and inactive, where only the inactive
partition gets overwritten. After a successful update, the roles get
switched. There are already some other devices that use this approach.
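A minimal sketch of that active/inactive scheme, simulated here with plain files standing in for CF partitions (the names and the marker mechanism are hypothetical; a real implementation would flip a boot-loader or partition-table flag rather than write a marker file):

```shell
#!/bin/sh
# A/B firmware update sketch: the new image is only ever written to the
# *inactive* slot, so a failed or interrupted write can't brick the box.
SLOT_A=slot_a.img
SLOT_B=slot_b.img
ACTIVE_MARKER=active            # records which slot boots next

current_slot() { cat "$ACTIVE_MARKER"; }

inactive_slot() {
    if [ "$(current_slot)" = "$SLOT_A" ]; then
        echo "$SLOT_B"
    else
        echo "$SLOT_A"
    fi
}

update() {
    # $1 = new image file, $2 = expected cksum of the image
    target=$(inactive_slot)
    cp "$1" "$target"
    # Verify before switching roles; on mismatch, abort and leave the
    # old slot active.
    actual=$(cksum < "$target" | awk '{print $1}')
    if [ "$actual" != "$2" ]; then
        echo "update failed verification; keeping $(current_slot)" >&2
        return 1
    fi
    echo "$target" > "$ACTIVE_MARKER"   # the role switch
}
```

The verify-then-switch ordering is the whole point: the marker is only rewritten after the new slot checks out, so every failure mode leaves a bootable slot active.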
It's not entirely clear that a pure "disk-based" system wouldn't have some
problems, but a "hybrid" approach is possible, where a minimally populated
MFS is used as the root filesystem, and then the CF partition becomes a
subsidiary mount point (e.g. /usr).
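The hybrid scheme above could look something like the following fstab sketch (device names, partition letters, and the MFS size are all hypothetical):

```
# Sketch only - devices and sizes are placeholders.
# Bulk of the system on CF, read-only: nothing to fsck after a power cut.
/dev/wd0a  /usr   ffs  ro           1 1
# Scratch space lives in RAM (OpenBSD-style MFS mount).
swap       /tmp   mfs  rw,-s=32768  0 0
# Small writable config area; noatime spares flash erase/write cycles.
/dev/wd0e  /conf  ffs  rw,noatime   1 2
```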
Some systems, notably CD-based systems, really need to use the MFS, but
there's no reason why the MFS/"disk" choice couldn't vary as a function of
the target platform, just as the config device varies between CF directory
and floppy disk.
On Languages:
The fewer languages involved in a given project, the better. In general,
more languages means more components needed for a development environment,
more complicated build procedures, more languages that need to be
understood by developers, and more issues with communication between
different components. It's not a large enough project that it needs to be
an elephant to a bunch of blind men.
M0n0wall shouldn't be a "developer's playground", where people get to mess
around with whatever language seems cool at the moment. The issue is what
fits the task with reliability, security, and efficiency. You don't use a
screwdriver instead of a wrench to remove a hex nut just because the handle
feels more comfortable.
Although "object-oriented" seems to be many people's favorite bandwagon at
the moment, the baggage that OO languages bring along isn't really
appropriate for something like a firewall. In a security-conscious
application, you really don't want a bunch of complex, obtuse, and often
buggy mechanisms hiding behind the language definition. And when one uses
an OO language, there's a tendency to frame everything in OO terms, whether
there's a real benefit or not, due to the "when you're a hammer, everything
looks like a nail" principle. Programmers who can't blow their noses
without a box of "object-oriented" tissues shouldn't be doing firewall
development.
Some have suggested using Java, apparently with a straight face. To get an
idea of how "manageable" Java is, note that its version numbering has a
major version, a minor version, a micro version, and a release level within
the micro version. Typically each new release level fixes around 60 bugs.
And keeping up with it is enough hassle that Apple's last "security update"
to Java had jumped four Sun release levels from the previous one, with
around 200-250 fixes relative to the last Apple update. This is not the
sort of crap you want in a firewall. Now including Java *applets*, to be
executed in the *browser*, might not be unreasonable for some purposes (as
long as they're not mandatory functions), but keep the JVM itself out.
PHP has serious performance issues due to being an interpreted language,
but that can be tolerated in the WebGUI, particularly if more attention
were paid to performance issues. I'd certainly get away from using it for
anything outside the WebGUI.
On Hosts vs. Routers:
There's a pervasive failure to understand just how different the issues are
between the host (server or desktop) and router roles. Aside from the NIC
drivers and certain add-on packages like filtering and NAT, the code
involved in forwarding packets is almost completely different from the code
involved in locally originating and receiving packets. Nevertheless,
"wisdom" from server applications is often blindly applied to m0n0wall. To
wit:
1) Polling. The purpose of polling is to avoid wasting CPU time in
interrupt servicing, in order to make more CPU available to userland code.
It does *not* have a significant effect on routing throughput (unless the
NIC driver is pretty brain-damaged), since the number of packets handled
per interrupt tends to self-regulate to the point of "keeping up". Because
of this, it's not possible to determine maximum throughput merely by
extrapolating CPU usage from lighter loads. Thus, on a router, polling
only helps ancillary functions (e.g. the WebGUI) and userland-routed
protocols like OpenVPN. Meanwhile, polling-induced latency is much more
significant in a router than in a host, for a number of reasons.
2) Tuning. Recently someone posted an excerpt from the FreeBSD tuning
guide related to mbuf sizing. But the stuff about mbufs needed per
connection was all based on sizing *socket buffers*, not forwarded
packets. In fact, in a router, once you have enough buffering to avoid
starvation, additional buffering actually *worsens* performance by
increasing latency (due to longer queue lengths).
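To put a number on that: the latency a standing queue adds is simply queue depth times per-packet transmission time, so oversized buffers translate directly into delay. A quick sketch (figures purely illustrative):

```shell
# Added latency (ms) = depth * pkt_bytes * 8 bits / link_bps * 1000
queue_latency_ms() {
    echo $(( $1 * $2 * 8 * 1000 / $3 ))
}

# 50 full-size (1500-byte) packets queued ahead of you on a 10 Mbit/s
# link already adds 60 ms; double the buffer and you double the delay.
queue_latency_ms 50 1500 10000000    # prints 60
```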
3) Network performance. Practically all references to "network
performance" (including benchmarks) are oriented to *server* performance,
not *router* performance. And claims like "fastest TCP/IP stack" are
pretty meaningless when most of the "stack" isn't even involved in routing.
On OS Choice:
First of all, it should be noted that *no* *nix system is really ideal for
routing purposes, for two main reasons:
1) The networking code in *nix systems is designed primarily for the host
role, with routing added as an afterthought. The code is designed
primarily for originating and receiving packets. A router OS should really
make packet *forwarding* the fundamental activity, with the local system
simply being one of the "interfaces".
2) Because *nix context switching is so "heavyweight", it's necessary to do
a lot of things at interrupt level just to get decent performance. In a
well-designed microkernel system, context switching is fast enough to do
practically everything in userland with good performance. This improves
robustness, debuggability, and CPU allocation.
While there are OSes much better-suited to router applications than *nix
systems, I don't know of any that are free and open-source (not to mention
issues with driver availability). Thus, for a m0n0wall-style license, one
is stuck with *nix systems.
The major branches of free *nix systems are Linux, BSD, and Darwin. Linux
is the main "bandwagon", and in fact has become the "Windows" of
open-source OSes; i.e. just as lots of people think that "operating
system" is a synonym for "Windows", there are also lots of people that
think that "open-source operating system" is a synonym for "Linux". But
Linux is not known for having the best networking code. It's the only
common OS (unless you count later versions of the "Classic" Mac OS) whose
networking code is *not* derived from BSD. Instead, it reinvented the
networking wheel for no particularly good reason, and is in general the
"quirkiest" of the common network implementations. Granted, most of those
issues relate to the host role rather than the router role, and it's
matured enough by now that it's probably no longer significantly buggier
than BSD, but I certainly don't see it as being better.
While Linux probably has the best driver support of any open-source OS,
it's not clear that there's a significant advantage there within the
particular kinds of devices that one would want on a router. And whenever
the actual drivers are open-source, BSD drivers tend to follow.
Although Darwin's Mach microkernel might have some advantages, they're
largely lost in the "BSD layer", and meanwhile Darwin has a very
"heavyweight" driver model based on C++, making it bloated and overly
complex for a router.
If one sticks with BSD, there are basically four variants to choose from:
FreeBSD, NetBSD, OpenBSD, and (for completeness) Dragonfly BSD. One of the
most important factors to consider when comparing these is the "focus" of
the particular branch, since that has a lot of impact on future development
directions. The primary foci are as follows:
FreeBSD: Performance (primarily as a server OS)
NetBSD: Portability
OpenBSD: Security and Reliability
Dragonfly BSD: SMP scalability
Note that *none* of these actually set out to be optimized as a router OS.
However, OpenBSD's emphasis on security and reliability made it an
attractive choice for firewall/router applications, which led to greater
developer attention to router issues, which led to increased use as a
router, etc. etc. Thus, it's become the de facto BSD of choice for router
applications, which receive a lot of developer attention, including the
creation of things like pf and carp. In general, most *nix-based routers
use either Linux or OpenBSD, with the former choice being more due to the
"bandwagon factor" than real benefit.
The effect of FreeBSD's "focus" is quite apparent in the 5.x fiasco.
FreeBSD's main goal is to provide the best-performing server OS on x86
hardware, and it's done an excellent job of that (though recent Linux
versions are giving it a run for its money). But in some cases,
server-oriented improvements inflict collateral damage on the routing code,
as in 5.x. There's no reason to suppose that this was a one-time thing -
the FreeBSD developers are always going to pay more attention to server
issues than to router issues, because *not* doing so would distract them
from server improvements. So even if 6.x completely repairs the damage
caused by 5.x, there's no reason to suppose that the same thing won't
happen again in 7.x or whatever. After all, 5.x went through at least four
versions wherein they either didn't notice or didn't care that they had
seriously degraded routing performance. Hence, I don't think FreeBSD
presents the best long-term choice for routing, regardless of whether a
particular instance is adequate.
Dragonfly seems like a poor choice, since not only is it not ready for
prime time, but it again has a very different focus from routing. Not to
mention possibly having the worst driver issues due to being tied to an
old FreeBSD version.
NetBSD would provide the largest choice of hardware platforms, but it's not
clear that there are any platforms significantly interesting as routers
that are supported by NetBSD and not OpenBSD. Sure, there may be a few
people wanting to run m0n0wall on their dual-NIC toasters, but that doesn't
seem like a very good justification. NetBSD is claimed to offer
performance somewhere between FreeBSD and OpenBSD, but those are
host-related claims. And at least in V1.5.3, I found NetBSD to have the
worst installer and the most "pickiness" about the hardware configuration
of the three (never bothered trying Dragonfly).
Thus, I consider OpenBSD to be the best choice, not primarily due to
"better security" (although that's a factor), but because it's the one
branch of BSD where the developers take routing applications seriously.
The two main arguments that have been advanced against OpenBSD are:
1) Performance. Some have made, and others have parroted, claims that
OpenBSD has substantially worse "networking performance" than FreeBSD. But
AFAIK these are all based on *server* performance. The only *routing*
performance comparison I've seen mentioned here was what Manuel did before
starting m0n0wall, which was of course several versions ago for both
systems. I've not seen any information on routing performance comparisons
between *current* versions of OpenBSD and FreeBSD. Plus, due to the "focus
factor", one would generally expect that comparison to shift more toward
OpenBSD as the two systems evolve.
2) Atheros support. While the concept of a vendor-supplied HAL makes
sense, Atheros really screwed it up by providing it in binary-only form
(for a pretty lame reason AIUI). If they had done it right - i.e.
providing the HAL as CPU- and OS-independent C source, then there would
have been no issue. As it is, some people grudgingly accepted the
"official" HAL, while someone else developed an open-source replacement HAL
by reverse engineering Atheros's, and that was introduced in the latest
version of OpenBSD. You'll find things of the form "It's evil because it's
based on reverse engineering" and "Is this really legal", as well as "When
is OpenBSD's open-source HAL going to be ported to our platform so we can
junk the binary-only crock?". I have yet to see any actual behavioral
comparisons between the two. Of course the best scenario would be if the
open-source HAL became sufficiently popular to shame Atheros into providing
the "official" HAL in source form. :-)
Note that if there really were a significant advantage in using the binary
HAL in OpenBSD, it could be used with a "wrapper" in the same manner as it
is in FreeBSD. One could even do a side-by-side comparison of two builds
of OpenBSD, one with the binary HAL and one with the open-source HAL. But
I wouldn't go to that much trouble without determining that the effort is
justified.
More generally, it's important to make a distinction between what devices
are "officially" supported by OpenBSD, and what devices are supported by
third-party drivers. OpenBSD is more rigorous about the definition of
"open-source" than some other systems, and hence can't officially include
drivers that aren't fully open-source. I'd expect the same to be true of
Linux due to GPL requirements. While I don't consider an open-source
system to be "tainted" by the inclusion of binary *firmware* to be loaded
into peripherals, if any code executed by the CPU isn't open-source, then
the system can't be honestly described as open-source without a suitable
disclaimer. And one has to question the wisdom of including "black-box"
code in a firewall.
It's also worth noting that binary driver code is at the very least
CPU-specific, making it a major hassle for an OS that supports multiple CPU
architectures. FreeBSD tolerates this more easily due to being, for all
practical purposes, x86-only, but other OSes have more difficulty with it.
And there seems to be quite a bit of interest in running m0n0wall on
non-x86 router hardware.
OpenBSD and NetBSD are sufficiently similar internally that drivers (and
even in some cases kernel patches) can usually be ported from one to the
other fairly easily. Thus, any device supported by NetBSD can be at least
unofficially supported by OpenBSD without too much difficulty. FreeBSD
is sort of the "odd man out" in this department.
The bottom line is that the two major arguments against OpenBSD are at the
very least lacking in data, if not simply wrong.
Issues with OpenBSD:
Here are some issues I'm aware of that would need to be addressed in moving
1) FTP NAT performance: Unlike IPFilter's in-kernel FTP proxy, pf relies
on the userland ftp-proxy program. As far as the control connection goes,
this is a big win, since handling the control connection at the packet
level is a horribly kludgy and fragile mess. So much so that Cisco has
fixed bugs in that area within the past year. And control-connection
performance is a non-issue. However, ftp-proxy also proxies the data
connection through userland, which involves a significant performance
penalty. It can avoid that in two out of the four combinations of
client/server and active/passive, but not the remaining two. There's an
(unofficial) alternative called "ftpsesame", which manipulates the filter
tables to forward the data connection properly, but it has a horrible
packet-oriented bpf-based kludge for the control connection. Ideally, one
would like a hybrid of the two approaches, but AFAIK none such exists ATM.
Note that this is not strictly an OpenBSD/FreeBSD issue, but rather a
pf/IPFilter issue, so pfsense is presumably already living with the extra
overhead, and since practically everyone seems to agree that it makes sense
to move to pf instead of IPFilter, it's an issue regardless of OS choice.
Fixing it might be a bit easier in OpenBSD, simply because it's the "native
habitat" of both pf and ftp-proxy, without "compatibility layers" getting
in the way.
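For what it's worth, the era's ftp-proxy gets wired in with a pf rdr rule plus an inetd entry, along the following lines (a sketch; $int_if is an assumed interface macro):

```
# pf.conf fragment: divert outbound FTP control connections to the
# local ftp-proxy listening on port 8021 via inetd.
rdr on $int_if proto tcp from any to any port ftp -> 127.0.0.1 port 8021

# ...and the matching /etc/inetd.conf line:
# 127.0.0.1:8021 stream tcp nowait root /usr/libexec/ftp-proxy ftp-proxy
```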
2) PPTP server performance: While OpenBSD has in-kernel support for PPP
and PPTP, the userland portion is currently client-only. AFAIK, the only
way to get server-side PPTP on OpenBSD is with PoPtoP, which means userland
routing. If the PPTP server is only used as backup for IPsec, that's
probably unimportant, but it could be a performance issue for "serious"
PPTP server usage on lower-end hardware.
3) Timecounter support: Unlike FreeBSD, OpenBSD doesn't include official
support for the Elan timecounter, and thus has to rely on the crappy 8254
clone for timing. There is, however, a third-party patch for NetBSD which
has been used successfully in OpenBSD. In the case of the Geode
timecounter, I'm not sure FreeBSD officially supports it either, although
there's a third-party patch available for it. Since the Geode's
implementation of the Pentium TSC is too badly broken to be used for
timekeeping, supporting the timecounter is needed to avoid falling back to
the 8254.
4) Booting with MFS: FreeBSD and OpenBSD take different approaches to
this. Unlike FreeBSD, which loads the MFS as a "module", OpenBSD builds it
into the kernel image. This has advantages and disadvantages. The main
advantage is that booting is much simpler (e.g. one of the two reasons
m0n0wall needs the full-blown multi-component TFTP-unfriendly bootloader is
that the simple bootloader can't load modules). But it makes the MFS harder
to modify, which could complicate development somewhat.
Then again, if the root MFS is only minimally populated in the new scheme,
that may not matter much.
5) Whatever prompted the continued use of ipfw in pfsense would need to
be addressed.
Advantages of OpenBSD:
1) Strong orientation toward router-related issues.
2) Proactive security. Some have described this as a "solved problem",
but it doesn't remain "solved" as long as the code keeps changing.
3) Most aggressive development in cryptographic areas, including support
for hardware accelerators.
4) Widely desired features like pf and CARP in their "native habitat".
5) Ability to run on non-x86 hardware.
6) IPv6 support that's actually usable. At least as recently as 5.3,
FreeBSD had conflicts between IPv6 and FAST_IPSEC which could cause kernel
panics. Although IPv6 isn't exactly taking the world by storm, it
definitely needs to be in any reasonable plan for the future.
7) Improved IPsec support, including NAT-T.
8) Development cycle based on conservative goals, where new releases have
a high probability of being adoptable without significant problems.