On 28.03.2004 15:27 +0200, Martin Holst wrote:
> I believe that the problems browsing through a PPTP-tunnel are
> MTU-related. I would still like to know if this IS a bug or not?
OK, based on this feedback, I decided to investigate this issue, and
was able to reproduce it. It is a bug, but like countless others,
*it is not a bug in m0n0wall itself*, but in MPD.
> - PPTP access from DMZ to LAN is OK
> - PPTP access from WAN to LAN is OK
> - PPTP access from DMZ to WAN fails due to MTU-related problem.
In addition to the MTU handshaking bug between XP and MPD, it looks
like ipnat has trouble calculating the checksum for the "ICMP
unreachable - need to frag" packets, which might be why path MTU
discovery fails too, even with Internet hosts that do not block ICMP
packets (while it works with LAN hosts - no ipnat there). ipfilter
3.4.32 was supposed to fix that ("checksum adjustment corrections for
ICMP & NAT") though (and m0n0wall 1.1b1 uses ipfilter 3.4.33).
Anyway - it turns out that this patch:
kinda "solves" the problem. Steven has confirmed this too. It even
works if you only add 4 instead of 6 bytes in that MRU calculation.
XP likes to request an MSS of 1360 (corresponding to a packet size of
1400 bytes) when it opens a TCP connection through the PPTP tunnel -
1400 is the MRU that XP and MPD decide upon during LCP negotiation,
but MPD subtracts 4 bytes from that figure for MPPE and MPPC
(encryption/compression) overhead - that's why it sets ng1's MTU to
1396. The question is: who's right? It's probably time to dig out the
packet sniffer and see how much additional overhead there is in
reality. In case it helps, MPD performs those overhead calculations
in bund.c in the function BundUpdateParams().
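For the curious, the numbers above work out like this (a sketch with my
own constant names, just illustrating the arithmetic - not MPD's actual
code from BundUpdateParams()):

```python
IP_HDR = 20        # IPv4 header without options
TCP_HDR = 20       # TCP header without options
MPD_OVERHEAD = 4   # what MPD subtracts for MPPE/MPPC overhead

lcp_mru = 1400                       # MRU negotiated between XP and MPD
ng1_mtu = lcp_mru - MPD_OVERHEAD     # 1396 - the MTU MPD sets on ng1
xp_mss = lcp_mru - IP_HDR - TCP_HDR  # 1360 - the MSS XP requests

# XP derives its MSS from the full 1400-byte MRU, so a full-sized TCP
# segment becomes a 1400-byte IP packet - 4 bytes more than ng1's MTU.
mismatch = (xp_mss + IP_HDR + TCP_HDR) - ng1_mtu  # 4
```

That 4-byte mismatch is exactly what the patch papers over by padding
the MRU calculation.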
To me, the patch mentioned above doesn't look like the proper way to
solve this problem, and it could create problems with PPPoE (where
MPD is used as well).
I'm quite wary of fixing other people's software, so for now I'll
leave it to those who really want that kind of PPTP setup working to
find a solution. Maybe it's as simple as posting a message to the MPD
mailing list, explaining the situation - I don't know. Sometimes I
wonder if we'd have been better off using /usr/sbin/ppp and poptop...
But targeting embedded platforms calls for efficiency - something
userland PPP cannot provide.
BTW, I also noticed that the recent ng_pptpgre patch to disable the
PPTP ACK window mechanism (and therefore resolve the packet loss
problem) caused a performance drop by disabling delayed ACKs
(surprise, surprise)... But it's probably still better than 50% packet
loss.
And before anyone asks - no, MPD 3.17 does not solve this problem.