Re: NetScape...)

Tony Sanders (sanders@bsdi.com)
Thu, 3 Nov 1994 06:09:47 +0100

Ramin Firoozye writes:
> In either case, it all boils down to whether you want to cooperate
> with the rest, or reach for as many free resources as you can get your
..
> I wish instead of going this route, the MCC guys had sat down and come with
> a nice UDP-based lightweight binary protocol with built-in compression
> and caching that would seriously kick butt...

UDP is the **WRONG** solution. It will *NOT* work for HTTP-like protocols.

There, now that we are on the same wavelength... :-)

A simple-minded UDP-based protocol would very likely consume vastly more
network and system resources than TCP does, and if you put all the smarts
of TCP into UDP then you would end up with about the same thing, only
less efficient because of the need to move the data into and out of
user space on both machines (a double loss). TCP goes to great lengths
to ensure that it doesn't totally screw your network. Ever heard of MTU
discovery, windowing, congestion avoidance, etc.? Also, UDP comes with a
whole different set of problems (like machines that ignore UDP checksums).
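
To see why, sketch even the most naive "reliable" request over UDP (a
hypothetical client, not a proposal). Every line of timeout and retry
machinery below is something the kernel's TCP already does, with
adaptive timers tuned by years of WAN experience:

    /* Sketch only: a hypothetical "reliable" UDP request/reply client.
     * Everything here is machinery TCP already provides in the kernel. */
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>

    #define MAX_TRIES 5

    int
    udp_request(int s, struct sockaddr_in *srv,
                const char *req, char *reply, int replylen)
    {
        struct timeval tv;
        fd_set fds;
        int try, n;

        for (try = 0; try < MAX_TRIES; try++) {
            /* retransmit the whole request on every timeout */
            if (sendto(s, req, strlen(req), 0,
                       (struct sockaddr *)srv, sizeof(*srv)) < 0)
                return -1;

            FD_ZERO(&fds);
            FD_SET(s, &fds);
            tv.tv_sec = 1 << try;       /* crude fixed backoff; TCP
                                         * adapts to the measured RTT */
            tv.tv_usec = 0;

            /* wait for a reply; on timeout, loop and resend */
            if (select(s + 1, &fds, NULL, NULL, &tv) > 0) {
                n = recv(s, reply, replylen, 0);
                if (n >= 0)
                    return n;   /* no sequence numbers, so a stale or
                                 * duplicate reply would fool us here */
            }
        }
        return -1;      /* and we still have no windowing, congestion
                         * avoidance, MTU discovery, or flow control */
    }

Note that this still limits the reply to a single datagram; the moment
you want multi-packet responses you are writing windowing and
reassembly by hand, which is to say, TCP.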

It would be much easier to do a connectionless TCP interface.
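
For a concrete example of the idea, look at T/TCP (RFC 1644): it
extends TCP so a small request/reply transaction costs a single round
trip, like UDP, while keeping TCP's reliability and congestion
behavior. Roughly, a client looks like this (a sketch against the
4.4BSD-style sendto()/MSG_EOF interface; take the details as
illustrative):

    /* Sketch, assuming 4.4BSD-style T/TCP (RFC 1644) support: the
     * request rides out with the SYN, and MSG_EOF sends the FIN along
     * with it, so the whole transaction is one round trip. */
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int
    ttcp_request(struct sockaddr_in *srv, const char *req)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0)
            return -1;
        /* no connect(): sendto() on an unconnected TCP socket does
         * the implied connection setup, data, and shutdown at once */
        if (sendto(s, req, strlen(req), MSG_EOF,
                   (struct sockaddr *)srv, sizeof(*srv)) < 0)
            return -1;
        return s;       /* read the reply from s as usual, then close */
    }

The server side can answer the same way, and all of TCP's normal
retransmission and congestion machinery still applies underneath.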

I would highly recommend that long-haul transport protocols be left to
people who have lots of real-world experience doing them. What seems
like obvious common sense and works fine in a high-speed LAN
environment is often a disaster in a WAN environment.

As a random example, normal NFS (which uses UDP) is pretty useless over
a 14.4 modem connection, but the BSD folks did an implementation of NFS
over TCP and it works fine (well, it's slow of course, but at least you
eventually get data out of it; the UDP version just times out).
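
A little back-of-the-envelope arithmetic shows why (a sketch: the 0.7
second figure is the usual default initial timeout, timeo=7, and the
overhead number is a rough guess):

    /* Why an 8K NFS-over-UDP read dies on a 14.4 modem (sketch). */
    #include <stdio.h>

    int
    main(void)
    {
        double bps = 14400.0;           /* modem line rate */
        double bytes = 8192.0 + 500.0;  /* 8K read + rough header and
                                         * fragmentation overhead */
        double timeo = 0.7;             /* typical default initial
                                         * NFS timeout, in seconds */

        printf("transfer %.1fs vs timeout %.1fs\n",
               bytes * 8.0 / bps, timeo);
        /* ~4.8s > 0.7s: the client times out and retransmits the
         * whole 8K datagram while the first copy is still trickling
         * in, and losing any one IP fragment discards the lot. */
        return 0;
    }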

As for a binary protocol, you are assuming that the bulk of the
protocol is a significant factor here, but that is not clearly the
case. Certainly in the big picture the protocol overhead isn't that
great. There are also some performance issues where it would probably
be a win, but it's not clear that this is even an issue. Surely a T1
line will max out long before even a cheap 486 or Mac runs out of
cycles for processing requests. Larger data pipes could do it (a T3
for sure), but how many people are hitting that limitation? It's not
clear that the benefits of a binary protocol outweigh the additional
troubles. Let's just say that the jury is still out until I see hard
data indicating otherwise.
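
Some rough numbers behind the T1 claim (the document and header sizes
are guesses at typical values, not measurements):

    /* How fast requests arrive on a saturated T1 (sketch). */
    #include <stdio.h>

    int
    main(void)
    {
        double t1 = 1544000.0;      /* T1 line rate, bits/sec */
        double doc = 10.0 * 1024.0; /* assume ~10K average document */
        double hdr = 300.0;         /* assume ~300 bytes of headers */

        printf("~%.0f requests/sec fill the T1\n",
               t1 / 8.0 / (doc + hdr));
        printf("headers are %.1f%% of the bytes\n",
               100.0 * hdr / (doc + hdr));
        /* ~18 requests/sec, ~3% overhead: parsing a few dozen text
         * headers a second is no strain for a cheap 486, and a
         * binary encoding could only win back that ~3%. */
        return 0;
    }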

Compression is another bad idea. It only really helps if the link is
slow, and the major slow link these days is the 14.4 modem, which
already supports compression in the hardware; you actually *lose*
throughput if you precompress. This doesn't mean that we shouldn't use
JPEG where it makes sense, but JPEG isn't just compression, it's an
entire encoding scheme; totally different issue.
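
To put rough numbers on the modem case (the 2:1 ratio for V.42bis on
text is a typical figure, and I'm assuming software compression gets
about the same):

    /* Why precompressing doesn't pay on a 14.4 modem (sketch). */
    #include <stdio.h>

    int
    main(void)
    {
        double line = 14400.0;  /* modem line rate, bits/sec */
        double v42bis = 2.0;    /* typical hardware ratio on text */
        double sw = 2.0;        /* assume software gets about the same */

        printf("plain text over V.42bis: ~%.0f bps effective\n",
               line * v42bis);
        printf("precompressed data:      ~%.0f bps effective\n",
               line * sw);
        /* the same effective rate, except now you have paid CPU time
         * at both ends and added latency before the first byte moves;
         * the modem can't squeeze the high-entropy data any further */
        return 0;
    }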

Caching is a different issue with different problems, but it is
independent of the underlying transport and is already being done, so
I will not address it here.

--sanders