Network Abuse by Netscape? -- Was: Mosaic replacements, etc...

Robert Raisch (raisch@internet.com)
Sat, 22 Oct 1994 19:35:47 +0100

On Sat, 22 Oct 1994, Internet Presence Inc. wrote, on the inet-marketing
mailing list, regarding the approach Netscape takes to retrieving all of
the graphical elements of a web document all at once:

> We've noticed that now, 4-5 "hits" will just pop up in the logs at
> once when people use Netscape or WebExplorer. It's no big deal.

Sorry, but it is a very big deal indeed. I have great concern over the
technical implications of this approach and I am not alone.

What Netscape has done, in a sense, is to abrogate its responsibilities
for efficient behavior at the expense of the network at large and those
who choose to operate http servers.

Background:

Netscape retrieves the document and, as it reads it character by
character, immediately initiates requests for all of the graphical
elements in the document as they are seen, opening a socket request for
each element. Those requests are fulfilled on separate virtual circuits
and the data then comes crashing down the pipe, to be displayed on the
user's screen. (Marc, please correct me if I am wrong.)
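
For concreteness, here is a minimal sketch, in Python, of the
parallel-fetch pattern described above: one connection per inline
element, all opened as the page is parsed. It is only an illustration of
the pattern, not Netscape's code; the host name and image paths are made
up.

    import socket
    import threading

    HOST = "www.example.com"                         # hypothetical server
    IMAGES = ["/img%d.gif" % i for i in range(16)]   # 16 inline elements

    def fetch(path):
        # Each inline element gets its own TCP connection (virtual
        # circuit), opened as soon as the parser sees the reference.
        s = socket.create_connection((HOST, 80))
        s.sendall(("GET %s HTTP/1.0\r\n\r\n" % path).encode())
        while s.recv(4096):
            pass                                     # drain the response
        s.close()

    # One thread, and therefore one simultaneous connection, per element:
    # this is what makes several "hits" appear in the logs all at once.
    threads = [threading.Thread(target=fetch, args=(p,)) for p in IMAGES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()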

Effects on Network Infrastructure:

What this does, besides making the retrieval appear to operate faster to
the user, is to create a very bursty network usage profile.

For example, it is entirely possible for a single Netscape document
request to max out a full T-1 for a small, but measurable, period of
time. This will, I suspect, cause Netscape-initiated web traffic to
operate faster *AT THE EXPENSE* of all the other network services.
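
To put rough numbers on that claim, here is a short back-of-the-envelope
calculation; the element count and sizes are illustrative assumptions,
not measurements of any particular page.

    T1_BITS_PER_SEC = 1.544e6        # T-1 capacity, roughly 1.544 Mbit/s
    ELEMENTS = 16                    # inline images fetched concurrently
    BYTES_PER_ELEMENT = 20 * 1024    # assume roughly 20 KB per image

    burst_bits = ELEMENTS * BYTES_PER_ELEMENT * 8
    print("burst size: %.2f Mbit" % (burst_bits / 1e6))
    print("time for a full T-1 to drain it: %.2f s"
          % (burst_bits / T1_BITS_PER_SEC))
    # Roughly 2.6 Mbit arriving at once is enough to occupy the whole
    # T-1 for well over a second, crowding out other traffic meanwhile.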

Effects on HTTP Servers:

This is also very server/provider unfriendly, because your server is
turned from one that serves numerous users' requests at the same time
into one that potentially serves a single "virtual" request.

Under SunOS, the kernel is preconfigured to provide only 32 IO slots
per process. When your server (either inetd, if you run under that
mechanism, or the actual www server itself) receives the requests that
make up a document with 16 graphical elements on it, that single user
has consumed half of the available IO slots for the length of time it
takes to fulfill the request. (Yes, I realize you can raise the IO-slot
limit and rebuild the kernel; that is not the point.)
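
To see what that budget means in practice, here is a small Python sketch
(Unix only) that lowers this demo process's own descriptor limit to the
32-slot figure quoted above and then opens one connection per inline
element until the table is full. It illustrates the limit itself, not
any particular server's code.

    import resource
    import socket

    # Emulate the per-process limit quoted above.
    resource.setrlimit(resource.RLIMIT_NOFILE, (32, 32))

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(32)
    addr = listener.getsockname()

    # Each inline image costs the server one accepted socket for the
    # life of the transfer.  (In this self-contained demo the "browser"
    # end lives in the same process, so every image actually costs two
    # descriptors and the table fills about twice as fast; the principle
    # is the same.)
    held = []
    try:
        while True:
            browser_side = socket.create_connection(addr)
            server_side, _ = listener.accept()
            held.append((browser_side, server_side))
    except OSError as exc:
        print("descriptor table exhausted after %d concurrent fetches: %s"
              % (len(held), exc))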

So, it's really a question of serving multiple users (at the same time)
at an expected rate, or serving fewer users at an improved rate.

This pushes the performance bottleneck squarely onto the shoulders of the
network infrastructure and the service providers, which, in a non-metered
network and in the current non-pay environment of the web, is grossly
irresponsible (IMHO).

</rr> (Robbing Peter to pay Paul.)