Re VT100 etc...

From: Dave_Raggett <dsr@hplb.hpl.hp.com>
Message-id: <9304071218.AA25570@manuel.hpl.hp.com>
Subject: Re VT100 etc...
To: timbl@nxoc01.cern.ch
Date: Wed, 7 Apr 93 13:18:37 BST
Cc: www-talk@nxoc01.cern.ch
Mailer: Elm [revision: 66.36.1.1]
In <9304061951.AA17526@nxoc01.cern.ch> Tim makes the case for
WWW browsers that work effectively over phone lines.

It also seems to me that we should revisit the specs for HTRQ and
the HTTP MIME headers to boost network performance via compression.

Tim says:

> The requirements or the phone line protocol are that
> it should be efficient, it should allow long transfers
> to be stopped at short notice to make way for new ones,
> and it should, preferably, look ahead to guess what the
> user might want next, and transfer it while he is reading
> what he has got.   This would take advantage of a reasonable
> amount of cache disk space on the user's machine, if he has
> it.  The ideal is to keep the phone line humming as it is
> the bottleneck. The user browses around, with an apparently
> very good response time. All the scrolling and such is done
> locally so that is instant.

> There is not as far as I know any existing public protocol
> which does this.   If anyone knows of one, please say!
> If anyone wants to [form a group to] define and implement
> one, then why not.  I see this as an important step
> toward getting the internet information world out to
> everyone in schools and homes.

Are you familiar with the standard for Compressed TCP/IP Headers for
Low-Speed Serial Links, RFC 1144? This cuts the headers down to an
average of 3 bytes and is both efficient and simple to implement
(about 250 lines of C). It was motivated by the need to get good
response times when using telnet over phone lines running from
300 to 19,200 bps. It is designed to increase line efficiency
while keeping the response time for character echo under 200
milliseconds. For a 9600 bps line there is no point in increasing
the packet size beyond 200 bytes: going to a maximum packet size of
576 bytes increases the average delay by 188% while improving
throughput by only 3%.
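
Not the cslip code itself, but to give a feel for why the average
header gets so small, here is a rough sketch of the delta idea in C.
The real scheme keeps per-connection state and uses variable-length
deltas plus the TCP checksum; the struct fields and names below are
made up for illustration only.

    /* Sketch: keep the last header sent and transmit only the fields
     * that changed, as one-byte deltas behind a change-mask byte. */

    #include <stdio.h>

    struct tcpip_hdr {              /* the fields that usually change */
        unsigned long seq, ack;
        unsigned short window, ip_id;
    };

    /* Encode hdr relative to prev into buf, return bytes used. */
    static int compress_hdr(struct tcpip_hdr *prev, struct tcpip_hdr *hdr,
                            unsigned char *buf)
    {
        unsigned char mask = 0;
        int n = 1;                  /* buf[0] is the change mask */

        if (hdr->seq != prev->seq) {
            mask |= 1;
            buf[n++] = hdr->seq - prev->seq;
        }
        if (hdr->ack != prev->ack) {
            mask |= 2;
            buf[n++] = hdr->ack - prev->ack;
        }
        if (hdr->window != prev->window) {
            mask |= 4;
            buf[n++] = hdr->window - prev->window;
        }
        if (hdr->ip_id != prev->ip_id) {
            mask |= 8;
            buf[n++] = hdr->ip_id - prev->ip_id;
        }

        buf[0] = mask;
        *prev = *hdr;               /* remember state for the next packet */
        return n;                   /* typically 2-4 bytes instead of 40 */
    }

    int main()
    {
        struct tcpip_hdr prev = { 1000, 500, 4096, 42 };
        struct tcpip_hdr next = { 1001, 500, 4096, 43 };  /* one byte echoed */
        unsigned char buf[16];

        printf("compressed header: %d bytes (uncompressed is 40)\n",
               compress_hdr(&prev, &next, buf));
        return 0;
    }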

The complete Berkeley Unix implementation of cslip is available by
anonymous ftp from ftp.ee.lbl.gov as "cslip.tar.Z". I expect versions are
also available from PC software vendors, e.g. Distinct of Saratoga CA who
can be contacted at mktg@distinct.com (if not we should press them to
implement cslip right away!). Smart IP implementations also open and close
the phone line intelligently - so you don't hang onto an expensive long
distance line a moment longer than you need to.

The next step is to devise a www browser which offers look-ahead (under
user control via a preferences menu of course!). This would need to take
into account available resources (disk, memory, line costs) to decide which
references to prefetch. I, for one, would hate the browser to prefetch
megabytes of postscript and jpeg files! This shows the need to check file
size first using the appropriate methods, and underlines the need for
HTTP2 servers to supply a Length: field with document headers.
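
To make the decision concrete, here is a rough sketch of the kind of
filter I have in mind, assuming the browser already knows each link's
type and Length: (the size limits and the document list below are
made up for illustration):

    #include <stdio.h>
    #include <string.h>

    struct link {
        const char *url;
        const char *type;           /* from Content-Type */
        long length;                /* from the Length: field, in bytes */
    };

    #define MAX_DOC (64L * 1024)    /* skip anything bigger than 64K */
    #define BUDGET (512L * 1024)    /* total local cache space to spend */

    static int worth_prefetching(struct link *l, long used)
    {
        if (strcmp(l->type, "text/html") != 0)
            return 0;               /* no megabytes of postscript or jpeg */
        if (l->length < 0 || l->length > MAX_DOC)
            return 0;               /* length unknown or document too big */
        if (used + l->length > BUDGET)
            return 0;               /* stay within the cache budget */
        return 1;
    }

    int main()
    {
        struct link links[] = {
            { "/overview.html", "text/html",              8000L },
            { "/bigmap.ps",     "application/postscript", 2000000L },
            { "/photo.jpg",     "image/jpeg",             300000L },
            { "/next.html",     "text/html",              12000L },
        };
        long used = 0;
        int i;

        for (i = 0; i < 4; i++)
            if (worth_prefetching(&links[i], used)) {
                printf("prefetch %s (%ld bytes)\n",
                       links[i].url, links[i].length);
                used += links[i].length;
            }
        return 0;
    }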

Finally, www servers should offer compressed versions of documents. I have
looked through the MIME spec and the appropriate thing seems to be:

 a) documents are returned with an attribute specifying the kind
    of compression

        Content-Type: text/html; compression=Z
        Content-Transfer-Encoding: 8BIT

 b) Browsers can request compression with the HTRQ field

        Accept-Encoding: compression=Z

Where "Z" stands for the standard compress/uncompress utilities.
Some alternatives could be "z" for gzip, and "lzh", "zip" for the PC.
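
To make the negotiation concrete, a request and reply might carry
headers along these lines (the request line syntax is whatever HTTP2
settles on, and the path and Length value are made up):

        GET /hypertext/Overview.html
        Accept: text/html
        Accept-Encoding: compression=Z

        Content-Type: text/html; compression=Z
        Content-Transfer-Encoding: 8BIT
        Length: 10240

        ... compressed document body ...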

What do you think?

Dave Raggett