Re: Holding connections open: an immodest proposal
Thu, 15 Sep 1994 00:26:40 +0200

I started to answer questions one by one but I think a single lump reply
is better:

Chris Lilley:

>That last sentence seems to carry an implicit assumption that images are
>*always* on the same server that references them, which is clearly not true.
>Unless you mean for the server to use a citation index such as the Webcrawler

Accepted, except that if the document is not actually on the server in question
then an MGET is not really appropriate. Unless we are talking about proxies...

Rick Troth:

> You may have solved it (below), but if not then there remains
>the problem of non-graphical clients getting a big multipart object
>with GIFs it can't use.

This is why there really have to be two requests if it isn't a monolithic
object. What we can do is send the Accept headers as normal and,
instead of having a monolithic MIME object, send a list of items. The
server reads the list and constructs the multipart response chunk by chunk.
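To make that concrete, here is a rough sketch (mine, not part of any spec; the header names and the boundary string are illustrative) of a server assembling such a response: each part carries an explicit Content-Length, and the boundary appears exactly once, at the end, purely as an EOF marker.

```python
# Sketch only: one way a server might build the multipart response
# from the client's list of requested items. Each part is framed by
# an explicit Content-Length; the boundary appears just once, at the
# very end, as the EOF marker. All names here are illustrative.

def build_response(items, boundary=b"END-OF-MGET"):
    """items: list of (url, body_bytes) pairs already read from disk."""
    chunks = []
    for url, body in items:
        headers = (
            b"Content-Location: " + url + b"\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n"
        )
        chunks.append(headers + body)
    # Single closing boundary as EOF -- the receiver never has to
    # pattern-match for it inside part bodies.
    chunks.append(b"--" + boundary + b"--\r\n")
    return b"".join(chunks)
```

The point of the per-part Content-Length is that part bodies stay 8-bit clean: nothing inside them ever needs to be escaped or scanned.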

> No it doesn't. This is how multipart might provide the needed
>EOF indicator (which I will mention in another note).

We would prefer to kill multipart boundaries except for the purpose of
giving the final EOF marker. The pattern matching algorithms take a
significant time and they really cannot be defended in an 8 bit clean
protocol. They are a hack to get through mail gateways.
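The cost difference is easy to see in a sketch (assumptions mine): with an explicit length the reader does one sized read, while with a MIME boundary it must inspect every line of the body for the marker.

```python
import io

# Sketch: why explicit lengths beat boundary scanning on an 8-bit-clean
# link. read_by_length does a single sized read; read_by_boundary must
# examine every line of the body looking for the delimiter -- the hack
# needed to get through mail gateways.

def read_by_length(stream, length):
    # One read of exactly `length` bytes; body content is irrelevant,
    # so arbitrary binary data passes through untouched.
    return stream.read(length)

def read_by_boundary(stream, boundary):
    # Every line must be pattern-matched, and the body may never
    # legally contain the boundary string itself.
    lines = []
    for line in stream:
        if line.rstrip(b"\r\n") == b"--" + boundary:
            break
        lines.append(line)
    return b"".join(lines)
```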

> I don't see how SMTP's connection keeping is harmful.

The post is logically one operation, but in normal SMTP you have to do three
operations to send a mail where one would suffice. The point is not whether
the connection is kept open or not; the point is that the traditional
negotiation of header information is a bad idea. It is a human-oriented
protocol, not a machine-oriented one.

Almost everyone complained about caching/proxies:

In the case of a cache, the cache simply knocks out all the relative
URLs that it can serve itself. The only point where this falls apart is when
security is used (Shen; SHTTP can't proxy), since removing a Relative-Url
line would break the message digest of the head. This could be got around
with extra `do not send' headers. This can be done as follows:

Digest-Boundary: MD5, random
Relative-Url: url1
Relative-Url: url2
If-Modified-Since: 1-Jan-1900

The semantics of If-Modified-Since then have to be specified such that it
relates either to the main URL or to the last URL specified in a Relative-Url
tag.

Remember that under this scheme there are two requests: one to fetch the base
document and one to fetch the associated bits 'n pieces from the server
in question. There are a few extra issues to consider wrt proxies, and in
particular wrt security.

Phill H-B.