Re: Faster HTTP Was:Re: The Superhighway Steamroller

Thu, 7 Jul 1994 13:03:23 +0200

In article <> you write:
|>From: (Marc VanHeyningen )
|>Subject: Re: Faster HTTP Was:Re: The Superhighway Steamroller
|>Message-ID: <>
|>Organization: CERN European Lab for Particle Physics
|>Distribution: cern
|>Date: Wed, 6 Jul 94 06:57:40 GMT-1:00
|>Expires: Sun, 17 Jul 1994 22:00:00 GMT
|>Lines: 54
|>Phil Hallam-Baker sed:
|>>One idea we had at dinner last night is to have `accept groups'. To first
|>>order one can infer, from the user agent id field, most of the image etc.
|>>formats understood. After all, all Mosaics are going to do GIF and HTML, the
|>>CERN linemode browser is going to do HTML etc... Now the problem here is
|>>maintenance, since the server must know what the groups mean... even if the
|>>group was declared long after the server came up... URL time!!!!
|>>OK so this >Looks< like we have an extra connection per transaction. Quelle
|>>horreur! In fact we cache the page - cleverly in parsed form. So we only do
|>>one extra GET and one parse for the accept group each time the server comes up
|>Of course, the precise Accept: header will be different not just for
|>each browser, but for each different mailcap (or other similar
|>configuration) file; i.e. it will be different for each site, and
|>plausibly different for each user. This means the browser needs to be
|>able to somehow create a document which specifies its current accept
|>status, which is possible but far from trivial or universal, and
|>caching will have limited benefit.

If you have a different Accept header you are most likely adding extra accepts
rather than subtracting (Yuk! I don't want GIFs ... ?).

I don't think we should get hung up about the 5% or 1% of cases which are
different. Nobody is suggesting getting rid of Accept headers altogether.
It's just an optimisation that can be applied in a large number of cases.
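As a sketch of the optimisation (the `Accept-Group:' header name and the group
URL below are inventions for illustration, not an agreed syntax), the common
case would collapse the long per-request list into one line:

```
# Today: every request carries the full list
Accept: text/html, text/plain, image/gif, image/xbm, audio/basic, ...

# With accept groups: one URL names the list; the server GETs and
# caches the group document once. Only genuinely unusual accepts
# would still travel per-request.
Accept-Group: http://info.cern.ch/hypothetical/groups/mosaic-2.4
Accept: application/x-unusual
```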

In general I would like to suggest a principle: *ALL* areas where there is a
degree of flexibility should be referenced via URLs. If we tried to compress
accept groups in a static manner there would always be a problem when new
accept groups were declared. URLs allow us to defer the specification and,
more importantly, to distribute it.

The same arguments apply to the security scheme. I am now experimenting with
sending certificates attached to URLs or URNs. E.g. if we want to
protect the traversal of an anchor :-

<a href="" crypt="">
You don't want to be seen looking at this!</a>

With URNs all this gets much easier. The keycert then becomes an interesting
subcase; the DN is after all a URN (of sorts), even though it has an odd
syntax. We should have a transliteration service, n'est-ce pas?

Personally I think the big gain is compressing the body while sending. If
anyone has a bunch of compression routines I can plug 'em in. I looked
at GZIP but there are hideous numbers of static and global variables. Anything
in the library has to be reentrant so that we can go multithreaded.
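The reentrancy requirement is just that each stream carry its own state
instead of using globals. A minimal sketch of the shape wanted, using
Python's zlib (whose compressor objects hold per-stream state, unlike the
gzip sources complained about above); the function names are mine:

```python
import zlib

# Each zlib.compressobj() owns its entire compression state, so two
# threads can each compress a response body with no shared globals.

def compress_body(chunks):
    """Compress an iterable of byte chunks with stream-local state."""
    co = zlib.compressobj()          # independent state per stream
    out = [co.compress(c) for c in chunks]
    out.append(co.flush())           # emit any buffered tail
    return b"".join(out)

def decompress_body(data):
    """Inverse of compress_body, for the receiving end."""
    return zlib.decompress(data)
```

A C library would get the same property from zlib's z_stream struct: all
state lives in the struct the caller passes in, none in file statics.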

Of course if your modem compresseth then you want to be able to turn this
stuff off. But there are still many who are cast into the outer darkness
with much wailing and gnashing of teeth etc. etc.

Phillip M. Hallam-Baker

Not Speaking for anyone else.