Smart server or smart client ?
Kevin Hoadley <K.Hoadley@directory.rl.ac.uk>
Date: Wed, 18 Nov 1992 10:04:15 +0000 (GMT)
From: Kevin Hoadley <K.Hoadley@directory.rl.ac.uk>
Reply-To: K.Hoadley@directory.rl.ac.uk
Subject: Smart server or smart client ?
To: www-talk@nxoc01.cern.ch
In-reply-to: Putz.parc@xerox.com's message of Tue, 17 Nov 1992 12:13:13 PST: <92Nov17.121323pst.58401@spoggles.parc.xerox.com>
Message-id: <Ximap.722084834.9952.khoadley@danton>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Steve Putz wrote:
>I like the idea of your suggested INPUT tag. It seems like a logical
>extension of the current protocol, and I can think of many uses for it,
>including form-based queries and maybe even a server-based Hypertext editor
>running on a very simple generic client.
>
>Steve Putz
>Xerox PARC
>
>
This raises the question of smart servers vs smart clients.
How much does the client need to know about format conversion? The client
knows what formats it understands (maybe extensible through a system like
a MIME mailcap), but does it need to know how to convert between
different formats? Adding format conversion to the clients will make
them much more complicated, which is usually not a good thing (witness
the success of POP over IMAP in the PC mail market: POP is simple-minded,
easy to implement and as a result widespread; IMAP is a rather nice protocol,
fairly powerful, but more difficult to implement, though not THAT difficult.
RESULT: though IMAP is technically far superior to POP, POP is winning
the war).
Thus I think it would be a good idea to keep the clients simple and shunt
the complexity where possible into the server (it doesn't matter much if
there are only one or two server implementations; it matters a great deal,
however, if clients are limited to certain Unix boxes).
I think we can learn from the way the DNS works. There are two types of DNS
queries: iterative and recursive. With an iterative query, the client
(resolver) sends a query to one server. From the result it may be referred
to another server, which it then queries, and so on until it reaches a
conclusion. With a recursive query, the resolver (a stub resolver) sends
the initial query (with the RD bit set) to a local server, which then does
all the work. The protocol remains the same in both cases, and the actual
lookup procedure remains the same; they differ only in whether the local
resolver does all the work or simply punts it to a local server.
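To make the distinction concrete, here is a rough sketch (in Python, with an
invented query_server() helper standing in for the wire protocol) of the two
styles of lookup:

    # Sketch only: query_server(server, name, recursion_desired) is a
    # made-up helper that returns either ("answer", data) or
    # ("referral", [other servers to try]).

    def iterative_lookup(name, root_servers, query_server):
        # The client chases the referrals itself.
        servers = list(root_servers)
        while servers:
            kind, result = query_server(servers[0], name, recursion_desired=False)
            if kind == "answer":
                return result
            servers = result    # referral: try the servers we were pointed at
        return None

    def recursive_lookup(name, local_server, query_server):
        # A stub resolver: ask the local server once and let it do all
        # the work (the query goes out with recursion desired).
        kind, result = query_server(local_server, name, recursion_desired=True)
        return result if kind == "answer" else None

The protocol on the wire is identical; only the amount of work the client
takes on differs.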
The best way to support small machines such as PCs, Macs, Amigas etc.
within the WWW would seem to me to be to support the concept of a stub client
that merely throws everything at the server and leaves it to sort out the
mess. When the stub client connects to the server it informs the server
what formats it understands (lists out its wwwcap?). Compatibility with
existing clients can be maintained in that if a client doesn't inform the
server of the formats it knows, we can assume a minimum (HTML + plain
text?).
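As a very rough sketch of what that first exchange might look like (the
header name and helpers here are inventions of mine, not anything agreed):

    MINIMUM_FORMATS = ["text/html", "text/plain"]   # assumed default set

    def build_request(document, client_formats=None):
        # Stub client side: announce our "wwwcap" -- the formats we can
        # render -- so the server knows what it may have to convert to.
        lines = ["GET " + document]
        if client_formats:
            lines.append("Accept: " + ", ".join(client_formats))  # invented header
        return "\r\n".join(lines) + "\r\n\r\n"

    def acceptable_formats(request_headers):
        # Server side: a client that says nothing gets the minimum.
        accept = request_headers.get("Accept")
        return accept.split(", ") if accept else MINIMUM_FORMATS

A dumb client that only ever sends "GET document" is still served, just
under the minimal assumptions.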
This can be extended further... consider a site running a firewall,
limiting access to the outside world to a few selected hosts (poor misguided
people!). If I'm sitting on a PC reading a document with a link
to another document outside my site, that link is useless to me because
I can't access the outside world. On the other hand, a central site server
might have external access, so if I can punt the query to that server
and say "you go fetch this for me", everything is hunky-dory. The underlying
protocol doesn't change; all that happens is that I can ask any server
to get me the answer, rather than just the right one.
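In code terms nothing new is needed; the stub client simply points its one
request at whichever server it can reach (the server names below are made up):

    def fetch(server, document_address, send_request):
        # send_request(server, text) is a stand-in for the transport.
        # The request names the full document address, so any server
        # that can reach the outside world can satisfy it for us.
        return send_request(server, "GET " + document_address + "\r\n\r\n")

    # Direct, if we can get out:
    #   fetch("remote.site", "http://remote.site/doc.html", send)
    # From behind the firewall, same request, different server:
    #   fetch("local-gateway", "http://remote.site/doc.html", send)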
Again, look at the DNS. The great advantage of stub resolvers is that they
lead to a centralised cache at the local server. If we punt WWW queries
to a local server we get the same: the ability to build a cache of the
information regularly used by the locals. (I'm writing from a research lab
with a heavy slant towards high energy physics - thus there is likely
to be a slant in the documents that users around here want; caching this
working set would be a big win.)
(For this caching to work well we'd need to associate a time to live
with each document - again like the DNS.)
It should be possible to cache format conversions as well.
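A sketch of what such a cache might look like at the local server, keyed on
both the document address and the format it was converted to, with a
DNS-style time to live (all the names here are mine):

    import time

    class DocumentCache:
        def __init__(self):
            self.entries = {}   # (address, format) -> (expires_at, document)

        def get(self, address, format):
            entry = self.entries.get((address, format))
            if entry is None:
                return None
            expires_at, document = entry
            if time.time() > expires_at:        # time to live has expired
                del self.entries[(address, format)]
                return None
            return document

        def put(self, address, format, document, ttl_seconds):
            self.entries[(address, format)] = (time.time() + ttl_seconds, document)

A hit on ("somedoc", "text/plain") then saves not just the fetch but the
conversion as well.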
Anyway, these are just some ideas; maybe useful, maybe not.
Kevin Hoadley, Rutherford Appleton Laboratory, khoadley@ib.rl.ac.uk