Re: Information integration at client or server?
On Fri, 22 Jul 1994 05:50:51 +0200, Nick Arnett writes:
> It seems to me that there are some significant
> implications for HTML as well. If it fails to provide
> the means to describe the structure of information
> coming from heterogeneous sources, that's a virtual
> guarantee that the browser developers will bypass
> HTTP as the delivery protocol, I'd imagine.

Actually, the trends I see point in the opposite direction. Due to
the unifying concepts of URLs and MIME, HTTP provides a convenient,
portable, widely available tool for distributing and hyperlinking
*any* sort of static or interactive information. To the extent that
HTML fails to fulfill needs, HTTP rather stands to gain in relative
importance.

Two important examples of this trend on the horizon:
* Several groups are working on HTTP-based hyper-TeX (really
hyper-DVI) for distributing mathematical and scientific text.
* WIRED is leading a discussion on HTTP-based distributed virtual
reality. A large number of commercial and academic groups have
expressed interest in working on this.
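A minimal sketch of such an exchange (hypothetical URL, path, and
headers) shows why HTTP gains ground here: the MIME typing in the
response lets the same protocol carry DVI, scene descriptions, or
HTML alike, and the browser simply dispatches on the Content-Type:

```
GET /papers/knots.dvi HTTP/1.0

HTTP/1.0 200 OK
Content-Type: application/x-dvi
Content-Length: 40960

<binary DVI data>
```

Nothing in this exchange depends on the content being HTML; the
format is negotiated entirely through the MIME type.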

As far as I can see, the pressures to move beyond HTTP and HTML are
coming from different sources. The server maintainers are the ones
who are looking for new protocols beyond HTTP, while the browser
authors are the ones looking for models of content beyond HTML.

These pressures aren't necessarily bad. I agree that what makes the
Web the Web -- what must remain constant in all these scenarios in
order to hold the Web together -- is the hyperlink, the dynamic
combination of:
1. URI addressing,
2. MIME typing, and
3. link relationship values.
Naturally, it is also helpful if the number of protocols and content
models is small, but I don't think that total unity is needed.
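The three ingredients above can be seen working together in a single
hyperlink (the URL and REL value here are hypothetical, purely for
illustration): the HREF supplies the URI address, the REL attribute
carries the link relationship, and the MIME type arrives with the
server's response to tell the browser how to handle the target:

```
<!-- URI addressing via HREF; link relationship via REL -->
<A HREF="http://example.org/papers/knots.dvi" REL="bibliography">
a hyper-DVI paper</A>
<!-- MIME typing is supplied by the response headers:
     Content-Type: application/x-dvi -->
```

Note that only the first ingredient lives in the document itself; the
other two travel with the link traversal, which is what lets the Web
absorb new content models without breaking old ones.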

Paul Burchard <>
``I'm still learning how to count backwards from infinity...''