Usenet news and WWW

Karl Lehenbauer <karl@one.neosoft.com>
Subject: Usenet news and WWW
To: www-talk@nxoc01.cern.ch
Date: Tue, 12 Jan 93 0:06:00 CST
From: Karl Lehenbauer <karl@one.neosoft.com>
In-reply-to: <9301111351.AA00475@www3.cern.ch>; from "Tim Berners-Lee" at Jan 11, 93 2:51 pm
X-Mailer: ELM [version 2.2 PL13]
Message-id: <9301120006.AA00591@One.NeoSoft.Com>

I am a latecomer, so forgive me if this is naive or old hat.

Many of the issues that people seem to be grappling with are already
handled by news.

For example, we are talking about caching nodes.  News has highly evolved
caching capabilities -- I mean, caching is what it is all about -- both for 
TCP/IP and UUCP-based links.

Someone mentioned the issue of caching and node names; apparently
node names would have to be rewritten by the caching site or made
machine-independent in some way (?).  Article IDs are guaranteed unique
and are server-independent.  The mechanism for translating article
IDs to filenames is fast and pretty highly evolved.
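
(To make that concrete: a typical server keeps a dbm-indexed "history"
file mapping each article ID to a group/number pair, and the spool
filename falls out mechanically.  A rough sketch of the lookup in
Python -- the paths and history-line format here are approximations,
not any particular server's layout:)

    import dbm

    SPOOL = "/usr/spool/news"            # spool root; site-specific

    def article_path(message_id):
        """Map an article ID to a spool filename (sketch)."""
        with dbm.open("/usr/lib/news/history", "r") as hist:
            entry = hist[message_id.encode()].decode()
            # entry looks roughly like: "<id>\tdates\tnews.group/1234"
            group_num = entry.split("\t")[2].split()[0]
            group, number = group_num.rsplit("/", 1)
            return "%s/%s/%s" % (SPOOL, group.replace(".", "/"), number)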

Oh, ugh, "Supercedes:" doesn't cut it unless the article superceding
the old one replaces its article ID, which would probably be Bad.

Expiration dates can be set with "Expires:", and sites that 
archive certain groups already do special things on "Archive-Name:".
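
(For reference, the relevant headers in an RFC 1036 article look
something like this -- the date and archive name are made up:)

    Expires: Fri, 12 Feb 93 00:06:00 GMT
    Archive-name: www/design-notes/part1

(Strictly, "Archive-name:" usually appears as a pseudo-header at the
top of the body in the sources and FAQ groups rather than as a true
header, but the archiving sites key off it either way.)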

Plus news is already ultra-portable.

Is the brief-connection-per-document approach of HTTP still necessary
when the data is widely replicated?

It would be painful to go reap all the references that
point to expired articles, although if a user traversed to an expired
article, perhaps it could be pulled off of tape or an NNTP superserver 
somewhere.
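
(Retrieval by article ID is already in the protocol: NNTP's ARTICLE
command accepts a message ID, so a client could fall back to an
archive server transparently.  A sketch using Python's nntplib -- the
server name is made up:)

    from nntplib import NNTP

    def fetch_expired(message_id):
        """Fetch an article by ID from a hypothetical archive server."""
        server = NNTP("archive.example.com")   # made-up superserver
        try:
            resp, info = server.article(message_id)
            return info.lines                  # raw article lines
        finally:
            server.quit()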

Clearly the authors of WWW think news is important because WWW has 
nice capabilities for accessing NNTP servers.  What, then, is the 
motivation for HTTP as opposed to, say, using news with HTML article 
bodies?
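
(For what it's worth, such an article might look something like this
-- a made-up example, using the news: URL scheme WWW already has for
addressing articles by ID:)

    Newsgroups: comp.infosystems
    Message-ID: <node-demo-1@one.neosoft.com>
    Subject: A hypertext node as a news article
    Content-Type: text/html

    <TITLE>A hypertext node as a news article</TITLE>
    <P>Links could point at other articles by ID, e.g.
    <A HREF="news:node-demo-2@one.neosoft.com">the next node</A>.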