Re: web roaming robot (was: strategy for HTML spec?)

Thomas A. Fine <fine@cis.ohio-state.edu>
Date: Thu, 14 Jan 93 14:50:30 -0500
From: Thomas A. Fine <fine@cis.ohio-state.edu>
Message-id: <9301141950.AA29567@soccer.cis.ohio-state.edu>
To: murphy@dccs.upenn.edu, timbl@nxoc01.cern.ch
Subject: Re: web roaming robot (was: strategy for HTML spec?) 
Cc: www-talk@nxoc01.cern.ch, litwack@dccs.upenn.edu
X-Mailer: Perl Mail System v1.1
>Timbl:
>> Great idea, LOTS of applications.  Traversing a tree to a given depth
>> makes a book.  Tony's WWWVeronica is a great idea -- particularly as  
>> it can pick up WAIS indexes and Gopher and telnet sites all together
>> and make a megaIndex of the whole scene!
>
>I agree that the map produced by traversing all of WWW would be
>QUITE large and that it would be more efficiently searched as
>an Index (WWWVeronica), if the output of the Robot were Text.
>
>A new twist: (I am still trying to catch up on all the features &
>functionality of WWW, so if WWW already does this, please tell me)
>
>Is it possible for a user of WWW to see, graphically, a map of the
>Web?  If the map produced by the WRR (Web Roaming Robot) were to
>produce a map instead of a book, and the user could point at a spot on
>the map and select it, then I think it would be a lot easier
>to navigate.

Well, that's more or less what I meant when I said this would make an
ideal history mechanism.

>There could be a "short-range" scan which would show a close-up view
>of the web links all around the current node, up to around 3 or 4
>levels.
>
>There could be a "long-range" scan which would show a global
>view of the Web, leaving out the details of the local links.

I'd prefer continuous zoom.  I've seen widgets that do this, although
I'm not sure where to find one.  At any rate the problem is solvable.

>This requires an extension beyond hypertext into hypermedia, of
>course.

Not necessarily.  As a first pass, it can be implemented as history
for the browsers.  Combine that with a saveable history, and you could
just fold the whole web into your history.
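To make the saveable-history idea concrete, here is a minimal sketch in
Python (the names `History`, `visit`, and `save` are illustrative, not
taken from any real browser): the history is just a map from each page
to the links followed from it, which can be written out and reloaded.

```python
# Sketch of a saveable browser history kept as a link map.
# All class and method names here are made up for illustration.
import json


class History:
    def __init__(self):
        # URL -> list of URLs the user followed from that page
        self.links = {}

    def visit(self, from_url, to_url):
        """Record that a link was followed from one page to another."""
        self.links.setdefault(from_url, []).append(to_url)

    def save(self, path):
        """Write the whole link map to disk."""
        with open(path, "w") as f:
            json.dump(self.links, f, indent=2)

    @classmethod
    def load(cls, path):
        """Reload a previously saved link map."""
        h = cls()
        with open(path) as f:
            h.links = json.load(f)
        return h
```

Saved and reloaded this way, the history file is effectively a partial
map of the web as one user has seen it.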

On the other hand, I have (very distant) plans to lay a WWW browser on
top of Ghostscript.  The browser would convert documents to PostScript
(also very convenient for printing).  Ghostscript provides hooks for
inserting mouse functionality, for the clicking, and away we go.

The point is this: once that's done, hypertext PostScript just sort of
happens.  It's there, with no additional work whatsoever.

>Anyway, would hyper-graphical representations of the Web be useful?
>If it is worthwhile, is it doable?

Useful, yes; doable, yes; easy, no.

The hard part is telling a computer to lay out the graph.  You can assume
a tree structure for any given local site, and be mostly correct, but I
get the impression that the web is much more web-like across multiple
sites.  Even assuming a tree structure, you have to make sure that none
of your non-tree links are drawn on top of each other (in parallel, that is).

Suppose we have a tree like this:

	      A
	     /|\
	    / | \
	   B  C  D

If B, C, and D are all inter-linked, you can't draw their links and
have them all be visible, unless you move one of B, C, or D; or use
curved lines as links (anybody know how they generate the Usenet
maps?).  The specific solution is obvious, but generalizing it is not
(to me, anyway).
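The specific collision above can at least be detected mechanically.
Here is a small Python sketch (coordinates are illustrative) showing
that with B, C, and D laid out on one row, the straight B-D link passes
right through C, so some node must move or the link must curve:

```python
# Sketch: detect when a straight link would be drawn through a node.
# B, C, D sit on one row, as in the tree diagram above.
def collinear(p, q, r):
    """True if point r lies on the straight segment from p to q."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    cross = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    if cross != 0:
        return False  # r is off the line entirely
    # r is on the line; check it falls within the segment's bounds
    return min(px, qx) <= rx <= max(px, qx) and min(py, qy) <= ry <= max(py, qy)


B, C, D = (0, 0), (2, 0), (4, 0)
print(collinear(B, D, C))  # prints True: the B-D link runs over C
```

A general layout algorithm has to run checks like this for every
non-tree link and then reroute or reposition, which is where it gets
hard.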

One concept that would probably help is grouping.  It's common that a
large body of documentation has a single entry point (or at least a
small number of them).  These could be displayed as a single element in
the map, until a certain zoom level is reached.  A good first guess
would be to treat each site as a single entity.
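The each-site-is-one-element guess is easy to sketch: bucket every
document URL by its host, and draw one map element per bucket until the
user zooms in.  A rough Python illustration (the sample URLs are just
examples):

```python
# Sketch of grouping documents by site for a zoomed-out web map.
from collections import defaultdict
from urllib.parse import urlparse


def group_by_site(urls):
    """Bucket document URLs by their host, one bucket per map element."""
    groups = defaultdict(list)
    for url in urls:
        groups[urlparse(url).netloc].append(url)
    return dict(groups)


docs = [
    "http://info.cern.ch/hypertext/WWW/TheProject.html",
    "http://info.cern.ch/hypertext/WWW/Addressing/Addressing.html",
    "http://www.cis.ohio-state.edu/index.html",
]
print(group_by_site(docs))  # two buckets: one per site
```

At low zoom each bucket is drawn as a single node; crossing a zoom
threshold expands the bucket into its individual documents.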

Parting Shot - does it occur to anyone that a Virtual Reality interface
might provide the best view of the web, and the best way of traversing it?
And does it also occur to anyone that this view of the web might look
a lot like the cyberspace described by author William Gibson?

	  tom