Extending HTML

Chris Adie <cja@castle.edinburgh.ac.uk>
Via: uk.ac.edinburgh.castle; Wed, 28 Apr 1993 09:22:56 +0100
To: www-talk@nxoc01.cern.ch
Reply-To: C.J.Adie@edinburgh.ac.uk
Date: Wed, 28 Apr 93 9:21:49 WET DST
Message-id: <9304280921.aa24764@castle.ed.ac.uk>
RARE has set up a short study on network access to multimedia
information, which I am conducting.  I'm listening to the discussions on
extending HTML and HTTP with great interest. 

Talking to some of the potential USERS of multimedia information servers
(MMIS) and looking at their projects, I came up with a list of
requirements.  I know that WWW can do much of this, but not all, and it
seems that this might be a good time to mention some of these
requirements.  There are sections on Hyperlinks, Presentation (this
matches up with discussions on text flowing, inline images etc),
Searching, QoS and Management.  I'd be interested in comments. 

Hyperlinks

It is clear that many applications require their users to be able to
navigate through the information base according to relationships
determined by the information provider - in other words, hyperlinks. 

Some "hypermedia" systems are in fact simply hypertext, in that they
require the source anchor of a hyperlink to be in a text node.  A true
hypermedia system allows hyperlinks to have their source anchors in
nodes of any media type.  This allows a user to click the mouse on a
component of a diagram or on part of a video sequence to cause one or
more related nodes to be retrieved and displayed. 
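One way this might be expressed in HTML is an ISMAP-style attribute on inline images, whereby the coordinates of a mouse click are passed back to the server for resolution into a link.  A sketch (the server script address below is purely illustrative):

```html
<!-- Clicking on the diagram sends the click coordinates back to the
     server, which maps them to the appropriate target node.  The
     script URL is illustrative only. -->
<A HREF="http://info.server/scripts/diagram-map"><IMG SRC="diagram.gif" ISMAP></A>
```

The same idea extends less easily to video, where the source anchor would need a time dimension as well as a spatial one.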

Some hypermedia systems allow the target anchors of hyperlinks to be
finer-grained than a whole node - eg the target anchor could be a word
or a paragraph within a text document.  It is not clear to me whether
this is a critical requirement.
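For text nodes at least, HTML already provides this through named anchors: a fragment identifier in the address selects a point within the target document.  The filenames and anchor names below are illustrative:

```html
<!-- In results.html, mark a target anchor within the document: -->
<A NAME="summary">Summary of results</A>

<!-- In the source document, link to that specific point: -->
See <A HREF="results.html#summary">the summary</A> for details.
```

The open question is whether an equivalent mechanism is needed for non-text nodes, eg addressing a region of an image or an interval of a video sequence.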

Presentation

Related information of different media types must be capable of
synchronised display.  The synchronisation may be in time and/or space. 
For instance, an image may have an associated text caption which should
be retrieved at the same time as the image and displayed adjacent to it,
perhaps in a window which the user can scroll.  A sound clip may have
some associated text (perhaps a translation) which must be displayed in
sync with the sound, eg in an automatically-scrolled window. 
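No current HTML construct expresses such a grouping.  Purely as a hypothetical sketch, an extension might tie a caption to its image explicitly - neither the SYNC nor the CAPTION element below exists in HTML as it stands:

```html
<!-- Hypothetical extension - these elements are inventions -->
<SYNC>
<IMG SRC="glacier.gif">
<CAPTION>Retreat of the glacier, 1850-1990</CAPTION>
</SYNC>
```

Time synchronisation (the sound-plus-scrolling-text case) would need something richer still, presumably with timing information attached to each component.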

Searching

Database-type applications require varying degrees of sophistication in
retrieval techniques.  Typically, non-text nodes form the major data of
interest.  Such nodes have associated descriptions, which may be plain
text, or may be structured into fields.  Users need to be able to search
the descriptions, obtain a list of "hits", and select nodes from that
list to display.  Searching requirements vary from simple keyword
searching to full-text indexing (with or without Boolean combinations of
search words), to full SQL-style database retrieval languages.
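At the simple-keyword end of this spectrum, HTML's ISINDEX element already provides a hook: the document declares itself searchable, the client prompts for keywords, and the keywords are sent back appended to the document's address.

```html
<!-- Declares this document searchable.  The client prompts for
     keywords and requests eg http://server/db?glacier+alps -->
<ISINDEX>
```

The more sophisticated cases (fielded descriptions, Boolean combinations, SQL-style queries) have no HTML-level support at present and would presumably be handled by gateways.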

The user must be able to retrieve and display all the information in a
record in a single operation.  (This may involve several separate
retrieval operations "under the hood", but the user should not be aware
of this.)

Quality of service

User toleration of delay in computer systems depends on user perception
of the nature of the requested action.  If the user believes that no
computation is required, tolerable delays are of the order of 0.2s.  If
the user believes the action he or she has requested the computer to
perform is "difficult" - for instance a computation of some form - then
a tolerable delay is of the order of 2s.  Users tend to give up waiting
for a response after about 20s.

Such user expectations are difficult to meet, particularly for
voluminous multimedia data.  There are several ways of alleviating this
problem, some of which are described below. 

 *	Give clues that fetching a particular item might be time-consuming -
	simply quoting the size may be sufficient. 

 *	Display a "progress" indicator while fetching data. 

 *	Allow the user to interact with other, previously fetched information
	while waiting for data to be fetched. 

 *	Allow several fetches to be performed in parallel. 

 *	Pre-fetch information which the client software believes the user
	will wish to see next. 

 *	Cache information locally. 

 *	Where multiple copies of the same information are held in different
	network locations, fetch the "nearest" copy.  This is (sometimes)
	known as "anycasting". 

 *	Offer multiple views of image or video data at different
	resolutions (and therefore sizes), enabling the user to select a
	balance between speed of retrieval and data quality. 

 *	Make provision for using isochronous data streams (if available)
	for audio and video.
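The first of these, and the multiple-resolution point, could perhaps be supported with a little extra markup.  As a hypothetical sketch - the SIZE attribute below is an invention, not existing HTML:

```html
<!-- Hypothetical SIZE attribute giving the size in bytes, so the
     client can warn the user before a lengthy fetch -->
<A HREF="eruption.mpg" SIZE="4500000">Full video sequence (4.5 Mbyte)</A>
<A HREF="eruption-small.mpg" SIZE="600000">Reduced version (600 kbyte)</A>
```

Even without such an attribute, information providers can simply quote the size in the anchor text, as shown above.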

Management

In order to support applications involving real-money information
services (eg academic publishing) and learning/assessment applications,
there must be a reliable and secure access control mechanism.  A simple
password is unlikely to suffice - Kerberos authentication procedures are
a possibility. 

Users must be able to determine the charge for an item before retrieving
it (assuming that pay-per-item will be a common paradigm - alternatives
such as pay-per-call, pay-per-duration are also possible).  Access
records must be kept by the information server for charging purposes. 
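Again hypothetically, the charge might be advertised in the markup itself, so that the client can display it before the user commits to a retrieval.  The CHARGE attribute below is an invention, not part of HTML:

```html
<!-- Hypothetical CHARGE attribute; not existing HTML -->
<A HREF="paper42.ps" CHARGE="2.50 GBP">Full text of the paper (2.50 pounds)</A>
```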

Learning applications have similar requirements, except that the purpose
here is not to charge for information retrieved, but to monitor and
perhaps assess a student's progress. 


One requirement which escaped the above list is the ability to specify
a mail address in a hypermedia document, in such a way that the user
can click on it to invoke his/her mailer with the address ready in the
To: field.
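A natural way to express this would be a mail-address URL scheme, so that the ordinary anchor mechanism carries the address (the exact scheme name is illustrative):

```html
<A HREF="mailto:C.J.Adie@edinburgh.ac.uk">Mail comments to Chris Adie</A>
```

The client, on seeing such an anchor, would invoke the user's mailer rather than attempt a document retrieval.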

Chris Adie                                   Phone:  +44 31 650 3363
Edinburgh University Computing Service       Fax:    +44 31 662 4809
University Library, George Square            Email:  C.J.Adie@edinburgh.ac.uk
Edinburgh EH8 9LJ, United Kingdom