Re: launching executables through HTML files

From: chrisf@sour.sw.oz.au (Christopher Fraser)
Message-id: <199306250058.AA27421@sour.sw.oz.au>
Subject: Re: launching executables through HTML files
To: www-talk@nxoc01.cern.ch
Date: Fri, 25 Jun 1993 10:58:37 +1000 (EST)
Return-Receipt-To: chrisf@suite.sw.oz.au
X-Mailer: ELM [version 2.4 PL21]
Content-Type: text
Content-Length: 4394      

(This is a repost; I think the previous one got lost.)

But I thought Lou Montulli said:
> 
> > 
> > What's the current status on the idea of allowing hyperlinks in HTML
> > documents to cause executables to be launched?  
> > 
> > Also, there was the idea that maybe executables should only be allowed
> > to be launched from documents residing on the local host (not over
> > ftp, http, or anything else).  This seems like a pretty much useless
> > restriction with the current state and expansion of transparent
> > networked filesystems, though.
> > 
> 
> 
> Most network file systems still appear to be local accesses so this IS
> still a valid idea.  I think it is unreasonable to expect users to be
> able to make reasonable judgements about how a particular command might
> affect their system.  Do YOU really think that there is no one clever
> enough to fool YOU with some cryptic UNIX trojan link?  The average
> user isn't competent enough to tell whether some command will destroy
> his system, therefore tight control should be exercised with this
> link type.  Another issue is how can a single executable link be useful
> on other systems?  Why should a machine specific link be usable across
> the network?  We are playing with fire with this link type, I know this
> from experience.
> 
> My suggestion would be to allow "exec:" type links to be run only if
> the file that contains them resides on the local filesystem.  (physical,
> NFS, AFS, etc.)  Or a file that resides on a trusted host. (in a list?)  
> That way there is at least some sense of security.
> 

I think the problem with the current suggestions is that they rely on the
server supplying pathnames to commands. Firstly, the namespace on a machine is
particularly machine- and configuration-specific; relying on a command being in
a particular directory is a bit tenuous. Secondly, we want the user and site
webmaster to be able to configure the way the commands are run.

A much better approach, IMHO, is to have a symbolic command
name which is mapped to a particular command by the browser (client, whatever
the correct terminology is). For example:

  exec:text-edit%/etc/passwd

The browser has some default mapping for text-edit (eg: using the EDITOR
environment variable in an xterm); however, the user can override this if they
like (eg: if DISPLAY is set, and EDITOR is emacsclient, don't bother with
the xterm). If there is no mapping for the command name, some sort of requestor
thing is popped up. There is certainly scope to fetch the mapping information
from some remote server, but ultimately it's up to the browser to interpret
the command.
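To make the mapping idea concrete, here is a rough sketch in Python. Everything
in it is my invention rather than a spec: the exec:NAME%ARG shape, the default
table, and the override mechanism are all illustrative.

```python
# Sketch only: parse a hypothetical "exec:" link of the form exec:NAME%ARG
def parse_exec_link(link):
    assert link.startswith("exec:")
    name, sep, arg = link[len("exec:"):].partition("%")
    return name, (arg if sep else None)

# Browser-side table mapping symbolic names to command templates.
# These defaults are illustrative, not a proposal for actual syntax.
DEFAULT_MAPPING = {
    "text-edit": "xterm -e ${EDITOR:-vi}",
}

def resolve_command(name, user_mapping=None):
    mapping = dict(DEFAULT_MAPPING)
    if user_mapping:
        mapping.update(user_mapping)  # the user's overrides win
    return mapping.get(name)          # None -> pop up a requestor

name, arg = parse_exec_link("exec:text-edit%/etc/passwd")
print(resolve_command(name), arg)
```

The point is that the link only ever names "text-edit"; what actually runs is
decided entirely on the browser's side.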

You should be able to pass arguments in some manner, eg: everything
after the percent or something similar. It should be up to the
command to check for tainted/invalid arguments. I'd imagine most of the
commands would be shell scripts which assemble a command line and exec
it (Macs and PCs would have some sort of analogue).
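A command's taint check might look something like this sketch. The whitelist
and the wrapper name are made up; the point is only that the command, not the
browser, validates its arguments, and that it execs an argv list rather than
going through the shell.

```python
import re
import subprocess

# Illustrative whitelist; a real command would pick its own rules.
SAFE_ARG = re.compile(r"^[A-Za-z0-9_./-]+$")

def run_checked(command, args):
    for a in args:
        # Reject shell metacharacters and option-looking arguments.
        if not SAFE_ARG.match(a) or a.startswith("-"):
            raise ValueError("rejecting suspicious argument: %r" % a)
    # Exec with an argv list, no shell, so nothing can expand.
    return subprocess.run([command] + args).returncode
```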

Normally the browser should wait for the command to finish (reporting the exit
status if nonzero, etc.); however, it would be cute to also have a mechanism
whereby the browser creates a pipe, and then runs the command with
stdin/stdout attached to the other end of the pipe. The browser then talks
http to the command across the pipe. The namespace of the command would
be something like:

  local:pid/

When the last page from the command's namespace was closed, the command would
be killed (SIGHUP, or some additional http construct).
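As a sketch of that pipe arrangement, using cat as a stand-in for the local
command (a real command would serve documents itself, and the request shape
here is only illustrative):

```python
import signal
import subprocess

# The "browser" launches a local command and speaks an HTTP-like
# protocol over its stdin/stdout.
proc = subprocess.Popen(
    ["cat"],                       # placeholder for the local command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

proc.stdin.write(b"GET local:%d/ HTTP/1.0\r\n\r\n" % proc.pid)
proc.stdin.close()                 # browser done talking
reply = proc.stdout.read()         # read the command's response

# When the last page from this command's namespace is closed,
# hang it up (the SIGHUP suggested above).
proc.send_signal(signal.SIGHUP)
proc.wait()
```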

Now, why would we want to do this? Well, there are situations I see where it's
advantageous to keep state at the remote end of an http connection. For example,
if you want to browse the contents of a compressed tar file, the command would
unpack it somewhere when it started up and delete it when it exited. Keeping
state at the remote end is exactly the sort of thing you want to avoid across
network connections, but for local commands it's fine.
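The tar example's lifecycle might be sketched like so (the wwwtar- prefix and
the function name are invented, and serving the unpacked tree over the pipe is
omitted):

```python
import atexit
import shutil
import tarfile
import tempfile

# Unpack on startup, clean up on exit: the state lives only as long
# as the local command does.
def open_archive(path):
    workdir = tempfile.mkdtemp(prefix="wwwtar-")
    with tarfile.open(path) as tf:
        tf.extractall(workdir)
    atexit.register(shutil.rmtree, workdir, ignore_errors=True)
    return workdir
```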

Apologies if I'm reiterating what has already been said ... Incidentally, when
people have something new and interesting which is W3 related, please don't
just post the http address; some of us only have netfile ftp access to the
outside world.

Cheers,
--
Christopher Fraser              ``Insult is the lowest form of flattery''
chrisf@sw.oz.au 
