Re: CGI/1.0 status (Rob McCool)
Message-id: <>
From: (Rob McCool)
Date: Mon, 29 Nov 1993 03:05:23 -0600
In-Reply-To: (Ari Luotonen)
       "CGI/1.0 status" (Nov 28,  5:08pm)
X-Mailer: Mail User's Shell (7.2.5 10/14/92)
To: (Ari Luotonen)
Subject: Re: CGI/1.0 status
 * CGI/1.0 status  by Ari Luotonen (
 *    written on Nov 28,  5:08pm.
 * what is the status of CGI/1.0, can we start coding soon??

I haven't received anything that I haven't addressed (to my knowledge; if
something slipped by someone, please say so). The doc at is the latest.
Note the addition of the naming convention "nph:scriptname" for scripts
which require direct output to the client.

There's only one more touchy subject I want to discuss before we finalize,
and that's the who-decodes-the-arguments issue. I've been busy converting
some scripts over to my preliminary CGI/1.0 implementation, and I'm finding
that I really don't like having the server decode the arguments. 

Here are the pro-arguments as I remember them, and my reasons for
disagreeing with each:

Argument: If the server does it, scripts don't have to do it, so scripts
          are simpler.

Counter: However, a prudent script must have code to decode long arguments
         anyway. If the scripts need that decoding code regardless, why
         bother having the server do it in the first place?

Argument: We already know how to decode the URL; there are ISINDEX and
          FORMs, and we know how to decode both.

Counter: FORMs are part of HTML+. What if there are other aspects of HTML+,
         or HTML++, which are not compatible with these two methods? I don't
         want to have people upgrading their server every time a new
         convention is invented.

My arguments for having the scripts do the decoding:

1. It's painfully simple to do even from a shell script: one line with a
   C support program. Perl and C code is available to do it. What's the
   advantage of having the server do it, besides avoiding a little confusion
   for novice script writers?

2. Any script which needs to decode its own URL still has the server decode
   it first, possibly in a way the script doesn't want. That's wasted effort
   for the server: CPU time which could be better spent servicing the ~130
   other waiting users (at least, if you're www.ncsa).

3. POST scripts which handle forms need the unescaping code regardless.
   Again, duplication of effort.
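To sketch the "C support program" pattern from point 1 (a hypothetical
helper of my own, not the actual NCSA code), here is the pair-splitting
half: it carves a QUERY_STRING-style buffer into name/value pairs in place.
Each field would then still need the usual '+' and %XX unescaping.

```c
#include <string.h>

/* Split a form-encoded query string ("a=1&b=2") in place into parallel
   name/value arrays.  Returns the number of pairs found (at most max).
   A field with no '=' gets an empty value.  The strings returned point
   into qs, which is modified. */
int query_pairs(char *qs, char *names[], char *values[], int max)
{
    int n = 0;
    char *pair;

    for (pair = strtok(qs, "&"); pair != NULL && n < max;
         pair = strtok(NULL, "&")) {
        char *eq = strchr(pair, '=');
        if (eq != NULL)
            *eq = '\0';                     /* terminate the name */
        names[n] = pair;
        values[n] = eq ? eq + 1 : pair + strlen(pair);
        n++;
    }
    return n;
}
```

Wrapped in a main() that reads getenv("QUERY_STRING") and prints one pair
per line, a shell script could pick up its arguments with a single backquote
invocation, which is the one-liner I had in mind.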

Is there a compelling argument which I am missing?