Re: Access Authorization

Date: Wed, 15 Sep 93 11:38:47 +0200
From: luotonen@ptsun00.cern.ch (Ari Luotonen)
Message-id: <9309150938.AA12697@ptsun00.cern.ch>
To: www-talk@nxoc01.cern.ch, wa@mcc.com
Subject: Re: Access Authorization

   Hello to all,

If you find this message too long for you, at least read the last
two paragraphs!

> In regards to the proposed "basic" scheme, I certainly wouldn't protect
> important documents that way.

I wouldn't either, if they were that important.  However, something
is not quite clear to me.  When discussing the protection of
documents it seems that authentication is considered more important
than protecting the contents of the documents themselves.  If
someone can listen to the ethernet, catch my cleartext password and
use it to access protected documents, then what prevents him from
catching the cleartext documents flying by, even if I use Kerberos
to authenticate myself?  Ethernet listeners just have to eavesdrop
long enough, and chances are they will get the entire protected
documentation this way.
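
To make this concrete, here is a tiny sketch (in Python, purely
illustrative -- the request, the reply and the credentials are all
made up) of what an eavesdropper gets for free from a cleartext
exchange:

    import base64

    # A request and a reply as an eavesdropper sees them on the wire.
    sniffed_request = (
        "GET /protected/SecretOverview.html HTTP/1.0\r\n"
        "Authorization: Basic YXJpOnNlY3JldA==\r\n\r\n"
    )
    sniffed_reply = ("HTTP/1.0 200 OK\r\n\r\n"
                     "<TITLE>Secret Overview</TITLE>...")

    # The "basic" credentials are merely encoded, not encrypted:
    encoded = sniffed_request.split("Authorization: Basic ")[1].split()[0]
    print(base64.b64decode(encoded).decode())       # -> ari:secret

    # But even under a stronger authentication scheme the reply body
    # is just as readable -- no decoding step is needed at all.
    print(sniffed_reply.split("\r\n\r\n", 1)[1])

The second print is the whole point: protecting the password alone
still leaves the document itself on the wire in the clear.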

I may have misunderstood something, and please correct me if I'm
wrong, but I think encrypting documents is exactly as important as
encrypting authentication information, and there should be no talk
of using non-cleartext authentication with cleartext server replies.
Currently it is more vital to protect the documents than to verify
integrity and sender authenticity (sure, we can do that too, and it
will come as a side product of putting encryption in).

Let's consider both PEM and WWW, without using encryption in the
message body.  When composing a PEM message I, as the sender,
authenticate myself to the recipient and provide the means for
checking the message's integrity.  When the recipient gets my
message he can be sure of two things: the message is from me, and
it was not tampered with by a third party during transfer.  He
cannot be sure that no one else has seen it.  In WWW it is much
more of a problem that secret documents get exposed during transfer
than that they get forged.
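
As a toy illustration of that distinction (PEM's real MIC is
computed with public-key signatures, not a shared-key HMAC as here;
everything below is a stand-in with invented names):

    import hashlib, hmac

    signing_key = b"key material"   # stands in for the real PEM keys
    body = b"The secret documents live under /protected/."

    # MIC-ONLY style: sign the body, but do not encrypt it.
    mic = hmac.new(signing_key, body, hashlib.md5).hexdigest()
    message = body + b"\nMIC: " + mic.encode()

    # The recipient can verify origin and integrity of the body ...
    got_body, got_mic = message.rsplit(b"\nMIC: ", 1)
    expected = hmac.new(signing_key, got_body, hashlib.md5).hexdigest()
    assert hmac.compare_digest(got_mic.decode(), expected)

    # ... but a third party reading the same bytes sees the body too:
    print(got_body.decode())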

> Authentication by public key is
> fine, but the infrastructure necessary for wide-spread use of such
> a system is not yet in place. EINet will eventually support this form

There is no more infrastructure needed for using public keys than
for using, say, Kerberos, as long as the servers don't have to
authenticate themselves to the clients.  Only one public key is
needed, and that is the server's.  It does not have to be
distributed in any special way; it is simply transferred to the
client in the server's reply.
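
Roughly like this (a sketch only: the reply fields and
encrypt_with() are invented, and a real implementation would use
e.g. RSA in place of the placeholder):

    def encrypt_with(public_key, text):
        # Placeholder for a real public-key operation: only the
        # holder of the matching private key could recover the text.
        return "<" + text + " sealed under " + public_key + ">"

    def first_reply():
        # The failed first request comes back carrying the server's
        # public key -- no key server or directory service needed.
        return {"status": "Unauthorized",
                "scheme": "pubkey",
                "server-key": "SERVER-PUBLIC-KEY"}

    def retry(reply, username, password):
        # The client seals its credentials with the key it was just
        # handed, and repeats the request.
        sealed = encrypt_with(reply["server-key"],
                              username + ":" + password)
        return {"Authorization": reply["scheme"] + " " + sealed}

    print(retry(first_reply(), "ari", "secret"))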

> ...
> Well, the client can make a request without the authentication field
> and the server will tell it which schemes it supports. So, the client
> re-submits the request with authentication field, they do the Kerberos
> two-step together, and the server sends back the data (or not).
> Doing this for every document retrieval seems pretty inefficient, so 
> I proposed a negotiation phase in the retrieval protocol.

This is what happens with all the protection schemes: the client
cannot know beforehand which protection scheme the server supports
or requires.  However, not every request has to be made twice: once
a request has failed, the browser knows that this server uses, for
instance, Kerberos.
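
In other words (a toy sketch: the request() helper, the status
strings and the server-side table are all made up for
illustration):

    PROTECTED = {"/a/b/c.html": "kerberos"}   # toy server-side table

    def request(path, auth=None):             # toy server round trip
        scheme = PROTECTED.get(path)
        if scheme and auth != scheme:
            return "Unauthorized", scheme     # reply names the scheme
        return "OK", "<document body>"

    known_scheme = None                # what the browser has learned

    def fetch(path):
        global known_scheme
        if known_scheme:               # this server failed us before:
            return request(path, auth=known_scheme)
        status, detail = request(path)
        if status == "Unauthorized":
            known_scheme = detail      # remember the scheme, retry once
            return request(path, auth=known_scheme)
        return status, detail

    print(fetch("/a/b/c.html"))  # two round trips: fail, learn, retry
    print(fetch("/a/b/d.html"))  # one round trip from now on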

The way I specified the browser algorithm for resolving whether a
document is protected by a given server has been described as
"highly heuristic and bound to fail".  I was very upset, and I
still disagree.  Funny that the same person suggested a probably
more heuristic approach, which was also less secure, harder to
maintain, and certainly stranger to common sense and therefore
harder for regular people to understand -- not everyone is an
internet freak like we are, and certainly not all the information
providers who would have to maintain protected Webs.

What especially bothers me is that the people (here at CERN) who
requested Access Authorization, and who need it badly and quickly,
do not want a complicated, cryptic, hard-to-maintain system.  They
don't want to put their files in directories named after random
numbers.  They don't understand why their file "SecretOverview.html"
cannot keep that name but must become "362SecretOverview.html",
because some hash number has to go in front of the filename.

[This may sound strange to you, because I am referring to a
discussion we had here at CERN about protecting documents by making
the URLs themselves the access tickets.  It's a nice idea, and such
crazy ideas can be made up for many everyday things, but however
nice they look, they don't always prove practical in real use.]

A short review of my proposal:

	Protected documents are gathered into directories of
	protected documents.  Because URLs are defined to
	indicate a directory-like structure, and this structure
	only very seldom differs from the real directory
	structure in which the documents are stored, I can use
	this in my favour.

	If the browser fails to access a document /a/b/c because
	of a lack of authentication, it concludes that every
	document in directory /a/b is protected, and from then
	on always authenticates itself when accessing files in
	that directory.

	When it accesses a protected document in some other
	directory, the request again fails once, and
	authentication is then sent automatically with
	subsequent requests.
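
Continuing the toy sketch from above (reusing the same made-up
request() helper and server-side table; posixpath.dirname()
extracts the /a/b part), the directory rule would look something
like this:

    import posixpath

    protected_dirs = set()    # directory prefixes known to need auth

    def fetch2(path, credentials="kerberos"):
        if posixpath.dirname(path) in protected_dirs:
            return request(path, auth=credentials)  # no wasted trip
        status, detail = request(path)
        if status == "Unauthorized":
            # /a/b/c failed, so treat everything in /a/b as protected.
            protected_dirs.add(posixpath.dirname(path))
            return request(path, auth=credentials)
        return status, detail

    print(fetch2("/a/b/c.html"))  # fails once, learns about /a/b
    print(fetch2("/a/b/x.html"))  # authenticated on the first try

One failed request per protected directory, not per document, is
the whole cost.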


I fail to see how this is inefficient -- in fact, quite the
contrary: in the long run it proves much more efficient than
holding the connection open.

As for the claim that my proposal is "heuristic", I disagree with
that, too.  The statement was backed up by the fact that the rule
file can map individual files all around the directory tree.  Now
I ask: how many such file-to-file mappings are there, apart from
some directory names being mapped to their corresponding Welcome
pages?  And another question: if you have a set of protected files,
do you *really* scatter them all around the directory tree?  That
would be asking for someone to see a couple of them by accident.
At least I would collect all my protected material into a clear
hierarchy where it is easy to maintain and to keep an eye on.
Wouldn't you all agree?

Now, what scares me most is that WWW Access Authorization becomes
too cryptic for the end-user, and especially for [protected]
information providers.  WWW is just making its way to the top, and
I would hate to see people commenting on my access authorization
the way many people now comment on Kerberos: "Oh, I'm sure it's
good, but it's too cryptic and I didn't even want to look into it".

I *have* looked into it, so I know what it's like -- it's not that
cryptic, but because it appears that way to a casual observer, I
believe we have to be very careful about our Access Authorization.
Never forget: this has to be operated by people without the special
knowledge and insight that we may have.


-- Ari Luotonen --
 
                     \\\\Ari Luotonen//////
                      \\\\WWW Person//////
                       \\\\\\/\\\\\//////
                        \\\\//\\\\//////
                         \\////\\//////
                          \/\/\/\/\/\/