Re: How about a Safe Virtual Machine?

Karl Auerbach (karl@cavebear.com)
Sun, 2 Oct 1994 23:10:39 +0100

Very nice discussion and summary. Thanks.

> A truly anonymous program might begin executing with finite
> resources: a certain amount of memory and some CPU time. To gain
> access to the filesystem, network ports, databases, more compute time,
> or more memory, I see three options:
>
> (2) In a batch session, the program would have to be certified by some
> principal that is authorized to use the resources. For example, the
> bytecodes might get to the server with a certificate that says 'This
> program is allowed to access databases X, Y, and Z. It can use 50MB
> of disk space for intermediate query results.' An interesting way to
> deploy an authorization/accounting system like this is given in a
> paper by Cliff Neuman[3].
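
As a toy illustration of the sort of certificate described above (all
the names, fields, and the signing scheme are stand-ins of my own, not
anything from Neuman's paper), a server-side check might look roughly
like this:

    import hashlib
    import hmac

    # Illustrative only: the key, field names, and layout are assumptions.
    SERVER_KEY = b"shared-secret-known-to-the-certifying-principal"

    def make_certificate(program_hash, allowed_dbs, disk_quota_mb):
        # The certifying principal binds resource rights to one program.
        body = "%s;dbs=%s;disk_mb=%d" % (
            program_hash, ",".join(allowed_dbs), disk_quota_mb)
        sig = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
        return body, sig

    def check_certificate(body, sig, program_hash, wanted_db):
        # The server verifies the signature, then the requested access.
        expected = hmac.new(SERVER_KEY, body.encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False                 # forged or altered certificate
        fields = dict(f.split("=", 1) for f in body.split(";")[1:])
        return (body.startswith(program_hash)
                and wanted_db in fields["dbs"].split(","))

    # Bytecodes arrive with: "databases X, Y, Z; 50MB of disk space".
    cert, sig = make_certificate("sha256-of-the-bytecodes", ["X", "Y", "Z"], 50)
    print(check_certificate(cert, sig, "sha256-of-the-bytecodes", "Y"))  # True
    print(check_certificate(cert, sig, "sha256-of-the-bytecodes", "Q"))  # False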

This isn't all that different from the "capability" architectures I
used back when I was building provably (I can't spell!) secure
operating systems and networks. (Yes, we were doing formal
verification of OS designs and protocols against a formal security
model. I designed and built the first B level secure OS and was
working on the first A level secure one.)

Capabilities are essentially unforgeable certificates to use a
resource. What is really nice about them is that they allow a request
to be handled through a chain of distinct protection domains. Unlike
Unix setuid, in which the privileges are somewhat cascaded and added
(at least to one level), capability passing allows each domain to
explicitly express what privilege is to be passed to another domain.
In other words, the trust one places in a domain of execution is only
that it does not abuse the privileges expressly passed to it or those
it already owns.
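
A rough sketch of the idea (the classes and names below are mine,
purely for illustration, not taken from any particular capability
system): each domain holds an explicit set of capability objects, and
when it calls into another domain it hands over only the ones it
chooses, nothing more.

    class Capability(object):
        # Unforgeable ticket for one right on one resource (in a real
        # system the kernel or hardware keeps these out of reach).
        def __init__(self, resource, right):
            self.resource = resource
            self.right = right

    class FileResource(object):
        def __init__(self, name):
            self.name = name
            self.contents = "secret payroll data"

        def read(self, cap):
            # The resource honors the ticket, not the caller's identity.
            if not (isinstance(cap, Capability) and cap.resource is self
                    and cap.right == "read"):
                raise PermissionError("no read capability for " + self.name)
            return self.contents

    class Domain(object):
        # A protection domain: it can use only the capabilities it holds.
        def __init__(self, name, caps=()):
            self.name = name
            self.caps = set(caps)

        def delegate(self, other, cap):
            # Unlike setuid, the caller decides exactly which privilege
            # crosses over into the other domain, and nothing else does.
            assert cap in self.caps
            other.caps.add(cap)

    payroll = FileResource("/payroll")
    read_payroll = Capability(payroll, "read")

    alice = Domain("alice", [read_payroll])   # owns the right
    helper = Domain("report-helper")          # starts with no privileges

    alice.delegate(helper, read_payroll)      # pass exactly one privilege
    print(payroll.read(next(iter(helper.caps))))  # helper gained only this

Trusting the helper domain then means trusting it with that single
read right and nothing else.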

It's not an easy mechanism to build, especially when the capability
objects must cross over a network.

Capabilities/certificates make it easier to express and enforce the
trust one places in a piece of software, but it is still necessary to
decide whether to trust it in the first place.

And to me that is the crux of the issue: safe environments are
designed to limit a program so tightly that whatever it does is
acceptable, even if that program was generated by a random (or even an
"evil") code-writing program. In other words, no trust is necessary.
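
For what it's worth, here is a toy illustration of that kind of
confinement (a made-up little stack machine of my own, not any real
VM): the untrusted program gets a hard instruction budget and memory
cap, so even a hostile program can do no worse than burn through its
own allotment and get killed.

    def run_confined(code, max_steps=10000, max_stack=256):
        # Execute untrusted bytecode under hard CPU and memory budgets.
        stack, pc, steps = [], 0, 0
        while pc < len(code):
            steps += 1
            if steps > max_steps:
                return "killed: instruction budget exhausted"
            if len(stack) > max_stack:
                return "killed: memory budget exhausted"
            op, arg = code[pc]
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "jump":
                pc = arg
                continue
            pc += 1
        return stack

    # An "evil" generated program that tries to loop and allocate forever.
    evil = [("push", 1), ("jump", 0)]
    print(run_confined(evil))   # killed by its budget, host unharmed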

But I think that, especially in networks of information, it will be
valuable to have programs that can aggregate and manipulate
information bases across the unbounded WWW space. And I have a feeling
that this will require trusted programs rather than confined programs.

On the other hand, I'm not about to give automatic trust to something
that arrives in my machine and says "execute me."

--karl--