Re : Image quality on web

Joel A Crisp
Mon, 21 Nov 1994 10:12:29 +0100

Chris Lilley said:
> Joel Crisp said:
> >Chris Lilley said :
> >>
> >> Yechezkal-Shimon Gutfreund said
> >>
[ About problems of monitor display ]
> I agree that user ignorance is a problem. However, the current situation is
> that experienced users are unable to share images with any sort of colour
> fidelity. Once we have that cracked, making it foolproof for naive users is
> indeed the next step.
I suppose once you have a good standard, calibration images can be made
available to at least demonstrate the problem and allow some kind of
setup procedure.
> > The colour of an object depends on the texture of the surface, the
> > reflectance and absorption of the surface, the angle of incidence of
> > the light source AND viewer, and the spectrum of the light source.
> > All of these generate problems. Texture is obvious. Most colours
> > which people wish to represent on the screen are originally absorption
> > based ( mostly ), whereas screens are emission based ( and non-linear ).
> > This causes another mapping problem.
> Yes and no. We are talking screen display here, not physical objects. We can
> either assume that the light has been modelled correctly to produce the image
> (see, for example,
> <> or we can
> assume that the image has been satisfactorily colour balanced using something
> like Photoshop and the LAB values are known for each pixel.
In general I agree. However to hand balance 20,000 images is a looong
job. I would like to see a system which is at least semi-automatic.
> The absorption vs emission question is a bit of a red herring in this instance.
> Things we can see give off light. Sometimes this light is generated by the
> object, sometimes it is a modification of light falling on it, often it is both.
> Whatever, things give off light which has a defined spectrum and can thus be
> reduced to an XYZ triple.
In about 30 different and incompatible ways.
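For concreteness, the reduction Chris describes, from a defined spectrum down to an XYZ triple, is just a summation against the CIE colour matching functions. A minimal sketch; the matching-function samples here are illustrative placeholders, not the real CIE 1931 tables:

```python
# Sketch: reduce a sampled spectral power distribution to a CIE XYZ
# triple by summing power * colour-matching-function at each
# wavelength sample, scaled by the sampling step.

def spectrum_to_xyz(power, xbar, ybar, zbar, step_nm=10):
    """power, xbar, ybar, zbar: equal-length lists sampled every step_nm."""
    X = sum(p * x for p, x in zip(power, xbar)) * step_nm
    Y = sum(p * y for p, y in zip(power, ybar)) * step_nm
    Z = sum(p * z for p, z in zip(power, zbar)) * step_nm
    return X, Y, Z
```

With real 10nm CIE tables this is exactly the "reduced to an XYZ triple" step; the 30 incompatible ways come in later, when XYZ is mapped to device spaces.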
> That being the case, we can largely ignore the factors you mention except in
> that background illumination falling on the screen affects the light we see
> from the screen.
> The light entering our eyes when viewing an on screen image has three sources:
> 1) from those phosphors the electron beam is firing at. This we can affect;
> given phosphor chromaticities, whitepoint and transfer functions or gamma we can
> not just affect it but control it.
[ deleted ]
> Oshima, Yuasa, Sakanoshita and Ogata "A CAD system for Color Design of a Car"
> Proceedings Eurographics 92, Computer Graphics Forum 11(3) 1992 pp C-381 -
> C-390.
I am not qualified to comment on this point.

> > The angle of incidence is
> > essentially uncontrollable,
> Considering the magnitude of the effect, rather than just a catalogue of
> possible effects, means this is fairly well down the list of things I would
> worry about in the present context.
I can accept that.
> > as is the spectra of the light source.
> See comments on adaptation above.
> > Any file format which wishes to produce a 'true' representation of
> > a colour should be capable of specifing all of these at 'preferred' values -
> > however, there are so many different ( and conflicting ) specifications
> > of colour spaces and colour transformation equations that this is
> > almost impossible.
> Unless you have some direct experimental evidence to cite, I think you
> overstate the case. What I want to avoid is a collective response of, "oh,
> this is too hard, it is impossible, let's not bother, here is an RGB file".
This is certainly not what I'm trying to say. I would hope that people will
not rush into saying TIFF is *the* solution for all time. I believe that
as wide a view of the potential problems as possible should be aired before
we shoot ourselves in the foot. I am quite prepared to accept existing
standards as time-proven, but I do think that some thought needs to be
given to the future.
> Well understood solutions do exist that would bring a great improvement over the
> current situation. Let us implement these, and implement them well, and proceed
> from there, rather than give up in despair.
Fine by me, so long as they are expandable and people accept that things
will change.
> [About specified display sizes for images]
> >And a standard for representing the units which these are specified in,
> Yes, absolutely. HTML 3 DTD seems to use ems which is fair enough.
I don't see it as requiring one fixed method - more are possible so long
as they are specified and widely compatible.
> >along with prefered re-sampling method
> That could be left to the client, really. Bicubic interpolation would be fine,
> in whatever colour space the image is expressed in, given the fine granularity
> of an image and the fairly limited resampling that is needed to cope with the
> range of screen resolutions.
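The bicubic resampling Chris suggests is built from a 1-D cubic convolution kernel applied along rows and then columns. A minimal sketch of the 1-D case (Catmull-Rom form, a = -0.5; boundary handling and the second pass over columns are omitted):

```python
# Sketch: 1-D cubic convolution kernel and its use to interpolate a
# row of samples at a fractional position. Bicubic image resampling
# applies this horizontally, then vertically.

def cubic_kernel(x, a=-0.5):
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def resample_1d(samples, t):
    """Interpolate a list of samples at fractional position t."""
    i = int(t)
    total = 0.0
    for k in range(i - 1, i + 3):        # four nearest samples
        if 0 <= k < len(samples):
            total += samples[k] * cubic_kernel(t - k)
    return total
```

Done in whatever colour space the image is expressed in, as the quoted text says, this is cheap enough for a client to do at display time.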
Technically you may be correct - I hear a lot of people asking for their
'perfect' images to be sent out with a method which they can control.
This is similar to the problem I had recently with an un-named author
who phoned me complaining that the text files I had put on our web server
were badly corrupted - I explained that if he read the intro pointing out
that Mosaic 2.0a2 didn't work with the system they would have looked fine.
The point I'm trying to make here is that you can put too much reliance on
the client authors in such cases unless you specify a minimum common standard.
> > and dither method.
> DITHER!! This is accurate image display we are talking about here. If a client
> is displaying on an 8 bit visual into the standard colour map, whatever dither
> the client chooses will be just fine. Netscape does well here.
Whereas MS-Windows Mosaic doesn't. Not everyone is running on a UNIX box.
> > Copyright and distribution rights ( and PGP authentication of content ? )
> > are notable omissions here.
> Perhaps you misunderstand. I am not producing a catalogue of indexing terms
> which I think all images should have. (If I were, the terms you suggest would be
> excellent suggestions)
This is also being read by other people who may not have understood that.
> I am bringing to the attention of potential client writers that inline TIFF,
> if implemented well, could provide a ready means to browse this handy
> information, for which tags are already defined in the spec, by constructing
> an on-the-fly document to hold the information.
> If you want to suggest your tags as future enhancements to the TIFF spec, the
> address is
I'm not certain yet that TIFF is the format to use in the long term. Before
I hassle them I'd like to see more discussion here as to the real
requirements in terms of system independence and network issues.
It's better to give them one proposal than many, IMHO. But thanks for
the mailing list address anyway.
> On the other hand, if you are suggesting that images should have this
> information in the HTML document, then putting it as a link from the copyright
> symbol in the HTML 3 caption would seem like a good move.
This is fine, except that the information then gets lost if the image file
is downloaded separately. This is a situation many of our image donors
are concerned about.
> > This would also be nice to be able to encapsulate, so that source 1
> > supplies a content authenticated image to source 2, who then encapsulate
> > it with additional meta-information with overall content authentication
> > without having to affect the auth info on the original image.
> Sounds good, this would mean inline multipart/parallel I suppose.
I had hoped for a one-file hierarchical format. See my comment about
image file separation above.
> > We found that a number of companies we were suppling to ( in my previous
> > job ) were not happy with CIE-LAB for colour representation on screen.
> I am not all that happy with it either, but it sure as hell beats anonymous RGB
> which is the current situation. Do you have a pointer to a better spec which is
> implementable in the near future?
No, but I don't see why that should stop us discussing the problems.
> > Particularly car paint manufactures, who have a high gloss on the
> > final surface.
> That is because a single flat colour is different from a curved car panel
> painted that colour and then viewed in normal daylight. So, specifying a
> particular red in LAB does not tell you what a car will look like when painted
> that colour. Fine, but wide of the current context.
> Whereas what we are discussing is sending around a photo, or (as in the paper I
> cited earlier) a daylight simulation of the car painted that colour, and
> ensuring that the *image* is viewed with much better fidelity than the present
> situation.
> > I suppose what I'm really trying to say, is that colour
> > representation is less important than 'appearance' representation.
> I will agree with you for objects, and for printout (the Carisma project at
> Loughborough University of Technology springs to mind here). I would like to
> see some evidence that this is a big enough win over LAB for on-screen viewing
> of images before bypassing LAB and going for something new that perhaps has
> little track record.
> [About calibrated RGB spaces]
> >This is going in the right direction, if the user is given the
> >ability to correctly set up a display system using these parameters.
> 1) Surely you realise that a calibrated RGB space is just an alternative
> representation of XYZ and that, if you are not happy with LAB (and by
> implication XYZ) then you are not happy with calibrated RGB either.
I suppose the emphasis should have been on "calibrated".
> 2) Users do not "correctly set up a display system using these parameters".
> How, pray, would you adjust your monitor chromaticities unless you have a
> phenomenally specialised monitor? Calibrated RGB can be displayed in one of
> two ways:
I've seen a lot of users try. This was my argument at Spectrum Ic. about
on screen 'true' colour representation. However, modern monitors are
allowing much more control over this kind of setup. If this trend
continues, I would like to be ready to take advantage of it.
Even a basic monitor with just brightness and contrast controls can
drastically alter the quality of a displayed image.
> - the quick and inaccurate way: call it RGB and throw it at the screen
> - convert from the known RGB space to XYZ and then display as with any other CIE
> based colour space.
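The second route, converting from a known RGB space to XYZ, amounts to a weighted sum of the primaries' XYZ coordinates after undoing the monitor transfer function. A minimal sketch; the primary values and the gamma figure are illustrative placeholders, not measured phosphor data:

```python
# Sketch: display-RGB to XYZ for a calibrated monitor. Each phosphor
# primary contributes its own XYZ, weighted by the linear (de-gamma'd)
# drive level. PRIMARIES below are made-up values for illustration.

PRIMARIES = {          # XYZ of each phosphor at full drive (assumed)
    'R': (0.41, 0.21, 0.02),
    'G': (0.36, 0.72, 0.12),
    'B': (0.18, 0.07, 0.95),
}

def rgb_to_xyz(r, g, b, gamma=2.2):
    """r, g, b: encoded values in 0..1 for this (hypothetical) monitor."""
    lin = [c ** gamma for c in (r, g, b)]        # undo monitor gamma
    return tuple(
        sum(w * PRIMARIES[p][i] for w, p in zip(lin, 'RGB'))
        for i in range(3)
    )
```

Inverting the same 3x3 relationship gives the XYZ-to-RGB direction needed to display any CIE-based colour space on that monitor.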
> > But then, people don't tweak dot gain to
> > compensate for miscalibrated imagesetters in this day and age, do they ;-P
> I hope you spotted that was sarcasm.
Sarcasm ? Don't know the meaning of the word !
> > People do tweak monitor calibration tho'
> Tweak it for what purpose? What is your point here?
I hope the same as yours - given an option to tweak something, people will.
The best you can hope for is to be able to give them some guidelines on
*how* to tweak it without screwing everything up.
> [About YCbCr colour space and subsampling]
> > This is one of ( many ) my biggest problems with Photo-CD.
> Kodak YCC, although based on YCbCr, is not necessarily subject to the same
> limitations. In particular it is not limited by broadcast video conventions
> designed to stop transmitter overload and multiplexing artefacts. There is
> subsampling on the domestic version, but you can get a better image by taking
> the chrominance components from the next image resolution up. I don't know if
> the Pro version has subsampling.
Unless you are at the highest resolution.
> > Agreed. There is also the problem that in digitised images of any type,
> > the background is not often uniform - black translates to about 48 shades
> > in RGB ;-)
> Which is not perceptually uniform. Those 48 shades all look much the
> same, don't they? There would be a lot less than 48 shades in, for example, LAB.
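The point about dark shades can be made concrete with the CIE L* lightness function: many distinct near-black RGB code values collapse into a handful of L* steps. A minimal sketch (relative luminance Y, with white at Y = 1):

```python
# Sketch: CIE L* lightness from relative luminance Y. The cube-root
# curve is very steep near black, so linear RGB spends far more code
# values there than a perceptually uniform space needs.

def lstar(Y):
    t = Y
    if t > (6 / 29) ** 3:
        f = t ** (1 / 3)
    else:
        f = t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16
```

Evaluating this over the darkest few percent of the luminance range shows how little of the 0..100 L* scale those "48 shades" actually occupy.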
> > when digitising from our videodisk via a TARGA board. The noise inherent in
> > the system makes automatic background detection difficult
> Are you using a CCIR-709 style gamma function with a toe-in slope limit near the
> origin, or a simple power law? If the latter, changing your gamma transfer
> function will help with noise a lot.
How does the comment 'B****** if I know ?' go down at this point ? This is
one area where the equipment which we can afford does not give us the
degree of control over the process which we wish for ;-(
In the case of most image suppliers out in netland, this is also going to
be true. We are also using sources which have been through several stages
of video manipulation.
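For reference, the CCIR-709 transfer function Chris mentions pairs a linear segment near the origin with a power law above it; the finite slope at black is what keeps dark-region noise from being amplified. A minimal sketch comparing it with a simple power law:

```python
# Sketch: CCIR-709 opto-electronic transfer function. Below L=0.018
# it is linear with slope 4.5; above, a 0.45-exponent power law with
# an offset. A pure power law, by contrast, has infinite slope at the
# origin, which blows up noise in the darkest shades.

def ccir709_encode(L):
    """L: linear light in 0..1 -> encoded value in 0..1."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

def power_law_encode(L, gamma=2.2):
    """Simple power-law encoding for comparison."""
    return L ** (1.0 / gamma)
```

The two curves nearly coincide over most of the range; the difference that matters for digitising noisy video is entirely in the toe near zero.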
> [About output quality consistency problems]
> Commercial houses deal with this by calibrating to quality standards daily or
> hourly.
> People with computer printers in the UK pounds 5 to 7k range should not be
> surprised if they do not get magazine quality with zero effort on their part.
> Especially if they throw raw RGB data at it.
This is not something which bothers me in particular. Most people with
that kind of equipment will be specialists. I thought we were discussing a
*general* standard which can be applied across the network ? ( How many
4K+ printers does your department have ? )
> Again, I am not sure whether you are saying the problem is insoluble and the
> attempt should be abandoned, or what.
Definitely not. Just that it is not something which can be solved just by
throwing TIFF at it.
> [About Photoshop]
> >No comment. ( Other than it is a commercial package ).
> Sure. Also a defacto standard of astonishing uniformity across that whole
> industry. If there is already a MIME type for MS Word, I don't see a problem
> with another one for Photoshop files. It's just one more piece of technology to
> throw into the pot.
I see a problem with the MIME type for MS Word, so I can't agree with you
on this one.
> > I am more inclined to talk about power values at frequencies within a
> > 10nm spectra.....with the conditions of sampling heavily specified. This
> > may be converted into CIE et al
> Why stop at 10? Values are available at 2 and 1 nm steps too - is this just a
> counsel of perfection or do you honestly think that this will give better image
> display on the monitor in a Web browser? I assume you are familiar with
> trichromatic theory and that wildly different spectra can give indistinguishable
> colours.
Yes, but at the same time you can calculate most 3D colour spaces from
spectro data ( 10nm is an illustration - not many people nowadays have
spectros which will do more than that ). In addition, for most scanners
this would have to be back-calculated anyway, as they sample in RGB (?).
Given spectro data you have a non-subjective method for comparing the
values - two values which differ may indicate a different colour, but
all values the same WILL indicate the same colour.
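That comparison rule can be stated directly in code; a minimal sketch (the tolerance and equal-step sampling assumptions are mine):

```python
# Sketch: non-subjective spectral comparison. Equal spectra guarantee
# an identical colour; unequal spectra may still look identical to an
# observer (metamerism), so spectral equality is sufficient but not
# necessary for a visual match.

def spectra_equal(s1, s2, tol=1e-6):
    """Compare two equal-step sampled spectra within a tolerance."""
    return len(s1) == len(s2) and all(
        abs(a - b) <= tol for a, b in zip(s1, s2))
```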
[ deleted ]
> Chris Lilley
> --
Please - this was not meant to be an attack on what you are trying to
do, but I believe that a lot of people out there are not specialists
in the imaging field ( I'm certainly not ). I think that these issues
have to be looked at from the technical side, and also from the
usability side. No matter how good a file format is, it won't solve
all the problems; that's not a reason NOT to design one, just a reason
to raise the awareness of why the problems occur in the first place.

I don't see any format making a major difference to the quality of
image transmission in the next year or so - this is not something which
has to be rushed into.

Joel Crisp ( Linux/33 )
Multi-Media Technical Support for Bristol Uni.
Educational Technology Support Service