ask me?

I used to run an FAQ site all about colour. I still enjoy hearing new (and sometimes old) questions about colour and trying to answer them. Please leave your questions below – simply add a comment – and I’ll try my best to produce a blog post that answers your question.

Steve

107 thoughts on “ask me?”

  1. Hi

    Thanks for asking this – it’s a good question. A few months ago the Colourware web site became infected with viruses. It was something to do with the server technology it was hosted on. I didn’t really have much choice but to wipe it all clean and start again on new server technology. So I am taking the opportunity to look again at what is on the Colourware site – it was looking a bit tired.

    The good news is that I have just restored the http://www.colourware.co.uk/compute/ link and the toolbox can be downloaded from there now. Sorry for any inconvenience during the short while when it was unavailable.

    I am currently writing a second edition of the book so please let me know if there is anything missing from the first edition that you really think I should add in the second edition.

    1. Dear Professor Westland,

      I just got a copy of the book from Amazon and I am really excited to read it.

      However, when I go to:
      http://www.colourware.co.uk/compute/

      I get the message:
      “Internet Explorer cannot display the webpage”

      Is the website down again?
      Could you add the toolbox to the Matlab file exchange as it is suggested in the intro to the book?

      Thank you so much,

      Matteo

      1. Great!!
        Thank you.

        I can’t wait to start the book this weekend.
        I will be sure to ask some questions. I’m working – for fun – on a perceptually balanced rainbow transfer function.
        Matteo

  2. I was looking for “spectra.txt” mentioned in the textbook, “Computational Colour Science”. Where can I find the file?

    Sam

    1. Dear Steve
      The link “http://www.colourware.co.uk/compute/spectra.txt” is broken.
      Where can I find the file “spectra.txt” ?

      I would like to get some practice.

      Thanks!!!

      Yomi

    1. The question doesn’t make complete sense to me. If you were asking me what the CIELAB values are for the white point of a substrate then I would reply that these are normally L* = 100, a* = b* = 0. This is one advantage that CIELAB has over, for example, the original CIE XYZ system. CIELAB is a colour-appearance space and the white point is normally constant (L* = 100, a* = b* = 0) which reflects the phenomenon of colour constancy. By contrast, the XYZ values of the white point vary considerably depending upon the illuminant.

      But I know you asked about Delta E. Delta E, however, is about distance – usually between two colour stimuli. So I am not sure how to interpret that in the context of your question about white substrates. I hope my answer helps anyway.

  3. Dear Steve,

    I have thoroughly enjoyed your book and commend you on your very clear explanations. Now for the question. Does chromatic adaptation occur independently between the two eyes? In other words, if I simultaneously show one eye a target illuminated with a fluorescent light and the other eye an identical target illuminated with an incandescent bulb, would the target color match for each eye?

    Cheers,

    Jim

    1. Dear Jim

      Firstly, many thanks for your very kind comments. They are much appreciated.

      Now, I should say that I am not an expert on the physiological mechanisms of chromatic adaptation but certainly part of it is to do with bleaching of the cone pigments. Therefore you would expect that each eye – if viewing a scene monocularly – could be separately adapted. And indeed, I believe that is what the data show. But it is only part of the story. I am aware, for example, of an interesting paper (published in Vision Research, vol 27 (3), pp. 429-439, 1987) by Ram Vimal and Steve Shevell from the University of Chicago that suggests that there is a central binocular mechanism that affects chromatic adaptation.

      Vimal and Shevell performed an experiment where they adapted each eye on its own (monocular adaptation) but then also both together (binocular adaptation). They found that the adaptation effects (shifts) were greater when both eyes were adapted. They concluded that there exists a central binocular mechanism that affects the state of chromatic adaptation and serves to keep the two eyes in balance.

      So, in short, my summary would be that, yes, under extreme conditions of monocular viewing it is possible to differentially adapt the two eyes. However, in practice (normal life) the two eyes are kept in balance because any small differences in chromatic adaptation are compensated for by the binocular neural mechanism.

      Steve Shevell’s homepage – http://psychology.uchicago.edu/people/faculty/sshevell.shtml

      Best
      Steve

  4. Hi Steve,

    In looking at your file ‘xyz2srgb.m’, I noticed that you scale the xyz values by a factor of 80. Is the reason for this factor explained anywhere?

    Thanks, Tom

  5. Hi Tom,

    Very very good question. I should say that the xyz2srgb.m script is not actually in my book (so it is not explained there) though it will be in the next edition 🙂

    However, I added the xyz2srgb.m file to the toolkit just because it is so useful and lots of people ask for it. But it is not explained anywhere and I apologise for that. And I also admit that I am not sure that the factor of 80 is correct. It is something I have fretted about for some time. I am going to the IS&T color imaging conference in New Mexico on Sunday and I may ask some people there who are experts in sRGB.

    But the reason I used this factor was based on this article: http://www.w3.org/Graphics/Color/sRGB.html

    If you look at the coefficients in the matrix that converts XYZ to RGB it seems that the XYZ values should be normalised in the range 0-1 (or similar). But looking in the sRGB standard the luminance level is 80 cd/m^2. So my understanding is that for the XYZ –> sRGB transform we should be using this luminance level and also illuminant D50. Often when we measure XYZ for the monitor display we are using absolute XYZ values where Y is the luminance. And based on the luminance of the white point being 80 cd/m^2 I decided to normalise by 80 rather than 100.

    As I say, I am not at all sure it is correct but it has given good results for me. If I find out anything more next week I’ll add a postscript to this reply then.
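
    If it helps anyone experimenting with this, here is a minimal MATLAB sketch of the kind of conversion I am describing. It is only my illustration (the function name xyz2srgb_sketch and the choice of white luminance are just for demonstration, and the matrix is the commonly quoted sRGB one), not the toolbox xyz2srgb.m itself:

    function rgb = xyz2srgb_sketch(xyz, whiteY)
    % Sketch only. xyz is an n-by-3 matrix of absolute XYZ values (Y in cd/m^2);
    % whiteY is the luminance used for normalisation (e.g. 80 or 100).
    M = [ 3.2406 -1.5372 -0.4986;
         -0.9689  1.8758  0.0415;
          0.0557 -0.2040  1.0570];       % commonly quoted XYZ-to-sRGB matrix (D65)
    rgbLin = (M*(xyz./whiteY)')';        % normalise, then convert to linear RGB
    rgbLin = min(max(rgbLin,0),1);       % clip out-of-gamut values
    rgb = zeros(size(rgbLin));           % apply the sRGB non-linearity
    dark = rgbLin <= 0.0031308;
    rgb(dark)  = 12.92*rgbLin(dark);
    rgb(~dark) = 1.055*rgbLin(~dark).^(1/2.4) - 0.055;
    end

    Changing whiteY from 100 to 80 is exactly the normalisation decision I am unsure about.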

  6. Hi, Steve,

    Could you recommend THE book that correctly explains the perception, impact, role and use of colours in communication, leadership, management, etc.? And, if possible, the best title in French too, please. Thank you very much in advance. It is very interesting to read your texts.

    Hanitra, from Madagascar

    1. Hi

      If there was such a book we would all be buying it. Sadly, I do not think there is one book (in English) that achieves all that you desire. My favourite book on colour perception is probably Wandell’s Foundations of Vision; this is a book about vision generally but has some nice information on colour vision of course. Fairchild’s book on Colour Appearance is also very authoritative.

      For a book about the cultural aspects of colour I think you could do no better than look at Gage’s books. There are several; Colour in Art and Colour in Culture are two that spring to mind.

      Good books on colour in leadership and management are rather harder to find and I would welcome suggestions from other people.

      I do not speak French and so it is impossible for me to comment on the quality of the French colour literature. However, googling people like Francoise Vienot and Valérie Bonnardel would be a good place to start.

      Steve

  7. Hi Steve,
    I am an international student and I am a little bit confused by some reading I have been doing. Yesterday I saw a sentence on a website that said: the CMS (color management system) translates colors from the color space of one device into a device-independent color space. I want to know whether “one device” refers to a device-dependent space – such as Adobe RGB?

  8. This is a good question but may require quite a long answer. When you are working on a computer using a piece of software such as Adobe Photoshop you are manipulating RGB values. What do these RGB values refer to? The answer is …. your working RGB space. Settings in the software allow you to choose one of several working spaces such as sRGB or Adobe’s 1998 RGB space.

    The working space is quite different from, say, the colour space of your display device (monitor). The display colour space will depend upon the design of the device and also its settings (contrast, brightness, white point, gamma etc.). Colour management (CMS) converts your working space RGB values into your device space RGB values so that the colours you are using appear on screen. If the gamut of your display device is different to that of your working space then some decisions need to be made about how to do the gamut mapping. These decisions are normally referred to as rendering intents.

    When you click print in your software the data is sent to a printer. The printer colour space will certainly be different to either your working space or your display space and therefore, again, the CMS will make this conversion for you (and gamut mapping will be carried out according to your rendering intent).

    So where does device-independent space come in? Well every other space (the working space, the display device space, the printer space etc.) is known in terms of its relationship to some device-independent space such as CIE XYZ. The process of knowing and determining this relationship is called profiling but in the absence of user profiling the CMS will rely upon some default profiles.

    So to display your image your working space RGB values are converted into CIE XYZ values and then from there into device RGB values. When you print, your working space RGB values are converted into CIE XYZ values and then into printer colour space (probably CMYK).

    The user doesn’t need to know what the device-independent colour space is – all that matters is that each manufacturer (of printers, monitors, cameras etc) agrees on some sort of standard and then creates default profiles for their devices. This is achieved through the International Color Consortium (ICC) and the current standard is ICCv4. See http://www.color.org/version4html.xalter

    ps. Why would you use a working space like Adobe (1998) RGB which is wider than the gamuts of most available monitors? Simply because, for example, though your monitor may not be able to display all the colours in your working space, your printer may be able to. Or even if your printer cannot then the more expensive printer used by your print shop may be able to. So you are not constrained by the limitations of your particular printer or monitor. Colour management information is usually embedded in your document so even when you send your document to someone else colour management can (in theory) take place. Beware, however, since there are also disadvantages to using a wide-gamut working space and many people prefer to use sRGB as their working space.

  9. Dear Steve,
    I have seen here that you are writing a second edition of “Computational Colour Science”. Do you have any idea when it will be available?

    Best regards,

    Ondrej

    1. Yes, I have just signed the contract for a second edition. It will be out in Autumn 2011. Unfortunately these things take quite a long time to develop and we’re planning a lot of new material for the second edition.
      Steve

  10. Dear Steve
    I am having trouble understanding the difference between a colour working space and a display device space. How can you know that the colours you are manipulating in Photoshop, for example, are different when your eye can only see the monitor in front of you?
    Thanks,
    Ben

    1. Hi Ben. Good point. If you use a wide-gamut working space (wider than your actual monitor space, for example) then it is not easy to make colour decisions, because your monitor can only display colours within its gamut.

      So why would someone use a wide-gamut working space and how would it operate? Well, imagine a situation where someone is designing a logo. The requirements from the client are for a particular yellow in the logo that is outside of the gamut of most monitors but is achievable using a high-quality commercial printer. If the designer used a working space that was equivalent to his monitor space then he/she would simply not be able to create this colour on screen. Instead he/she would need to use a desaturated yellow. And when the file was sent to the printer the result would be a desaturated yellow and a dissatisfied client.

      Now imagine that the designer uses a wide-gamut working space. This would allow the yellow to be specified in the software. Of course, it still wouldn’t look right on screen because the monitor cannot display that yellow. However, when the file is sent to the printer the correct yellow could appear. In this example, you’ll note that the designer is working ‘blind’; that is, he/she is not making a visual decision about the colour on screen. Rather he/she probably has a pantone or similar specification from the client and has converted this to the wide-gamut RGB space. He/she doesn’t really care that the colour on screen doesn’t look right but knows that when it prints it will be fine.

      The above scenario may be ok for certain situations. But in most cases the designer wants/needs to be able to make a visual decision based on what is on screen. It is therefore often better to use a working space that corresponds to the monitor. Using sRGB as the working space would be fine. The trade-off for being able to then work visually is that you are restricted to the colours that your monitor can display.

      Hope this helps.
      Steve

  11. Hi there,

    I’m really confused about device-dependent spaces and device-independent spaces. Can you explain what they are and why they are important?

    Many thanks
    Holly

    1. I’ll try to explain what they both are and then what the significance is.

      First of all, the device-dependent colour space. Well, what is a colour space? You should check out my blog post on this before reading on – http://colourware.wordpress.com/2009/09/20/what-is-a-colour-space/

      When you define a colour in a software package such as Adobe Photoshop you do so by setting RGB values. For example, [R=255 G=0 B=0] is a red, [R=255 G=255 B=0] is yellow etc. Imagine you make a logo consisting of just these two colours: red and yellow, defined using the RGB values above. You look at the colours on the screen. And then you change the brightness and contrast of the screen by adjusting it using the knobs that are normally at the bottom of the screen. When you make this adjustment, guess what – the reds and yellows change colour. Ok – they probably stay as red and yellow. But not the same red and yellow as before. Depending how you changed the settings on the screen they may be brighter or duller. But different. However, the RGB values have not changed. The RGB values have not changed but the colours on screen have – so perhaps RGB doesn’t define colour after all!!

      This is the basic idea behind a device-dependent colour space. The RGB values only define colour for a particular device (by device I mean a specific monitor/display with specific settings). As soon as you change the settings (brightness/contrast/gamma etc) on a device then it’s not the same device in colorimetric terms. You might email your logo to a friend who has a different display to yours and different settings and, although probably the reds will still be basically red and the yellows basically yellow, the colours of the logo won’t look the same to your friend as they did for you. I hope you can see that because RGB colour space is device-dependent – in the way I have just described – it is a serious limitation if we would like to view colours on different screens/displays in different conditions without loss of colour fidelity.

      The problem I have described above you may not even think of as a problem. As long as reds are red and yellows are yellow, isn’t that good enough? Well, it may be good enough, if you are just putting up snaps of a night out onto facebook. But for professional design work – colour often matters …. a great deal.

      What can we do about this? The answer is colour management. Basic colour management is taking place every time you use a computer whether you realise it or not. In fact, it is because of basic colour management that the reds always look red and the yellows always look yellow. But you can do better by fine-tuning colour management – see, my blog http://colourware.wordpress.com/2009/07/29/colormunki-colour-management/ for example.

      So I hope you now know what a device-dependent colour space is. And why it matters – that RGB is device-dependent is why colours often look different on different computers. And what we can do about it – improve colour management. Go into a TV store and look at all those TVs showing the same image – they all look different in colour, don’t they? That’s because RGB space is device-dependent.

      There are some ways to define colour, however, that do not depend upon a specific device. One of these is the CIE XYZ system. Imagine you were to display your logo on 10 different displays so that it looked the same. To do this you might need to adjust the RGB values on the 10 different computers. But imagine you did that – so that the logo looked the same on the 10 computers. Then you measure the CIE XYZ values of the red and yellow using a colour measurement instrument. You’d probably find that the CIE XYZ values were the same for the red no matter which display you measured it from. The same for the yellow. That’s because CIE XYZ values don’t depend upon a device.

      I hope this helps. I know it’s complicated. You are now probably wondering what a colour measurement instrument is. That’s a question for another day ….

      Steve

  12. Hi Steve,

    I’m wondering if you can explain rendering intents to me and what part they play in colour management?

    Thanks, Hannah

    1. One of the issues that colour management has to deal with is the fact that different devices have different gamuts. That is, your printer can reproduce some colours that are outside the range of the colours that your display device can show; and your display device can show some colours that are outside of the range of those colours that your printer can reproduce. How to deal with this is called gamut mapping – from a technical perspective.

      Imagine that you have an image on screen and there are some colours that your printer cannot display. How should your software deal with this? One possibility is that it should print all the colours as best as it can apart from those that are outside of the gamut of the printer (and for these out-of-gamut colours it would print them as saturated as possible). Another possibility is that the software desaturates all the colours so that they are all printable. In this second option none of the colours would come out quite right but relatively speaking the colours would all have the same relationships to each other. Which is best? Well, it depends upon what you are trying to do. And that is what rendering intents are!!
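
      A toy numerical illustration of those two options (this is not how a real CMS implements them, just something to make the idea concrete; assume linear RGB values above 1.0 are out of gamut for the printer):

      rgb = [0.2 0.8 1.3];          % hypothetical colour; the 1.3 is out of gamut
      clipped = min(rgb, 1.0)       % option 1: only the out-of-gamut channel is changed
      scaled  = rgb/max(rgb)        % option 2: everything is compressed, ratios preserved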

      Software like Adobe Photoshop allows you to specify a rendering intent – that is, you tell the software something about what you are trying to achieve (in colour terms obviously, not in terms of your general life) and this allows the software to deal with the out-of-gamut colours in a way that may meet your expectations.

      There is some nice information about this topic in the following website – http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm

      Steve

  13. Dear Steve,

    I came across a question recently and I’d like to confirm if my answer is correct.
    The question is: how does colour management enable good colour fidelity between screen and print colours?

    1. Hi
      Why don’t you say what your answer is, and then I will comment on whether you are on the right lines.
      Best
      Steve

  14. Well my answers are in line with the definition of colour management as a process of ensuring that colour output between devices is the same, as a result of the ICC. However, I’m not sure whether it’s necessary to talk about gamut mapping. I think I’ve also confused myself on the relationship between gamut mapping and rendering intent.

    1. Hi

      As for the relationship between gamut mapping and rendering intent, if you scroll up a little here you’ll see that someone asked about this and I gave quite a long answer. I am sure this will help.

      As for gamut mapping. Well, that different devices have different gamuts (so that a printer can often print colours that can not be displayed on your monitor, for example) is a problem and is something that limits the performance of colour management. So I think gamut mapping (the CMS solution to the gamut problem) is worth mentioning when talking about CMS. It’s one of the things the CMS needs to deal with and one of the reasons we need CMS at all.
      Best
      Steve

  15. Dear Professor Westland,

    I am studying at Leeds Uni and found your lectures here very clear and engaging. I have also found your answers to be an invaluable revision resource for my work on colour management. I do however have a question regarding CIE XYZ values. If they are a device-independent colour space, wouldn’t the initial RGB working space need to be perfectly calibrated (system to display) to ensure true colour fidelity between the working space and those values, to avoid working ‘blind’ from the perspective of the original designer? Is there an accepted industry standard for monitor calibration and gamuts?

    Best wishes and Happy New Year.

    Charlie

    1. Hi Charlie

      You’re right. For colour management to work perfectly then every device should be perfectly calibrated in terms of some device-independent space such as CIE XYZ. In fact, we say that every device should have a perfect profile. In reality no profile is perfect and even on the best systems colour fidelity is not perfect. (Indeed, even if we had perfect profiles for every device the problem of gamut mapping would still prevent perfect colour fidelity.)

      As a compromise many people use sRGB as the working space. The default colour space for many devices (if they do not have a profile) is sRGB. Therefore, if you use sRGB as the working space then you can be quite confident that your colours will look reasonable when viewed, for example, on any machine over the internet.
      Steve

      ps. Thanks for your kind comments on my teaching 🙂

  16. Hi Steve,

    I’ve been struggling with the concept of metamer sets for quite a while (e.g. while trying to comprehend the papers of Ali Aslam, Graham Finlayson and others). I was able to relate to your example of deriving a set of basis functions using SVD as well as Cohen’s method of decomposing spectra into fundamental stimuli and metameric blacks – but I’m wondering how e.g. Finlayson apparently combines these methods: “Any n-dimensional basis set can be split into two parts such that the first 3 basis vectors project non-trivially onto the sensors and the second n-3 vectors are Metameric blacks.” (http://www.imaging.org/ScriptContent/store/epub.cfm?abstrid=1190).

    Can you shed some light upon the relationship?

    Many thanks in advance!
    Klaus

    1. Hi Klaus

      Good question. I just read Graham’s Metamer Sets paper that was published in JOSA in 2005. Pretty much everything that Graham does is very complex but, once understood, usually elegant.

      As you know, the idea of a metameric black is of a spectral reflectance curve that has X=Y=Z=0. Of course, to achieve this the spectral curve needs to be negative at some wavelengths. So it’s a mathematical construct rather than a physical possibility. Nevertheless, the idea is that if you have a metameric black (or a set of them) then it can be added to any reflectance curve without affecting the CIE XYZ values. So it’s a way of generating metamers. If you have a set of metameric blacks then you can add each in turn to a reflectance curve and you will create a metamer of the original curve. Another way of thinking about this is that if you take a set of metamers (reflectance spectra that all have the same XYZ values) then you can decompose them into one (fundamental) component that is common to them all and a metameric black (that is specific to each metamer).

      The second idea you refer to is that you could represent a set of reflectance spectra by a set of basis functions – normally derived by some technique such as principal component analysis or SVD.

      Finlayson does combine these two ideas in his 2005 paper as a way of generating the metamer set for a particular XYZ stimulus. He starts by imagining that the reflectance curve can be represented by a set of k basis functions. He shows that you can represent this as 3 basis functions that give rise to the fundamental component (if you like) and then k-3 basis functions that are metameric blacks. One can then add these metameric black components in any arbitrary amounts to the fundamental component since they will not affect the XYZ values. Finally, however, Finlayson applies a set of constraints that ensure that the metamer set is restricted to those spectra that are physically reasonable (bounded to be greater than or equal to zero and less than or equal to one, for example, at every wavelength).
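
      If it helps to see the split concretely, here is a minimal MATLAB sketch of the Cohen-style decomposition into a fundamental component and a metameric black. This is my own illustration with stand-in data, not Finlayson’s code:

      n = 31;                     % e.g. wavelengths 400:10:700 nm
      A = rand(n,3);              % stand-in for illuminant-weighted CMFs (use real data in practice)
      r = rand(n,1);              % stand-in reflectance spectrum
      R = A*((A'*A)\A');          % Cohen's projection matrix
      fund  = R*r;                % fundamental component (carries all the tristimulus information)
      black = (eye(n) - R)*r;     % metameric black: A'*black is effectively [0 0 0]'
      r2 = fund + 0.5*black;      % adding scaled blacks generates metamers of r
      disp(A'*(r2 - r))           % tristimulus differences: numerically zero

      Finlayson then does this in a basis-function representation and constrains the weights on the metameric-black part so that the resulting spectra stay between 0 and 1.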

      I hope this helps. It’s not an easy concept to explain without all the maths. But if you have not seen it then I do recommend the 2005 JOSA paper. If you email me privately I’ll send you a copy if you don’t have one.

      Steve

  17. Dear Mr. Westland,

    I’m struggling with the cband2 function from “Computational Colour Science using MATLAB”. Comparing the calculated values with the ones I get from cband I noticed large errors and negative values for spectral reflectance.
    Maybe I’m getting something wrong (as a MATLAB beginner…), or is there a way to optimize the cband2 function to work for any number of spectral reflectance curves?

    Kind regards from Germany
    Vladimir

  18. Dear Mr. Westland,

    my problem is solved (as expected, in a quite simple way). The matrix has to be transposed (in my case from 1485*36 to 36*1485). In that case cband2 performs fine and I learned a lot of useful things!

    Kind regards
    Vladimir

    1. Hi Vladimir,

      Thanks for letting me know!! As you posted this message I was just about to start replying to your earlier post so you have saved me some time. 🙂

      Steve

  19. Hello again,

    thanks for replying anyway. I still have one question. Would you recommend using spectral bandpass correction, when working with commercial profiling solutions, in general? Do you have any information about how various software vendors solve this problem?
    Today I did some tests with Heidelberg’s ColorTool. After applying spectral bandpass correction to my original measurement I noticed that my paper-tint reproduction within the ICC profile was visually much closer to the original than without correction.

    Kind regards
    Vladimir

  20. Hello,

    After you mentioned a couple of weeks ago on Friday the fact that other species have developed different numbers of cone types (such as pigeons), I was wondering about the evolutionary reason behind that adaptation. What more do pigeons need to see in order to survive?

    I was also thinking that perhaps the harmony of colour is mathematical in the sense of having resonating fire rates in different types of eye cells – that might cause them to ‘feel’ stronger in the presence of various other harmonious firing rates – like half, or the same; kind of like how the beating of certain drum patterns can sound good to us. If this were true, the various colour relationships would be like mixing of different types of beat, which may explain something about the aesthetics and mood created by different colour combinations (similar to different styles of music). Perhaps the syncing up of certain firing rates (just as resonating hairs in the cochlea) generates an emotional effect beyond the simple recognition of colour differentiation at a level more fundamental than culture.

    For this reason I wanted to know the mechanics behind the varying sensitivity to wavelength among cone cells. *Why* is there a gradient in sensitivity rather than a digital ‘cut off’ for a distinct range of wavelengths?

    Finally, why is purple generally put on the lower end of the spectrum next to blue, rather than next to red?

    Thanks,

    – Chris

    1. Hi Chris

      That’s a whole bunch of very nice questions. I’ll try to answer them in turn.

      Regarding avian colour vision – most birds have at least four cones (and hence the potential to see 4-d colour) and also have sensitivity in the uv region. Why do birds have such good colour vision? One could imagine a host of possible reasons – however, rather than do that I would refer you to a very interesting page about colour vision and ecology at the University of Bristol – http://www.bristol.ac.uk/biology/research/behaviour/vision/4d.html

      Your comments about colour harmony are very interesting. You are not the first, of course, to think that colour harmony is mathematical (that harmonious colours have certain geometric relationships in some colour space) and that there may be a link with music. However, as a drummer, I really like your comments about the relationships between the neural firing rates that each colour gives rise to. All I would say at this point is that it is an intriguing idea 🙂 – perhaps the start of a research project?

      The shape of the cone spectral sensitivity curves is rather easier to answer. Almost all dye absorption curves are roughly gaussian in shape. The reasons for this are explained in a very nice paper published by Maloney in the Journal of the Optical Society of America in 1986 (Maloney, JOSA A, vol 3, 1673-1683, 1986). Essentially, the peak is in a certain position because it corresponds to a packet of energy that is absorbed and enables the dye to undergo an electronic transition. The greater the amount of energy needed, the shorter the wavelength of the light absorbed. So why do dyes (and cones) not absorb light at only a single wavelength? Because the excited electronic state has various rotational and vibrational states. This results in other ‘allowed’ transitions (though they may be slightly less likely) and the bell-shaped absorption curves. So the shape of the cone spectral sensitivity curves is constrained by the way that matter typically absorbs light. If we were designing a colour-vision system from scratch and were not so constrained we may well think that a good way forward would be to have sensors that do not overlap in spectral space. Again, ideas about what the optimal sensors are lie in the realm of research and may be the focus of students’ PhD work.

      Regarding your last question: one thing to remember is that when you see a spectrum in a book or on a screen it is just a representation – and not a very good one. The question is whether, when one looks at a rainbow, the real spectrum is the way you describe it. Possibly. But even were it the case, your question would be hard to answer. It would be like trying to answer: why is light at 700nm red? Or: why isn’t the short-wavelength light red and the long-wavelength light blue? We cannot answer questions like this about our phenomenological colour experience.

      I hope my attempted answers help a little, at least. The best questions don’t have clear answers.

      Steve

  21. Hi Steve
    Where could I find info on how camera manufacturers convert from XYZ to sRGB? Not looking for trade secrets, just general principles. E.g. do they just clip from XYZ to the sRGB gamut boundary or is there some sort of gamut mapping etc.?

  22. There is a very nice document that describes the sRGB system and how to convert from XYZ to sRGB etc. Here is the link – http://www.w3.org/Graphics/Color/sRGB.html

    I suppose you could argue that a camera manufacturer would first obtain RGB values (in the camera’s space), convert to XYZ, and then convert to sRGB. I doubt they do this – I would imagine that they would convert their RGB values directly into sRGB.

    That is, assuming that they wanted to convert to sRGB at all. Some cameras allow you to save in raw format – where you will have raw RGB values and (hopefully) information about the camera that would enable any other software to do the conversion. What happens then depends upon the software. Sometimes there is an option to save in a certain space such as sRGB. Then, clipping or gamut compression may be needed. To be honest though, having never worked for one of the camera manufacturers, I don’t know what they do. Quite possibly they all do different things.
    Steve

  23. Hi Steve

    I know we have already discussed this with regard to the LRV (light reflectance values) but I just wanted to confirm with anyone who has done this (i.e. used a spectrophotometer to obtain the LRV) whether, under the British Standard BS8493:2008, the LRV = Y tristimulus value (this is true for the corresponding ASTM standard) – 10 degree observer, specular included, D65?

    I take your point that it is used as a measure of contrast between two surfaces/objects/interfaces.

    Thanks

    Gordon

  24. Yes, I am 99% sure that the LRV is effectively the CIE Y tristimulus value. It is used in many building manufacturing standards, especially to ensure sufficient contrast.

    Anyone else know anything about this? Please comment…

    I do know that CERAM – http://www.ceram.com/ – offer a service to provide LRV values for samples that are sent to them.
    Steve

  25. I see you are thinking of coming to the Color Marketing Group meeting in Amsterdam next week. As President of CMG I wanted to take a moment and encourage you to come. I know you will enjoy it thoroughly and come away with some new insights. I look forward to meeting you next Wed night.

    Best,

    James

    1. I didn’t get to go unfortunately – by the time I found out about it I just couldn’t rearrange enough time. It’s a shame because it looks a wonderful event. I’ll certainly be looking out for further CMG meetings and I am sure I’ll be getting to one in the very near future!!

  26. Dear Steve,

    I would like to know the minimum value of DE (CIELAB) that produces a noticeable colour difference.

    Thank you.

    Sincerely yours,

    C

    1. Hi

      CIELAB is often referred to as an approximately uniform colour space. In a completely uniform colour space the distance one would have to move in colour space from a point in order to have a noticeable difference in colour perception from that point would be the same no matter where the starting point was; no matter whether we are talking about blues, reds, browns or blacks, the colour difference that corresponds to a just noticeable difference would be constant. In such circumstances I could answer this question very easily indeed and simply say that the answer is 0.5, or 1.0, or whatever DE corresponds to the just noticeable difference.

      Unfortunately CIELAB is not a perfectly uniform colour space and therefore there is no easy answer to this question. The answer depends upon which colours you are talking about. As a rule of thumb I tend to think that, on average, a CIELAB DE of about 0.5 is often noticeable and that a DE of about 1.0 is acceptable in many situations. Of course, this is for relatively large areas of uniform colour seen side by side (for images the colour difference threshold for perceptibility is much higher). But it’s only a rule of thumb. Because CIELAB is not perfect.

      Because it has long been known that CIELAB is not perfect there was a quest – in the latter part of the 20th Century – for a more uniform colour space or for an equation that at least produces consistent DE values for a just noticeable difference. A number of equations were developed and tested. I would recommend you look at the third edition of Billmeyer and Saltzman’s Principles of Color Technology by Roy Berns which includes readable and accurate reviews of much of this work.

      Most industries do not use CIELAB DE for pass/fail work, precisely because of the problems I refer to above. The CMC equation – published by the Society of Dyers and Colourists in the UK – has been extensively used and increasingly I am seeing the latest equation CIEDE2000 achieving acceptance.

      I hope this answer is helpful despite being very wordy.

  27. Dear Steve,

    thank you for your answer related to CIELAB DE.

    Another question has come to me about the use of the chromaticity coordinates x, y to represent colours. Many texts say that it is not correct to fill the plane within the horseshoe-shaped outline with colours. So my questions are: does each individual colour not have one specific pair of x, y coordinates? Can we have different colours with the same x, y? What information is lost when representing X, Y and Z by x, y?

    Sure it is a basic question, but I can not find the explanation for this.

    Thanks a lot for sharing your knowledge through your blog. This is real civic service.

    Sincerely yours,

    Carlos

    1. Many thanks for your kind comments Carlos.

      I am not 100% sure I understand your question but I will try to answer it. I think you are asking about the xy chromaticity diagram and what information is lost when we plot colours in xy rather than in some 3-d space such as XYZ.

      I think one way to help think about this is to consider the white point (or neutral point). This has x = 0.3138 and y = 0.3310 for D65 (1964). However, this single chromaticity point represents white and black and all the levels of grey in between white and black. White, black and grey all have the same chromaticity but have different luminance values. The same is true of any other colours – at any point in the chromaticity diagram there is a set of colours that all share the same chromaticity but have different luminance values.

      Consider the two colours:

      A: X = 30, Y = 30, Z = 30
      B: X = 60, Y = 60, Z = 60

      In each case the chromaticity coordinates are the same, thus:

      A: x = 0.3333, y = 0.3333
      B: x = 0.3333, y = 0.3333

      So when we look only at the chromaticity coordinates we lose the information about how luminous a colour is.
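
      If you want to check this numerically, here is a quick MATLAB sketch (just an illustration):

      XYZ = [30 30 30; 60 60 60];          % colours A and B from above
      s = sum(XYZ,2);                      % X + Y + Z for each colour
      xy = [XYZ(:,1)./s, XYZ(:,2)./s]      % x = X/(X+Y+Z), y = Y/(X+Y+Z)
      % both rows come out as [0.3333 0.3333] even though the luminances differ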

      I hope this helps.
      Steve

  28. I have measured a series of colors in emissive mode off an LCD monitor using a colorimeter. The measurements are in the form of xyY, Y being luminance, in cd/m2. I converted the xyY measurements to (capital) XYZ but I have a hard time figuring out how to “normalize” the measurements to the monitor XYZ. I would like to restate all measurements on a scale between 0.0 and 1.0, with Y = 1.0 for the “brightest” color, i.e. the monitor white point. I have studied your book but have not found a passage that explains the math behind this transform.

    Kindest regards / Roger Breton

  29. I think I found the beginning of the answer. In my research, I stumbled upon a monitor analysis report where two sets of XYZ values were displayed for the monitor white. The first was called XYZ and showed 113.48 118.92 116.55. The second set of XYZ values was called “normalized”, in parentheses, and showed 95.42 100.00 98.00. So, if I do the math and divide 100 by 118.92 I get a conversion factor equal to 0.840921446. When I multiply this factor by 113.48, I get pretty close to 95.42, and when I do the same for 116.55 I get pretty close to 98. So, it would appear that, to obtain “normalized” tristimulus values, one applies the scaling derived to make the absolute luminance Y = 100 to X and Z as well? Is it that simple? Or am I missing anything? Also, how would I obtain a restatement of the normalized XYZ on a scale of 0 to 1.00, instead of on a scale of 0 to 100: do I just divide everything by 100? Thanks for your help in advance.

  30. Hi Roger,

    I think you have answered your own question anyway. When you measure XYZ values using a non-contact colorimeter it will usually give you XYZ values that are absolute. This means that the Y value is luminance and has units of cd/m^2. The luminance of a monitor white could be anywhere from 50 to 150 cd/m^2 depending upon the monitor and the settings.

    [If you were to convert these XYZ values to CIELAB values you’ll find a normalization in the equations so that L* = 100 for the white.]

    Some people like to convert their absolute XYZ values to relative XYZ values. To do this just divide by the Y value of the white and multiply by 100. This will give you Y = 100 for the white. Hope this helps.
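
    In MATLAB terms it is just one line. Here is a sketch (the second measurement is hypothetical; the white is the one you quoted above):

    xyzWhite = [113.48 118.92 116.55];     % measured monitor white (absolute, Y in cd/m^2)
    xyz      = [56.2 60.3 55.1];           % any other absolute measurement (made-up numbers)
    xyzRel   = 100*xyz/xyzWhite(2);        % relative XYZ: the white itself becomes Y = 100
    xyz01    = xyzRel/100;                 % or divide by 100 again for a 0 to 1 scale

    The same scaling gives you the 0 to 1 range you asked about.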
    Steve

  31. After further experiments, I’m not sure my “method” is entirely correct.
    Suppose I have collected the following XYZ values with my colorimeter:

    a) 149.94 161.00 131.37
    b) 135.83 143.00 119.50
    c) 151.79 162.00 130.04
    d) 139.58 146.00 115.52

    As you can see, c) has the highest luminance, Y = 162.00. So I decide to use 162 to normalize XYZ, so that Y = 100.

    So it seems that I have to carry out two distinct transformations in order to make all XYZ values “relative” to this reference white (these are all white measurements btw, coming from a not so uniform monitor). First, I have to scale all X and Z by a factor that is the ratio of 100 to 162, or 0.61728. Second, I have to divide all Y by 162 and multiply by 100, so that, at 162, Y becomes 100. The transformed XYZ are:

    a) 149.94 161.00 131.37 -> 92.56 99.38 81.09
    b) 135.83 143.00 119.50 -> 80.41 84.57 66.08
    c) 151.79 162.00 130.04 -> 93.70 100.00 80.27
    d) 139.58 146.00 115.52 -> 91.75 95.68 74.71

    Is this correct conceptually?

    Best / Roger

  32. After further research, I found the correct answer. The idea is to calculate a correction or scaling factor based on Luminance alone, and use that factor to scale all three tristimulus values. So, the correction factor is Actual Luminance / Normalizing Luminance. Then, divide all three XYZ values by this factor so that everything falls on the same new scale.

    Question: to have the input values for Bradford, do the values need to be normalized to Y = 100, or is it possible to calculate the adaptation matrix directly without normalizing first?

  33. Hello Steve,

    Thank you for your book, I’ve really enjoyed it! I was going to purchase a copy for our lab, but I remember you mentioning that another edition is on its way. Do you know when we can expect the next edition?

    Thanks and take care,
    Kristyn

    1. Hi Kristyn

      Thanks for your kind comments. We’re certainly working on a second edition which is going to be a big improvement. But it’s some way off of course. Probably early 2012.

      Steve

  34. Hi there,

    I just want to ask how I can convert CIELAB values to XYZ colour values and get the whiteness index. What is the formula for the Whiteness Index?

    thanks

    Regards,
    Yaj De Vela

    1. The CIE whiteness index is calculated from the Y tristimulus value and the xy chromaticity coordinates thus:

      W = Y + 800(xn - x) + 1700(yn - y)

      where xn and yn are the chromaticity coordinates of the white point. For D65/1964 xn = 0.3138 and yn = 0.3309.

      The formulae for converting XYZ into CIELAB are given in many, many textbooks. It is a relatively straightforward matter to invert these. I think I show how this is done in my book – Computational Colour Science using MATLAB.
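
      As a minimal MATLAB sketch (the XYZ values here are made up for illustration; the white point is the D65/1964 one quoted above):

      X = 84.2; Y = 88.9; Z = 95.1;        % example tristimulus values (hypothetical)
      x = X/(X+Y+Z);  y = Y/(X+Y+Z);       % chromaticity coordinates
      xn = 0.3138; yn = 0.3309;            % D65/1964 white point
      W = Y + 800*(xn - x) + 1700*(yn - y) % CIE whiteness index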

  35. Dear Professor Westland,

    I have greatly enjoyed your book, finding it most informative and interesting. You have managed to succinctly bring together many concepts and ideas in a brilliant book.

    The only low point is that I cannot access your toolbox on http://www.colourchat.co.uk/compute. I dare say it’s just a little technical glitch.

    Looking forward to your new Edition in 2012.
    Oliver

      1. Dear Professor Westland,

        Thank you for your quick reply.

        I have another question about your book. When trying to calibrate an image from a digital camera that contains a grey scale (using getlincam.m), you mention that you can use the Y tristimulus value (I presume because it is a linear measure of luminance) to calibrate the RGB values.

        However, when I have done this, the grey scale I get when transforming the image is not what I expect. I consequently tried applying the Value, in Munsell notation, which is a non-linear measure of luminance. This gives me the values I would expect.

        I was wondering if this happens because my grey scale is not linearly related to the Y value but is linearly related to Value (Munsell notation)?

  37. Suppose I pick two coordinates: one at the center of a MacAdam ellipse and another on the edge of a 3-step ellipse. If I calculate Delta E 2000 for those points (assuming the same luminance), I should get 1. In practice I get a different number, sometimes very different. I know that the Delta E calculation was not developed from MacAdam’s dataset, but this raises a question: Are MacAdam ellipses a reliable way to measure color differences? Why are they so different from Delta E 2000?

    Additionally, someone told me that nobody has been able to reproduce MacAdam ellipses (size or orientation) and that the colorimetry world looks down upon MacAdam ellipses “because they are hogwash.” What are your thoughts?

  38. Nice question and one that has given me a lot to think about. It is not something I had thought too much about before.

    I guess the first thing I would say is that I don’t think that the MacAdam ellipses were the final word on colour difference evaluation. If they were, then we would have been able to develop colour spaces (such as CIELAB) and optimised colour-difference equations (such as CMC, BFD, CIE94 and CIEDE2000) without doing any further experimental work. Of course, looking back over the last 40-50 years we see a very different story whereby a huge volume of additional psychophysical work has been carried out and used to develop colour-difference equations. Even in the 90s there were arguments about which data set was better and most reliable, and a more-or-less global agreement on this issue led to the CIEDE2000 recommendation, which is the final word (for now).

    That said, even though one could argue that I am one of the people who live in the colorimetry world, I don’t think the MacAdam ellipses are hogwash. They were an important demonstration of the limitations of the 1931 CIE system and paved the way for the development of CIELAB. Broadly speaking my guess would be that the MacAdam ellipses are correct. However, they are just one data set, and an old one at that. The work that led to CMC and then CIEDE2000 was based on a careful consideration of a number of data sets that were collected in laboratories in different countries. So if I was faced with a colour decision and CIEDE2000 was telling me one thing and the MacAdam ellipses were telling me another thing …… then I would go with what CIEDE2000 was telling me. It is currently our best and most sophisticated attempt to predict colour differences.

    I wouldn’t worry too much about the fact that MacAdam ellipses are not consistent with CIEDE2000. I have respect for MacAdam ellipses but not in the same way as we do for, for example, the CIE 1931 colour-matching functions, which are almost a gold standard.

    I hope this answers your question to some extent at least. In short, are MacAdam ellipses a reliable way to measure colour differences? I would say they are not a crazy way but not as good as using a modern colour-difference equation such as CIEDE2000.
    Steve

  39. Greetings Professor Westland,

    You seem so knowledgeable and friendly that I desire your help. However, at this time it is not my wish to appear foolish by writing my questions (and suppositions) here for all to see. I would like a bit of “private assistance”, if that is feasible. (I do realize how busy you may be, but also believe it shall be mutually beneficial, should you at least try.)

    Basically, I do believe to have stumbled onto a remarkable “new discovery” – an interesting facet relating to the refraction of light. Sadly though, to me it appears that I may have opened up the proverbial “pandora’s box” here – but perhaps it wouldn’t all seem so VERY COMPLEX, if seen “through your eyes”. I sure could use your help to “fill in the gaps”.

    I did read a blog of yours indicating that you are in disagreement with some (on Facebook) who believe that indigo “should not be included within the spectrum”, as when light is refracted to produce a rainbow. I too believe, as you do (and as did Newton), that there are seven “basic” colors, not six (ROY-G-BIV).

    It makes me joyful to anticipate that you are able to provide some very meaningful insight into this, but wonder just how willing you are to help someone as insignificant as myself – I am certainly no scholar compared to you!

    Therefore, (if by any means you are “somehow urged” to assist me) I desire that you would email me at melliot2@twcny.rr.com. I promise not to take much of your time, and will faithfully keep our matter confidential until you indicate otherwise.

    Hoping to hear from you soon,
    Mr. E. (aka: “Mystery” – LOL)

      1. You have been most gracious thus far, kind sir! A letter has been sent to you in private. Further comments to me at this site are no longer necessary. Thank you very much.

  40. Hello
    My name is Volker and I live in the south of France.
    I think it’s a very good idea.
    When will you install more grandmaster chess games?

    1. I am so sorry that I have not added more games yet. It’s on my list of things to do but I can’t promise a date yet I’m afraid.
      Steve

  41. Hello Stephen
    Could you tell me something about gold and the reflection of light. Is it considered a color? Does it absorb light or only reflect it?
    Thank you
    Aline Lobo

  42. Hello sir,
    I am a pediatric dentist and have a project on the color stability of dental materials.
    I have obtained reflectance spectra of the samples but have no idea how to convert them to CIELAB measurements.
    Kindly help, as this is in no way related to my field and I am very confused.
    Thank you

    1. If you email me, s.westland@leeds.ac.uk, with your email address I will send you an Excel spreadsheet which will allow you to enter your spectral data and which will compute the CIELAB values. However, I need to know the shortest and longest wavelengths in your data and your wavelength interval, e.g. 400-700nm at intervals of 10nm. Steve

  43. Hi there, and thank you for helping us with the subtle subject of colour conversion. I have a question, or something that I can’t really make sense of. I have downloaded the spectral data for a MacBeth chart and used the standard formula (and color matching function data) to compute XYZ values. Then I use the standard XYZ to RGB conversion. When I do so, the white square from the chart (as well as all the colors) comes out slightly red. For instance the values for white are 255.000000 225.494095 193.420441??? Do you have any idea why?

    Here is the matrix I use:
    float XYZ2RGB[3][3] = {
    { 3.2404542, -1.5371385, -0.4985314},
    {-0.9692660, 1.8760108, 0.0415560},
    { 0.0556434, -0.2040259, 1.0572252}};

    Also I found a few places on the internet where they explain how the XYZ values should be normalised. They say that you should compute the sum of all the Y values from the color matching function, which corresponds to the energy of a purely white material, and divide the XYZ coordinates by this value (sum). Is this correct?

    Thank you so much again for helping us -coralie

    1. To be honest, I have some difficulty with sRGB, white points and normalization. But the following is what I believe to be the case.

      If you take your matrix and compute its inverse we can use this to convert sRGB = [1 1 1] into XYZ. If you do this you get XYZ = [0.9505 1.0000 1.0890]. I would multiply each of these by 100 which would give XYZ = [95.05 100.00 108.90]. You may recognise these – it is the white point of the D65 illuminant. It’s closest to 1931 D65.

      Imagine now going back the other way. If we started with XYZ = [95.05 100.00 108.90] we would need to normalise before we use your matrix. We would divide by 100. This is very similar to what you suggest above; that is, dividing by the Y value of the illuminant (this is always 100). However, it does depend upon whether you are using absolute or relative colorimetry. Most people use relative colorimetry. If this is the case then the perfect white always has Y = 100 and dividing the XYZ values by 100 before you apply the sRGB matrix will be the right thing to do.

      Your Macbeth chart white should have XYZ values close to [95.05 100.00 108.90]. If you then divide these by 100 and apply your matrix you will get RGB values close to [1 1 1] and it won’t look pink. Your actual XYZ values will be a little less than [95.05 100.00 108.90] because the reflectance of the Macbeth white is not 1 at all wavelengths. However, the XYZ values should be in the same ratio and then you should get RGB values that are a little less than 1 but all similar in value.
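
      To make that concrete, here is a short MATLAB sketch using your matrix (the white XYZ values below are only illustrative, roughly a 90% reflecting white under 1931 D65):

      XYZ2RGB = [ 3.2404542 -1.5371385 -0.4985314;
                 -0.9692660  1.8760108  0.0415560;
                  0.0556434 -0.2040259  1.0572252];
      xyzWhite = [85.5 90.0 98.0]';             % hypothetical Macbeth white under 1931 D65
      rgbLin = XYZ2RGB*(xyzWhite/100);          % divide by 100 first, then apply the matrix
      rgbLin = min(max(rgbLin,0),1);            % clip to the 0-1 range
      rgb = 255*(1.055*rgbLin.^(1/2.4) - 0.055) % sRGB encoding; all three channels come out roughly equal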

      What could be going wrong in your case? It could be that you did not use 1931 D65 when you computed XYZ from your spectral data. If this is the case you won’t get XYZ values anything like [95.05 100.00 108.90] and when you apply the transform the RGB values will be quite different from each other. Another possibility is that your display device is set so that D65 is not the white point. However, I suspect the former based on what you say. Let me know if this helps.
      Steve

  44. Hi Stephen,
    I’m a graduate design student, and I’m doing research on the relative color spaces of different display devices, past and present. Do you know where I could find data on how particular 20th-21st century display technologies (Monochrome, Plasma, CRT, LCD, etc) compare to the full spectrum of visible color? I’m quickly coming up against the limits of my scientific understanding of this stuff, so perhaps I don’t know the correct terms to search for.
    Thank you so much!
    Liz

  45. Hi Liz,

    The data you are likely to find would be the chromaticities of the RGB primaries. This allows you to plot the gamut of the display in a chromaticity diagram (see http://en.wikipedia.org/wiki/CIE_1931_color_space). (Of course, gamuts are really three dimensional but for comparative purposes it can be useful to simply look at them in the 2-D chromaticity diagram.)
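
    For example, if you have the chromaticities of a display’s primaries you can sketch its gamut triangle in MATLAB like this (the sRGB primaries are used purely as an illustration; the horseshoe-shaped spectral locus would normally be drawn as well):

    xy = [0.64 0.33; 0.30 0.60; 0.15 0.06; 0.64 0.33];   % sRGB R, G and B primaries (closed triangle)
    plot(xy(:,1), xy(:,2), 'k-o')
    axis([0 0.8 0 0.9]); xlabel('x'); ylabel('y')
    title('Display gamut in the xy chromaticity diagram')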

    You could do worse than start with a chapter in a book that I recently had published which was about this very thing – the different RGB standards that have been used throughout the years. The details of the chapter are

    Westland S. & Cheung V., 2012. RGB Systems, in Handbook of Visual Display Technology, Chen J, Cranton W and Fihn M (eds.), 147-154, Springer-Verlag.

    You should be able to get this out of the library. If you have trouble finding it or have any other questions you are concerned about please don’t hesitate to email me – s.westland@leeds.ac.uk

    Steve

  47. Yes, I have something to ask (sorry for so many posts on your blog today…):
    Do you know of any university research program (a PhD) where I could develop my artistic research?

    Best,
    A

    1. I am sure there are lots of places. I can suggest a couple of people that I know and respect that work in both art and colour.

      There is Judith Mottram, who is Professor of Visual Arts at Coventry University (UK).

      There is Carinna Parraman who is Deputy Director of the Centre for Fine Print Research at the University of the West of England (UK).

      There is David Briggs who teaches at Julian Ashton Art School in Sydney (Australia). David runs a particularly impressive web page – http://www.huevaluechroma.com/

      Steve

      1. Thanks a lot!

        I tried to contact Carinna Parraman by email but she never answered; maybe she was too busy to check and will answer soon. The Centre for Fine Print Research looks like a very interesting place for me.

        I didn't know Judith Mottram; her activities seem a bit different from mine, but it could be interesting.

        Anyway, I don't know if there is any form of financial help possible – I have never studied in the UK.

        At the Julian Ashton Art School it seems they have no PhD program.

        Thanks a lot for the info.
        I am giving a talk at Glasgow University (Colorstudies Group) in February – I would be glad to see you there, if you can attend!

        Adrien

  48. I'd be interested to read your thoughts on coloured lighting in movies/television. I heard once that in the Matrix movies they use a lot of greenish lighting to convey mystery, and I wondered if that were true. I have also noticed, personally, that TV shows use interesting coloured lighting – for example the CSI franchise. The original is very green, the Miami series is very orange/yellow and the New York series is bluish… or am I just completely crazy?

  49. Hi Steve,
    I was lucky to meet you in Gargnano, Italy, during the CREATE meeting. Your blog posts are always very helpful for understanding many colour issues. I have something to ask.
    1) The luminosity function Vλ has its maximum at 555 nm, which means that human vision has its maximum sensitivity at this wavelength across the visible spectrum. Why? Is this the reason behind the vast greenish area in the CIE 1931 chromaticity diagram? How can we understand this Vλ function in relation to the colour-matching functions of the 1931 CIE standard observer?
    2) The colour-rendering index (CRI) of a lamp is a measure of “how good the lamp is at developing the accepted ‘true’ hues of a set of colour standards”. A tungsten lamp has the highest value of CRI = 100 despite its low energy distribution in the blue region of the visible spectrum. A filtered xenon lamp, whose SPD is very similar to that of D65, has a CRI of 93. I suppose that the tungsten lamp will be unable to render blue colours as correctly as the xenon lamp. Am I wrong?
    Thank you very much.
    Bassem

    1. Hi Bassem,

      I'm glad you like the blog. If you look at the spectral sensitivities of the human cones (for example, http://en.wikipedia.org/wiki/Spectral_sensitivity) then you can see that the L- and M-sensitive cones (which vastly outnumber the S-sensitive cones) have their peak sensitivities close to 555 nm. In fact, if you simply add together the sensitivities of the L- and M-sensitive cones then you pretty much get Vλ.

      It's not helpful to consider the colour-matching functions in this regard. The CMFs are, in fact, arbitrary; that is, if the CIE had used different primaries (other than X, Y and Z) then the CMFs would have been quite different, so there is nothing helpful in their shape here. To understand Vλ you need to look at the cone spectral sensitivities, which represent our fundamental sensitivity (without regard to an arbitrary set of primaries).
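
      If you want to play with that idea, here is a very rough Python sketch. The cone curves are modelled as Gaussians with assumed peaks and widths (the real cone fundamentals are not Gaussians), and the L cones are weighted roughly 2:1 over the M cones, just to show that their sum peaks in the green-yellow region near where Vλ peaks:

        import numpy as np

        wavelengths = np.arange(400, 701)  # nm, 1 nm steps

        def gaussian(wl, peak, width):
            """Toy cone sensitivity: a Gaussian with the given peak and width (both in nm)."""
            return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

        # Crude stand-ins for the L- and M-cone sensitivities (peaks and widths are assumptions)
        L = gaussian(wavelengths, peak=565, width=35)
        M = gaussian(wavelengths, peak=545, width=35)

        # L cones outnumber M cones by very roughly 2:1, so weight the sum accordingly
        v_lambda_approx = 2 * L + M

        # Prints about 559 nm with these toy numbers, close to the 555 nm peak of the real V(lambda)
        print(wavelengths[np.argmax(v_lambda_approx)])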

      I hope this helps. I need to think about your second question for a few days. It’s more complicated for me. 🙂
      Steve

      1. Thanks a lot! It’s extremely helpful.
        Your answer has inspired another question. Our spectral sensitivities are overlapped – what would our vision be like if they were not? The ideal observer would have spectral sensitivities that evenly sample the visible spectrum. Does this mean that such an observer would have better colour vision?

  50. Hi Bassem

    It's another good question. Well, I would first of all say that I am not sure that the ideal observer would have spectral sensitivities that evenly sample the visible spectrum; that would only be true if sampling the visible spectrum evenly was particularly important for an observer, and I am not sure that it is.

    Our three cone spectral sensitivities are overlapped. The long- and medium-wavelength sensitive cones have peaks that are separated by only about 30nm. Why are they so broadband? And why are they so overlapped? These are great questions even if I do not accept the premise that we would ideally like a system that has equal sensitivity at all wavelengths. I will take them in turn:

    Why are they so broadband? We have cones with peak sensitivities at about 450 nm, 550 nm and 580 nm. They are quite broad sensitivities; think of Gaussians with standard deviations of about 30-40 nm and you won't be far away from what they look like. Actually, if we had cones that were sensitive at only single wavelengths (say, 450 nm, 550 nm and 580 nm) then our colour vision would still work well, even though we would only be detecting light at three wavelengths in the spectrum (there is a small sketch of this idea after this reply). This is because most physical objects in the world have reflectance spectra that are very broadband. Yellow objects don't just reflect light at the wavelengths that we see as yellow in the visible spectrum. So why do we have such broadband sensitivities in our cones? I would take an educated guess and say that it is because the cone pigments are just that – pigments. If you look at dyes and pigments they all have quite broad absorption spectra. The reason for this can be understood if you know how matter and light interact at different energy levels (rotational, vibrational, electronic, etc.). In short, our cone sensitivities are broadband and overlap because of physics.

    Why are the L- and M-cone sensitivities so close together? Wouldn't it be better if the L and M cones were further apart? Wouldn't we have better colour discrimination? Probably. However, there are a number of reasons that people have put forward for why our L and M cones are the way they are, including (a) reducing the effect of chromatic aberration in the eye's lens; (b) because the M cone evolved to allow our ancestors to make particular red/green colour discriminations; and (c) because the M cone was a genetic mutation of the L cone (or the other way around).
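
    Here is the small sketch mentioned above. It is only a toy illustration in Python: the reflectance is a made-up smooth curve and the 'cones' are Gaussians with assumed peaks and widths, but it shows that sampling a smooth reflectance at just three wavelengths gives much the same pattern of responses as integrating it with broadband sensors:

      import numpy as np

      wavelengths = np.arange(400, 701)  # nm, 1 nm steps

      # A made-up smooth 'yellowish' reflectance: low at short wavelengths, high at long ones
      reflectance = 1.0 / (1.0 + np.exp(-(wavelengths - 520) / 20.0))

      def gaussian(wl, peak, width):
          return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

      # Broadband 'cones' peaking at roughly 450, 550 and 580 nm (widths are assumptions)
      broadband_sensors = [gaussian(wavelengths, p, 35) for p in (450, 550, 580)]
      broad = [np.sum(s * reflectance) / np.sum(s) for s in broadband_sensors]

      # Hypothetical narrowband sensors: just sample the reflectance at the same three wavelengths
      narrow = [float(reflectance[wavelengths == p][0]) for p in (450, 550, 580)]

      print(np.round(broad, 3))    # responses from broadband integration
      print(np.round(narrow, 3))   # point samples give a similar overall pattern for smooth spectra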

  51. Best Stephen!

    Glad you liked the Lüscher test! If you're interested in taking the full test, I'm going to be in England in October!

    Best regards/
    Markus Olofsson
    Swedish Max Lüscher Institute

  52. I would like to study colours further because this will enhance my profession. I design clothing and I want to know how to combine colours. Thank you.

  53. In your computational colour book, the multispectral section mentions that there are limits to colour discernment when you only have three sensors (R, G, B). What exactly are the limits? For example, can you not achieve an accurate delta E < 1 over the whole colour space? Or is it just that discernment is limited in certain sections of the colour space (like browns or violets)?

  54. Is there any literature on “Just noticeable difference for CIE Whiteness index & Yellowness index”?

  55. Hi Stephen – we are a design consultancy struggling with LRVs in a commercial interior where we are hoping to use back-painted glass. Would you happen to have any data or understanding of the effect of the glass on the colour painted on the back? Many thanks.

  56. Hi Steve

    Hypothesis: 'REFLECTION' – Is it possible to illustrate a personal journey conveying specific emotional reflections through visual communication, primarily through the medium of acrylic painting on canvas?

    I've developed a series of paintings to reflect a personal journey encompassing memory and loss. Through initial research, my exploration fits well with the ethos of the colour field painters, who aligned themselves with Jungian theory as well as Sartre and other existential philosophers (there is a long list that goes much further!). I'm now exploring colour and meaning, especially from a psychological perspective, and I'm really keen to find out as much as possible about neurological discoveries, as this gets to the 'nub' of how colour actually affects mood scientifically (not saying psychological experiments aren't scientific, just that it is incredibly hard to take the subjective use of language out of the equation…). I'm doing this research privately, so I have few academic resources available other than some very good books and sifting through the Internet. I'd be glad of any pointers… thanks
    Lesley

  57. I made a startling discovery about the dress illusion… if I point my head downwards so my eyes have to look up to stare at the screen, it looks black/blue; if I point my head upwards so my eyes have to point down, it is gold/white.

    This is when the room is dark, or with light from any direction, with one eye the other eye or both, large picture or small picture, dark adaptation or not, and was still a steady effect the next day.

    How can angle affect color perception so much??? It seems MY EYE IS IN THE EXACT SAME POSITION AND SHOULD PERCEIVE THE SAME THING. (I tried sitting on a pillow to see if the minor height difference mattered — it doesn’t)

    If I look at it from a regular angle (head pointing forward), it seems a tiny angle difference will still trigger the change, while holding my head straight and steady with the help of an object to lean on results in the image alternating between blue/black and gold/white every few seconds or minutes.

    Strangest illusion I know.
    Has anyone else tried looking at that illusion from various angles?

  58. Hi Prof Stephen

    I run the World Carrot Museum and have been studying/interpreting (amongst a myriad of other things!) the origin of orange carrots and, consequently, the first use of the term “orange” to describe that specific colour.
    Here is my piece about the etymology of the colour – http://www.carrotmuseum.co.uk/orange1.html#etymology

    Can you throw any further light on this subject, in particular any written evidence of when the term “orange” was first used for a colour? I can trace it back to 1512.

  59. I saw a bunch of your videos in someone else's MOOC.
    I can't find any license information.
    Is it legal to use your videos in other people's courses?

  60. I was involved in an online colour course.

    Here it is https://colour-theory-for-manufacturing.thinkific.com/courses/colour-made-simple

    It is mainly based on the work of Nick Harkness, but Vien Cheung and I made some contributions. It is definitely worth checking out.

    I am also launching a new PGT course at the University of Leeds next year – called MSc Colour – though this is not online, of course. It is a very exciting development.

    There is also my Patreon page, of course, where you can get a lot more detailed information, and it only costs a few pounds per month. You can see it here –
    https://www.patreon.com/colourchat

    I have also started a new YouTube channel about colour. You can see it here –
    https://youtu.be/DzHFSNo6v7k?si=61fGZryVz89fukQA
