Category Archives: advanced stuff

Colour Matching and Cones

Earlier today I posted something on Quora about how many colours there are. It's quite a long post. You can read it here. However, if you want the shortcut, the answer is 3-5 million. 🙂

However, I also linked to the post on LinkedIn and someone asked me a question about the relationship between colour-matching functions and cone sensitivities, so I thought I would make a new post today about that topic. I have used my LinkedIn message as the basis for this post but modified it a little to make it suitable for general consumption.

Here are two diagrams:

This shows the actual spectral sensitivities of the LMS cones in the human eye

The 1931 CIE XYZ colour-matching functions

It's a common mistake: people often get these two diagrams confused. The cone spectral sensitivities are the actual sensitivities of the cones in the eye. Although people often say that the eye responds mainly to red, green and blue light, it's not so simple. In 1931 the CIE measured the colour-matching functions. One of the reasons that they did this was that in 1931 we didn't actually know what the cone spectral sensitivities were; these were only known for sure in 1964. So in 1931 the CIE measured the amounts of three primary lights that an observer would mix together (additively) in order to match a single wavelength of light. And they did this for each wavelength. The second of the diagrams above shows the amounts of each of the primaries needed to match each wavelength in the spectrum. Originally, the CIE used three lights, or primaries (these were red, green and blue). However, they mathematically transformed their RGB colour-matching functions to create the XYZ colour-matching functions. These are sometimes also known as the CIE colour-matching functions or the CIE standard observer.

These are the original CIE RGB colour-matching functions
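If you like to see this in code, here is a minimal sketch of that transformation in Python, using the commonly quoted CIE 1931 RGB-to-XYZ matrix; the single set of r-bar, g-bar, b-bar values at the end is made up purely for illustration.

```python
import numpy as np

# The commonly quoted CIE 1931 transformation from RGB to XYZ. The same
# matrix that converts RGB tristimulus values to XYZ also converts the
# RGB colour-matching functions to the XYZ colour-matching functions.
M = (1 / 0.17697) * np.array([
    [0.49000, 0.31000, 0.20000],
    [0.17697, 0.81240, 0.01063],
    [0.00000, 0.01000, 0.99000],
])

def rgb_cmfs_to_xyz_cmfs(rgb_cmfs):
    """rgb_cmfs: array of shape (n_wavelengths, 3) holding r-bar, g-bar, b-bar."""
    return rgb_cmfs @ M.T

# Illustrative (made-up) values of r-bar, g-bar, b-bar at one wavelength.
print(rgb_cmfs_to_xyz_cmfs(np.array([[0.10, 0.05, 0.002]])))
```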

The point of these (XYZ) colour-matching functions is that they allow us to calculate the CIE tristimulus values XYZ of an object if we know the spectral reflectance of the object and the light it is viewed under. The XYZ values are the amounts of the three XYZ primaries that an observer would, on average, use to match that object viewed under that light source. If two samples have the same XYZ values then they are a visual match, because an observer would, on average, use the same amounts of the XYZ primaries to match each. And this was the whole point of the CIE system: to determine when two colour stimuli are a visual match. Had we known the cone spectral sensitivities in 1931, it's possible that history would have taken a different course and that instead of having CIE XYZ we would simply calculate the cone responses LMS. And we could say that if two samples have the same cone responses they are a visual match. But I guess we'll never know.
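For anyone who wants to see that calculation spelled out, here is a rough Python sketch of the tristimulus computation. All of the spectral data below are flat placeholders; in practice you would use the tabulated CIE colour-matching functions, a real illuminant spectral power distribution and a measured reflectance.

```python
import numpy as np

# A sketch of the tristimulus calculation at 10 nm intervals.
wavelengths = np.arange(400, 710, 10)
xbar = np.full(wavelengths.shape, 0.3)         # placeholder for x-bar(lambda)
ybar = np.full(wavelengths.shape, 0.5)         # placeholder for y-bar(lambda)
zbar = np.full(wavelengths.shape, 0.2)         # placeholder for z-bar(lambda)
illuminant = np.ones(wavelengths.shape)        # equal-energy illuminant
reflectance = np.full(wavelengths.shape, 0.4)  # a flat 40% reflecting grey

# Normalise so that Y = 100 for a perfect reflecting diffuser.
k = 100.0 / np.sum(illuminant * ybar)

X = k * np.sum(illuminant * reflectance * xbar)
Y = k * np.sum(illuminant * reflectance * ybar)
Z = k * np.sum(illuminant * reflectance * zbar)
print(round(X, 1), round(Y, 1), round(Z, 1))
```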

Now, if two samples have the same XYZ values then they will have the same cone responses. This is a bit technical, but it is true because the cone spectral sensitivities are a linear transform of the CIE XYZ colour-matching functions. They are also a linear transform of the CIE RGB colour-matching functions.
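Here is a quick sketch of what that means in practice. The matrix below is one commonly used approximation for converting XYZ to cone responses (the Hunt-Pointer-Estevez matrix); treat the exact numbers as an assumption, because the only point being made is that the mapping is linear, so equal XYZ values are guaranteed to give equal LMS values.

```python
import numpy as np

# One commonly used linear transform from XYZ to approximate cone responses
# (the Hunt-Pointer-Estevez matrix). The exact values are not the point here;
# what matters is that the transform is linear.
M_XYZ_TO_LMS = np.array([
    [ 0.38971, 0.68898, -0.07868],
    [-0.22981, 1.18340,  0.04641],
    [ 0.00000, 0.00000,  1.00000],
])

def xyz_to_lms(xyz):
    return M_XYZ_TO_LMS @ np.asarray(xyz)

# Two samples with the same (illustrative) tristimulus values.
sample_1 = [41.2, 21.3, 1.9]
sample_2 = [41.2, 21.3, 1.9]

# Equal XYZ necessarily means equal cone responses.
print(np.allclose(xyz_to_lms(sample_1), xyz_to_lms(sample_2)))  # True
```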

The colour-matching functions depend upon which primaries are used whereas the cone spectral sensitivities are more fundamental. Doesn't this make the colour-matching functions arbitrary? Not really. Although the shapes of the colour-matching functions depend upon the particular primaries used, the matching condition does not. If two samples generate the same cone responses then the observer would match them with the same amounts of the XYZ primaries and the same amounts of the RGB primaries.

On this page – https://en.wikipedia.org/wiki/CIE_1931_color_space – you can see the cone spectral sensitivities and the RGB and XYZ colour matching functions.

Quora is alive and kicking

I have been posting here on Colourchat for a long time. I think it is nearly 10 years but it could be longer. Time flies. However, I just wanted to let you know that I also post on a website called Quora. Quora is a site where people post questions and other people can answer them. It used to be completely free, although Quora has recently introduced a model where people can put their answers behind a paywall. However, my answers are free and I just wanted to let you know that there is a lot of stuff there that might interest you. I have only been posting there for about 3 years but my answers have received over 2 million views (whereas Colourchat has had less than 500,000 views over a much longer time period).

However, Quora is a little bit tongue-in-cheek. Not all of the answers are serious, though most of mine are. I still reserve my best content for Colourchat where I can give a lot more detail. I also have a Patreon page, where I do charge a small fee (because that's how Patreon works), on which I am curating my most detailed content; this includes quite a few videos that are unique to the Patreon site.

Anyway, if you want to have a look at Quora you could take a look at this post I made today, which answered the question of why a mixture of red and blue light doesn't generate a hue that is between the two ends of the spectrum. I hope you like my answer. What I focus on – and what I am striving towards, though perhaps not always achieving – is answering these questions in a way that is maximally informative but at the same time doesn't require an understanding of maths, for example, so that it is maximally inclusive.

Analysing CIELAB values

Imagine you have a standard (std) and a batch (btx) and you have the CIELAB values of each. How can you analyse these numbers, in particular, the differences? This post explains how to do it.

Let’s start with a real example.

Now what can we say about these two samples? Well, we can calculate the colour difference. If we want to calculate the CIELAB colour difference we can simply calculate the differences in each of the three dimensions, square them, add them and take the square root. Thus DL* = 2, Da* = 10, and Db* = 6. So the CIELAB colour difference is sqrt(4 + 100 + 36) = sqrt(140) = 11.8. This is quite large. Of course, we might prefer to use some other measure of colour difference such as CMC or CIEDE2000. But let's stick with CIELAB.
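In code the calculation is just a couple of lines; here is a sketch using the difference values from the example.

```python
import math

# CIELAB differences from the example: DL* = 2, Da* = 10, Db* = 6.
dL, da, db = 2.0, 10.0, 6.0

delta_E_ab = math.sqrt(dL ** 2 + da ** 2 + db ** 2)
print(round(delta_E_ab, 1))  # 11.8
```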

The next thing is to look at the individual differences. Since a* is redness we might conclude that the btx is redder than the std (the btx has an a* of 36 whereas for the standard it is only 26). And since b* is yellowness we might conclude that the btx is yellower than the std (the btx has a b* of 9 whereas for the standard it is only 3). However, it is really confusing to look at the data this way. Perceptually, we might be interested in whether there is a chroma difference (is the batch weaker or stronger?) and whether there is a hue difference. Let’s plot these samples in the a*-b* plane of CIELAB.

As you can see, the btx has a larger a* value and a larger b* value than the std. However, we cannot deduce anything about hue or hue differences just by looking at a* or b* on their own. Hue is an angular term in CIELAB space.
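Here is a short Python sketch that converts the a* and b* values from the example into hue angle and chroma, the polar coordinates discussed further below.

```python
import math

# a* and b* values for the standard and the batch from the example above.
samples = {"std": (26.0, 3.0), "btx": (36.0, 9.0)}

for name, (a, b) in samples.items():
    hue = math.degrees(math.atan2(b, a)) % 360  # hue angle h_ab in degrees
    chroma = math.hypot(a, b)                   # chroma C*_ab
    print(f"{name}: hue = {hue:.1f} degrees, chroma = {chroma:.1f}")

# std: hue = 6.6 degrees, chroma = 26.2
# btx: hue = 14.0 degrees, chroma = 37.1
```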

As you can see from the above figure, the hue of the standard is 6.6 degrees and the hue of the btx is 14.0 degrees. The CIE method to calculate hue descriptors is to rotate from one sample towards the other and note which axes we cross. So if we start off with the btx we move clockwise towards the std; we keep going and we cross the red axis and then (if we keep going) we cross the blue axis. So we would conclude that the std is redder (bluer) than the btx. According to CIE guidelines, one of these descriptors makes sense and the other doesn't.

In this case, I would say that the std is bluer than the btx. In hue terms it doesn’t really make sense to say that the std is redder than the btx when they look quite red anyway. And we would say that the btx is yellower (greener) than the std.

In terms of chroma we calculate the distance from the centre for each of the colours. As you can see from the diagrams, the batch is much further out from the centre than the std.

So, in conclusion, we would say that the btx is lighter, stronger and yellower than the std. The std is darker, weaker and bluer than the btx.

The point of this is to highlight that we cannot make decisions about hue and chroma by looking at a* or b* on its own. We need to look at both a* and b* together. Better still is to calculate the polar coordinates, hue and chroma. These are generally more helpful than the cartesian coordinates, a* and b*. In my experience, people have a reluctance to think in terms of polar coordinates and I think this is because they have much greater experience at school with cartesian coordinates. Everyone spends their schooldays looking at cartesian plots of x vs. y, don't they? But getting to grips with polar coordinates in colour science will really pay off in the long run.

Notice that just because the batch has a larger a* value than the std, this does not make the batch redder. In fact, as can be seen from the first diagram, it is the std that is closer to the a* (red) axis than the btx, despite having a smaller a* value.

What type of colour information do designers want?

In this study we were interested in which type of colour information designers want. We carried out surveys and interviews (with senior designers and brand managers) and the results are summarised below:

We used a card-sorting technique in our interviews to ensure that the participants knew what each of our terms meant.

We found that colour meaning was one of the aspects of colour that designers would like to be able to put their finger on; it was more important than colour trend information, in fact! We also looked at some existing colour tools and found that none of them really offered the most important information that designers and brand managers want to know about colour. What would be really cool would be a tool that provided accurate information about the meanings that colours have in different cultures and perhaps in different contexts.

The full paper will shortly be published in Color Research and Application.

Won S & Westland S, 2018. Requirements capture for colour information for design professionals, Color Research and Application.

The full publication details will be added here when they are available. Meanwhile, you can read it here.

 

Digitizing Traditional Cultural Designs

A bojagi is a traditional Korean wrapping cloth.

There is currently interest in re-using traditional and cultural designs in modern commercial applications. The bojagi is one of these traditional designs that could be reinvented and hence reinvigorated. But how can a designer create bojagi patterns for use in new digital design?

Working with Meong Jin Shin I developed a software tool that can create a wide range of different bojagi. We identified 8 different classes of traditional bojagi as shown below:

We then created a software tool that would allow a user to create new bojagi which would have the same visual characteristics as one of these 8 traditional classes.

We had some designers in Korea evaluate the tool and they were quite impressed. Although in this study we worked with bojagi, in fact we were interested in exploring the general method of using digital tools such as this one to allow users to explore traditional designs and to use them in their contemporary design work. The ideas could easily be extended to cover other traditional designs such as tartan. The software could also be added to a package such as Adobe Photoshop as a plug-in.

You can read the full paper that we published here.

Shin MJ & Westland S, 2017. Digitizing traditional cultural designs, The Design Journal, 20 (5), 639-658.

Does context affect colour meaning?

One of the reasons that colour is such a powerful and important property is that it conveys information. Colour imparts meaning. If you see a big red button you may understand that something important or dramatic may happen if you press it. If someone is wearing bright yellow clothes it might imply something about their personality. Take a walk into a toy store and notice the swathes of pink in the girls’ section (though note that I don’t imply that this is a good thing; indeed, I would refer you to the pink stinks campaign in order that you may become a right-thinking person). But it is clear that the manufacturers of the toys believe that the colour pink will indicate that these are toys for girls and that its use may even make girls want to have these toys. If you see two washing-up liquids and one is green and one is yellow you might think that they would smell of apples and lemons respectively before you even open them! Colour sells. And part of the reason that colour sells is that it is informative. Colours have meanings.

But does colour per se have meaning or does colour only have meaning when it is an attribute of a product? The colour red on an emergency stop button may have one meaning but the colour red on the soles of Louboutin shoes may have an altogether different meaning. And, of course, colours mean one thing in one culture but another in a different culture; black is commonly associated with death in the West but in China and some other countries in Asia death is more commonly associated with white. Nevertheless, I do believe that colour per se, that is colour in an abstract sense, does have meaning and there are a number of studies out there that tend to support me (though some social scientists, in particular, would disagree).

What I mean by this is that if we take a culture, such as the UK, then a colour such as red will be associated with various ideas and concepts to varying degrees of strength. Red may take on different meanings when applied to different products (that is, in context). But is there any relationship between the abstract colour meaning and the product colour meaning? This is the question that Seahwa Won (who was a PhD student working with me) and I asked each other that led to a piece of work and an academic paper.

If there is no relationship between abstract colour meanings and product colour meanings then it might mean that there is little practical or commercial value in studying abstract colour meanings (though it may still be worthy of study). On the other hand, if there is a relationship between abstract colour meanings and product colour meanings then knowing the former may help us to predict the latter in a wide range of circumstances. To carry out our study we used scaling (I have blogged about some aspects of scaling before) where we try to quantify the perceptual response of participants to physical stimuli. For example, we show people a colour patch on a display screen and below this there is a slider bar which allows the participants to indicate whether the colour is warm, for example, or cool. We do this for lots of colours and lots of participants (nobody said colour science was easy!!) and then we can average these responses and place all the colours along a warm-cool scale. When we do this, for example, we find that participants think red is much warmer than blue. However, what Seahwa and I also did was to repeat this type of experiment with different colour products rather than simple colour patches. Would participants place a red toilet roll at the same point on the warm-cool scale as the red colour in an abstract sense? If they would, then we can conclude that abstract colour meanings and product colour meanings are related.
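If it helps to see the bare bones of the method, here is a tiny sketch of the averaging step; the slider responses below are made up purely for illustration.

```python
# Illustrative slider responses (0 = cool, 100 = warm) for two colour
# patches from five participants; the numbers are invented.
ratings = {
    "red":  [82, 75, 90, 68, 88],
    "blue": [15, 22, 10, 30, 18],
}

# Average across participants to place each colour on the warm-cool scale.
scale = {colour: sum(r) / len(r) for colour, r in ratings.items()}
print(scale)  # {'red': 80.6, 'blue': 19.0}
```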

We did this for quite a few different scales (warm-cool, expensive-inexpensive, modern-traditional, etc.) and for a few different colours. The figure below shows the results when we explored the masculine-feminine scale. Look at the left-hand part first, where it says chip along the bottom. Chip indicates the abstract colour meanings (for example, when participants view a simple square or chip of colour). Note that participants scale beige, red and yellow as being feminine colours whereas black, blue and green are more masculine colours. Now look at the right-hand part of the figure, where it says crisps (in the UK a crisp is something you buy in a bag to eat; Americans may call these potato chips). When we showed crisp packets that were differently coloured, the masculine-feminine scale values were almost the same as for the abstract colours themselves. We found strong relationships between abstract colour meanings and product colour meanings more often than not.

Our findings are broadly compatible with an earlier study by Taft in 1996, who found that there was no significant effect of context on colour meaning in the majority of cases. We did find some effects of context though. For example, black-coloured medicine was perceived as being more feminine than the abstract colour black itself.

We published this paper in 2016 in the journal Color Research and Application and you can read the paper in full here.

Won S & Westland S, 2017. Colour meaning in context, Color Research and Application, 42 (4), 450-459.

Consumer Colour Preferences

How does your personal colour preference affect the colour of the things that you buy?
It is well known that people prefer some colours more than others. Personally, I much prefer red to blue. But I am probably in a minority. Many studies have shown that blue is the most popular hue with yellow being one of the least popular hues. But this is when we think of colour in an abstract sense. What about when colour is applied to a product: a pair of trousers, a toothbrush, a fidget spinner? Well, my favourite colour is red but I have never owned a pair of red trousers. I tend to buy blue or brown trousers even though I don't really like the colour blue in the abstract sense. But are there products where, if we were presented with a choice in colour, we would tend to buy the colour product that matches our abstract colour preference? This is the question that I set out to answer two years ago with my colleague Meong Jin Shin. We carried out an experiment over the internet where we presented people with a choice of products in different colours and asked which they would buy given the choice. They were presented with images a little like the one below:

After we asked participants which product they would buy for a number of different products, we then asked them what their favourite colour was in an abstract sense (we showed a number of colour patches on the screen and asked them to click on the one they liked best). Our hypothesis was that for some products participants would tend to select products that closely matched their most preferred abstract colours but that for some other products we would not find this.

This is exactly what we found. For some products, such as bodywash, we found that people tended to prefer a particular colour for the product (in this case, blue). The figure below shows the results for bodywash. The rows represent the colour of the products and the size of the circle in each row represents the proportion of people who generally preferred either red, orange, yellow, green, blue or purple that selected that product colour. As you can see below the majority of people chose a blue bodywash no matter what their abstract colour preference was.

However, for the toothbrush product a very different picture emerged. As shown below, people who liked red generally tended to select a red toothbrush and people who preferred purple tended to select a purple toothbrush. For example, 41% of people who preferred green selected a green toothbrush.


So sometimes people's personal colour preference could be used to predict which colour product they would choose to buy given the choice (and sometimes it couldn't be). How could this be useful? Well, if we could predict for which products this is true then it would suggest that a multi-colour marketing strategy could be appropriate. Also, imagine you are in a supermarket and you are presented with an offer – 50% off toothbrushes today – and alongside this you see a red toothbrush. If red was your favourite colour then there might just be a little more chance you would accept the proposition. If a supermarket could predict a consumer's personal colour preference …. [more of this in a later post].

This paper was published in 2015 in the Journal of the International Colour Association. You can read the full paper for free here.

Westland S & Shin M-J, 2015. The Relationship between Consumer Colour Preferences and Product-Colour Choices, Journal of the International Colour Association, 14, 47-56.

colour physics 101


Download my colour physics FAQ e-book for the Kindle here.

Also available as a physical book from Amazon.

  • What is colour?
  • How does colour vision work?
  • Why is the sky blue?
  • What is the colour spectrum?

The answers to these and many other related questions about colour physics are each provided in a short and easy-to-understand form. Will delight and entertain colour professionals and curious members of the public.

accurate colour on a smartphone or tablet

Electronic displays can vary in their characteristics. Although almost all are based on RGB, the RGB primaries in the display can vary greatly from one manufacturer to another. Colour management is the process of making adjustments to an image so that colour fidelity is preserved. In conventional displays – desktops and laptops – this is achieved through ICC colour profiles. A colour profile stores information about the colours that particular RGB values produce on a particular device. So to make a display profile you normally need to display some colours on the screen and measure the CIE XYZ values of those colours; you then have the RGB values you used and the XYZ values that resulted. The profiling software can use these corresponding RGB and XYZ values to build a colour profile so that the colour-management engine knows how to adjust the RGB values of an image so that the colours are displayed properly. Building a profile often requires specialist colour-measurement equipment – though this can often be quite inexpensive now. If you are using your desktop or laptop display and you have never built a profile then you are probably using the default profile that was provided when your display was shipped. The default profile will ensure some level of colour fidelity but particular settings (such as the colour temperature or the gamma) may not be adequately accounted for. If you want accurate colour then you should learn about colour profiling.
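To make the idea concrete, here is a rough Python sketch of display characterisation using a deliberately simple gamma-plus-matrix model. Real profiling software fits richer models and writes an ICC profile; the 'measurements' below are synthetic, generated from an assumed sRGB-like display, just to show how corresponding RGB and XYZ values can be turned into a usable model.

```python
import numpy as np

# Synthetic "measurements": pretend the display behaves like an sRGB-ish
# device with gamma 2.2 and the primary matrix below (assumed, not measured).
true_gamma = 2.2
true_M = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])

rgb_patches = np.random.default_rng(0).uniform(0, 1, size=(24, 3))  # test chart
xyz_measured = (rgb_patches ** true_gamma) @ true_M.T

# Characterisation: assume the gamma is known (or estimated separately) and
# fit the 3x3 matrix by least squares from the corresponding RGB/XYZ pairs.
rgb_linear = rgb_patches ** true_gamma
M_fit, *_ = np.linalg.lstsq(rgb_linear, xyz_measured, rcond=None)

def display_rgb_to_xyz(rgb):
    """Predict the XYZ the display will produce for a given RGB (0-1)."""
    return (np.asarray(rgb) ** true_gamma) @ M_fit

print(np.round(display_rgb_to_xyz([1.0, 1.0, 1.0]), 3))  # approximate white point
```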

It all sounds simple except for the fact that ICC colour profiles are not supported by iOS or Android operating systems on mobile devices. I find this really surprising but that’s how it is for now. Maybe it will be different in the future.

This means that ensuring colour fidelity on a smartphone or tablet is not so straightforward. So what can you do?

Well, there are two commercial solutions to this problem that I am aware of: X-Rite's ColorTrue and Datacolor's SpyderGallery. ColorTrue and SpyderGallery are apps that will use a colour profile and provide good colour fidelity. These are great solutions. Perhaps the only drawback is that the colour correction only applies to images that are viewed from within the app. Having said that, they allow your standard photo-album photos to be accessed – but the correction would not apply, for example, to images viewed using your web browser. This is why a proper system implemented at the level of the operating system would be better, in my opinion.

There are two alternatives. The first would be to implement your own colour correction and modify the images offline before sending them to the device. This would not suit everyone – the average consumer who just wanted to look at their photos for example. But it is what I typically do here in the lab if I want to display some accurate colour images on a tablet. But if you were a company and you wanted to display images of some products for example – it might be a reasonable approach. It has the advantage that the colour correction will work when viewed in any app on the device because the colour correction has been applied at the image level rather than the app level. But it does mean you need to do this separately for each device and keep track of which images are paired to each device. This is ok if you have one or a small number of devices but maybe not so good if you have hundreds of devices.
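As a sketch of what that offline correction might look like – assuming the source images are roughly sRGB and the target device has been characterised with a simple gamma-plus-matrix model – something like the following would do the job. The device matrix and gamma here are hypothetical numbers, not real measurements.

```python
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
SRGB_GAMMA = 2.2  # a crude approximation of the sRGB transfer curve

M_DEVICE_TO_XYZ = np.array([[0.45, 0.33, 0.17],   # hypothetical device primaries
                            [0.24, 0.69, 0.07],
                            [0.02, 0.10, 0.95]])
DEVICE_GAMMA = 2.4                                # hypothetical device gamma

def correct_image(srgb_image):
    """Map an sRGB image (floats 0-1, shape h x w x 3) to device RGB values
    that should reproduce approximately the same XYZ on the target device."""
    xyz = (srgb_image ** SRGB_GAMMA) @ M_SRGB_TO_XYZ.T
    device_linear = xyz @ np.linalg.inv(M_DEVICE_TO_XYZ).T
    device_linear = np.clip(device_linear, 0, 1)  # out-of-gamut colours just clip
    return device_linear ** (1 / DEVICE_GAMMA)

image = np.random.default_rng(1).uniform(0, 1, size=(4, 4, 3))  # dummy image
print(correct_image(image).shape)  # (4, 4, 3)
```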

The second alternative would be to build your own app. If you want to do things with your images that you cannot do in ColorTrue or SpyderGallery or if you have lots of devices and you can’t be bothered to manually convert the images for each device, then you could install your own app that implements a colour profile and then does whatever else you want it to do.

Incomplete pair comparison

One of my big academic interests is scaling perceptual phenomena. That is, we take some physical stimuli (for example, a set of sounds of varying intensity/volume) and then we want to know how loud they are perceived to be by people. This allows us to build a relationship between the physical stimulus (in this case intensity) and the perceptual stimulus (in this case loudness). The same idea could be used to scale largeness, smallness, colourfulness, whiteness, lightness, heaviness, sweetness etc. It’s not always a -ness. But it usually is.

There are a great many techniques to scale perception. You can just ask people, for example, to assign a number: you play a sound and ask them to rate how loud it is on a scale, say, from 0 to 100. This is called Magnitude Estimation (ME). It's a perfectly good technique but it has limitations and one of these is that it can be quite difficult for the participant. Say the first stimulus seems really loud and they assign it a loudness of 90; if it then turns out that all the subsequent stimuli are louder, all their estimates will be squeezed into the 90-100 range, which is not ideal. Consequently, in the ME technique we often have so-called anchors – that is, example stimuli at each end of the scale.

An alternative technique is called paired comparison (PC). In this we might have, for example, five stimuli A, B, C, D and E and we present them in pairs and ask the participants which one is louder (or whiter or yellower, etc.). The total number of paired comparisons is 10 in this case, which is quite manageable. From the results of these paired comparisons it is possible to estimate a scale value for each of the stimuli, where the scale values form an interval scale of loudness (or whiteness or yellowness, etc.). This is a really nice technique and there are quite a few papers that claim that PC is more reliable than ME, for example. However, when the number of stimuli is large the number of pair comparisons becomes huge and the task is not practicable. When this happens it is possible to undertake so-called incomplete pair comparison where we only present some of the possible pairs to the participants. The question is, however, what proportion of the pairs should be presented for the PC experiment to be reliable?
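For anyone curious about how scale values come out of paired-comparison data, here is a minimal sketch of Thurstone's Case V scaling for the five-stimulus example; the win counts are invented for illustration and the sketch assumes scipy is available.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical results for 5 stimuli (A-E) and 20 observers: entry [i, j]
# is the number of times stimulus i was judged louder than stimulus j.
wins = np.array([
    [ 0,  4,  3,  2,  1],
    [16,  0,  6,  4,  2],
    [17, 14,  0,  8,  5],
    [18, 16, 12,  0,  7],
    [19, 18, 15, 13,  0],
])
n_observers = 20

# Proportion of times i beat j; clip to avoid infinite z-scores at 0 or 1.
p = np.clip(wins / n_observers, 0.01, 0.99)
np.fill_diagonal(p, 0.5)

# Thurstone Case V: convert proportions to z-scores and average across each row.
z = norm.ppf(p)
scale = z.mean(axis=1)
print(np.round(scale - scale.min(), 2))  # interval scale, anchored at zero
```

That is what a complete PC analysis looks like; the question above is how much of the comparison matrix you can leave out and still end up with much the same scale values.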

This was the question that Yuan Li and I asked each other during her doctoral research. We undertook a large-scale simulation of a PC experiment. I won’t go into the details here. The method and results have just been published in the Journal of Imaging Science and Technology (JIST). You can see the paper here.

However, I show below the key table from the research which I think might be of interest to other people who are undertaking, or planning to undertake, an incomplete PC experiment.

[Table: the percentage of pair comparisons required, for different numbers of stimuli (columns) and observers (rows)]

This table shows the number of stimuli that are being compared along the top. Down the left-hand side are the numbers of observers taking part. The figure in the corresponding row and column shows the per cent of pair comparisons that need to be carried out to get robust results that would be similar to those you would get if you did the full PC experiment. So, for example, if you have 20 samples and 15 participants then you need to do half of the possible comparisons. For 20 samples there are 190 comparisons so you would need to do 95 of them (which could be selected randomly).
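In practice, selecting the reduced set of pairs can be as simple as this sketch, using the 20-stimulus, 50 per cent example from the table.

```python
import random
from itertools import combinations

n_stimuli = 20
proportion = 0.5  # read from the table for 20 stimuli and 15 observers

all_pairs = list(combinations(range(n_stimuli), 2))  # 190 possible pairs
n_needed = round(proportion * len(all_pairs))        # 95 pairs

random.seed(1)  # fixed seed only so the example is reproducible
selected_pairs = random.sample(all_pairs, n_needed)
print(len(all_pairs), n_needed)  # 190 95
```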

I should point out that there is a caveat that needs to be considered. This work is only valid if the observers can be considered to be stochastically identical. If we ask people to rate samples for loudness, or whiteness, or heaviness, for example, I think this assumption is justified. However, if we were asking people to scale how beautiful people's faces were – an experiment reminiscent of the early Facebook experiment by Mark Zuckerberg – then observers could differ wildly in their judgements. One participant may rate as most beautiful a face that another participant rates as the least beautiful. Because of the assumptions that we made in our modelling we cannot predict the proportion of pair comparisons that would be needed in a case like this. We are thinking about it though.