Tag Archives: colour

measure colour with your smartphone

[Image: the Node sensor]

This looks interesting. Node is a way to add sensors to your iOS device. It allows you to measure all sorts of things, including colour if you have the Node + Chroma combination. The Node costs about £100 and the additional sensors cost about £50 each. I am not sure how much the Chroma sensor costs.

You can find further details here – http://variableinc.com/chroma-contact/

accurate colour on a smartphone or tablet

Electronic displays can vary in their characteristics. Although almost all are based on RGB, the RGB primaries in the display can vary greatly from one manufacturer to another. Colour management is the process of making adjustments to an image so that colour fidelity is preserved. On conventional displays – desktops and laptops – this is achieved through ICC colour profiles. A colour profile stores information about the colours that a particular device produces for given RGB values. So to make a display profile you normally display some colours on the screen and measure the CIE XYZ values of those colours; you then have the RGB values you used and the XYZ values that resulted. The profiling software can use these corresponding RGB and XYZ values to build a colour profile, so that the colour-management engine knows how to adjust the RGB values of an image for the colours to be displayed properly. Building a profile often requires specialist colour-measurement equipment – though this can be quite inexpensive now. If you are using a desktop or laptop display and you have never built a profile then you are probably using the default profile that was provided when your display was shipped. The default profile will ensure some level of colour fidelity, but particular settings (such as the colour temperature or the gamma) may not be adequately accounted for. If you want accurate colour then you should learn about colour profiling.
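If you are curious what the profiling software does with those corresponding RGB and XYZ values, here is a very rough sketch in Python. It is not how commercial profiling packages work internally – they build much more sophisticated models – and the measurement numbers below are made up, but it shows the basic idea of fitting a simple gamma-plus-matrix model to measured patches.

```python
import numpy as np

# Hypothetical measurements: the RGB values sent to the display (0-1) and the
# CIE XYZ values measured from the screen for each of those patches.
rgb_patches = np.array([
    [1.0, 0.0, 0.0],   # full red
    [0.0, 1.0, 0.0],   # full green
    [0.0, 0.0, 1.0],   # full blue
    [1.0, 1.0, 1.0],   # white
    [0.5, 0.5, 0.5],   # mid grey
])
xyz_measured = np.array([
    [41.2,  21.3,   1.9],
    [35.8,  71.5,  11.9],
    [18.0,   7.2,  95.0],
    [95.0, 100.0, 108.8],
    [20.0,  21.0,  22.9],
])

# A very simple display model: linearise the RGB values with a gamma, then map
# linear RGB to XYZ with a 3x3 matrix fitted by least squares.
gamma = 2.2
rgb_linear = rgb_patches ** gamma
matrix, _, _, _ = np.linalg.lstsq(rgb_linear, xyz_measured, rcond=None)

def predict_xyz(rgb):
    """Predict the XYZ the display will produce for a given RGB triplet."""
    return (np.asarray(rgb) ** gamma) @ matrix

print(predict_xyz([1.0, 1.0, 1.0]))  # should be close to the measured white
```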

It all sounds simple except for the fact that ICC colour profiles are not supported by iOS or Android operating systems on mobile devices. I find this really surprising but that’s how it is for now. Maybe it will be different in the future.

This means that ensuring colour fidelity on a smartphone or tablet is not so straightforward. So what can you do?

Well, there are two commercial solutions to this problem that I am aware of: X-Rite's ColorTrue and Datacolor's SpyderGallery. ColorTrue and SpyderGallery are apps that use a colour profile to provide good colour fidelity. These are great solutions. Perhaps the only drawback is that the colour correction only applies to images viewed from within the app. Having said that, they allow your standard photo-album photos to be accessed – but the correction would not apply, for example, to images viewed in your web browser. This is why a proper system implemented at the level of the operating system would be better, in my opinion.

There are two alternatives. The first is to implement your own colour correction and modify the images offline before sending them to the device. This would not suit everyone – the average consumer who just wants to look at their photos, for example – but it is what I typically do here in the lab if I want to display some accurate colour images on a tablet. And if you were a company wanting to display images of some products, for example, it might be a reasonable approach. It has the advantage that the colour correction will work in any app on the device, because the correction has been applied at the image level rather than the app level. But it does mean you need to do this separately for each device and keep track of which images are paired with each device. This is OK if you have one or a small number of devices but maybe not so good if you have hundreds of devices.
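To give a flavour of what that offline correction might look like, here is a minimal sketch in Python (using numpy and Pillow). It assumes you have already derived a simple 3x3 correction matrix and a gamma for the particular tablet – the matrix, the gamma and the file names below are all made up for illustration.

```python
import numpy as np
from PIL import Image

# Hypothetical 3x3 correction matrix derived for one particular tablet
# (the identity matrix would mean "no correction needed").
correction = np.array([
    [0.95, 0.03, 0.02],
    [0.02, 0.97, 0.01],
    [0.01, 0.02, 0.97],
])
gamma = 2.2  # assumed display gamma

img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float64) / 255.0

# Linearise, apply the device-specific matrix to every pixel, then re-encode.
linear = img ** gamma
corrected = np.clip(linear @ correction.T, 0.0, 1.0) ** (1.0 / gamma)

Image.fromarray((corrected * 255).round().astype(np.uint8)).save("photo_for_tablet.jpg")
```

You would then repeat this for each device – which is exactly the book-keeping problem mentioned above.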

The second alternative is to build your own app. If you want to do things with your images that you cannot do in ColorTrue or SpyderGallery, or if you have lots of devices and you can't be bothered to manually convert the images for each device, then you could write your own app that applies a colour profile and then does whatever else you want it to do.

curved displays are the future

[Image: the Samsung launch event]

Yesterday I spoke at an event to launch Samsung's latest curved-screen displays. The technology is really gorgeous and everyone who attended wanted one of the new displays after seeing them.

I am convinced that curved screens will become ever more popular in the future because not only do they look good but they offer serious advantages for users who undertake intensive tasks – the sort of tasks that need a large desktop display rather than a mobile device. When it comes to desktop displays it is really quite simple – bigger is better.

Many people – and I am one of them – are what is known as ‘double screeners’. I have two screens attached to my desktop and my operating system is spread seamlessly across them because I wanted more screen space to work in. I recently carried out a survey – you can find more details here – which showed that 38% of British office workers are already using two or more screens attached to their desktop computers.

Of course, in an ideal world one very large screen would be better than two smaller screens. But there is a problem with most flat-screen technology: the LED/LCD pixels emit light straight out but emit a lot less light at an angle to the screen. This means that when you look at a large flat screen, the light reaching your eye from the edges of the screen is a lot less. Not only that but, because you are looking at the screen at an angle, text and other fine details can be distorted at the edges. Curved displays get around this problem and I am hoping to replace my two flat screens soon with a single Samsung curved display.

With a curved display the distance from the eye to the screen is the same across the whole display and the angle of view is also constant. Not only does this solve the colour and acuity problems I just mentioned, but it means that users need fewer eye and neck movements. Given that many of us spend longer using a display than we do actually sleeping, this could have a big effect on user well-being.
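A little back-of-the-envelope geometry shows the point. The numbers below are made up – a 90 cm wide flat screen viewed from 60 cm – but the calculation makes clear how much further away, and how far off-axis, the edges of a big flat screen are; a screen curved with a radius equal to the viewing distance keeps both roughly constant.

```python
import math

# Made-up numbers: a 90 cm wide flat display viewed from 60 cm, eye level
# with the centre of the screen.
viewing_distance = 60.0   # cm, eye to the centre of the screen
half_width = 45.0         # cm, centre to the edge of the screen

edge_distance = math.hypot(viewing_distance, half_width)
off_axis_angle = math.degrees(math.atan2(half_width, viewing_distance))

print(f"distance to the centre: {viewing_distance:.0f} cm")
print(f"distance to the edge:   {edge_distance:.0f} cm")                 # about 75 cm
print(f"angle at the edge:      {off_axis_angle:.0f} degrees off-axis")  # about 37 degrees
```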

Our survey also showed that about 60% of office workers think it is important that the office technology they use looks good. This can help to motivate them and help them to feel good about themselves. The new Samsung curved displays certainly will satisfy these people.

What colour is the sky on Mars?

[Images: NASA photo of a Martian landscape – the original (top) and the colour-corrected version (bottom)]

The camera never lies. Or does it? Recently I had to take a photo for a medical case and before submitting it I had to sign to say that the photo had not been modified. I did this – but it was ridiculous of course. Many people have the idea that the camera faithfully captures what the scene looks like and that, unless we intentionally manipulate the images (in Photoshop, for example), we have captured the truth. Nothing could be further from the truth – as the recent image of #TheDress showed.

The top photo above was taken and released by NASA in 1976 and shows a Martian landscape. The sky is blue. However, at the time, Carl Sagan said “Despite the impression on these images, the sky is not blue…The sky is in fact pink.”

You see, the original image had not been colour corrected. Colour correction is a process that takes place on most cameras these days without the user being aware of it, but in 1976 it was not automatic. The process can compensate for the spectral sensitivities of the camera sensors (which may differ from one camera to another) or for the colour of the light source. The second picture (above) shows the colour-corrected image. Some people are now arguing, however, that the amount of colour correction applied by NASA is wrong and that the sky should not be as red as it appears in the second photograph. For the full story, including some other nice images of Mars, see here.
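To be clear, I have no idea exactly what processing NASA applied, but the general idea of correcting for the colour of the light source can be illustrated with a very simple von Kries-style white balance. The Python sketch below rescales each channel so that a known neutral reference comes out grey – the file names and the reference values are made up.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("mars_raw.jpg").convert("RGB"), dtype=np.float64)

# Suppose we know (or estimate) the RGB the camera records for a neutral
# reference patch under the scene illumination; these numbers are made up.
neutral_rgb = np.array([180.0, 140.0, 110.0])

# Von Kries-style scaling: rescale each channel so the reference comes out grey.
gains = neutral_rgb.mean() / neutral_rgb
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)

Image.fromarray(balanced).save("mars_balanced.jpg")
```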

get it right in black and white

A student was asking me today about the use of colour in a design (that showed text on a background) and one of the things I said to her was "Get it right in black and white". Prof Lindsay MacDonald taught me this. The idea is to make sure there is contrast in lightness and that you are not relying on a contrast in hue for people to read the text. So, for example, if you must put red text on a green background – I don't advise this particularly, but if you do – then make sure it is a dark green and a light red, or a light green and a dark red.

[Images: two examples of red text on a green background – one with a large lightness difference, one without]

In the above two images, one is easier to read than the other. In both cases the hues of the red and green are the same. But in one case there is a large lightness difference and in the other there is not. If you were to print these out in black and white, one would be more readable than the other. That is what "Get it right in black and white" means. It's sensible if for no other reason than it increases the chance that someone who is colour blind (most are red-green colour blind) would be able to read it. Of course, maybe red and green would not be great colours to use in the first place – but that is a longer story.

I have come across a really lovely interactive website that helps with this. It is called colorable. It allows you to enter two colours (in hex format) – or use slider bars to control hue, lightness and saturation – and then it gives you a WCAG contrast ratio and even a pass/fail decision about whether you meet the minimum guidelines. Please try it – it’s great fun.
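If you want to see how that WCAG contrast ratio is arrived at, it is easy to compute yourself. Here is a short Python sketch following the WCAG 2 definition (relative luminance computed from the sRGB values, then the ratio (L1 + 0.05)/(L2 + 0.05)); the two example colour pairs at the end are just ones I made up to echo the red-on-green point above.

```python
def srgb_channel_to_linear(value_8bit):
    """Convert an 8-bit sRGB channel value to linear light (WCAG 2 formula)."""
    c = value_8bit / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_colour):
    """Relative luminance of a colour given as a hex string like '#ff0000'."""
    h = hex_colour.lstrip("#")
    r, g, b = (srgb_channel_to_linear(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_1, colour_2):
    """WCAG contrast ratio between two hex colours (from 1:1 up to 21:1)."""
    lighter, darker = sorted((relative_luminance(colour_1),
                              relative_luminance(colour_2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dark red on light green versus a red and green of similar lightness.
print(contrast_ratio("#8b0000", "#b6f0b6"))  # large lightness difference: high ratio
print(contrast_ratio("#ff4040", "#40a040"))  # similar lightness: ratio close to 1
```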

how colour vision works

[Image: spectral reflectance of a typical yellow object]

Really super article by Ana Swanson in the Washington Post about colour vision and how it works. As she explains, it is not really correct to think of long-wavelength visible light as being red. It is better, as Newton knew of course, to say that the long-wavelength light has the ability to cause the sensation of redness in us. She gives a nice visual example of how the spectrum looks to a dog, something (by coincidence) that I was talking about in a lecture only last week. As she says:

Is what I see as “blue” really the same thing as what you see as “blue”? Or have we both learned the same name for something that looks different to each of us?

Her article is really worth reading.

There is just one thing I take issue with. It may be nit-picking. But she says "A green leaf, for example, reflects green wavelengths of light and absorbs everything else."

My image, at the top of this post, shows the reflectance of a typical yellow object. At each wavelength the reflectance is between 0 and 100 per cent. But notice that it is not zero at any wavelength in the range shown (400–700 nm). That means that the object reflects light at every wavelength. And it is not 100 per cent at any wavelength, meaning that it also absorbs to some extent at every wavelength. It's just that it absorbs more at the shorter wavelengths than at the longer wavelengths, and reflects more at the longer wavelengths than at the shorter ones. But notice one other remarkable thing – the yellow object reflects more light at 700 nm (a wavelength we would normally associate with red) than it does at 580 nm (a wavelength we might normally associate with yellow).

Yes, the reflected light does look yellow. But the notion that "a yellow object reflects yellow wavelengths of light" is misleading. It suggests that the yellow object only reflects the wavelengths in the spectrum we would normally think of as yellow (around 580 nm) and absorbs the rest. This is just not how things are.

What colour is your office?

[Image: a colourfully decorated office interior]

I just saw an interesting article by Kim Lachance Shandrow about how the colour of your office can affect productivity. The article refers to a paper (2007) in Color Research and Application (CRA) by Nancy Kwallek entitled Work week productivity, visual complexity, and individual environmental sensitivity in three offices of different color interiors. The paper suggests that the influences of interior colours on worker productivity were dependent upon individuals’ stimulus screening ability and time of exposure to the interior colours. CRA is a top quality academic journal that is peer reviewed and so I am respectful of the findings.

However, in Kim's online article there is a lot of stuff that I am highly sceptical about. For example, she writes that "Red … increases the heart rate and blood flow upon sight." Is this true? Is there really any evidence for this? I have two PhD students working in this area right now and I am far from sure that colour does affect heart rate and, if it does, the effects are probably tiny. And yet we can read statements like this all over the internet as if they are facts beyond doubt. Other things she says that I take with a pinch of salt are that "green does not cause eye fatigue" and that "yellow triggers innovation." Don't get me wrong – I am very interested in how colour can be used to affect us emotionally, psychologically and behaviourally; it's just that there is a danger that if some things are said often enough (such as red increases your blood pressure or heart rate) then people start believing them even though there may be little evidence.

That said, you might find the infographic fun and it is well done. See the original and full article here.

On CIE colour-matching functions

In 1931 the CIE used colour-matching experiments by Wright and Guild to recommend the CIE Standard Observer, which is a set of colour-matching functions. These are shown below for standard red, green and blue primaries. They show the amounts – known as tristimulus values – of the three primaries (RGB) that, on average, an observer would use to match one unit of light at each wavelength in the spectrum. Why are these so important? Because they allow the calculation of tristimulus values for any stimulus (that is, any object viewed under any light, as long as we know the spectral reflectance factors of the surface and the spectral power of the light).

[Figure: the CIE 1931 RGB colour-matching functions]
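For those who like to see the calculation, here is a minimal sketch of how tristimulus values are computed from a reflectance, a light source and a set of colour-matching functions. The reflectance, illuminant and colour-matching values below are placeholders – in a real calculation you would use the tabulated CIE data – but the weighted-sum structure is the important part.

```python
import numpy as np

# Toy data at 10 nm intervals from 400 to 700 nm; in practice you would load
# the tabulated CIE colour-matching functions and a real illuminant.
wavelengths = np.arange(400, 701, 10)
illuminant = np.ones(wavelengths.size)                    # hypothetical flat light source
reflectance = np.linspace(0.05, 0.9, wavelengths.size)    # a made-up surface
cmf = np.random.rand(wavelengths.size, 3)                 # placeholder r-bar, g-bar, b-bar values

# The light reaching the eye is the illuminant weighted by the reflectance.
stimulus = illuminant * reflectance

# Tristimulus values: weight the stimulus by each colour-matching function and
# sum across wavelengths, normalising so a perfect white has a value of 100.
k = 100.0 / np.sum(illuminant * cmf[:, 1])
R, G, B = k * (stimulus @ cmf)
print(R, G, B)
```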

I gave a lecture this week about these and so they are fresh in my mind. I wanted to use this blog post to explain two things about the colour-matching functions that may be puzzling you. The first was stimulated after the lecture when one of the students came up to me with a question. You will note that for some of the shorter wavelengths the red tristimulus value is negative. Hopefully you are aware that, no matter how carefully we choose the three primaries, we cannot match all colours using mixtures of those three in the normal sense. What we have to do is to add one of the primaries to the thing we are trying to match and then match that with an additive mixture of the other two primaries. The question from the student was: wouldn't that change the colour of the thing that is being matched? The answer is that it would, of course. But it's OK.

We normally represent this matching with an equation:

S ≡ R[R] + G[G] + B[B]

which simply means that the stimulus S is matched by (that is the symbol ≡) R amounts of the R primary, G amounts of the G primary, and B amounts of the B primary. The values R, G and B are the tristimulus values. I put square brackets around the primaries themselves to distinguish them from the amounts or tristimulus values of the primaries being used in the match.

Now when we add one of the primaries to the stimulus (the thing we are matching) itself, we can write this equation:

S + R[R] ≡ G[G] + B[B]

The new colour, S + R[R], can now be matched by an additive mixture of the other two. Hmmmmmm? You may ask. How does that work? Well, we can rearrange this equation to make:

S ≡ -R[R] + G[G] + B[B]

In other words, matching the additive mixture of the original stimulus S and some red with some green and blue, means that – if it were possible – we could match the original stimulus S with the same amount of green and blue and a negative amount of the red. I appreciate that this is mathematical but I hope that it is maths that anyone could understand. It’s not rocket science. Just simple adding and subtracting. This is how we arrive at the colour-matching functions above. No matter what RGB primaries we use one of them will have to be used in negative amounts to match some of the wavelengths. In practice, this is done by adding it to the stimulus as described above. Of course, you may also know that the RGB colour-matching functions were transformed to XYZ colour-matching functions. These are the XYZ values everyone is familiar with. But that is another story I will devote another post to one day.

The second question, though, is: isn't this just arbitrary? If we used a different set of RGB primaries wouldn't we get a different set of colour-matching functions? Again, the answer is yes, but again it doesn't matter. The whole point of the CIE system was to work out when two different stimuli would match. If two stimuli are matched by the same amounts of RGB then by definition those two stimuli must themselves match. If we used different RGB primaries the tristimulus values would change, of course, but the matching condition would not. Two stimuli that match would still require the same RGB values as each other to match them, no matter what the primaries were (as long as they were fixed, of course). So the key achievement of the CIE system was to define when two stimuli would match. It is also useful for colour specification or communication, but that does indeed depend upon the choice of primaries and requires standardisation.
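A tiny numerical illustration of that last point (the matrix below is made up – it just stands in for the fixed linear transformation that takes tristimulus values for one set of primaries to those for another): if two stimuli have equal tristimulus values under one set of primaries, they still have equal values after the transformation, so the match is preserved even though the numbers change.

```python
import numpy as np

# Tristimulus values of two stimuli under one set of RGB primaries.
# Equal values mean the two stimuli match.
stimulus_1 = np.array([12.0, 30.0, 7.5])
stimulus_2 = np.array([12.0, 30.0, 7.5])

# Changing to a different fixed set of primaries corresponds to multiplying
# by a fixed 3x3 matrix; this particular matrix is invented for illustration.
M = np.array([
    [0.49, 0.31, 0.20],
    [0.18, 0.81, 0.01],
    [0.00, 0.01, 0.99],
])

print(M @ stimulus_1)                               # new tristimulus values
print(M @ stimulus_2)                               # identical to the above
print(np.allclose(M @ stimulus_1, M @ stimulus_2))  # True: the match survives
```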

I hope people find this post useful. Post any questions or comments below.

Do women use more colour names than men?

I just came across this funny cartoon about the difference between men and women in terms of colour names.

[Image: cartoon colour wheel comparing the colour names used by men and women]

But on the same page I found the results from an actual colour survey where over five million colours were named across 222,500 user sessions. One aspect of the results is shown below:

[Image: chart of colour-name usage from the survey]

It does seem that there is some evidence that women use more colour names than men – though generally there was agreement between how the names were used. For further details see the original article.

Press coverage of #TheDress

Whatever anyone thinks about the colour of the dress and the attention it has received, there is one undeniable fact – this story has received huge attention from the public and from the media. That in itself is probably more interesting than the debate itself.

The Daily Mirror story covered the angle that we are all right whatever we see because colour exists only in our heads. According to Dr Paul Knox, a Reader in the University of Liverpool's Department of Eye and Vision Science, "Colour isn't something that exists in the world. Different wavelengths of light exist and can be observed but colour is something we make up inside our heads."

ITV also took the view that the explanation is that colour doesn’t exist. I broadly agree with this view, but the interesting thing is that that doesn’t explain why there was so much disagreement about the colour in this particular case whilst normally we barely notice any disagreement. If it is simply that colour doesn’t exist then why do we ever agree about colour at all?

On the other hand, in the Guardian an article by Bevil Conway considers cognitive processes in our colour vision and visual strategies that may vary from one person to the next. Of course, Bevil Conway is a super scientist and I agree with almost everything he says. Certainly, cognitive strategies could have something to do with this phenomenon. However, when he says that “By accident or design, the dress is a carefully created composition of orange and blue that confounds our visual systems,” I have to disagree. If you look at a properly taken photograph of the dress or the dress itself in real life what you see is shown below:

[Image: a properly taken photograph of the dress, which is blue and black]

The dress is not a carefully crafted composition of orange and blue – the dress is blue and black. However, Bevil is probably talking about the image that was circulated, not the one shown above. To understand this phenomenon you need to understand colour imaging and the fact that colour images are sometimes not faithful reproductions. One of the reasons why this story has run and run is that there is no simple answer, no 10-second soundbite that can put the story to bed. It is a complicated phenomenon.