
Tuesday, April 21, 2009

On hue subdivision for mortals

The HSV/HSL color space is the obvious choice when we need several colors distributed in an optimal way, colors as distinct from each other as possible. For example, SpeedCrunch uses it to autogenerate the colors for its syntax highlighting feature. The details behind the algorithm were described by Helder in his Qt Quarterly article Adaptive Coloring for Syntax Highlighting. Basically, we take the color wheel and subdivide the hue into equal parts; the primary additive color components, red, green, and blue, each anchor a third of the wheel. This distributes the colors at the maximum angular distance with respect to their hue values.

So far so good. However, it is well known that human eyes are generally more sensitive to green than to other colors. In computer graphics, this often manifests in the grayscale function, i.e. the function that converts RGB to a grayscale value. Take a peek at Qt's qGray(): it gives red, green, and blue the weighting factors 11, 16, and 5, respectively. Should we take this into account when we subdivide the hue?

If this theory holds, it means that a change of shade in the green region should register as a bigger perceived change to our eyes than, e.g., the same change of shade in the blue region. Another way to say it: the same amount of color difference (to our eyes) corresponds to different angular distances (in the hue component of the HSV/HSL color space) in the green and blue regions. Hence, if we want to subdivide green, we can use a smaller spacing there than in the case where we want to subdivide blue. A simpler way to do it is to stretch the green region so that it is wider than blue. That way, we just subdivide the color wheel with equal spacings and overall we still get a larger contribution from green than from the other components. This is illustrated in the following picture. It will be more "human-friendly", won't it?

Here is a detailed explanation. Suppose a (in the range 0..1) denotes the angular distance relative to a reference. For the purpose of this analysis, assume a=0 means red, 0.333 means green, and 0.667 means blue. This is a 1:1 mapping to the normalized hue value (in HSL/HSV color space). It is exactly what is shown in the "Machine" version of the color ring in the above picture. On the right, in the "Human" version, a=0.333 still means green, and the same holds for 0 (red) and 0.667 (blue). However, we see that yellow (which roughly marks the transition between red and green) sits in a different position, and the same goes for cyan and magenta. Overall, the coverage area of green is larger, analogous to the 50% contribution (16 out of 32, as described previously) of the green component to the grayscale value. This means that the mapping between a and hue gets more complicated.

A simple solution is to use a custom interpolation between red and green, green and blue, and blue and red. In the case of "Machine", any value of a between 0 and 0.333 corresponds to a linear combination of red and green, and thus the middle point (yellow) sits at a=0.167. For the "Human" version, this is no longer the case. The distances between red and yellow and between yellow and green are in the proportion 11:16. Thus, yellow sits at a=0.136. If we continue from green to blue and from blue to red in a similar fashion, we arrive at the complete mapping between a and hue.

I decided to take another route. After a few minutes of experimenting with different curve-fitting methods, here is an interesting mapping function:

hue = (1.39 - a * (4.6 - a * 4.04)) / (1 / a - 2.44 + a * (0.5 + a * 1.77));

which is exactly the one I used to produce the image of the color rings above. You still need to take care to avoid a divide-by-zero (or rewrite it to avoid the division: left as a 5-minute exercise for the curious reader), but otherwise the function is smooth and fast enough to execute on a modern machine. Isn't math cool?

Of course, take this with a pinch of salt: I have likely made a lot of gross approximations and model simplifications.

Personally, I still doubt that this will make a big difference. After all, you can hardly distinguish two saturated colors when their hue distance is less than 0.1. They just look the same, unless we also play with the saturation and value. We can even attack it from a different point of view: since a bit of a shade of green provokes our eyes more than blue or red, shouldn't we shrink the green region instead, thus effectively reducing its contribution? Or maybe we need to apply the concept in a different way? Or should we just forget it and use a subtractive color model instead?

Comments? Ideas? Flames?


randomguy3 said...

Humans may be more sensitive to green, but that doesn't mean we're any better at distinguishing different hues of green than hues of other colours. In fact, probably the opposite - our sensitivity to green may be what makes blue so distinct from cyan and red from yellow, while purple seems to fade evenly into both blue and red.

Think about the differences between hues 90 degrees apart (or slightly less) on each wheel. On the "machine" wheel, there is a stark contrast between any two hues 90 degrees apart. On the human wheel, the top hue (one end of green) and the right hue (the other end of green) are very similar. At least, that is how it seems to me.

Pitazboras said...
This comment has been removed by the author.
Pitazboras said...

That's exactly what I wanted to write: in the "machine" model there is red at 0, yellow at 0.167, green at 0.333 and cyan at 0.5. In the "human" model there is still red at 0 and something similar to yellow (with a bit of green) at 0.167, but at both 0.333 and 0.5 there are some greenish colors I can hardly distinguish. The differences in the "machine" model are much bigger.

Anyway, using a subtractive model isn't a bad idea. I always preferred to treat orange as more "primary" than cyan. But that's only a subjective opinion.

Anonymous said...

I have no idea if this is relevant, but I've heard more than once that your native language has some influence on your color perception.

Anonymous said...

I like the first one better.

Anonymous said...

Have a look at ColorBrewer; it's mainly used to select colors for maps (to distinguish between details). It might be very relevant.

Dread Knight said...

I'm pretty sensitive when it comes to colors, but I prefer the 'machine' version. The human one has too much green in it and I don't really see the differences in the middle part, perhaps because of my display, not sure...

Tim said...

I concur. 'Machine' looks much more evenly distributed, I would perhaps give even less space to green.

Tim said...

PS: This problem has already been solved (better!) by the LAB colourspaces.

Gregory Haynes said...

Something I think might affect people's impression of the wheel is the rotation of the colors. On the 'human' version green is at the top, so when people's eyes naturally gravitate there first, they see a giant span of green. Maybe placing the green near the bottom might make it slightly more appealing?

Anonymous said...

For me, even in the "machine" model there is too much green already. I can't really distinguish colors within the middle quarter of the green region there. Having a wider range of yellows would be nice, though. I think that, in general, having an even distribution of the spectrum is the way to go.

Anonymous said...

On the machine circle, I would say the yellow is too small. I see these distinct colours:
red 0,
red-orange 30 (+30),
orange 45 (+15),
yellow 60 (+15),
yellow-green 75 (+15),
green 120 (+45),
blue-green 160 (+40),
cyan/sky-blue 180 (+20),
blue 210 (+30),
indigo 240 (+30),
purple/violet 270 (+30), and
pink/magenta 315 (+45),
red 360 (+45)

The average person has colour sensors for orange-yellow plus violet, green-yellow, and blue, plus an overloaded green-cyan one (the rods).

Since grey is 59% green, 30% red, and only 11% blue, this implies the eye is *less* sensitive to green, but over a few seconds an AGC process takes over and adjusts the sensitivity to each colour.

On a computer monitor, 255, 255, 255 is white, so it is already adjusted.


Ariya Hidayat said...

Thank you everyone for the feedback!

I have a follow-up at: