
Tuesday, April 28, 2009

transparent QWebView and QWebPage

It seems that the trick to make a QWebView or QWebPage transparent is not very well known. So here is the magic incantation:

    view = new QWebView(this);
    QPalette palette = view->palette();
    palette.setBrush(QPalette::Base, Qt::transparent);
    view->page()->setPalette(palette);
    view->setAttribute(Qt::WA_OpaquePaintEvent, false);

Or grab it at

Here is the result (click to zoom). I put the famous TuxKiller wallpaper as the background for the main window. The central widget is set to a QWebView instance, using the transparent trick. As everyone loves Cube these days, that is the URL I am loading:

Note 1: of course this does not work if the web page explicitly sets the background color. For example, (see its HTML source) forces a white background.

Note 2: with Qt 4.4's QtWebKit, you have to use the background brush instead of the base brush. This was changed in Qt 4.5 for consistency with the rest of Qt (it is mentioned in the Qt 4.5.0 changes file).

Friday, April 24, 2009

quattro cinque uno

Fresh from the oven: Qt 4.5.1, Qt Creator 1.1, new SDK.

Details on what has changed can be examined in the changes file. Now that the release is out, the QtWebKit team is busy again fixing bugs and backporting important fixes for the next patch release (4.5.2). Expect to see more extensive changes there. A few QtWebKit-related examples I have written are also being cleaned up and imported as new Qt examples as we speak.

Wednesday, April 22, 2009

Still about color wheel

This is the follow-up to what I wrote before: hue subdivision for mortals.

It seems everyone echoes my sentiment: increasing the coverage area of green is not the right way to go. The easiest explanation is as follows. Since this is an additive color model, and our eyes are more sensitive to green, its contribution should be reduced. Effectively, this means we should shrink the green region in the color wheel:

For the "Mortal" version in the above picture, I modified the conversion from hue to RGB (assuming fully saturated colors), because I discarded the idea of curve-fitting to map the angle to the hue value. Another change is that I gave up keeping the triangle of the primaries, i.e. while 0 is still red (as an arbitrary reference), 0.333 is not green anymore. The actual position of each color component is now determined by the inverse proportion of its part in the grayscale function. I arrive at 26% red, 17% green, and 57% blue. Unsurprisingly, this means blue now occupies most of the space.

Under each wheel, also shown are the colors taken from the color wheel when it is divided equally into eight parts. The result for "Droid" is probably familiar to a lot of people. Comparing it to the "Mortal" version gives an interesting insight. As predicted, the contribution of green is now less dominant. Indeed, blue shades are apparent in a few more colors. In fact, this raises a problem. The light and dark blue colors (in "Mortal") look too similar. Compare this to the light and dark green (in "Droid"). Maybe this is because the weighting factors of 26:17:57 are completely busted? But then, how shall I come up with nice weighting factors?

Seriously, maybe I should just stop trying all this with an additive color model...

Tuesday, April 21, 2009

On hue subdivision for mortals

The use of the HSV/HSL color space is obvious when we need several colors distributed in an optimal way, i.e. colors as unique as possible. For example, SpeedCrunch uses it to autogenerate the colors used for the syntax highlighting feature. The details behind its algorithm were already described by Helder in his Qt Quarterly article Adaptive Coloring for Syntax Highlighting. Basically, we take the color wheel and subdivide the hue into equal parts. The primary additive color components are red, green, and blue. This distributes the colors at the maximum angular distance with respect to the hue values.
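The equal-subdivision idea can be sketched as follows (this is not SpeedCrunch's actual code; hueToRgb is a hypothetical helper for fully saturated, full-value colors, where real Qt code would reach for QColor::fromHsvF):

```cpp
#include <cmath>

// Convert a fully saturated, full-value hue (0..1) to RGB components (0..255).
void hueToRgb(double hue, int &r, int &g, int &b)
{
    double h = hue * 6.0;
    int i = static_cast<int>(std::floor(h)) % 6; // which sextant of the wheel
    double f = h - std::floor(h);                // position within the sextant
    int q = static_cast<int>(255.0 * (1.0 - f) + 0.5);
    int t = static_cast<int>(255.0 * f + 0.5);
    switch (i) {
    case 0:  r = 255; g = t;   b = 0;   break; // red -> yellow
    case 1:  r = q;   g = 255; b = 0;   break; // yellow -> green
    case 2:  r = 0;   g = 255; b = t;   break; // green -> cyan
    case 3:  r = 0;   g = q;   b = 255; break; // cyan -> blue
    case 4:  r = t;   g = 0;   b = 255; break; // blue -> magenta
    default: r = 255; g = 0;   b = q;   break; // magenta -> red
    }
}

// Hue of the i-th of n colors spaced equally around the wheel.
double equalHue(int i, int n)
{
    return static_cast<double>(i) / n;
}
```

With n colors, any two neighbors are exactly 1/n apart in hue, which is the "maximum angular distance" property mentioned above.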

So far so good. However, it is well known that human eyes are generally more sensitive to green than to other colors. In computer graphics, this often manifests in the grayscale function, i.e. the function that converts RGB to a grayscale value. Take a peek at Qt's qGray(): it gives red, green, and blue the weighting factors of 11, 16, and 5, respectively. Shall we take this into account when we subdivide the hue?
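For reference, qGray() boils down to an integer-weighted sum; a plain C++ equivalent (using those 11:16:5 weights, which sum to 32) looks like:

```cpp
// Plain C++ equivalent of Qt's qGray(): a weighted average of the three
// components, with green weighted heaviest (16 of 32) and blue lightest (5).
int gray(int r, int g, int b)
{
    return (r * 11 + g * 16 + b * 5) / 32;
}
```

Pure green therefore maps to a brighter gray than pure red, and pure red brighter than pure blue, matching the sensitivity ordering discussed here.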

If this theory holds, it actually means that a change of shade in the green region should be more perceptible to our eyes than, e.g., the same change of shade in the blue region. Another way to say it: the same amount of color difference (to our eyes) corresponds to different angular distances (in the hue component of the HSV/HSL color space) in the green and blue regions. Hence, if we want to subdivide green, we can use a smaller spacing there compared to the case where we subdivide blue. A simpler way to do this would be to stretch the green region so that it is wider than blue. That way, we just subdivide the color wheel with equal spacing and overall still get more contribution from green than from the other components. This is illustrated in the following picture. It will be more "human-friendly", won't it?

Here is a detailed explanation. Suppose a (in the range 0..1) denotes the angular distance relative to a reference. For the purpose of this analysis, assume a = 0 means red, 0.333 means green, and 0.667 means blue. This is a 1:1 mapping to the normalized hue value (in the HSL/HSV color space). It is exactly what is shown in the "Machine" version of the color ring in the above picture. On the right, in the "Human" version, a = 0.333 still means green, and the same holds for 0 (red) and 0.667 (blue). However, we see that the yellow color (which roughly marks the transition between red and green) is in a different position, and the same goes for cyan and magenta. Overall, the coverage area of green is larger, analogous to (as previously described) the 50% contribution of the green component to the grayscale value. This means that the mapping between a and hue gets more complicated.

A simple solution is to use a custom interpolation between red and green, green and blue, and blue and red. In the case of "Machine", any value of a between 0 and 0.333 corresponds to a linear combination of red and green, and thus the middle point (yellow) sits at a = 0.167. For the "Human" version, this is not the case anymore. The distances yellow-red and yellow-green have a proportion of 11 to 16. Thus, yellow sits at a = 0.136. If we continue for green to blue and blue to red in a similar fashion, we arrive at the complete mapping between a and hue.

I decided to take another route. After a few minutes of experimenting with different curve-fitting methods, here is an interesting mapping function:

hue = (1.39 - a * (4.6 - a * 4.04)) / (1 / a - 2.44 + a * (0.5 + a * 1.77));

That is exactly the function I used to produce the image of the color rings above. You still need to take care to avoid a divide-by-zero (or rewrite it to avoid the division: left as a 5-minute exercise for the curious reader), but otherwise the function is smooth and fast enough to execute on a modern machine. Isn't math cool?
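A direct transcription with a simple guard at a = 0 (the division-free rewrite stays an exercise; the function name is mine):

```cpp
#include <cmath>

// Curve-fitted mapping from angular position a (0..1) to hue (0..1),
// guarded against the division by zero at a = 0.
double angleToHue(double a)
{
    if (a <= 0.0)
        return 0.0;
    double num = 1.39 - a * (4.6 - a * 4.04);
    double den = 1.0 / a - 2.44 + a * (0.5 + a * 1.77);
    return num / den;
}
```

Note how the fit keeps the anchors intact: the primaries at a = 0, 1/3, and 1 land (approximately) on their original hues, while yellow at a = 0.136 is pushed to hue 1/6.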

Of course, take this with a pinch of salt: I am likely making a lot of gross approximations and model simplifications.

Personally, I still doubt that this will make a big difference. After all, you can hardly distinguish two saturated colors when their hue distance is less than 0.1. They just look the same, unless we play with the saturation and value. We can even attack it from a different point of view: since a bit of a shade of green provokes our eyes more than blue and red, shouldn't we shrink the green region instead, thus effectively reducing its contribution? Or maybe we need to use the concept with a different approach? Or let us just forget it and use a subtractive color model instead?

Comments? Ideas? Flames?

Saturday, April 04, 2009

this is the world that we live in, the Python world!

It's been one year since I started working for Qt Software (née Trolltech). Two big releases: Qt 4.4 and Qt 4.5. Qt for S60. LGPL-ed Qt. Graphics Dojo. Going to Munich and Redwood City for DevDays. Things are as exciting as ever.

In one month, I will be in Florence (Italy) for PyCon Italia. I'll give one technical talk: Advanced Graphics Programming with PyQt; see the abstract for details.

Check out the other interesting talks in the schedule. For example, don't miss PyQt for Desktop and Embedded Devices by our Python+Qt guru, David Boddie. And yes, I would not dare to skip Guido van Rossum's keynote on Python 3.0.

If you will be around and want to have a snack or a chat, just let me know!