Canvas demo: subpixel rendering
This is a proof-of-concept demo of sub-pixel rendering, which takes advantage of the position of individual red, green, and blue display pixels, to deliver higher-resolution images.
This article demonstrates a technique for optimizing high-resolution images for display on RGB-striped pixel displays. We show two methods here: HTML5 canvas processing, and Photoshop.
It draws onto a canvas, which is then drawn to the screen to take advantage of RGB-striped subpixels. It works with alpha channels too.
Note that this might not work if you are using a portrait display, non-striped pixels, or if your browser displays canvases at a lower resolution than your display.
First, we create a hidden canvas, three times wider than the one we're displaying, and apply scale(3,1)
to it, so that all future drawing is stretched in the x direction. We do all our drawing on that. When we're done, we copy it to the display canvas, in a special way.
To copy the high-resolution canvas to the display canvas, we do something like this: make the canvases the same physical size without losing any pixels, then, for each display pixel, set its red component based on the red components of the three corresponding pixels in the high-res canvas (and likewise for green, blue, and alpha).
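For reference, the flat RGBA layout this copy operates on can be captured in a small helper (our own illustration, not part of the demo's source):

```javascript
// Index of one channel of pixel (x, y) in an ImageData-style flat
// RGBA array: row-major, four bytes per pixel, channels in r, g, b, a order.
function byteOffset(x, y, width, channel /* 0=r, 1=g, 2=b, 3=a */) {
  return (y * width + x) * 4 + channel;
}
```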
Have a look at some other web articles to see the format of canvas.getContext('2d').getImageData().data. With that pixel format, and the offsets of the colour elements in mind, we do the above drawing using a matrix like this:
| Source pixel offset | Subpixel | Subpixel offset | Dest b (offset 2) | Dest g (offset 1) | Dest r (offset 0) | Dest a (offset 3) |
|---|---|---|---|---|---|---|
| −1 | r | −4 | – | – | w2 | – |
| −1 | g | −3 | – | – | – | – |
| −1 | b | −2 | – | – | – | – |
| −1 | a | −1 | – | – | – | w2 |
| 0 | r | 0 | – | – | w1 | – |
| 0 | g | 1 | – | w2 | – | – |
| 0 | b | 2 | – | – | – | – |
| 0 | a | 3 | – | – | – | w1 + w2 |
| 1 | r | 4 | – | – | w2 | – |
| 1 | g | 5 | – | w1 | – | – |
| 1 | b | 6 | w2 | – | – | – |
| 1 | a | 7 | – | – | – | w1 + w2 + w2 |
| 2 | r | 8 | – | – | – | – |
| 2 | g | 9 | – | w2 | – | – |
| 2 | b | 10 | w1 | – | – | – |
| 2 | a | 11 | – | – | – | w1 + w2 |
| 3 | r | 12 | – | – | – | – |
| 3 | g | 13 | – | – | – | – |
| 3 | b | 14 | w2 | – | – | – |
| 3 | a | 15 | – | – | – | w2 |
| **Total** | | | 1 | 1 | 1 | 3 |
where w1 is the weighting for the centre pixel of the sampled trio, and w2 is the weighting of the outer pixels; w1 + w2 + w2 should total 1.
1/3 is the ideal value for w1 (and, incidentally, for w2). Larger values of w1 will sharpen the image, but risk introducing coloured artefacts on sharp edges, so it's best not to take this too far.
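As a sketch of how the matrix above might be applied, here is a minimal implementation of our own (not the demo's actual source; the function name, edge clamping, and parameter defaults are our choices) that converts a 3×-wide high-res RGBA buffer into display-width RGBA, operating on plain arrays in the same layout as getImageData().data:

```javascript
// Sketch: apply the weighting matrix to turn a high-res RGBA buffer
// (three times the display width) into a display-width RGBA buffer.
// Layout matches getImageData().data: row-major, four bytes per pixel
// in r, g, b, a order. Samples beyond the edges are clamped.
function subpixelDownscale(src, srcWidth, height, w1 = 1 / 3) {
  const w2 = (1 - w1) / 2;          // so that w1 + w2 + w2 = 1
  const dstWidth = srcWidth / 3;    // assumes srcWidth is a multiple of 3
  const dst = new Uint8ClampedArray(dstWidth * height * 4);

  // Read one channel (0=r, 1=g, 2=b, 3=a) at high-res column x,
  // clamping x at the image edges.
  const ch = (y, x, c) => {
    x = Math.max(0, Math.min(srcWidth - 1, x));
    return src[(y * srcWidth + x) * 4 + c];
  };

  for (let y = 0; y < height; y++) {
    for (let X = 0; X < dstWidth; X++) {
      const x = 3 * X; // leftmost high-res column under this display pixel
      // Each destination subpixel samples its own channel at its own
      // position in the stripe, plus the two neighbouring columns.
      const r = w2 * ch(y, x - 1, 0) + w1 * ch(y, x,     0) + w2 * ch(y, x + 1, 0);
      const g = w2 * ch(y, x,     1) + w1 * ch(y, x + 1, 1) + w2 * ch(y, x + 2, 1);
      const b = w2 * ch(y, x + 1, 2) + w1 * ch(y, x + 2, 2) + w2 * ch(y, x + 3, 2);
      // Alpha takes the same weights as the colour taps (the table's
      // alpha column), normalised by the column total of 3.
      const a = (w2 * ch(y, x - 1, 3) +
                 (w1 + w2) * ch(y, x, 3) +
                 (w1 + w2 + w2) * ch(y, x + 1, 3) +
                 (w1 + w2) * ch(y, x + 2, 3) +
                 w2 * ch(y, x + 3, 3)) / 3;
      const o = (y * dstWidth + X) * 4;
      dst[o] = r; dst[o + 1] = g; dst[o + 2] = b; dst[o + 3] = a;
    }
  }
  return dst;
}
```

A solid-colour buffer passes through unchanged, since each channel's weights sum to 1 — a quick sanity check. In the demo, the result would be written back to the display canvas with putImageData.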
The alpha blending samples from each source pixel in the same proportion as the colour values. This means that you can render alpha transparency into the striped display format. To avoid alpha anomalies on the edges of shapes drawn on a transparent background, any compositing should be done on the high-res hidden canvas, with conversion to the display format as a final stage.
Feel free to have a look at the source code for this page: it is HTML5 without dependencies (other than attempting to use fonts installed on the system).
We regard this example as an inefficient prototype, workable for static images.
Left: Photoshop's "Resize Sharper" image reduction; right: our sub-pixel optimized reduction.
This technique is good for showing screenshots, icons, small text, or other images where high-contrast detail can be resolved, AND the target display is a TFT screen. It is therefore suited to embedded systems, software rendering applications where the user may set it as a preference, or for images used on websites.
Top-to-bottom: Nearest Neighbour, Bicubic Sharper, sub-pixel optimized.
The actual technique is similar to ClearType, and indeed any small lettering in source images will enjoy similar benefits to ClearType-rendered text. We've provided some early comparison images on this page, and we'll be adding more soon.
There are many considerations needed to make this work as well as it can: display gamma, actual sub-pixel positions, the correct type of re-sampling, and a few more details that make tiny improvements.
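To illustrate the first of these: stored pixel values are not proportional to light intensity, so weighted averages are, strictly, better taken in linear light. A minimal sketch of our own, assuming a simple power-law gamma of 2.2 (the real sRGB transfer function has a linear segment near black, omitted here for brevity):

```javascript
// Blend three 0-255 samples (outer, centre, outer) with weights
// w2, w1, w2 in approximately linear light, assuming a power-law
// display gamma of 2.2.
const GAMMA = 2.2;
const toLinear = (v) => Math.pow(v / 255, GAMMA);
const toGammaByte = (v) => Math.round(255 * Math.pow(v, 1 / GAMMA));

function blendGammaCorrect(s0, s1, s2, w1, w2) {
  return toGammaByte(w2 * toLinear(s0) + w1 * toLinear(s1) + w2 * toLinear(s2));
}
```

Blending directly in gamma space instead tends to darken high-contrast detail, which is one reason naive resampling of fine detail looks uneven.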
There is one major drawback: in preparing images for generic use, we risk the image being displayed on a system that does not comply with the RGB stripe pattern that we exploit. This hazard is to be expected if we are guessing the output device and making an image device-dependent. We therefore see this as a device-level rendering method, even though images may be prepared conventionally.
If executed without the considerations mentioned above, images may exhibit colour fringing.
Although our images are a little duller, they are a more honest representation of the original, and we can compensate with more processing to match expectations as needed.
Photoshop's image resampling introduces some undesirable artefacts, all of which could be interpreted as enhancements for some purposes. For photographic applications, where resolution is plentiful and pixels are hardly ever seen directly, this does not matter; but for on-screen images, where pixels are large enough to be seen, some people find it distracting.
We think it's a worthwhile technique for specialised applications, especially where detail needs to be presented, and the best use needs to be made of pixels. The result is superior to resampling and rendering methods that do not use sub-pixel precision. As usual, we'd be interested to hear feedback on this.