In heated technology discussions, photographers often bandy about megapixel figures. Having more megapixels is said to be an advantage in good lighting conditions, when using low sensitivities; it is common knowledge that more resolution is better. Conversely, fewer but bigger pixels are said to be more sensitive, taking in more light and delivering better noise performance. What many tend to forget, however, is that megapixel figures are only comparable between sensors of the same type. As soon as you make the mistake of comparing pixel counts across different sensor types, the megapixel house of cards collapses. The pixel, sold to us as a kind of standardized unit of resolution, is anything but standardized. To illustrate this, I will present the two best-known sensor types: the ubiquitous Bayer sensor and Sigma’s Foveon sensor.
The picture above illustrates the two sensor types in a simplified manner: the multilayer Foveon on the left-hand side, the plain Bayer sensor on the right. Because light of different wavelengths penetrates silicon to different depths, every physical pixel of the Foveon consists of three stacked layers. The blue layer sits on the lens-facing side, because that wavelength is absorbed first; below it lies the green layer, and deepest in the silicon the red one. No matter which physical pixel a light ray hits, the three detectors for blue, green and red absorb their respective wavelengths at that point. By measuring how much light each detector actually received, you can determine the colour at that location. It is a bit like combining blue, green and red paint in varying amounts and mixing them with a brush to produce any conceivable colour. Since every one of its pixels can “see” all three primary colours, the Foveon is referred to as a true-colour sensor. The Foveon in the Merrill cameras therefore genuinely has 15 megapixels (rounded to simplify matters).
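As a rough illustration (a toy model, not Sigma’s actual readout pipeline, and with invented numbers), the defining property of the Foveon can be sketched in a few lines of Python: every photosite delivers a complete colour triple on its own, with no neighbours involved.

```python
# Toy model of a Foveon-style stacked sensor. Layer readings are made
# up for illustration; real sensors also need colour-separation
# processing, since the layers overlap spectrally.

def read_photosite(site):
    """Return the (R, G, B) triple measured by one stacked photosite."""
    # The blue layer is closest to the lens, the red layer deepest
    # in the silicon; each layer yields its own measurement.
    return (site["red"], site["green"], site["blue"])

# Two example photosites -> two full-colour pixels, one per site.
sensor = [
    {"blue": 0.2, "green": 0.5, "red": 0.9},
    {"blue": 0.8, "green": 0.1, "red": 0.3},
]
image = [read_photosite(s) for s in sensor]
```

One photosite, one true-colour pixel: no interpolation is involved, which is exactly why counting Foveon pixels is straightforward.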
The Bayer sensor consists of blue, red and green pixels sitting next to one another. Looking at the repeating unit of the structure, each group of 2×2 pixels contains 2 green, 1 blue and 1 red pixel. So that you do not see a blue-green-red checkered pattern when viewing the picture on screen at 100%, neighbouring pixels are taken into account when determining the colour of each pixel. Instead of measuring all three primary colours at every point, the missing values have to be approximated by elaborate calculations using the colour information of differently coloured neighbouring pixels. As anyone familiar with mathematics knows, no method of calculation, however sophisticated, can reconstruct information that never existed in the first place. And this is what makes it so difficult to assign a megapixel figure to the Bayer sensor. The 36 “MP” Bayer sensor of the Nikon D800, for instance, has 18 MP of green, 9 MP of red and 9 MP of blue pixels.
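The 2:1:1 split follows directly from the 2×2 cell, as a quick back-of-the-envelope check in Python shows. I am assuming the D800’s photosite grid is 7360 × 4912 here; any grid with even side lengths splits the same way.

```python
# Per-colour photosite counts for a Bayer sensor. Grid size assumed to
# be 7360 x 4912 (the D800's nominal output size). Every 2x2 cell holds
# 2 green, 1 red and 1 blue site, so the split is exact for even sides.

width, height = 7360, 4912
total = width * height          # ~36.15 million photosites, the "36 MP"

green = total // 2              # two green sites per 2x2 cell
red = total // 4                # one red site per cell
blue = total // 4               # one blue site per cell

# green is roughly 18 MP, red and blue roughly 9 MP each.
```

Half the sensor only ever measures green, a quarter red, a quarter blue; everything else is computed.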
Under ideal circumstances it may approximate this 36 MP value; at worst it reaches perhaps 9 megapixels. To illustrate this, consider the following example: a small pattern of red tiles, separated by black lines, is to be photographed. Without any green or blue wavelengths in the scene, there is nothing for the respective pixels to absorb. They are practically blind, and no amount of interpolation can get anything out of them. What remains are the 9 MP of red pixels. A D800 is therefore really a “9 to ~36 MP” camera.
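A toy simulation (all values invented, RGGB layout assumed) makes the point concrete: sample a pure-red tile pattern through a Bayer mosaic, and only the red photosites record anything at all.

```python
# Toy simulation: a pure-red scene sampled through an RGGB Bayer mosaic.
# Green and blue photosites receive no light their filters pass, so
# their readings are zero and carry no recoverable detail.

def filter_colour(x, y):
    """Colour of the filter over photosite (x, y) in an RGGB layout."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

# 4x4 scene of red tiles (1.0) separated by black lines (0.0);
# the scene contains red light only.
scene_red = [
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

readings = [
    [scene_red[y][x] if filter_colour(x, y) == "R" else 0.0
     for x in range(4)]
    for y in range(4)
]
signal_sites = sum(v > 0 for row in readings for v in row)
```

Only the red quarter of the photosites can respond at all, which is exactly the 9 MP floor in the D800 example.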
My advice is therefore to question all figures used as performance indicators in any industry and to find out what they actually stand for. Horsepower figures for cars are another prime example. 😉