In photo forums, we photo enthusiasts often discuss chromatic aberration, vignetting, and distortion, but most frequently sharpness and resolution. It’s almost as if a camera could never have too much resolution, i.e. too many megapixels (unless noise performance suffers as a result), and a lens could never be too flawless or too sharp. Reading heated image-quality debates, you quickly realize that in most cases you have to zoom into the image and count pixels (so-called “pixel peeping”) in order to perceive optical flaws, noise, or insufficient resolution at all. Today’s photo technology is so advanced that you have to hunt for possible flaws with a magnifying glass. This suggests that our eyes must be rather “coarse-pixelated”, i.e. have very few “megapixels” at their disposal. A digital sensor and the human eye work very differently and are hardly comparable, but if you tried to compare their resolution and express it in megapixels, how many would our eyes have?
To reveal the answer: it is only about 7 MP in the center of our field of vision, captured by the high-resolution area of each retina, the fovea, plus roughly 1 MP spread over the rest of our field of vision. This 1 MP consists of color-sensitive “pixels” whose density gradually decreases towards the edge. At the outer edge we can only see in black-and-white; the brain, with considerable post-processing effort, “reconstructs” the colors there. If you understand English and are eager for more information on this topic, you should definitely watch the embedded video below by vlogger Michael Stevens.
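To get a feel for how lopsided this distribution is, here is a minimal sketch using only the figures stated above (~7 MP in the fovea, ~1 MP in the periphery); these numbers are the article's rough estimates, not precise physiological measurements:

```python
# Toy comparison of the eye's "megapixel" budget, using the article's
# rough figures. A camera sensor spreads its pixels uniformly across
# the frame; the eye packs almost all of its budget into a tiny
# central region (the fovea).

FOVEA_MP = 7.0      # high-resolution central vision (article's figure)
PERIPHERY_MP = 1.0  # sparse, increasingly colorless outer vision

def eye_total_mp():
    """Total 'pixel' budget of the visual field under these assumptions."""
    return FOVEA_MP + PERIPHERY_MP

def fovea_share():
    """Fraction of that budget concentrated in the small foveal region."""
    return FOVEA_MP / eye_total_mp()

if __name__ == "__main__":
    print(f"Eye total: ~{eye_total_mp():.0f} MP")
    print(f"Share in the fovea: {fovea_share():.0%}")
```

Under these assumed numbers, the eye's total budget is a modest ~8 MP, yet nearly 90% of it sits in the few central degrees of the visual field, which is exactly why the world looks sharp wherever we point our gaze even though the periphery is so coarse.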