An interesting fact about our retinas is that they do not sense red directly. Instead, they contain three types of cones, L, M, and S, that roughly correspond to the colors yellow, green, and blue respectively. Since the L (yellow) response is roughly the sum of the M (green) and red responses, our eyes compute red as the difference L − M.
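The computation above can be sketched with a toy linear model. This is an illustration only, assuming idealized responses L = R + G, M = G, S = B; real cone spectra overlap far more than this.

```python
# Toy linear model of retinal color computation (an illustration, not a
# colorimetric transform): assume L = R + G, M = G, S = B, so red is
# never sensed and must be recovered as the difference of two channels.
def lms_to_rgb(l, m, s):
    r = l - m  # red is computed as the difference L - M
    g = m
    b = s
    return r, g, b

# A pixel with R=0.5, G=0.25, B=0.25 produces L=0.75, M=0.25, S=0.25:
print(lms_to_rgb(0.75, 0.25, 0.25))  # -> (0.5, 0.25, 0.25)
```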
Historically, image sensors sensed red directly, because the computational power to mimic the processing of our eyes was not available. This led to reduced SNR, since red is less sensitive than L, and to poor color accuracy, since part of the ideal red spectral response is negative and cannot be realized by a physical filter. Now that Moore’s law has provided the necessary computational power, a change in CFA design is overdue.
Engineered Like the Retina
Our LMS camera is designed to sense and process signals in a manner similar to human eyes. Its sensor uses an LMS CFA pattern modeled on the retina, which replaces low-sensitivity red pixels with high-sensitivity L pixels. Additionally, the camera has more high-sensitivity L and M pixels and fewer low-sensitivity blue/S pixels, resulting in a 4.25 dB luminance SNR advantage over Bayer sensors.
Our eye removes both luma and chroma noise before transmitting the captured image to our brain. Luma denoising is difficult, while chroma denoising is easier. By sensing L = R + G instead of R, our eye increases its luma SNR, since L is more sensitive than R. However, this comes at the cost of decreased chroma SNR, since the computed R = L − M is noisier than a directly sensed R. Thus, our eye trades the difficult task of removing luma noise for the easier task of removing chroma noise. The LMS camera follows the same principle.
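The noise cost of computing R = L − M can be seen in a small simulation. This sketch assumes a simplified model of independent Gaussian noise with equal standard deviation on every sensed channel:

```python
import numpy as np

# Sketch of the noise trade-off described above: a red value computed
# as R = L - M accumulates the noise of two sensed channels, so its
# standard deviation grows by a factor of sqrt(2) under this model.
rng = np.random.default_rng(0)
sigma, n = 0.01, 200_000

noise_L = rng.normal(0, sigma, n)
noise_M = rng.normal(0, sigma, n)
noise_R_direct = rng.normal(0, sigma, n)  # hypothetical directly sensed R
noise_R_computed = noise_L - noise_M      # R derived as the difference L - M

print(noise_R_direct.std())               # close to sigma
print(noise_R_computed.std())             # close to sqrt(2) * sigma
```

A sqrt(2) noise increase is roughly a 3 dB chroma SNR penalty, which, as the text argues, is acceptable because chroma noise is the easier kind to remove.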
Chroma denoising is highly effective and generates very few artifacts, without obscuring detail or texture. In contrast, luma denoising is much more challenging and often results in the loss of texture and a plastic or painted appearance in images.
When applied gently to well-lit images, chroma denoising is practically transparent and causes no visible damage. When applied strongly to low-light images, it desaturates the colors of small features but cleans up the image a great deal in the bargain.
Due to its effectiveness, chroma denoising is now universally used by modern image signal processors.
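A minimal sketch of the chroma-denoising principle follows. It assumes a simple luma/chroma split with Rec. 601 luma weights and a plain box blur; real ISPs use more sophisticated edge-aware filters, but the idea is the same: smooth only the chroma planes and leave luma, which carries the detail, untouched.

```python
import numpy as np

def box_blur(x, k=5):
    # Simple k-by-k box average with edge padding (illustrative only).
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    acc = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return acc / (k * k)

def chroma_denoise(rgb, k=5):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luma: left untouched
    cr, cb = r - y, b - y                      # chroma difference planes
    cr, cb = box_blur(cr, k), box_blur(cb, k)  # smooth only the chroma
    r2, b2 = y + cr, y + cb
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587  # solve the luma equation for G
    return np.stack([r2, g2, b2], axis=-1)

rng = np.random.default_rng(1)
noisy = rng.uniform(0, 1, (32, 32, 3))
clean = chroma_denoise(noisy)
```

By construction, the output has exactly the same luma as the input; only the chroma planes are smoothed, which is why this kind of denoising does not obscure detail or texture.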
The New Color Filter Array
The new CFA has more L pixels and fewer S pixels than a hypothetical Bayer LMS pattern for the following reasons:
Sparse S Pixels
The low density of S (blue) pixels, or of any other color, does not necessarily cause a loss of resolution, since natural images are compressible and the other colors are densely sampled. The Nyquist sampling theorem, which assumes arbitrary band-limited signals, is not the binding constraint in this context, and sampling density can be reduced, up to a point.
However, the sparsity of S pixels increases blue noise, requiring more denoising. Fortunately, blue denoising artifacts are not noticeable because the retina has very few S cones.
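The tolerance for sparse S sampling can be illustrated in one dimension. This is not the camera's actual demosaicing algorithm, just a sketch showing that a smooth, low-frequency blue signal sampled at every 4th pixel is recovered almost exactly by linear interpolation:

```python
import numpy as np

# A smooth 1-D "blue" channel, dominated by low frequencies as natural
# image content tends to be.
x = np.arange(256)
blue = 0.5 + 0.3 * np.sin(2 * np.pi * x / 128)

sample_idx = x[::4]  # sparse S pixel sites: only every 4th pixel
recovered = np.interp(x, sample_idx, blue[sample_idx])

# Reconstruction error away from the unsampled right edge stays well
# below 1% of full scale, despite using a quarter of the samples.
print(np.abs(recovered - blue)[:253].max())
```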
Manufacturing Color Filters
The LMS camera requires manufacturing the new L, M, and S color filters, which are variants of the yellow, green, and blue filters used in present-day cameras. Fortunately, this is a relatively simple task because the L, M, and S response curves are Gaussian functions. In contrast, several non-RGB color filter arrays proposed in the past had complicated spectra, making them difficult to manufacture.
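The Gaussian responses described above can be sketched as follows. The peak wavelengths and widths below are illustrative assumptions in the vicinity of human cone sensitivities, not manufacturing specifications:

```python
import numpy as np

# Sketch of single-Gaussian L, M, S filter responses (illustrative
# parameters, not a spec).
def gaussian_response(wavelength_nm, peak_nm, width_nm):
    return np.exp(-0.5 * ((wavelength_nm - peak_nm) / width_nm) ** 2)

wl = np.arange(400, 701)            # visible wavelengths, nm
L = gaussian_response(wl, 565, 45)  # long-wavelength channel
M = gaussian_response(wl, 540, 40)  # medium (green)
S = gaussian_response(wl, 445, 25)  # short (blue)

print(wl[np.argmax(L)], wl[np.argmax(M)], wl[np.argmax(S)])  # -> 565 540 445
```

Because each curve is a single smooth bump, it is far easier to approximate with a physical dye filter than the multi-lobed spectra of some earlier non-RGB proposals.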
RGB Color Filter Choices of Today’s Cameras
Sensing R directly inevitably leads to color errors because part of its ideal spectral response is negative. This is the primary source of color errors in today’s cameras.
Attempts to avoid R’s negative spectrum by using the XYZ color space, a linear transformation of the LMS color space, have been successful in color meters. However, the required filters have proven too complex for CFA implementation. With denoising now widely available, the motivation for using XYZ instead of LMS is unclear, since doing so trades the easy-to-remove chroma noise for the hard-to-remove luma noise.