Imagine a low-resolution sensor. A higher-resolution sensor can be obtained by cutting each pixel of the low-resolution sensor into four smaller pixels:
For a given exposure time, each smaller pixel receives only a quarter of the light that a large pixel does, the equivalent of dividing the exposure time by four. To obtain the same sensor response, the exposure time must therefore be multiplied by four, which means that the intrinsic ISO sensitivity of the high-resolution sensor is one quarter that of the low-resolution sensor.
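The light-gathering arithmetic above can be made concrete with a minimal sketch; the photon flux, exposure time, and pixel areas below are hypothetical values chosen for illustration, and only their ratios matter:

```python
# Hypothetical numbers: photon flux per unit area and exposure time are
# illustrative assumptions; only the ratios matter.
flux = 1000.0          # photons per unit area per second (assumed)
exposure = 0.01        # seconds (assumed)

large_area = 4.0       # area of one low-resolution pixel (arbitrary units)
small_area = 1.0       # each of the four sub-pixels has a quarter of that area

photons_large = flux * large_area * exposure   # 40 photons
photons_small = flux * small_area * exposure   # 10 photons: four times less light

# To restore the same per-pixel response, the exposure must be four times
# longer, i.e. the small-pixel sensor behaves as if its ISO were divided by 4.
assert photons_large == flux * small_area * (4 * exposure)
```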
Now assume that the same exposure time and the same ISO setting are used on a low-resolution camera and on a high-resolution camera with four times as many pixels. Since each high-resolution pixel is intrinsically less sensitive, a higher gain (analog or digital) must be applied to its signal to reach the same output level, which amplifies the noise.
Let I denote the gray level recorded by the sensor, and s_L(I) and s_H(I) the noise standard deviations on the low- and high-resolution sensors, respectively. With equivalent technology, the high-resolution sensor is noisier because of the higher gain: s_L(I) < s_H(I).
However, the four neighboring high-resolution pixels can be averaged to form one low-resolution pixel. Assuming the noise on the four pixels is independent, averaging divides the noise standard deviation by √4:

s_down(I) = s_H(I) / √4 = s_H(I) / 2

The new SNR is therefore

SNR_down(I) = I / s_down(I) = 2 I / s_H(I) = SNR_H(I) + 6 dB,

since a factor of 2 in amplitude corresponds to 20 log₁₀(2) ≈ 6 dB.
The loss of resolution buys a better SNR. We now have two images at the same resolution, shot in similar conditions. When both are printed at the same size on the same printer, the relevant comparison is therefore between s_L(I) and s_H(I)/2, that is, between SNR_L(I) and SNR_H(I) + 6 dB.
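The 6 dB figure can be checked numerically. The sketch below is a simulation under simplifying assumptions (uniform gray level, independent Gaussian noise, values chosen arbitrarily): it averages 2×2 blocks of a noisy high-resolution image and measures how much the noise standard deviation drops.

```python
import numpy as np

rng = np.random.default_rng(0)

I = 100.0        # uniform gray level (hypothetical value)
sigma_H = 8.0    # noise std of the high-resolution sensor (hypothetical value)

# High-resolution image: constant signal plus independent Gaussian noise.
high = I + rng.normal(0.0, sigma_H, size=(1024, 1024))

# Downsample by averaging each 2x2 block of neighboring pixels.
low = high.reshape(512, 2, 512, 2).mean(axis=(1, 3))

sigma_down = low.std()
print(f"std after averaging: {sigma_down:.2f}")   # close to sigma_H / 2 = 4.0

# Amplitude ratio of 2 expressed in decibels: 20 log10(2) ≈ 6 dB.
snr_gain_db = 20 * np.log10(sigma_H / sigma_down)
print(f"SNR gain: {snr_gain_db:.1f} dB")          # close to 6 dB
```

The measured standard deviation lands near s_H/2 because the mean of four independent samples has its standard deviation divided by √4, exactly the step used in the derivation above.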