Motivation behind DxOMark Score
Here we detail the motivations behind the DxOMark Score and the rationale for its conception.
A camera consists of many different components, and because so many factors related to all of these components have to be taken into account, it can be difficult to choose between models. To make this choice easier for photographers, we wanted to design an objective numerical quantity that globally represents the average image quality a given camera can achieve.
To fully embrace the complexity of a camera, however, it is necessary to move back and forth between several levels of detail for different parts of the system, much as one changes magnification under a microscope. When looking inside a camera, for example, the data in the Lens with Camera and Camera Sensor sections reveal many details about their respective target components. Taken together, they provide a more global view of the camera's image quality performance.
The idea behind the DxOMark Score is to quantify the amount of information captured by the camera, taking into account all the optical aberrations and sensor characteristics measured by DxO Labs. This quantity is called the information capacity of a camera.
Information capacity can also be defined as the product of the number of effective bits sampled at each position and the effective resolution of the camera. It is expressed in megabits (Mbits). The higher this number, the better the camera.
These two numbers (effective bits and resolution) depend on the characteristics of the camera, including such parameters as focal length and f-number, along with the amount of light coming into the camera.
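The definition above can be sketched in a few lines of code. This is a minimal illustration of the product formula, not DxO's actual computation; the input values are assumptions chosen for the example.

```python
# Illustrative sketch (not DxO's actual formula): information capacity as the
# product of effective bits per position and effective resolution.
def information_capacity_mbits(effective_bits, effective_resolution_mpix):
    """Return information capacity in megabits.

    effective_bits: useful bits sampled at each position (assumed value)
    effective_resolution_mpix: effective resolution in megapixels (assumed value)
    """
    return effective_bits * effective_resolution_mpix

# Hypothetical example: 2 effective bits per position on a 12 Mpix sensor.
print(information_capacity_mbits(2, 12))  # 24 Mbits
```

With these assumed values, the result is roughly twice the pixel count, consistent with the order of magnitude mentioned below.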
To compare different kinds of cameras, we choose a low-light use case in terms of illumination and exposure time. More precisely, we set the scene illumination to 150 lux and the exposure time to 1/60 s. We chose these conditions because we believe low-light performance is particularly important in today's photography, and because it is important for photographers to know how well lenses perform at their widest aperture.
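As a concrete illustration, the luminous exposure implied by these test conditions is simply the product of illuminance and exposure time:

```python
# Luminous exposure H = E * t (in lux-seconds) for the chosen low-light scenario.
illuminance_lux = 150     # scene illumination
exposure_time_s = 1 / 60  # shutter speed

luminous_exposure = illuminance_lux * exposure_time_s
print(luminous_exposure)  # 2.5 lux-seconds
```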
Information capacity is an open scale. However, for a given sensor, it is usually about twice the sensor pixel count.
Optical aberrations depend on image field position, which means that information capacity is first locally computed then summed over the whole image. Optical aberrations such as blur, lateral chromatic aberration, distortion (barrel and pincushion), and lens shading all influence the quality of the signal at each point, and result in reduced information capacity.
In addition to these optical limitations, the lens's T-stop determines the amount of light that crosses the optical system and eventually reaches the sensor. In general terms, less light means a noisier (and therefore less useful) signal. The signal-to-noise ratio (SNR) determines the number of useful digital values available to describe the luminance at each point.
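One standard way to relate SNR to a number of useful bits is the Shannon-style log2(1 + SNR) form. This is an information-theoretic assumption used here for illustration, not necessarily DxO's exact computation.

```python
import math

# Assumed Shannon-style relation between linear SNR and useful bits per
# sample; illustrative, not necessarily DxO's exact formula.
def effective_bits(snr_linear):
    """Useful bits per sample for a given linear SNR."""
    return math.log2(1 + snr_linear)

# A linear SNR of 100 (40 dB) supports roughly 6.7 useful bits.
print(round(effective_bits(100), 1))  # 6.7
```

This makes the low-light trade-off concrete: as the signal (and SNR) drops, the number of useful bits per position drops, and with it the information capacity.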
A final characteristic to consider is the way each sensor "sees" color, i.e., its spectral response. These responses differ from one sensor to another, and also differ from the response of the human eye. In order to output an image with a similar rendering regardless of the sensor, the color space of the sensor is mapped to a standardized color space (such as sRGB). This mapping usually amplifies noise, and therefore decreases the information capacity.
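The noise amplification caused by color-space conversion can be illustrated numerically. Mapping sensor RGB to a standard space applies a 3x3 matrix per pixel; for uncorrelated per-channel noise, each output channel's noise is scaled by the root-sum-of-squares of the corresponding matrix row. The matrix below is purely illustrative, not a measured sensor's.

```python
import math

# Hypothetical sensor-to-sRGB color matrix (illustrative values only).
# Strong off-diagonal terms are typical and are what amplify noise.
color_matrix = [
    [ 1.8, -0.6, -0.2],
    [-0.4,  1.7, -0.3],
    [-0.1, -0.5,  1.6],
]

def noise_amplification(matrix):
    """Per-output-channel noise gain for unit, uncorrelated input noise."""
    return [math.sqrt(sum(c * c for c in row)) for row in matrix]

for channel, gain in zip("RGB", noise_amplification(color_matrix)):
    print(channel, round(gain, 2))  # every gain is > 1: noise is amplified
```

Since each gain exceeds 1, the converted image is noisier than the raw sensor data, which is why this mapping decreases the information capacity.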