256 Shades of Grey: The Story of Shot Noise, Part 1

People use the term image noise somewhat loosely to capture any non-ideality that detracts from the “ideal” image. Contributors include fixed pattern noise (FPN), photoresponse nonuniformity (PRNU), temporal random read noise, and even artefacts like electronic exposure control feedthrough. Shot noise is also an important contributor, but it rarely receives the same level of attention as some of the other sources. That has always struck me as surprising, since in machine vision applications shot noise is usually the noise source that ends up limiting application performance. In this post I’ll review how shot noise shows up and compare it to the much more commonly discussed read noise.

Shot noise shows up wherever there is illumination. Unlike read noise, which is typically treated as independent of light level, the magnitude of the shot noise scales with the strength of the illumination: it is exactly equal to the square root of the average number of detected photons. So in the absence of any light the only source of noise is read noise, but as illumination increases the shot noise eventually overtakes it and dominates. Fig. 1 illustrates this graphically. As we push the read noise down further and further with each new generation of image sensor, that crossover point moves to lower and lower illumination levels.

Fig. 1. The differing behaviors of read noise and shot noise versus illumination level.
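Because photon arrival is a Poisson process, the square-root relationship is easy to verify numerically. Here is a minimal Python/NumPy sketch (the 2 e- read noise value is purely an illustrative assumption) that measures the spread of simulated photon counts and combines it with read noise in quadrature:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
read_noise_e = 2.0  # assumed read noise in electrons RMS, for illustration only

for mean_photons in (1, 4, 16, 64, 256, 1024):
    # Photon arrival is Poisson: simulate many exposures of a single pixel
    samples = rng.poisson(mean_photons, size=100_000)
    shot_noise = samples.std()                        # ≈ sqrt(mean_photons)
    total_noise = np.hypot(shot_noise, read_noise_e)  # quadrature sum
    print(f"mean={mean_photons:5d}  shot={shot_noise:6.2f}  "
          f"sqrt(mean)={np.sqrt(mean_photons):6.2f}  total={total_noise:6.2f}")
```

At low means the assumed 2 e- read noise dominates the total; by a few tens of photons the shot noise has taken over, which is exactly the crossover that Fig. 1 depicts.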

Of more relevance to applications engineers is the impact on SNR. Assuming a conventional detector in which the signal level increases linearly with illumination, the SNR increases linearly with illumination in the read-noise-limited regime, but only as the square root of illumination in the shot-noise-limited regime. Note that although the shot noise grows with illumination, the signal grows faster; as a result the SNR always improves as we add illumination, even when shot noise dominates.
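Putting numbers to this: with the signal S expressed in electrons, the total noise is the quadrature sum of shot noise (sqrt(S)) and read noise, so SNR = S / sqrt(S + σ_read²). A short sketch, assuming the 10 e- read noise used in the example below:

```python
import numpy as np

def snr(signal_e, read_noise_e=10.0):
    """SNR of a linear detector: signal over the quadrature sum of
    shot noise (sqrt of the signal, in electrons) and read noise."""
    return signal_e / np.sqrt(signal_e + read_noise_e**2)

for s in (1, 10, 100, 1_000, 10_000):
    print(f"signal={s:6d} e-  SNR={snr(s):7.2f}")
```

The printed SNR grows roughly tenfold between 1 and 10 electrons (read-noise limited, linear) but only about threefold between 1,000 and 10,000 electrons (shot-noise limited, square root).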

In machine vision applications the goal is typically to identify an object, be it something in the scene to be dealt with or a defect to be recognized. As an example, consider a scene (Fig. 2) to be imaged by a linescan sensor, consisting of one white object and two “grey” objects, all sitting against a black background. The two grey objects differ slightly in brightness: one sits at 50% of the white object’s level, the other at 55%. Now consider what the detected signal looks like for a detector with a QE of 100% and 10 e- of read noise. The plots in Fig. 3 illustrate what we could expect to see at differing light levels.

Fig. 2 – Idealized scene.

Fig. 3. The captured image at different illumination levels. In each case the read noise is 10 e-. An offset of 100 DN has arbitrarily been applied to prevent signal clipping.
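A simulation along the lines of Fig. 3 is easy to sketch. The QE of 100%, the 10 e- read noise, and the 100 DN offset come from the example above; the object positions, the line length, and a conversion gain of 1 DN per electron are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_linescan(white_photons, read_noise_e=10.0, offset_dn=100.0):
    """One captured line of the idealized scene: a black background with
    a white object and two grey objects at 50% and 55% of the white level.
    Assumes QE = 100% and a conversion gain of 1 DN per electron."""
    scene = np.zeros(400)    # reflectance profile of the scene
    scene[50:120] = 1.00     # white object
    scene[170:240] = 0.50    # grey object #1
    scene[290:360] = 0.55    # grey object #2
    electrons = rng.poisson(scene * white_photons)                     # shot noise
    electrons = electrons + rng.normal(0.0, read_noise_e, scene.size)  # read noise
    return electrons + offset_dn  # offset to prevent signal clipping

for n_photons in (10, 100, 1000):
    line = simulate_linescan(n_photons)
    print(f"{n_photons:5d} photons: grey1={line[170:240].mean():7.1f} DN, "
          f"grey2={line[290:360].mean():7.1f} DN, "
          f"spread≈{line[170:240].std():5.1f} DN")
```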

For an illumination level that delivers 10 photons per pixel from the white object, the read noise dominates and the SNR is <1: the objects cannot be discerned from the background, nor from each other. At 100 photons the shot noise and read noise contributions are equal, and the SNR is sufficiently >1 that we can discern the objects relative to the background, but it is still too low to distinguish the two grey objects. Even with 1000 photons of illumination, by which point we are firmly in the shot noise regime, the grey objects cannot be convincingly distinguished; more light is needed before the SNR reaches a point at which we can make a convincing decision.
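A quick back-of-the-envelope check (same assumptions as above) makes the point numerically: per pixel, the two grey levels sit only about 1.4 standard deviations apart even at 1000 photons, and it takes several thousand photons before the separation becomes convincing.

```python
import numpy as np

read_noise_e = 10.0  # as in the example above

for white in (10, 100, 1_000, 10_000):
    g1, g2 = 0.50 * white, 0.55 * white  # mean grey signals, in electrons
    var1 = g1 + read_noise_e**2          # shot variance + read noise variance
    var2 = g2 + read_noise_e**2
    # Per-pixel significance of the difference between the two grey levels
    sigma_sep = (g2 - g1) / np.sqrt(var1 + var2)
    print(f"white={white:6d} e-: grey levels separated by {sigma_sep:4.1f} sigma/pixel")
```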

Perhaps the most interesting thing to note here is that the ability to distinguish the objects has very little to do with the read noise. True, at the very lowest light levels the read noise obscures the objects completely relative to the black background. But once even a small amount of light is added the shot noise dominates, and to reach the SNR targets required to distinguish the different brightness levels in the scene (especially the two shades of grey) we need sufficient illumination that we are very firmly in the shot noise regime.

For completeness, Fig. 4 illustrates the corresponding results for a fixed illumination level and differing levels of read noise. Note that lower levels of read noise increase our ability to detect objects relative to the black background, but even for zero read noise we can’t distinguish between shades of grey.

Fig. 4. The captured image for a fixed illumination level and differing read noise values. The illumination level is such that the white object would deliver 10 photons per pixel to the detector.
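Continuing the sketch from Fig. 3, the same hypothetical simulate_linescan function can be reused with the illumination pinned at 10 photons from the white object and the read noise swept instead:

```python
# Fixed illumination (10 photons from the white object), varying read noise.
for rn in (10.0, 3.0, 1.0, 0.0):
    line = simulate_linescan(10, read_noise_e=rn)
    print(f"read noise={rn:4.1f} e-: white={line[50:120].mean():6.1f} DN, "
          f"background={line[0:50].mean():6.1f} DN "
          f"(background std={line[0:50].std():4.2f} DN)")
```

As the read noise drops the black background flattens out and the white object stands clear, but the grey objects (means of roughly 5 and 5.5 electrons, each carrying over 2 electrons of shot noise) remain hopelessly entangled.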

The takeaway is that read noise is generally our biggest concern when we need to distinguish an object from a signal-less (i.e. perfectly black) background. But when we need to distinguish objects from each other, or from an illuminated background generally, shot noise is the key limitation.

Stay tuned for a subsequent post where I’ll comment on how this influences sensor design and sensor selection.

Eric F

About Eric F

Eric has been involved in the world of photonics for more than 25 years. Most of that time has been spent developing image sensors, first with CCD technology, and for the past 14 years with CMOS. Favourite activities outside of work are family and almost anything related to music.
