256 Shades of Grey in the World of Machine Vision – Part 2

In my last post I described the role of shot noise in determining the ability of machine vision systems to detect objects. Specifically, I pointed out that, in my experience, shot noise rather than detector read noise is more often the dominant noise contributor, even though read noise typically receives far more attention in product promotional material. In this post, I will explore what image sensor designers and applications engineers can do to help minimize the impact of shot noise.

I have always been a sports car nut. In the quest for more performance there are many tweaks that can be made, but if you really want more performance then ultimately you have to increase the amount of fuel and air being pumped through the engine. There is a parallel in imaging – if you want to boost SNR (Signal-to-Noise Ratio) performance in a shot-noise-limited application then you have to pump more photons through the image sensor. Because photon arrival follows Poisson statistics, the shot noise on a signal of N detected photons is √N, so the SNR can never exceed √N. For example, if you are collecting only 100 photons per pixel then your SNR can never be better than 10, no matter how low you push the read noise. And if the application requires an SNR of 100 to make an accurate decision, then you have to find a way for your pixel to see a signal that corresponds to 10,000 detected photons.
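To make the arithmetic concrete, here is a minimal Python sketch of the shot-noise-limited SNR relationship (the function names are my own, purely for illustration):

```python
import math

def shot_noise_limited_snr(detected_photons: float) -> float:
    """Shot-noise-limited SNR: signal / noise = N / sqrt(N) = sqrt(N).

    Photon arrival is a Poisson process, so the noise (standard
    deviation) on a mean signal of N detected photons is sqrt(N).
    """
    return math.sqrt(detected_photons)

def photons_needed_for_snr(target_snr: float) -> float:
    """Invert the relationship: you need N = SNR^2 detected photons."""
    return target_snr ** 2

print(shot_noise_limited_snr(100))    # 10.0   -> SNR can never exceed 10
print(photons_needed_for_snr(100))    # 10000  -> 10,000 detected photons
```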

That leaves two options – train more light on the scene, or collect more of the available light. Increasing illumination is a challenge that belongs to the system integrator – there is little a sensor designer can do about that (and of course, in passively illuminated applications like satellite earth imaging, there isn’t much the system integrator can do either!). What the sensor designer can do is make better use of the available light. One solution is to increase the pixel size. Of course that comes at a cost – assuming that spatial resolution and field of view cannot be sacrificed, it requires image sensors and lenses that are larger and therefore more costly and more bulky. However, this is precisely what happens in applications where imaging performance is most demanding. For example, it is one of the key reasons why pixel sizes in professional digital still cameras are larger than in mobile imaging cameras.
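Since collected photons scale with pixel area, the shot-noise-limited SNR scales linearly with pixel pitch. A quick sketch of that scaling (the pixel pitches below are illustrative assumptions, not figures from any particular sensor):

```python
def snr_gain_from_pixel_pitch(pitch_old_um: float, pitch_new_um: float) -> float:
    """Relative shot-noise-limited SNR gain from a larger pixel.

    Collected photons scale with pixel area (pitch squared), and
    shot-noise-limited SNR scales with sqrt(photons), so SNR scales
    linearly with pitch (same optics, exposure, and QE assumed).
    """
    area_ratio = (pitch_new_um / pitch_old_um) ** 2
    return area_ratio ** 0.5

# e.g. moving from a 1.4 um mobile-class pixel to a 5.5 um pixel:
print(snr_gain_from_pixel_pitch(1.4, 5.5))   # ~3.9x SNR improvement
```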

Alternatively, the pixel size can remain the same and the detector QE (quantum efficiency) can be increased. One of the more effective ways to achieve this is through back side illumination (BSI). BSI is a must-have for space imaging applications, principally because external illumination cannot be applied. Equally, it is now very common in mobile applications, where the cost pressure to decrease pixel size is so intense.
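The QE lever is easy to quantify: detected photo-electrons are simply incident photons multiplied by QE. A small sketch (the QE values here are illustrative assumptions; real values vary with sensor design and wavelength):

```python
def detected_photons(incident_photons: float, qe: float) -> float:
    """Detected photo-electrons = incident photons * quantum efficiency."""
    return incident_photons * qe

# Illustrative QE values only; actual numbers vary by sensor and wavelength.
fsi = detected_photons(10_000, 0.55)   # front side illuminated pixel
bsi = detected_photons(10_000, 0.85)   # back side illuminated pixel
print(fsi ** 0.5, bsi ** 0.5)          # shot-noise-limited SNR: ~74 vs ~92
```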

Another way to increase light collection efficiency is to introduce parallelism. Time Delay and Integration (TDI) architectures achieve this goal very effectively. TDI is a scanning modality in which a single-row line scan detector is replaced by a detector with multiple pixel rows. Effectively, the scene is imaged with as many different exposures as there are pixel rows: summing N rows multiplies the signal by N while the shot noise grows only as √N, so SNR improves by √N. This provides the same advantage as could be obtained with larger pixels or with longer integration times, but without impacting lens selection, spatial resolution, field of view, or temporal resolution. It is as close to getting “something for nothing” as I can point to in the imaging world, and the reason why these types of sensors are now so ubiquitous in high performance line scan applications.
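A quick Monte Carlo sketch of that √N behaviour (the stage counts and photon levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def tdi_snr(photons_per_stage: float, n_stages: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of SNR for an n-stage TDI line scan sensor.

    Each stage contributes an independent Poisson exposure of the same
    scene point; summing n stages multiplies the signal by n and the
    shot noise by sqrt(n), so SNR improves by sqrt(n).
    """
    totals = rng.poisson(photons_per_stage, size=(trials, n_stages)).sum(axis=1)
    return totals.mean() / totals.std()

print(tdi_snr(100, 1))    # ~10 (single-row line scan)
print(tdi_snr(100, 64))   # ~80 (64-stage TDI: sqrt(64) = 8x improvement)
```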

Of course, in order to make use of all of the extra photons that are available with any of these solutions, the charge handling capacity must scale appropriately. That isn’t difficult to do when the pixel size is increased, but it can be a challenge when the solution takes the form of increased QE or increased illumination. Mobile phone imagers, for example, can get by with a few thousand electrons’ worth of pixel capacity, while machine vision and demanding human vision applications (e.g. professional DSC, commercial video capture) typically require a minimum of 20 to 30 ke- of pixel capacity.
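The full well sets a hard ceiling on SNR, because the signal can never exceed the pixel capacity. A one-line sketch of that ceiling:

```python
def max_snr_from_full_well(full_well_electrons: float) -> float:
    """Best-case shot-noise-limited SNR at saturation = sqrt(full well)."""
    return full_well_electrons ** 0.5

print(max_snr_from_full_well(5_000))    # ~71  (mobile-class pixel)
print(max_snr_from_full_well(30_000))   # ~173 (machine vision class pixel)
```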

So what are the takeaways from these two posts?

  • First, look carefully at what you need to accomplish in your application. If your goal is to detect dim objects against a completely black background then read noise may be what impacts your system performance most. If you need to detect objects relative to each other or against a background that is not completely black, then there is a good chance that shot noise is your bigger challenge.
  • The second takeaway is that if shot noise is your biggest limitation, then a good first step is to determine the number of photons you need to see, and then select a detector with a corresponding pixel capacity that maximizes your sensitivity to the available photons (a sketch of that calculation follows below).
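Putting the two posts together, here is a minimal sketch of that sizing step (the headroom factor is my own assumption, there to keep the working signal comfortably below saturation):

```python
def required_pixel_capacity(target_snr: float, headroom: float = 2.0) -> float:
    """Full-well capacity (electrons) needed for a target shot-noise-limited SNR.

    The signal must reach target_snr**2 electrons; the headroom factor
    (an assumption here) leaves margin below saturation.
    """
    return headroom * target_snr ** 2

print(required_pixel_capacity(100))   # 20000.0 -> consistent with the 20-30 ke- guideline
```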

’til next time.

Eric F

About Eric F

Eric has been involved in the world of photonics for more than 25 years. Most of that time has been spent developing image sensors, first with CCD technology, and for the past 14 years with CMOS. Favourite activities outside of work are family and almost anything related to music.
