My name is Matthias. Like Cher, I’ve decided not to publish under my full name, for a few reasons, but mostly as a result of recent conversations with my wife about spam (not the meat) and security. I admit, I’ve been reluctant to adopt social media generally – because in my view, all of this capability for communicating has, in fact, made us less social. Have you noticed how many people text/email someone who happens to be sitting right beside them? Or in the next cubicle?
For now, you’ll learn more about me as I post to this blog. Suffice it to say, I believe I could probably post every day for a year just recalling my years with DALSA, well, Teledyne DALSA – except that like you, I get caught up in the daily requirements of my work and find myself too busy. Now that we’re here, I am looking forward to sharing with you my perspective on sensor design, pixels, and the challenges of delivering leading technology to our many customers with their very many interesting uses for it.
Quantum Efficiency is the measure of how well a given sensor converts photons (of a given wavelength) into electrons (as measured at the sensor output). At the beginning of this process there is a physical starting point: photons have to arrive at the sensor surface (we can discuss optical parameters relating to the scene, lenses, etc. another time). We set this number at 100%. In the graphic below, it is illustrated by the parallel light rays reaching the pixel surface.
Next, these photons have to travel through the sensor’s optical stack (which is the front of the sensor optically, but commonly referred to as the “backend” in wafer-processing terms), where loss occurs due to reflection at metal and oxide interface layers. The illustration below shows a single pixel being exposed to light (rays) with and without micro lenses applied. The loss is illustrated as light rays not reaching the photodiode.
Once through the optical stack, only a fraction of the original photons arrive “safely”. We can call this percentage M%.
Now the material science of silicon kicks in and converts these M% photons into electrons. This is a physical property of silicon and pretty much the same for all image sensors based on plain, implanted silicon photodiodes. Note that there can be wavelength-dependent differences, depending on the implant profiles!
So, let’s say my silicon has a photon-to-electron conversion factor of N%. This N% number is typically referred to as “Quantum Efficiency” or “QE”. Note that this is largely a physical property of the material.
What we really care about is the combination of all effects. As a user, you won’t be so concerned with the material property of the silicon, but with how much light (photons) you need to send to get a certain number of electrons (output signal). So to get the “effective Quantum Efficiency” or “eff. QE” we need to consider all loss mechanisms:
eff QE = QE * optical stack transmission = N% * M%
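The formula above can be sketched in a few lines of Python. The numbers used here (80% silicon QE, 60% stack transmission) are purely illustrative examples, not values from any real sensor:

```python
# Sketch: effective QE is the product of the silicon conversion
# efficiency (N%) and the optical stack transmission (M%).
# Example values are illustrative only.

def effective_qe(silicon_qe: float, stack_transmission: float) -> float:
    """Fraction of photons arriving at the sensor surface that end up
    as electrons at the sensor output."""
    return silicon_qe * stack_transmission

# N% = 80% silicon QE, M% = 60% of photons survive the optical stack
eff = effective_qe(0.80, 0.60)
print(f"effective QE = {eff:.0%}")  # effective QE = 48%
```

The point of splitting the two factors is that they come from different places: M% is set by the optical stack design, N% by the silicon itself.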
In simple cases the optical stack transmission is provided as the “Fill Factor” of a given pixel. Here one must distinguish between a physical Fill Factor (i.e. the ratio of the non-metallized pixel region to the total pixel area) and an optical Fill Factor (e.g. with a micro lens, the metal aperture is much less limiting). Good micro lenses achieve effective Fill Factors near 100%, while a good pixel design usually has a physical Fill Factor around 50% (when we look at Machine Vision type pixels in the realm of 5 um pitch).
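To see what the fill factor means in practice, here is a small sketch comparing how many photons must reach the pixel to collect a target signal, with and without a micro lens. The 70% silicon QE and the target of 10,000 electrons are assumed example values; the 50% and near-100% fill factors are the ballpark figures discussed above:

```python
# Sketch: photons required at the pixel surface for a target electron
# count, given silicon QE and fill factor. All numbers are assumed
# examples, not measured sensor data.

def photons_needed(target_electrons: float, silicon_qe: float,
                   fill_factor: float) -> float:
    """Photons that must arrive at the pixel to collect the target signal."""
    return target_electrons / (silicon_qe * fill_factor)

qe = 0.70  # assumed silicon QE
no_lens = photons_needed(10_000, qe, 0.50)    # ~50% physical fill factor
with_lens = photons_needed(10_000, qe, 0.95)  # near-100% optical fill factor

print(f"without micro lens: {no_lens:.0f} photons")
print(f"with micro lens:    {with_lens:.0f} photons")
```

Roughly twice the light is needed without the micro lens, which is why micro lenses matter so much for small-pitch pixels.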
Next we will review what happens when the light incidence is not normal (i.e. vertical to the pixel surface) but comes in at a non-zero Chief Ray Angle (“CRA”).