Most would point to the middle part of the last century as the birth of electronic imaging. RCA, Philips, EMI and others introduced several generations of tube image sensors with names like Orthicon, Plumbicon, and Vidicon. In all cases the devices were what we would today refer to as hybrid sensors – the sensing and readout operations were accomplished with physically different elements and materials. In some cases even the act of sensing – i.e. photoelectric conversion, multiplication of that charge signal, and storage of the charge on a target screen – was divided among distinct elements and materials. This separation was driven more by a lack of practical alternatives than by choice, but it had the advantage that engineers at the time could select the best material for each step in the image acquisition process.
In the last quarter of the century that all changed. Monolithic integration was the name of the game in all aspects of electronics, including imaging. In CCDs, photoelectric conversion, charge storage, and readout were integrated onto a common silicon substrate. This was a tremendous advancement – in addition to allowing for improved imaging performance, the resulting miniaturization ushered in the mass commercialization of electronic imaging that is so ubiquitous today. That isn’t to say that hybrid solutions vanished – phosphors and scintillators were placed over the CCD surface to extend sensitivity to the UV and X-ray, and low-bandgap materials were bump-bonded to silicon to extend sensitivity to the IR. But these were specialty processes for specialty applications.
At the end of the century, advances in photolithography made CMOS image sensors practically manufacturable. Here monolithic integration went a step further, incorporating elements like signal processing and timing generation on the same substrate as the photoelectric conversion and the readout. A camera on a single chip became possible.
But arguably that trend has reversed over the past few years. Advances in micromachining and materials science have made it possible to once again split functionality among physically different elements and materials while still maintaining the advantages associated with integration. Sony’s recent announcement of a stacked chip assembly is only one of many in which multiple chips are stacked to allow optimization of the different elements in the signal chain. And companies like InVisage are banking on the deposition of exotic photoelectric materials on top of silicon readout chips as the path forward to improved imaging performance.
At Teledyne DALSA we continue to use hybrid solutions principally to extend spectral range into the X-ray and IR, but we are also investigating hybrid architectures for visible image sensors that will allow us to increase light collection efficiency, to achieve very high data throughputs in the visible, and more. Single silicon substrate solutions are still many years away from becoming dinosaurs (use of the term “dinosaur” should take some of you back to discussions in the late 1990s about CCD vs. CMOS!). Still, it is enticing both for image sensor developers and for image sensor users to think about what may be possible by continuing the trend “backwards” towards greater decoupling through hybridization.