A Preview of the Color Machine Vision Tutorial @ Automate 2013

On January 23, I’ll be presenting a tutorial on Color Machine Vision at the AIA’s Automate Show. Attendees who pass a subsequent examination will become Certified Vision Professionals (CVP). Here are a few key points to pique your interest in attending the course. Hope to see you there.

Every vision system requires three things: (1) a source of illumination, (2) material to inspect, and (3) one or more sensors for measuring the illumination reflected or transmitted by the material being inspected. A simplified model of this process is that the spectra (power distributed by wavelength) of these three components are multiplied together and integrated over wavelength to give an output value from a sensor.
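To make the model concrete, here is a minimal numerical sketch in Python with NumPy. The Gaussian spectra are made up, standing in for real illumination, reflectance, and sensor-response curves:

```python
import numpy as np

# Minimal sketch of the sensor model: output = integral over wavelength of
# illumination * reflectance * sensor response. All three spectra are made up.
wl = np.arange(400, 701, 5)  # visible wavelengths, nm

def gaussian(center, width):
    """A made-up smooth spectrum peaked at `center` nm."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

illumination = gaussian(560, 150)  # broadband, white-ish light
reflectance = gaussian(620, 40)    # a reddish object
red_sensor = gaussian(600, 50)     # a broadly tuned "red" channel

# Multiply the three spectra, then integrate over wavelength (trapezoid rule).
output = np.trapz(illumination * reflectance * red_sensor, wl)
print(f"red channel output: {output:.1f}")
```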

The goal of color inspection is to recover an object’s spectrum given only the outputs from a set of sensors. At least two broadly tuned sensor “types” are needed for this; otherwise you couldn’t tell a change in color from a change in the intensity of the illumination or of the object’s reflection. Machine vision takes human vision as its model, so cameras use three sensor types: red, green, and blue (RGB). The more sensor types, the better the spectral resolution.
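Here is a small sketch of why one channel isn’t enough: halving the light level halves a single channel’s output, but the ratio of two channels moves only when the color changes. The spectra are again made-up Gaussians:

```python
import numpy as np

# Why one channel isn't enough: halving the light halves a single channel's
# output, but the ratio of two channels moves only when the color changes.
wl = np.arange(400, 701, 5)  # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def sense(illum, refl, sensor):
    # Multiply the spectra and integrate over wavelength.
    return np.trapz(illum * refl * sensor, wl)

light = gaussian(560, 150)  # made-up broadband light
apple = gaussian(620, 40)   # made-up reddish reflectance
red_ch, green_ch = gaussian(600, 50), gaussian(540, 50)

for scale in (1.0, 0.5):    # full vs. half illumination intensity
    r = sense(scale * light, apple, red_ch)
    g = sense(scale * light, apple, green_ch)
    print(f"scale={scale}: R={r:.1f}  G={g:.1f}  R/G={r / g:.3f}")
# R and G each halve, but R/G is unchanged: that's color, not brightness.
```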

Figure 1 shows illumination reflecting from a red apple to give RGB “color” samples at each pixel in the camera. “Color” is in quotes because a machine doesn’t see color; color is a construct of our perception. The task in this case might be to distinguish red apples from green ones, or to detect bruises by their color.

[Figure 1: Illumination reflects from a red apple to give RGB “color” samples at each pixel in the camera.]
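For the apple task, a hypothetical per-pixel rule might threshold the red-to-green ratio. The function name and threshold below are invented for illustration, not from the tutorial:

```python
import numpy as np

# Hypothetical per-pixel rule for the Figure 1 task: call the apple red or
# green from the fraction of pixels whose red/green ratio exceeds a threshold.
# The function name and threshold are invented here for illustration.
def classify_apple(rgb_image: np.ndarray, ratio_threshold: float = 1.3) -> str:
    """rgb_image: H x W x 3 array of R, G, B samples."""
    r = rgb_image[..., 0].astype(float)
    g = rgb_image[..., 1].astype(float) + 1e-6  # avoid division by zero
    red_fraction = np.mean(r / g > ratio_threshold)
    return "red apple" if red_fraction > 0.5 else "green apple"
```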

Because the illumination, object, and sensor responses are multiplied, we can’t distinguish changes in one factor from changes in another. For example, if the light’s spectrum changes, you can’t tell this apart from a change in the object’s reflectance. To recover the object’s color, you have to know the illumination spectrum and the sensors’ spectral responses. Our visual system estimates the illumination spectrum and “knows” its own sensor responses. In machine vision, we control the illumination spectrum and know the camera’s sensor responses.
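A quick way to see the ambiguity: the sensor outputs depend only on the product of illumination and reflectance, so any spectral reshaping of one can be hidden by the inverse reshaping of the other. A sketch, again with made-up spectra:

```python
import numpy as np

# The outputs depend only on the product illumination * reflectance, so any
# spectral reshaping of one is hidden by the inverse reshaping of the other.
wl = np.arange(400, 701, 5)

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

light, apple = gaussian(560, 150), gaussian(620, 40)
sensors = [gaussian(c, 50) for c in (600, 540, 460)]  # made-up R, G, B

tint = 0.5 + gaussian(500, 60)  # an arbitrary change to the light's spectrum
for illum, refl in [(light, apple), (light * tint, apple / tint)]:
    outputs = [np.trapz(illum * refl * s, wl) for s in sensors]
    print(["%.2f" % v for v in outputs])  # both rows identical: ambiguity
```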

In practice, we don’t divide out the illumination and sensor responses to determine an object’s color. Instead, we calibrate object colors to a standard, using a colorimeter, and then reference the sensor outputs to that standard. These standards are derived from measurements of human vision; perhaps you’ve seen the CIE 1931 “horseshoe” diagram, which shows the full range of colors we can perceive. Any real device can generate or detect only a portion of that range.
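One common way to reference sensor outputs to a standard (one approach among several, not necessarily the method I’ll cover in the tutorial) is to fit a 3x3 correction matrix from camera RGB to colorimeter readings of a set of calibration patches. A sketch with synthetic placeholder data:

```python
import numpy as np

# Fit a 3x3 matrix M so that camera RGB mapped through M matches reference
# values (e.g. CIE XYZ) of calibration patches. In practice the patch data
# come from a color chart imaged by the camera and measured by a colorimeter;
# here they are synthetic placeholders.
rng = np.random.default_rng(0)
camera_rgb = rng.uniform(0.1, 1.0, size=(24, 3))  # 24 patches, camera RGB
true_M = np.array([[1.20, -0.10, 0.00],
                   [0.05,  0.90, 0.05],
                   [0.00, -0.05, 1.10]])
reference_xyz = camera_rgb @ true_M.T  # stand-in "colorimeter" readings

# Least-squares fit: camera_rgb @ x ~ reference_xyz, so M = x.T.
x, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
M = x.T

corrected = camera_rgb @ M.T  # calibrated values for new measurements
print(np.allclose(corrected, reference_xyz))  # True on this synthetic data
```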

Unfortunately, a colorimeter calibrates only at a single point, and it is difficult to extend that calibration over an entire image. For example, the angle of the part can change the measured color, and the lighting might not be uniform across the image.
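One partial remedy for non-uniform lighting, sketched below as an illustrative assumption rather than tutorial material, is flat-field correction: image a uniform white reference and use it as a per-pixel gain map. It does nothing about part-angle effects:

```python
import numpy as np

# Flat-field correction: image a uniform white reference, then use it as a
# per-pixel gain map so a single-point calibration holds across the image.
# This assumes stable lighting and does not correct part-angle effects.
def flat_field(image: np.ndarray, white_ref: np.ndarray) -> np.ndarray:
    """image, white_ref: H x W x 3 float arrays from the same camera setup."""
    gain = white_ref.mean(axis=(0, 1), keepdims=True) / (white_ref + 1e-6)
    return image * gain
```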

It’s hard to know where to stop on a subject this broad, but as I said, I’ll be presenting this topic at Automate 2013 in just a few short days. If you’re interested in learning more, join me at the show.

Ben

About Ben

I earned M.S.E.E. and Ph.D. degrees from Stanford, was at MIT for many years, and have been working in the vision business longer than I want to admit. At Teledyne DALSA, I develop vision algorithms, provide customer support on difficult vision tasks, and do “technical marketing” — writing papers, blogging, and lecturing.

