Welcome, Robot Overlords

Aug 8, 2014
Glen

Lost in Space, c. 1966

Every so often a technological advancement renews concern about the future of humanity as we know it. In recent months, the role of technology in enabling robots has surfaced – again. I think the reason “robots” get a bad rap is that those of my generation grew up with robot characters like The Robot from Lost In Space and Rosie the Robot from The Jetsons (OK, I’m dating myself; I’m sure you can find many more modern examples). These characters are robots with a vaguely human shape to them. Today’s robots are nearly indistinguishable from human beings. These robotic forms make it easy for us to anthropomorphize them – to attribute human emotions and feelings to their actions. Those of us in the technology business know that robots do not “think” this way.

Turing Test

The latest technology scare occurred when we were told that a computer program had passed the famous Turing Test. The Turing Test was proposed by Alan Turing, a British cryptographer considered one of the pioneers of modern computing. His test involves humans communicating with a computer program through a “chat” mechanism. If the program can convince at least 30% of the humans communicating with it that it is a real person and not a computer program, it is said to have passed the Turing Test. Other programs have come close before, but none had reached the 30% mark.

Eugene Goostman

Recently, a computer program called Eugene convinced 10 of 30 judges that he was a 13-year-old boy from Ukraine. While it is quite debatable whether this program passed the spirit of the test as well as the letter of it, all most people heard was, “Welcome, robot overlords.”

Artificial Intelligence (AI)

The anthropomorphizing of robots is probably fueled, at least in part, by publications from researchers who work in the field of artificial intelligence, or AI. Their goal, in fact, is to impart human characteristics to computer programs, such as the ability to learn from mistakes. AI has proven to be quite daunting. We still do not fully understand how the human brain works, so how can we program something we don’t understand into a computer? I don’t doubt that people in the AI field will be able to make great strides using the parts of human cognition that we do understand. Perhaps AI will actually teach us a great deal about how the human mind does, or at least does not, work.

Robots In Industry

I don’t know of any robots in industry that are using AI to perform their jobs. I do know that AI has been used in vision systems for things like classification of defects. By training a vision program on a large set of different defects, the program can start correlating things like shape, brightness, and texture with those defects, then sort new defects into one of these pre-defined classification bins based on that information. But this is a long way from a completely autonomous robot, and certainly a long way from anything we humans would describe as “thinking.”
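To make that concrete, here is a minimal sketch of such a trained classifier, assuming scikit-learn; the feature set and defect classes are hypothetical stand-ins for whatever a real inspection system actually measures:

```python
# Minimal sketch of a trained defect classifier; assumes scikit-learn.
# The features and class names are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each known defect is summarized by a few measured features:
# [area_px, aspect_ratio, mean_brightness, texture_contrast]
train_features = np.array([
    [120, 1.1, 0.82, 0.10],   # e.g. a bright, round "pit"
    [900, 8.5, 0.35, 0.55],   # e.g. a long, dark "scratch"
    [450, 1.3, 0.50, 0.90],   # e.g. a highly textured "inclusion"
])
train_labels = ["pit", "scratch", "inclusion"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_features, train_labels)

# A new defect is measured, then dropped into a pre-defined bin:
new_defect = np.array([[870, 7.9, 0.40, 0.60]])
print(clf.predict(new_defect))  # most likely ['scratch']
```

Real systems train on thousands of labeled defects rather than three, but the correlate-then-categorize flow is the same.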

Conclusion

Technology continues to advance at incredible rates. And while someday a combination of technologists and biologists may crack the code for human thought, I don’t think any of us need to be worrying about sentient robots taking control of the planet any time this century. But, just to be on the safe side (see Pascal’s Wager), let me be the first to say, “Welcome, robot overlords.”

Posted by Glen in AI, Image processing, Image Sensors, Machine Vision

Who needs Captain America when we’ve got Heroes with Vision?

May 27, 2014
Kirk Petersen
Superhero (credit: Marvel Comics)

Last month I had the opportunity to attend the AIA Vision Show in Boston. With some 2,000+ attendees and 100+ exhibitors, this conference is not large compared to other North American trade events, but it is an important one. It’s important because it is one of a small handful of trade conferences singularly focused on industrial vision. Jeff Burnstein and the team at the AIA host a great event; the showcased technology, practical seminars, training sessions, and opportunities for networking are all first rate.

Now, anyone who is employed in a technology industry knows that the only constant is change. Technology companies are built on their ability to deliver better, faster, and cheaper solutions. Certainly, the machine vision industry is no different. If you attended the AIA event you know what I mean. Every exhibitor was highlighting its incremental technology advantages. Don’t get me wrong: there is nothing wrong with incremental. For sure, if you want to learn about the latest and greatest in machine vision technology, this is the show to attend. But what really impressed me were the people.

photo credit: Marvel Comics

As I talked to attendees and exhibitors, I was struck by how passionate they all are about this industry. It seems to me that this passion, commitment, and drive is the real star of the show. Yes, technology is important, but the best technology is just a tool; the real value comes from the engineering expertise and the innovation and imagination to apply it. Spend five minutes with industrial vision pioneers like Rusty Ponce de Leon from Phase 1 Technology, Rex Lee of Pyramid Imaging, Sharla Burns at Image Labs International, or Salvador Giro at Infaimon, to name just a few, and you’ll see how passionate they are when focused on solving a challenging vision application.

So as I return to my regular job of telling you all about Teledyne DALSA’s new products, or our recent advancements in CMOS image sensor performance, I want to take a moment to acknowledge the influential, imaginative and passionate people who drive our industry; and yes, I include in this group the very talented people who are our competition. In the end we are all working toward the same goal and share the same passion, which is to see this industry reach its full potential, not just in traditional machine vision applications but in the physical and health sciences, arts and entertainment, as well as oceanographic, geographic, and interplanetary exploration. For me, it’s the combination of imagination and engineering that makes digital imaging an exciting place to be – and for certain, full of possibility.

Here’s to all of you….my vision heroes.


Posted by Kirk Petersen in Cameras, CMOS, Image Sensors, Machine Vision

A “What IF” Approach to Machine Vision via Semi-Custom Solutions.

May 22, 2014

As engineers, we enjoy a challenge, especially when it’s something new! In general, we don’t like to turn customers away, and it’s rare for us to outright decline a project. Our business model has enough flexibility to allow both small and large projects. We do look for long-term partnerships with customers, and expect some level of follow-on business. Here are a few examples of custom solutions and a brief description of the business cases around them.

1. Image Optimization Algorithm

A common customization request we see is to alter an existing algorithm for our Genie camera. One of our customers developed a security application that acquired images outdoors. They explained that the images contained both very bright and very dark regions, but it was the middle range of pixels that contained the most important information for their application. The scenes would be subject to a lot of movement and constant changes in brightness. They were concerned that those mid-range pixels would not have enough brightness and contrast because they would be affected by the two extremes.

In effect, our customer was looking for heightened functionality from what was a standard feature, so we modified our auto-brightness function. And then? Instead of charging the customer who made the request, we realized all of our customers would benefit from the enhancement and rolled it into the standard product. It may have cost us a little time, but the benefit certainly outweighed the effort in this case.
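For illustration only (this is not Teledyne DALSA’s actual algorithm), a mid-range-weighted auto-brightness adjustment might look something like this, assuming NumPy and 8-bit pixels:

```python
# Sketch of an auto-brightness correction that ignores extreme shadows
# and highlights; thresholds and step size are illustrative assumptions.
import numpy as np

def exposure_adjustment(image, lo=64, hi=192, target=128, step=0.1):
    """Return a multiplicative exposure correction driven only by
    mid-range pixels, so bright and dark extremes cannot skew it."""
    mid = image[(image >= lo) & (image <= hi)]
    if mid.size == 0:          # scene is all extremes; leave exposure alone
        return 1.0
    # Nudge exposure proportionally toward the mid-range target.
    return 1.0 + step * (target - mid.mean()) / target

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(f"apply exposure gain of {exposure_adjustment(frame):.3f}")
```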

2. RLE Algorithm for Data Reduction

Run-length encoding (RLE) is useful in situations where multiple pixels in a specific region of the image have the same value or are above a given threshold. Instead of passing the value of each pixel to the host, RLE informs the software that from position ‘x’, the next 50 pixels, for example, have the same value. Instead of 50 pixel values, RLE encodes the information in a few bytes, reducing the amount of data transferred.
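A minimal sketch of the thresholded variant, in plain Python (in practice the encoding runs in camera firmware, not on the host):

```python
# Threshold-based run-length encoding of one image row: emit
# (start, length) pairs instead of per-pixel values.
def rle_encode(row, threshold):
    """Return (start, length) runs of pixels at or above threshold."""
    runs, start = [], None
    for x, value in enumerate(row):
        if value >= threshold and start is None:
            start = x                        # a run begins
        elif value < threshold and start is not None:
            runs.append((start, x - start))  # a run ends
            start = None
    if start is not None:                    # run reaches the end of the row
        runs.append((start, len(row) - start))
    return runs

row = [0, 0, 200, 210, 205, 0, 0, 0, 220, 220]
print(rle_encode(row, threshold=128))  # -> [(2, 3), (8, 2)]
```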

The technique has been used in a material inspection application, where a vision system guided a cutting head around imperfections in the material. In this application, RLE was used to isolate specific regions in the image, reduce the amount of data required to represent those regions, and pass only that information along to the host for further processing and mapping of the cut zones. We were able to use that same approach for a bio-medical application. The customer placed markers on a test subject in order to analyze the movement of the markers through the image data. The problem was, they wanted to use as many as ten GigE cameras on the same network. The challenge was to fit all that image data on a network with 100 MB/s of bandwidth. Data reduction was the only way to do it. By modifying the camera’s firmware to use RLE, only data representing the positions and shapes of the markers was sent to the host computer. It was very easy to implement because we had done it before – though for a very different market segment.

3. Image Compression Algorithm

On-camera image compression is another example of semi-custom functionality implemented for a specific customer. In this case, the client wanted to use five cameras to inspect trains without having to slow them down. To accomplish this we needed to run all five cameras at 100 frames per second. The problem was, all that data represented too much bandwidth for a single GigE network. In order to get all of the image data through the same network connection, we added JPEG compression to the camera’s firmware – a feature now common to our latest Genie TS cameras.
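The back-of-envelope arithmetic, using an assumed frame size since the original figures aren’t given:

```python
# Rough bandwidth check; the 1 Mpixel, 8-bit frame size is an assumption.
cameras, fps, bytes_per_frame = 5, 100, 1_000_000
raw = cameras * fps * bytes_per_frame    # 500 MB/s of raw video
gige = 125_000_000                       # ~1 Gbit/s link, in bytes/s
print(raw / gige)                        # 4.0 -> four times over budget
print(raw / 10 / gige)                   # ~10:1 JPEG -> 0.4, now it fits
```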

4. Hardware Form Factor

Hardware customization is also possible. We’ve done mechanical and form factor changes, as well as sensor modifications like specialized coatings and filters. Like firmware customizations, these hardware solutions come about after discussions with customers. In one case, the customer had a system for calibrating color quality for print inspection. It started out as a custom processing project, but as it progressed, we learned that a change in form factor would help them out significantly. In the end, we gave them a flat camera by unfolding the PCBs and replacing a flexible interconnect with a rigid section of PCB. They never mentioned it before because they assumed we wouldn’t change the form factor. In this case, they were glad to be wrong. :-)

5. Hardware Interface

Some customers run into obsolescence issues with third-party vendors. When a supplier discontinues certain features in its next-generation products, like frame grabbers, they come to us. When one such customer came to us, we had a frame grabber capable of doing the job, except that the client’s existing product used cabling and connectors for another brand of frame grabber. Instead of turning them away, we modified the connectors and pin-outs on one of our standard frame grabbers, allowing seamless integration into the client’s existing product. Needless to say, the customer was very happy.

Your Wish List

The common thread that binds all semi-custom work is the communication between customer and vendor. We can’t stress enough the importance of open dialog. We are constantly developing our standard products, upgrading and optimizing our firmware – we often plan a year in advance, prioritizing our task list. We’ve had clients ask for something that’s already in the plan but perhaps a year off. If we know a client needs a change that’s already on our to-do list, and there is justification to do it sooner than planned, we will.

My advice to any customer seeking something beyond our published specifications is: ASK! Put your wish list together and compile your dream features; you might be surprised by what is possible. The old adage is especially true – if you don’t ask, you’ll never know – and in the case of a semi-custom solution? Neither will we.


Posted by Bob in Machine Vision

Imagine That.

May 7, 2014
Kirk Petersen

I’m sure it’s no surprise to you that applications for digital imaging technology are growing. New applications for vision are opening up in areas that include the physical sciences and biophotonics, medicine, arts and entertainment, space, and defense. And applications continue to grow beyond the visible light spectrum to enable enhanced automotive vision systems, integrated traffic management networks, homeland security, and search and rescue. Advances in image sensor technology continue to drive incremental improvements in performance while increased miniaturization and data processing capability continue to push deployment costs down and open up even more opportunities.

Now, imagine a website dedicated to discovering and sharing these innovation stories – a website that explores imaging technology’s ability to empower human achievement.

That’s why I’m excited to tell you about our new Possibility Hub (http://possibility.teledynedalsa.com). The Possibility Hub is a resource for stories about the technology, science, and people shaping digital imaging and will explore incremental advancements and emerging technologies. It will delve into existing as well as far-reaching new applications and innovations. Perhaps most importantly, the Possibility Hub will profile the influential, imaginative and passionate people who drive our industry.

The Possibility Hub is purposely separate and distinct from our own web site and will be free of product information or traditional corporate news and profiles. To be sure, the Hub has some very real brand and business development objectives, but it’s not all about us. There are incredibly wild developments being imagined in research centers all over the world – quantum dot sensors, for example, or bio-engineered, light-sensitive bacteria. And a team of scientists led by John Rogers at the University of Illinois is looking at creating curved image sensors, which consist of silicon detectors and electronics that conform to a curved surface.

I’m really excited about this new content resource and believe that the value of the Possibility Hub will resonate with our customers and stakeholders, and equally with the broader industry. In many ways I think our industry is still in its infancy, and we want to celebrate the visionaries and shine a spotlight on interesting imaging application stories regardless of where they come from.

Imagine that!

Posted by Kirk Petersen in Machine Vision

The Eyes Have It

Apr 22, 2014
Neil Humphrey

A trip to the optometrist for eye care involves plenty of digital imaging

I recently visited my optometrist for a regular checkup, briefly leaving behind topics of machine vision to focus on my own human vision…or so I thought. Turns out there was almost as much digital imaging waiting for me at my doctor’s office as at my own.

Besides the usual acuity charts and prescription checks (and my arch-nemesis, the intra-ocular pressure “puff” machine), Dr. Howard Dolman of Dolman Eyecare was firm in his desire to dilate my pupils to carefully examine and take images of the inside of my eyes. I’ll admit I hate that part, since after the dilation drops everything seems so bright I need to wear sunglasses indoors for the next four hours. But as the good doctor explained the “why” and the “how” of these images, I became more and more interested.

Worldwide, glaucoma is the second leading cause of blindness after cataracts (says Wikipedia), affecting 1 in 200 people under age 50, 1 in 10 over age 80, and a much higher proportion of diabetics of all ages. It can be caused by high pressure inside the eye and is characterized by damage to the retina and optic nerve, generally starting from the periphery (and resulting in “tunnel vision”). It can come on quickly and painfully, which patients notice immediately, but it can also progress much more slowly; those affected may not notice problems until the disorder has already caused significant damage…and while there are treatments to slow or even halt the advance of the condition, the damage is incurable and irreversible. Early detection is therefore critical.

With this and other threats in mind, optometrists nowadays image the inside of the eye in several ways. They take color and monochrome images of the retina and its vascularization (veins), looking for signs of current problems or hints of future issues. And as Dr. Dolman explained to me, they now have the technology to take not just 2D photos, but high-resolution 3D imagery through a technique called optical coherence tomography (OCT). OCT uses multiple (safe) wavelengths of near-infrared laser light to create interference patterns from structures up to several millimeters deep in the back of the retina; those patterns are detected by appropriately tuned CCD and CMOS image sensors, then processed into 3D data for unprecedented “insight.”
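For the curious, the heart of spectral-domain OCT fits in a few lines. This toy sketch, assuming NumPy (real reconstructions add spectral calibration, windowing, and dispersion compensation), shows how a reflector’s depth falls out of a Fourier transform of the interference spectrum:

```python
# Toy spectral-domain OCT: a reflector at depth z produces cos(2kz)
# fringes across the spectrum; an FFT over wavenumber recovers z.
import numpy as np

k = np.linspace(6.0e6, 7.0e6, 2048)   # angular wavenumbers (rad/m), near-IR
z_true = 0.3e-3                       # one reflector, 0.3 mm deep

spectrum = np.cos(2 * k * z_true)     # reference/sample arm interference

a_scan = np.abs(np.fft.rfft(spectrum))           # depth profile ("A-scan")
freqs = np.fft.rfftfreq(k.size, d=k[1] - k[0])   # fringe frequency axis
depth = np.pi * freqs[np.argmax(a_scan)]         # z = pi * fringe frequency
print(f"recovered depth: {depth * 1e3:.2f} mm")  # ~0.30 mm
```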

Neil’s eye

Dr. Dolman showed images of my own eye (above) on his screens. The image data can also be modeled in 3D animations, such as this striking example from Zentrum für Medizinische Physik und Biomedizinische Technik.

The advantages of this approach are compelling: high-resolution, real-time, non-invasive imaging that can see under the surface to reveal features or problems long before they are detectable any other way. OCT can flag issues with glaucoma, macular degeneration, and a range of other serious conditions literally years before they become noticeable by any other means, giving doctors and patients precious time to take action. And even if there are no issues to address, OCT images provide an important baseline for future reference.

Needless to say, I was sold. A few hours of mild sunglass-related ridicule from my kids (“Future’s so bright, eh dad?”) was worth it. It was also good to be reminded just how many other applications beyond machine vision depend on digital imaging… Come to think of it, I have an appointment with the dentist coming up, and I believe I’m due for x-rays. Stay tuned.

Posted by Neil Humphrey in Machine Vision

Technology @ Work. Liquids Packager Aims for Zero Defects.

Feb 25, 2014

Automation World recently featured a case study about how a company called Sealed Air implemented a machine vision system to inspect the clear plastic liners used to store liquids.

Here’s a summary:

Sealed Air, an Australian maker of machinery and materials, has developed the Entapack Liquid Packaging System – super-tough liners for transportation and storage of liquid and dry solid materials, including bulk liners, bag-in-a-box packaging, and aseptic packing for products requiring long shelf life at ambient temperatures.

The manufacturing process for these liners provides a new level of safety and protection for the food, beverage, and medical industries. But to ensure this, no visible contamination can be trapped between the layers of the liners. A speck of dust or a hair as small as 50 microns can contaminate the packaging process and cause a system breakdown. Therefore, 100% inspection of every liner was necessary, with a goal of zero defects.

The Solution?

In addition to manufacturing the liquid bladders in a positive-pressure clean room, Sealed Air implemented a machine vision system with the help of Adept Turnkey Pty and CPE Systems. The existing visual inspection by operators was not adequate for catching 100 percent of defects. Automated optical inspection by machine vision was the only option. But the solution still wasn’t easy.

Piranha line scan cameras were chosen because of their combination of high resolution, speed, and dynamic range. Xcelera frame grabbers were used to interface to the Piranha cameras and the inspection application was built in Teledyne DALSA’s Sherlock software environment.

The system has been in use for over a year and is reliably detecting contaminants and meeting the high standard set by the customer.

Read the full story: “’Zero Defects’ Goal Challenges Liquid Packager” in Automation World.

If you’ve got a story to share – let us know!

Geralyn

Posted by Geralyn in Cameras, Machine Vision, smart cameras, Software

A Schongau State of Mind. Innovation, Advancement and Industry Standards.

Feb 18, 2014
Mike

Editor’s note: This article was published later than expected, with thanks to Mike for his patience. For further reading on the standards meetings in Schongau, see the December print edition of inspect magazine.

When do engineers get to:

  • Meet old friends
  • Make new friends
  • Taste delicious foods
  • Enjoy local beverages – in beautiful places – while creating solutions…
  • … and work with competitors?

It happens more often than you might think – roughly every six months to a year, during the machine vision standards committee meetings. And recently, it happened again.

Certainly engineers participating in the meetings come back energized and eager to implement ideas and concepts agreed to during the meetings. So, do companies that contribute the time and expense for engineers to participate get value for their dollar? Read on… I’ll weigh in at the end.

I classify myself as a “hardware” guy, and I had doubts about the two days of GenICam meetings I signed on for. As Chair of the Camera Link HS (CLHS) committee, I hoped to bring back some of the techniques used to coordinate the efforts of such a large team.

There was a big “aha moment” for me during the 3D presentation, one that changed the requirements for CLHS revision 2. 3D cameras require that multiple types of data from one “frame” be stored together in one buffer, or at least be associated with each other. Each data field or “zone” (as GigE likes to call them) can have a different pixel type and bit depth. As a result of attending the GenICam meetings, I am happy to report that revision 2 of CLHS will support multiple regions of interest (ROIs) from a single frame, each with a different pixel type and bit depth, as 3D cameras require. Methods will be defined to let the camera change the number and size of the ROIs on every frame and inform the frame grabber and application software about the data that follows. Additionally, a new virtual channel will be added to communicate ROI definitions from the frame grabber to the camera, enabling either the frame grabber hardware or an application program to command changes on a frame-by-frame basis. Achieving all this functionality exceeded my expectations going into Schongau. I would like to thank the CLHS team for bringing their ideas to the table and for working together to achieve such a fantastic result.
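The CLHS revision 2 wire format isn’t reproduced here, but a hypothetical per-frame descriptor makes the idea concrete – several ROIs, each carrying its own geometry and pixel format:

```python
# Hypothetical per-frame ROI descriptor, for illustration only; this is
# not the actual CLHS revision 2 packet layout.
from dataclasses import dataclass
from enum import Enum

class PixelType(Enum):
    MONO8 = 8         # 8-bit intensity
    MONO12 = 12       # 12-bit intensity
    COORD3D16 = 16    # 16-bit range/depth data

@dataclass
class ROI:
    x: int                   # top-left corner on the sensor
    y: int
    width: int
    height: int
    pixel_type: PixelType    # may differ per ROI, as 3D cameras need

@dataclass
class FrameDescriptor:
    frame_id: int
    rois: list[ROI]          # count and geometry may change every frame

# One frame carrying an intensity zone and a depth zone:
frame = FrameDescriptor(frame_id=1042, rois=[
    ROI(0, 0, 2048, 64, PixelType.MONO8),
    ROI(0, 64, 2048, 64, PixelType.COORD3D16),
])
```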

Standards Committee Members.
Copyright: inspect – Wiley-VCH Verlag

Thank you to Werner Feith of Sensor to Image for organizing such a large gathering and for making sure all 70 participants arrived safely, were well fed, and had places to sleep in the beautiful town of Schongau. On behalf of the CLHS committee, I would like to thank Werner and the employees at Sensor to Image for inviting the CLHS committee to use their conference room. The beautiful offices and comfortable conference room helped CLHS achieve more than the goals set for the meeting.

And to answer my own question – “Is there return on investment?” Innovation starts with understanding problems that require a solution. Standards committees are where those problems get shared, which enables innovation and allows better products to be developed. Return on investment? I’d say yes – absolutely.

Posted by Mike in Machine Vision

Opportunity takes its own tenth birthday pic.

Feb 11, 2014
Neil Humphrey

Last month Mars Rover Opportunity celebrated its tenth anniversary on the red (brown?) planet by snapping a selfie and sending it home. How does it look? It has suffered from the scarcity of local carwashes, but despite the dust buildup, the rover has aged remarkably well.

Think of what you were driving ten years ago… does it run as well as Opportunity still does? Although its twin Spirit got stuck and stopped answering calls in 2010, Opportunity remains active as a low-mileage (hundreds of millions of km flown, but under 40 km on the ground) prestige vehicle that has been very carefully driven.


Opportunity self portraits in 2004 and 2014. What it wouldn’t give for a squeegee. Image credit: NASA/JPL

Opportunity’s continued opp-eration is a testament to the incredibly robust system designed by NASA/JPL. It’s one thing to have the latest, gaudiest performance specs; it’s quite another to keep delivering on your performance for more than 40 times the length of the original mission spec with no chance of maintenance.

Designing a long lifespan for a product requires engineers to optimize for different priorities. According to Wikipedia, Opportunity’s onboard computer uses a 20 MHz RAD6000 CPU with 128 MB of DRAM, 3 MB of EEPROM, and 256 MB of flash memory – not raw performance numbers that would impress an aficionado even a dozen years ago, but I think we can all recognize and appreciate the primacy of “design for reliability” in this case. Computers are notoriously quick to evolve into obsolescence (every 18 months or so, I’m told), and while machine vision systems may iterate at a slower rate, how many of us would expect to install systems today and see them running smoothly ten years on?

Opportunity’s components (thousands and thousands of them) and systems were all designed, simulated, tested, characterized, tested, integrated, tested, and um, re-tested with obsessive-compulsive attention to detail by people who took pride in their work and thought about the long term. When so many of the technology products we buy (and sell) today are intended to be disposable and intentionally replaceable, it is refreshing to consider things built to last. (Full disclosure: Teledyne DALSA has skin in this game, having manufactured the image sensors on the rovers’ Hazcams and Navcams). Granted, Opportunity is ultimately also disposable (sadly nobody is going to go retrieve it), but it is definitely not replaceable. Opportunity and its designers deserve not just a slow clap but a full-throated standing ovation*. Long may it continue…it and its younger sibling Curiosity.

You can follow more adventures of all the Mars rovers at http://marsrover.nasa.gov/home/index.html

*(and so, it should be said, does Spirit, which didn’t burn out as much as it faded away—stuck in terrain that could only be guessed at during its design phase, Spirit eventually ran out of power because of the buildup of dust on its solar panels and the fact it couldn’t reach a location with more solar exposure).

Posted by Neil Humphrey in Cameras, Image Sensors, Machine Vision

Stopping Time with TDI

Jan 21, 2014
Glen

What if you could stop time? In addition to being able to eat as much as you want without consequences, there is also a compelling case for it in machine vision.

Let’s say you are running a web process of some kind. Unlike discrete components like bottles or cans, a web is a continuous flow of material like paper, steel, aluminum, or fabric – just to name a few. And let’s say you have a vision system to inspect this web (probably with a line scan camera to handle the continuous flow of material), but you simply can’t get enough light into the camera for a good quality image. Lack of sufficient light is a common problem in web inspection (stay with me – I promise to return to why you might want to stop time).

Continuous sheet of metallic material
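TDI – time delay and integration, the technique in this post’s title – attacks exactly this light starvation. A toy model, assuming NumPy and idealized shot-noise statistics, shows the payoff of summing exposures in step with the moving web:

```python
# Toy TDI model: N stages each expose the same strip of web as it moves
# past, and their charge is summed, so signal grows N times while shot
# noise grows only sqrt(N) times.
import numpy as np

rng = np.random.default_rng(0)
line = np.full(512, 4.0)    # faint scene: ~4 photoelectrons per pixel per line
stages = 64                 # number of TDI stages (an assumed value)

single = rng.poisson(line)  # ordinary line scan: one noisy look at the line
tdi = sum(rng.poisson(line) for _ in range(stages))   # 64 summed looks

for name, img, signal in (("single", single, 4.0), ("TDI", tdi, 4.0 * stages)):
    print(f"{name}: SNR ~ {signal / img.std():.1f}")
# SNR improves by roughly sqrt(64) = 8x, with no extra light or slower web
```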

Continue reading

Posted by Glen in Machine Vision

Camera Link HS Raises the Roof on ROI

Oct 8, 2013
Mike
Simplified Inspection

Consider a typical inspection system (Fig. 1) consisting of an area camera inspecting an object with four holes, one near each corner. The inspection system exists to confirm the size and relative location of each hole.

The traditional CCD camera reads out the entire field of view, with a resulting transmission bandwidth of 720 MPix/sec… Continue reading
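The full post works through the numbers; as a rough stand-in, here is the arithmetic with assumed values (the frame geometry and rate below are guesses chosen to reproduce the 720 MPix/s figure):

```python
# Rough ROI bandwidth arithmetic; the 2048 x 2048 frame at 172 fps is an
# assumed geometry that gives roughly the 720 MPix/s quoted above.
full_frame = 2048 * 2048
fps = 172
print(full_frame * fps / 1e6)   # ~721 MPix/s reading the whole field of view

# If each hole fits in a 256 x 256 ROI, four ROIs carry 16x less data:
rois = 4 * 256 * 256
print(rois * fps / 1e6)         # ~45 MPix/s
```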

Posted by Mike in Cameras, Interface Standards, Machine Vision