As Gardasoft celebrates 20 years of producing lighting controllers for machine vision, we present the second in a series of articles that examine how our fast-moving and exciting discipline evolved from its modest beginnings and what we can expect from future developments.
Forty years after the first embryonic machine vision systems saw the light of day, the machine vision industry has grown into a mature core technology. Established vision techniques are key to a diverse range of applications. Industrial inspection systems improve the quality and efficiency of manufacturing processes by decreasing the likelihood of a defective component progressing down the line. Video technology in sport can uphold or correct an official’s decision and allow the watching public to see the outcome (even if the VAR system in football’s English Premier League is experiencing some teething troubles!). Vision systems and robotics are being used to replace manual labour in numerous applications such as pick and place, palletisation and even picking and trimming vegetables. Automatic number plate recognition (ANPR) at car park security barriers is now commonplace, and the list goes on. In short, machine vision technology has become an integral part of our daily lives.
Keeping pace with developments
Machine vision is a fast-moving technology. Developments in processing power, CMOS camera sensors, illumination, optics, software capabilities and data handling constantly push at the boundaries of what’s possible. Well-established techniques continually evolve and a constant stream of new technologies is arriving in the industry. Higher-resolution CMOS sensors for both area scan and line scan cameras, faster operation, smaller physical size and ever more powerful image processing systems enable increasingly complex inspections to be carried out. There is an increasing use of imaging systems using wavelengths outside the visible spectrum, such as short wave infrared and long wave infrared (thermal imaging), to reveal information not normally visible. Newer polarisation cameras can also reveal otherwise invisible information, including physical properties such as stress or birefringence. These cameras feature CMOS sensors with on-chip nanowire micropolarisers which allow on-sensor detection of the plane of polarisation in four directions.
Data handling and processing
The enormous volumes of image data generated by higher-resolution cameras make it important to choose the best transmission standard. The choice is usually made on the basis of the speed, cable length and configuration needed. GenICam provides a generic programming interface for all common interface technologies, and a standard feature naming convention for lighting devices has also recently been agreed. Data transmission is just part of the challenge, however, since many of the imaging techniques currently in use are particularly computationally intensive and have only become front-line techniques thanks to faster PCs with FPGA and multicore embedded processor architectures. One such technique is hyperspectral imaging, which combines infrared spectroscopy with machine vision to allow the chemical composition of the organic materials being imaged to be determined. This has opened up major new possibilities for detecting impurities, notably in the food, pharmaceutical and agriculture industries.
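As a much-simplified illustration of how hyperspectral data can be exploited, the sketch below scores each pixel spectrum against a reference spectrum using the spectral angle, a common way of flagging material that does not match an expected signature. The band count and spectra are invented for illustration and do not correspond to any real product.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.
    The angle ignores overall brightness, so it responds to the shape of
    the spectrum (i.e. the material), not the illumination level."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Invented reference spectrum for the 'good' material, plus two test pixels:
# one matching (just brighter), one contaminant with a different signature.
reference = np.array([0.2, 0.5, 0.9, 0.7, 0.3])
good_pixel = 1.8 * reference            # same shape, different brightness
contaminant = np.array([0.9, 0.7, 0.3, 0.2, 0.1])

print(spectral_angle(good_pixel, reference))   # ~0: scaling leaves the angle unchanged
print(spectral_angle(contaminant, reference))  # large angle: flagged as an impurity
```

A real system would apply this per pixel across hundreds of spectral bands, producing a map of where the off-signature material sits on the line.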
Deep learning, a branch of machine learning, is now available as part of commercial software suites running on PCs with GPUs. Using sets of ‘training’ images, deep learning systems learn to recognise features or defects for classification purposes. We’re also seeing neural networks that can be trained and then run directly on a dedicated camera with on-board processing power, opening up even more possibilities. Deep learning is particularly useful for classifying organic objects, where there are lots of natural variations.
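The train-then-classify principle can be sketched in a few lines. The example below is deliberately not deep learning proper: a single-layer logistic classifier on invented 2-D feature vectors stands in for a deep network on images, purely to show how labelled examples drive the learned decision rule.

```python
import numpy as np

# Much-simplified stand-in for deep-learning defect classification.
# A real system would train a deep network (e.g. a CNN) on images; here
# each 'training image' is reduced to an invented 2-D feature vector.
rng = np.random.default_rng(0)

# Synthetic labelled data: class 0 = good part, class 1 = defective part,
# the two classes well separated in feature space.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
for _ in range(500):                         # gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted defect probability
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())                    # training accuracy
```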
Illumination and control
Optimising illumination is critical in machine vision in order to obtain the best possible image of the features to be measured, and lighting control plays a key part. Running LEDs in excess of their maximum rating for short periods to increase light output, pulsing the light to allow imaging of objects at high speed, and maintaining a consistent light output are just some of the ways dedicated lighting controllers can help. Gardasoft’s FP200 series of high-speed lighting controllers extends trigger frequencies up to 10 kHz for high-speed pulsing applications such as line scan imaging. OLED panel lighting combined with full lighting control capabilities opens up exciting new opportunities thanks to the exceptionally stable and uniform light intensity across the width of the panel and the small form factor. Lighting controllers can also provide a wide range of illumination sequencing options involving multiple lights and/or multiple cameras. In a multi-lighting scheme, a single trigger signal fires multiple lights individually at different intensities and durations in a predefined sequence, allowing multiple measurements to be made at a single camera station. This reduces mechanical complexity and saves money because less equipment is required.
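The overdriving trade-off described above can be sketched as a simple duty-cycle calculation: a short pulse may exceed the LED’s continuous current rating provided the average current stays within it. The safe-operating rule and all the figures below are invented for illustration and are not Gardasoft specifications.

```python
# Hypothetical sketch of the bookkeeping a lighting controller performs
# when overdriving an LED. Assumption: keeping the *average* current at or
# below the continuous rating is safe (real controllers apply more
# detailed, manufacturer-specific limits).

def max_duty_cycle(pulse_current_a, continuous_rating_a):
    """Largest duty cycle that keeps average current within the rating."""
    if pulse_current_a <= continuous_rating_a:
        return 1.0                      # no overdrive: continuous use is fine
    return continuous_rating_a / pulse_current_a

def max_pulse_width_s(trigger_hz, pulse_current_a, continuous_rating_a):
    """Longest safe pulse per trigger at a given trigger frequency."""
    period = 1.0 / trigger_hz
    return period * max_duty_cycle(pulse_current_a, continuous_rating_a)

# Example: an LED rated 1 A continuous, overdriven at 5 A, triggered at
# 10 kHz (the top of the trigger range mentioned above): the duty cycle is
# limited to 20%, i.e. pulses of at most 20 microseconds.
print(max_duty_cycle(5.0, 1.0))               # 0.2
print(max_pulse_width_s(10_000, 5.0, 1.0))    # 2e-05
```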
Multi-light imaging using line scan cameras involves acquiring multiple views of an object during a single scan by capturing information from different illumination sources on sequential lines. Individual images are extracted using image-processing software. Sequential multi-shot imaging is also the principle used in computational imaging. Here a programmable lighting control system is used to generate a sequence of images of an object using different illumination directions or wavelengths. Key information is extracted from each captured image in software and combined to form a composite image that contains information that cannot be seen in any individual image. The most popular use of this technique is photometric stereo, where an object is sequentially illuminated with light from four or more directions. Combining these images allows shape and texture components to be separated.
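The core photometric-stereo calculation can be sketched as follows, assuming a Lambertian (matte) surface and known light directions, so that the measured intensity is i = albedo × (L · n). The light directions and test values are invented for illustration.

```python
import numpy as np

# Photometric-stereo sketch for a single pixel: with four known light
# directions, solve the least-squares system L g = i for g = albedo * n,
# then split g into magnitude (albedo) and direction (surface normal).

# Unit light directions (rows): four lights arranged around the camera axis.
L = np.array([[ 1.0,  0.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0],
              [ 0.0, -1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Simulate the four measured intensities for one pixel of a test surface.
true_normal = np.array([0.2, -0.1, 1.0])
true_normal /= np.linalg.norm(true_normal)
albedo = 0.8
i = albedo * L @ true_normal

# Recover shape (normal) and texture (albedo) from the four measurements.
g, *_ = np.linalg.lstsq(L, i, rcond=None)
recovered_albedo = np.linalg.norm(g)
recovered_normal = g / recovered_albedo
print(recovered_normal, recovered_albedo)
```

Run per pixel over the full image set, this yields a normal map (shape) and an albedo map (texture) from the same four captures, which is exactly the separation the article describes.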
Embedded systems
Embedded vision generally refers to the integration of vision systems into machines or devices without the use of a PC for control and image processing. Smart cameras have the camera sensor, image capture, processor for image evaluation, vision software and the I/O interfaces, as well as, in some cases, the lighting and the lens, combined in the camera housing. The camera can be set up for a particular inspection with the results delivered directly to the process control system. Other embedded solutions include compact vision systems, where multiple cameras can be connected to a dedicated image-processing controller, deep embedded vision and system on chip (SoC). Deep embedded systems have extremely low unit production costs, but high development costs, since they are developed for specific tasks and cannot be easily reprogrammed. SoC is an extremely flexible ARM-based embedded computer technology that enables bespoke systems to be built at low investment and system cost, using standard and board-level cameras with standard interfaces. Integrating hardware such as FPGAs, GPUs or DSPs makes local pre-processing and data reduction possible.
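As a simple illustration of the kind of data reduction such on-device pre-processing hardware performs, the sketch below applies 2×2 binning to a stand-in sensor frame, cutting the data volume to be transmitted by a factor of four. The frame size is invented for illustration.

```python
import numpy as np

def bin2x2(frame):
    """Average non-overlapping 2x2 blocks of a single-channel frame.
    An FPGA or SoC might apply this before transmission so that only a
    quarter of the pixel data leaves the device."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)   # stand-in 4x4 sensor frame
binned = bin2x2(frame)
print(binned.shape)   # (2, 2): one quarter of the original pixel count
```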
Read the free white paper from Gardasoft here.
Author: Jools Hudson, Gardasoft Vision Ltd, Trinity Court, Buckingway Business Park, Swavesey, Cambridge, CB24 4UQ, UK
T: +44 (0)1954 234970
E-mail: [email protected]
Web: www.gardasoft.com