Machine vision has long been used in industrial automation systems to improve production quality and throughput by replacing traditional manual inspection. Across applications ranging from pick-and-place and object tracking to measurement and defect detection, visual data can improve the performance of an entire system, whether it supplies simple pass/fail information or drives closed-loop control.
Vision is not confined to industrial automation; cameras now appear throughout daily life, in computers, mobile devices and especially automobiles. Cameras were introduced into cars only a few years ago, yet vehicles today carry enough of them to give drivers a complete 360-degree view of their surroundings.
When it comes to the biggest technological advances in machine vision, however, the answer has arguably always been processing power. With processor performance doubling roughly every two years, and with sustained attention on parallel processing technologies such as multicore CPUs, GPUs and FPGAs, vision system designers can now apply highly complex algorithms to visual data and build more intelligent systems.
Advances in processing technology bring new opportunities beyond smarter or more powerful algorithms. Consider, as an example, how vision is added to a machine. Such systems have traditionally been designed as a network of intelligent subsystems that together form a distributed control system, an approach that allows modular design (see Figure 1).
Figure 1: A network of intelligent subsystems designed to form a cooperative distributed control system. The architecture allows modular design, but this hardware-centric approach can create performance bottlenecks.
As performance requirements rise, however, this hardware-centric approach becomes difficult to sustain, because the subsystems are typically connected through a mix of time-critical and non-time-critical protocols. Linking these disparate systems over various communication protocols creates bottlenecks in latency, determinism and throughput.
For example, designers who build applications on this distributed architecture while needing tight integration between the vision and motion systems, as visual servoing requires, may run into serious performance problems caused by a lack of processing power. In addition, because each subsystem has its own controller, overall processing efficiency actually drops.
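To see why latency matters so much here, consider a minimal sketch of an image-based visual servo loop. This is an illustration only: the functions `locate_feature` and `command_velocity` are hypothetical placeholders, and the gain, target and loop rate are assumed values. The point is that the vision result feeds the motion command on every cycle, so any communication delay between subsystems lands directly inside the control loop.

```python
# Minimal sketch of an image-based visual servo loop (all names hypothetical).
# Any latency or jitter between the vision and motion subsystems sits
# directly inside this loop and degrades tracking accuracy.

import time

GAIN = 0.5           # proportional gain (assumed; tune per axis)
TARGET = (320, 240)  # desired feature location in pixels (image center)
PERIOD_S = 0.01      # 100 Hz control loop

def locate_feature():
    """Placeholder for the vision step: return the tracked feature's pixel
    coordinates. A real system would grab a frame and run a detector here."""
    return (300.0, 250.0)

def command_velocity(vx, vy):
    """Placeholder for the motion step: send a velocity setpoint to the drive.
    On a centralized controller this is a local call, not a network message."""
    print(f"velocity command: vx={vx:+.2f}, vy={vy:+.2f}")

for _ in range(3):  # a few iterations for illustration
    start = time.monotonic()
    x, y = locate_feature()
    # Proportional control on the pixel error; a real servo would map image
    # error to Cartesian velocity through the camera model.
    command_velocity(GAIN * (TARGET[0] - x), GAIN * (TARGET[1] - y))
    time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))
```

When the vision and motion steps live in separate boxes joined by a network, each iteration of this loop pays the communication cost twice.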
Finally, this hardware-centric distributed approach forces designers to use different design tools: specialized vision software for each subsystem of the vision system, and dedicated motion software for the motion system. This is especially challenging for smaller design teams, where a handful of engineers, or even a single engineer, is responsible for many parts of the design.
Fortunately, there is a better way to design these advanced machines and equipment, one that simplifies complexity, improves integration, reduces risk and shortens time-to-market. What happens if we shift our thinking from hardware-centric to software-centric design (see Figure 2)? With programming tools that handle these different tasks in a single design environment, designers can mirror the modularity of the mechanical system in their software.
Figure 2: A software-centric design approach lets designers simplify the control system architecture by consolidating different automation tasks (including vision inspection, motion control, I/O and the HMI) in a single powerful embedded system.
This lets designers simplify the control system architecture by consolidating different automation tasks (including vision inspection, motion control, I/O and the HMI) in a single powerful embedded system (see Figure 3). It also eliminates the subsystem-communication challenge, because every task now runs on the same software stack on a single controller. High-performance embedded vision systems are ideal candidates for such centralized controllers because these functions are already built into the devices.
Figure 3: A heterogeneous architecture that combines a processor with an FPGA and I/O is an ideal solution not only for designing high-performance vision systems but also for integrating motion control, the HMI and I/O.
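As a rough illustration of this pattern, the following sketch (all names hypothetical) runs a vision task and a motion task in one process, exchanging data through an in-memory queue rather than a network protocol. On a real controller these would be the vision and motion components of a single software stack.

```python
# Sketch of the software-centric pattern: vision and motion run as tasks in
# one process on one controller, sharing data through a queue instead of a
# fieldbus or Ethernet protocol. All names and values are hypothetical.

import queue
import threading

part_poses = queue.Queue(maxsize=1)  # latest vision result; no bus needed

def vision_task():
    # Placeholder: acquire an image, find the part, publish its pose.
    part_poses.put({"x": 12.5, "y": 48.0, "theta": 30.0})

def motion_task():
    # Consume the pose directly; the "communication" is a queue operation,
    # not an Ethernet round trip between separate controllers.
    pose = part_poses.get(timeout=1.0)
    print(f"moving to x={pose['x']}, y={pose['y']}, theta={pose['theta']}")

threads = [threading.Thread(target=vision_task),
           threading.Thread(target=motion_task)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```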
Let's look at some of the benefits of this centralized processing architecture, taking a vision-guided motion application such as flexible feeding, in which the vision system guides the motion system. Here, the position and orientation of incoming parts are random. At the start of the task, the vision system captures an image of the part to determine its position and orientation and passes that information to the motion system.
The motion system then moves the actuator to the part's position, derived from the image coordinates, and picks it up; the same information can be used to correct the part's orientation before placement. In this way, the designer eliminates the fixtures previously needed to orient and locate parts, which not only reduces cost but also lets the application adapt to new part designs with nothing more than a software change.
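A minimal sketch of that guidance handoff follows, assuming a simple 2-D rigid camera-to-robot calibration; all calibration values, coordinates and names below are hypothetical.

```python
# Sketch of the guidance handoff in flexible feeding: map the part's pixel
# coordinates to robot coordinates using an assumed 2-D calibration
# (scale, rotation, offset) obtained once from a calibration grid.

import math

MM_PER_PX = 0.21                      # assumed scale: mm per pixel
CAM_TO_ROBOT_ROT = math.radians(1.8)  # assumed camera mounting rotation
OFFSET_MM = (105.0, -42.0)            # assumed camera origin in robot frame

def pixel_to_robot(px, py):
    """Convert a pixel position to robot-frame millimetres."""
    x, y = px * MM_PER_PX, py * MM_PER_PX
    c, s = math.cos(CAM_TO_ROBOT_ROT), math.sin(CAM_TO_ROBOT_ROT)
    return (c * x - s * y + OFFSET_MM[0],
            s * x + c * y + OFFSET_MM[1])

# Suppose vision reports the part at pixel (412, 277), rotated 30 degrees.
rx, ry = pixel_to_robot(412, 277)
print(f"pick at x={rx:.1f} mm, y={ry:.1f} mm, then rotate gripper -30 deg")
```

Because the transform replaces a mechanical fixture, retargeting the cell to a new part design is a matter of new vision parameters rather than new tooling.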
To be fair, the key advantage of the hardware-centric distributed architecture is its scalability, which stems largely from the Ethernet links between the systems. Those links, however, demand special attention: as mentioned earlier, the challenge of this approach lies in the non-determinism and limited bandwidth of Ethernet. That is acceptable for vision-guided motion tasks that provide guidance only once, at the start of the task, but in other situations variation in latency can become a major challenge.
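One way to make that latency variation concrete is to measure round-trip times over a plain TCP link. The sketch below uses a local echo server, so the absolute numbers mean little; the method, watching the spread and worst case rather than the mean, is what carries over to a real Ethernet link between controllers.

```python
# Sketch: measure round-trip jitter over a plain TCP connection. A one-shot
# guidance message tolerates jitter; a closed servo loop does not.

import socket
import statistics
import threading
import time

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
rtts = []
for _ in range(200):
    t0 = time.perf_counter()
    client.sendall(b"ping")
    client.recv(64)
    rtts.append((time.perf_counter() - t0) * 1e6)  # microseconds
client.close()

# The mean matters less than the spread: the stdev and worst case are the
# jitter a motion loop would have to absorb.
print(f"mean={statistics.mean(rtts):.1f} us, "
      f"stdev={statistics.stdev(rtts):.1f} us, max={max(rtts):.1f} us")
```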