Robot vision systems have gone through three generations of development. First-generation systems generally process images according to a fixed, prescribed flow and output the results; they are typically built from ordinary digital circuits and are used mainly for defect detection in flat materials. Second-generation systems generally consist of a computer, an image input device, and result-output hardware; visual information flows through the machine serially, and the system has some learning ability that lets it adapt to new situations. Third-generation systems, currently being developed and deployed worldwide, use high-speed image-processing chips and parallel algorithms; they offer a high degree of intelligence and general adaptability and can approximate the higher visual functions of a human.
The main problems in robot vision
1. How to identify targets accurately and at high speed (in real time).
2. How to construct and organize reliable identification algorithms effectively and implement them successfully. This calls for new breakthroughs in high-speed array processors and in algorithms (such as neural-network methods and wavelet transforms), so that functions can be realized in parallel with very low computational complexity.
3. Real-time performance is an important and difficult problem. Slow image acquisition and lengthy image processing introduce a significant time lag into the system. In addition, incorporating visual information substantially increases the system's computational load, for example computing the image Jacobian matrix and estimating depth information. Image-processing speed is one of the major bottlenecks limiting the real-time performance of vision systems.
4. Stability is the first consideration of any control system. Vision control systems, whether they use position-based, image-based, or hybrid visual servoing, face the following questions: how to guarantee stability when the initial point is far from the target point, i.e., how to enlarge the stability region and ensure global convergence; and, to avoid servo failure, how to ensure that the feature points always remain in the field of view.
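To make the computational load concrete, the sketch below builds the classic 2x6 interaction matrix (image Jacobian) for a single point feature and applies the standard image-based control law v = -lambda * L+ * (s - s*). This is a minimal illustration with hypothetical feature positions and depth estimates, not an implementation from the original text:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) for one point feature.

    (x, y) are normalized image coordinates and Z is the estimated depth.
    Rows map the camera velocity screw (vx, vy, vz, wx, wy, wz) to the
    feature's image-plane velocity (x_dot, y_dot).
    """
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,     -(1 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1 + y**2,  -x * y,      -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: v = -gain * pinv(L) @ (s - s*).

    One interaction matrix is stacked per feature point; the depths are
    assumed (estimated) values, which is itself one of the difficulties
    noted in the text.
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical example: one feature offset from its goal along +x.
v = ibvs_velocity(features=[(0.1, 0.0)], desired=[(0.0, 0.0)], depths=[1.0])
print(v)  # 6-vector of commanded camera velocities
```

Here a positive vx drives the feature's x-coordinate back toward zero (via the -1/Z entry), which is the intended closed-loop behaviour.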
Topics in robot vision requiring further study
1. The choice of image features.
The performance of visual servoing depends closely on the image features used. Feature selection must take into account not only recognition criteria but also control criteria. From a control point of view, using redundant features can suppress the effects of noise and improve servoing performance, but it also makes image processing more difficult. Therefore, how to choose features with optimal performance, how to process them, and how to evaluate them are all issues that need further study. A task may also need to switch from one feature set to another, and combining global features with local features is worth considering.
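One simple, commonly used way to evaluate a candidate feature set is the conditioning of its stacked interaction matrix: a nearly singular Jacobian amplifies noise, while a well-conditioned one supports robust servoing. The sketch below (feature coordinates and depths are hypothetical) compares an almost-collinear point set, a known degenerate configuration, against four well-spread points:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # 2x6 point-feature image Jacobian (same form as in standard IBVS).
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,     -(1 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1 + y**2,  -x * y,      -x],
    ])

def conditioning(points, Z=1.0):
    """Condition number of the stacked Jacobian for a feature set.
    Lower values indicate a better-posed (more noise-robust) set."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    return np.linalg.cond(L)

# Hypothetical candidate sets.
collinear = [(0.0, 0.0), (0.1, 0.001), (0.2, -0.001)]   # nearly collinear
spread = [(-0.2, -0.2), (0.2, -0.2), (0.2, 0.2), (-0.2, 0.2)]  # square

print(conditioning(collinear))  # very large: degenerate configuration
print(conditioning(spread))     # much smaller: well-posed configuration
```

A feature-selection scheme could score candidate sets this way and switch sets when the current one degrades, as the text suggests.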
2. Drawing on research results from computer vision and image processing, establish a dedicated software library for robot vision systems.
3. Strengthening the dynamic performance of the system. Current research focuses on determining the desired robot motion from image information, but pays little attention to the dynamic performance of the visual servo system as a whole.
4. Exploiting results from intelligent technology.
5. Exploiting results from active vision.
Active vision is a hot topic in computer vision and robot vision research. It emphasizes the interaction between the visual system and its environment. Unlike traditional general-purpose vision, active vision stresses two points: first, the visual system should have the ability to perceive actively; second, the visual system should operate under a specific task (task-directed) or purpose. While acquiring visual information, an active vision system should actively adjust the camera's parameters, such as orientation, focus, and aperture, and should be able to aim the camera quickly at an object of interest.
More generally, active vision emphasizes a gaze mechanism: the selective perception of signals over different spatial ranges and time periods, at different resolutions. Such active sensing can be achieved at the hardware level by adjusting the camera's physical parameters, or, with a passive camera, at the algorithm and representation level by selectively processing the acquired data. At the same time, active vision is meaningless for a visual process with no underlying purpose; the vision system must be tied to its purpose (such as navigation, recognition, or manipulation) to form a perception/action loop.
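The perception/action loop described above can be sketched as a simple task-directed gaze controller that pans and tilts the camera to centre a detected target. All parameter values (field of view, gain, image size) are illustrative assumptions, and the "perception" step is simulated by shifting the target by the commanded correction:

```python
def gaze_step(target_px, image_size, fov_deg=(60.0, 45.0), gain=0.5):
    """One step of a task-directed gaze loop: compute pan/tilt increments
    (in degrees) that move a detected target toward the image centre."""
    w, h = image_size
    # Pixel offset from image centre, normalized to roughly [-0.5, 0.5].
    dx = (target_px[0] - w / 2) / w
    dy = (target_px[1] - h / 2) / h
    # Proportional correction of the camera's orientation parameters.
    return gain * dx * fov_deg[0], gain * dy * fov_deg[1]

def fixate(target_px, image_size, steps=20):
    """Iterate the perception/action loop. Perception is simulated:
    panning right shifts the target left in the image by the same angle."""
    w, h = image_size
    x, y = target_px
    for _ in range(steps):
        pan, tilt = gaze_step((x, y), image_size)
        x -= pan / 60.0 * w    # simulated image shift from panning
        y -= tilt / 45.0 * h   # simulated image shift from tilting
    return x, y

x, y = fixate((500, 100), (640, 480))
print(round(x), round(y))  # → 320 240 (target driven to the image centre)
```

Each iteration halves the remaining offset, so the target converges geometrically to the image centre, mimicking the rapid fixation behaviour the text describes.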
6. Multi-sensor fusion. Vision sensors have a limited range of applicability. Combining them effectively with other sensors and exploiting their complementary characteristics can eliminate uncertainty and yield more reliable and accurate results.
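A minimal sketch of how fusing complementary sensors reduces uncertainty is inverse-variance (maximum-likelihood) weighting of independent estimates of the same quantity; the sensor readings and variances below are hypothetical:

```python
def fuse(estimates):
    """Inverse-variance fusion of independent sensor estimates.

    estimates: list of (value, variance) pairs measuring the same quantity.
    Returns the fused value and its variance, which is always smaller than
    any single sensor's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_var = 1.0 / sum(weights)
    fused_val = fused_var * sum(w * v for (v, _), w in zip(estimates, weights))
    return fused_val, fused_var

# Hypothetical depth-to-target readings: vision (noisier) and a rangefinder.
vision = (1.10, 0.04)      # metres, variance
rangefinder = (1.02, 0.01)
val, var = fuse([vision, rangefinder])
print(round(val, 3), round(var, 4))  # → 1.036 0.008
```

The fused variance (0.008) is below the rangefinder's alone (0.01), illustrating the text's point that complementary sensors give more reliable results than either sensor by itself.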