

An Introduction to Robot Localization Technology

by: Fugen    2020-10-13
Autonomous mobile robot navigation must answer three questions: "Where am I?", "Where am I going?", and "How do I get there?" Localization answers the first: determining the robot's coordinates, in a world coordinate system, as it moves through its environment. Several robot localization technologies are introduced in detail below.

1. Visual navigation and localization

Visual localization is accomplished mainly with visual sensors. The robot uses monocular cameras, binocular (stereo) and depth cameras, video cameras, DSP-based fast signal processors, and other imaging devices to capture images of its surroundings. The collected image information is compressed and fed to a learning subsystem built from neural networks and statistical methods, which relates the image information to the robot's actual position to complete localization.

Advantages: a wide range of applications, mainly in unmanned aerial vehicles (UAVs), surgical instruments, transportation, agricultural production, and other fields.

Disadvantages: the image-processing workload is large, so an ordinary computer cannot keep up and real-time performance is poor; the method is strongly constrained by lighting conditions and cannot work in a dark environment.

Binocular (stereo) vision is currently applied mainly in four areas: robot navigation, parameter detection for micro-operating systems, 3D measurement, and virtual reality.

2. Infrared navigation and localization

Principle: infrared (IR) emitters transmit modulated infrared light, and optical receivers installed indoors pick it up to locate the robot.
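As a minimal sketch of the ranging idea behind active IR localization, the snippet below converts a round-trip travel time of an infrared pulse into a distance. The pulse timing is an illustrative assumption; real modulated-IR rangefinders more often measure the phase shift of the modulation rather than timing a single pulse.

```python
# Minimal time-of-flight ranging sketch (illustrative assumption: we can
# time the round trip of an IR pulse directly).

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a target given the round-trip travel time of an IR pulse.

    The pulse travels to the target and back, so the one-way distance is
    half of speed * time.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after 20 nanoseconds corresponds to roughly 3 metres.
d = tof_distance(20e-9)
```

The tiny travel times involved hint at why the minimum detectable distance of such sensors is relatively large, as noted below.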
Advantages: distance can be measured even without a reflector and under low-reflectivity conditions; synchronous input is supported, so multiple sensors can measure simultaneously; wide measuring range and short response time.

Disadvantages: the minimum detectable distance is too large; infrared rangefinders are strongly affected by the environment; objects that approximate a blackbody, as well as transparent objects, cannot be detected, so the method suits only short-range transmission; it fails when other obstructions are present; and receivers must be installed in every room and corridor and guide rails laid, so the cost is high.

Vision-assisted production refers to using vision technology as the basis for robot motion execution; the most widely used approach today is 2D positioning based on monocular vision. However, in most production settings where 2D visual positioning could be applied, mechanical positioning can serve instead, at lower cost and complexity, so visual positioning is adopted in only a handful of cases. Most positioning tasks in production require full 3D coordinates; that is, the position of the measured object relative to the robot is uncertain. Such positioning has a higher technical threshold, and although the technology exists, it has not been widely adopted.

The core topics of robot vision system research, spanning its composition and algorithms, are navigation and localization, path planning, obstacle avoidance, and multi-sensor fusion. Of the various positioning technologies, only visual positioning is discussed here.
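To make the multi-sensor fusion idea concrete, here is a minimal sketch of the simplest static fusion rule: combining two independent noisy range readings (say, one ultrasonic and one infrared) by inverse-variance weighting. The sensor variances are illustrative assumptions; a full system would use a Kalman filter, of which this is the one-step special case.

```python
# Minimal multi-sensor fusion sketch: inverse-variance weighting of two
# independent measurements of the same quantity. Variances are assumed.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two noisy measurements into one estimate and its variance.

    Each reading is weighted by the inverse of its variance, so the more
    certain sensor dominates; the fused variance is smaller than either input.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# An ultrasonic reading of 2.10 m (variance 0.04) and an IR reading of
# 2.00 m (variance 0.01) fuse to an estimate closer to the IR sensor.
est, var = fuse(2.10, 0.04, 2.00, 0.01)
```

The fused variance is always lower than either sensor's alone, which is precisely why fusing complementary sensors pays off in robot localization.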
Classified by the "eyes" they use, vision techniques divide into monocular, binocular (stereo), multi-camera, and RGB-D; the latter three can produce depth images. Camera-based localization is also called VO (visual odometry), in monocular or stereo form. As Wikipedia puts it: in robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing a sequence of associated camera images.

Today, thanks to the rapid development of digital image processing and computer vision, more and more researchers use the camera as the perception sensor of fully autonomous mobile robots. This is mainly because traditional ultrasonic or infrared sensors provide a limited amount of information and poor robustness, shortcomings that a vision system can make up for. The real world is three-dimensional, while its projection onto a camera (CCD/CMOS) image is two-dimensional; the ultimate purpose of visual processing is to extract information about the three-dimensional world from the perceived two-dimensional images.

Robot localization is actually a very broad concept, not limited to the common two-dimensional or three-dimensional coordinates. Localization means using data from the robot's sensors to decide which action the robot should perform, and among these the visual sensor takes priority (after all, images are the signal carrier with the largest amount of information of any sensor). Research on robots, or first on "intelligent robots", should therefore be grounded in machine vision technology.
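The essence of visual odometry is chaining per-frame motion estimates into a global pose. The sketch below composes body-frame increments (dx, dy, dtheta) into a 2D pose; the increment values are made-up numbers for illustration, whereas a real VO pipeline would estimate them by matching features between consecutive images.

```python
# Minimal visual-odometry accumulation sketch: compose per-frame 2D motion
# increments into a global pose. Increments here are assumed, not estimated
# from real images.

import math

def compose(pose, delta):
    """Apply a body-frame increment (dx, dy, dtheta) to a pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the increment from the robot's body frame into the world frame.
    xw = x + dx * math.cos(th) - dy * math.sin(th)
    yw = y + dx * math.sin(th) + dy * math.cos(th)
    return (xw, yw, th + dth)

pose = (0.0, 0.0, 0.0)
# Two moves: drive 1 m forward then turn 90 degrees left, twice over.
for delta in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, math.pi / 2)]:
    pose = compose(pose, delta)
# The robot ends up at (1, 1), facing opposite its starting direction.
```

Because each increment carries some error, the accumulated pose drifts over time, which is why practical VO systems are usually combined with loop closure or other corrections.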