Autonomous navigation robots are increasingly used in industrial and service settings. Beyond the robot body itself, the core modules are perception, mapping, localization, navigation, and scheduling.
Perception module
Perception generally refers to sensor technology. The sensors most commonly used on robots are lidar and vision sensors. Lidar comes in single-line and multi-line variants, both relatively expensive. With the development of computer vision and artificial intelligence, vision sensors are increasingly used on robots. Vision sensors divide into ordinary cameras and depth cameras: an ordinary camera captures only the texture (color) information of the environment, while a depth camera also captures the depth of the scene, which is why it is also called a 3D camera. Thanks to the richness of the information it provides and its cost, the depth sensor has become almost a must-have for robots, and even some mobile phones now carry depth sensors for 3D face recognition and AR applications.
Depth sensors are mainly based on binocular structured light or time-of-flight (TOF) technology. Binocular structured light projects a coded light pattern onto the scene, computes disparity from the images captured by the two cameras, and obtains depth by combining the disparity with the camera parameters; because the projected code gives the captured images abundant features, feature matching is easy. TOF instead derives distance from the flight time of emitted light. The two methods suit different uses: structured light is appropriate where precision requirements are high and real-time requirements are low, while TOF benefits from a high frame rate and robustness, making it better suited to real-time 3D reconstruction, localization, and mapping. Through its independently developed algorithms and SDKs, Bluecore Technology can provide customers with different sensor configurations depending on the application, such as volume measurement, item localization, and pose detection.
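As a minimal sketch of the binocular principle above, depth follows from disparity via the standard relation depth = focal_length × baseline / disparity; the focal length and baseline below are hypothetical placeholder values:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Recover per-pixel depth from a stereo disparity map.

    Illustrative sketch of the binocular principle:
    depth = focal_length * baseline / disparity.
    Values used here are hypothetical, not from any specific sensor.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0  # zero disparity means unmatched / infinitely far
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a 4-pixel disparity with a 600 px focal length and 5 cm baseline
print(depth_from_disparity([[4.0]], focal_px=600.0, baseline_m=0.05))  # ~7.5 m
```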
Mapping and localization
In layman's terms, the sensors are the robot's eyes and the navigation module is its brain: it enables the robot to recognize, perceive, and explore the surrounding environment. Navigation comprises three main parts: localization, mapping, and path planning. The localization and mapping module is usually called SLAM (Simultaneous Localization and Mapping), and SLAM is generally divided into laser SLAM and visual SLAM.
Lidars used in robotics are generally single-line units, used mainly for mapping and localization on a two-dimensional plane; multi-line lidars are used more in autonomous driving. A lidar mapping scheme matches and stitches two-dimensional contour lines to reconstruct the outline of the scene. The laser map is generally an occupancy grid, and the ICP (Iterative Closest Point) algorithm matches the current scan against the contours of the existing map to estimate the robot's pose. Since only two-dimensional information is available, the drawbacks of the laser scheme are equally obvious: localization easily fails in long corridors and when the environment changes.
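The scan-to-map matching step can be sketched as follows. This is a bare-bones 2D ICP under the assumptions noted in the comments, not a production implementation:

```python
import numpy as np

def icp_2d(source, target, iters=20):
    """Align a lidar scan (source) to map contour points (target).

    Nearest-neighbour association is brute force; a real system would
    use a k-d tree plus outlier rejection. Both inputs are (N, 2) point
    arrays; returns the accumulated rotation R and translation t.
    """
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    for _ in range(iters):
        # Associate each source point with its nearest target point.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:  # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```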
Visual SLAM is a research hotspot in computer vision and robotics. Besides robot navigation, it is also a core technology in the AR field. The principle of visual SLAM is loosely analogous to GPS: the camera acquires texture information about the space, and the robot's current position is deduced by combining it with the camera parameters. However, applying visual SLAM in service robotics and logistics still faces many challenges, such as environmental changes, lighting changes, and weakly textured scenes.
Figure 1 Abstraction of the pinhole camera space model
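Concretely, the pinhole model underlying Figure 1 maps a 3D point in camera coordinates to a pixel; a minimal sketch with hypothetical intrinsics:

```python
def project_pinhole(point_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates
    with the pinhole model. The intrinsics (fx, fy, cx, cy) below are
    hypothetical placeholder values."""
    X, Y, Z = point_cam
    assert Z > 0, "point must be in front of the camera"
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point 2 m ahead and 0.5 m to the right, with made-up intrinsics:
print(project_pinhole((0.5, 0.0, 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```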
To improve the stability of visual SLAM, an IMU is generally added as an aid during mapping and localization, while a depth camera directly acquires the 3D information of the texture features. Panoramic cameras are also widely used in SLAM. As deep learning has matured, many researchers have proposed semantic SLAM solutions, which have already seen some practical application.
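As a toy illustration of why IMU aiding helps (real visual-inertial systems fuse the IMU far more tightly, e.g. via preintegration inside an optimizer), a complementary filter blends a drifting gyro with a noisy but drift-free gravity reference; all rates and noise levels below are made up:

```python
import random

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer's gravity-derived pitch (noisy but drift-free).
    Only meant to show why an extra sensor stabilizes the estimate."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Stationary robot: gyro has a 0.02 rad/s bias, accelerometer is noisy.
pitch = 0.0
for _ in range(1000):
    pitch = complementary_filter(pitch, gyro_rate=0.02,
                                 accel_pitch=random.gauss(0.0, 0.05), dt=0.01)
print(round(pitch, 3))  # stays small (~0.01) instead of drifting to ~0.2
```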
Path planning and motion control
Motion planning refers to the robot's navigation plan, that is, how the robot moves through the environment. Path planning is divided into global and local path planning. Global path planning means specifying a target point on the map and planning a complete path to it, which the robot's route then follows; common algorithms include A*, Dijkstra, and Voronoi-based planners. Local path planning builds on the global plan and, combined with local obstacle information, controls the robot's motion through the motors; the DWA (Dynamic Window Approach) algorithm is a typical example. In real scenarios, path planning varies with the requirements, including fixed-point planning, fixed-route planning, shortest-distance planning, and remote obstacle planning.
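A minimal A* on a 4-connected occupancy grid illustrates the global-planning step; the grid, unit step costs, and Manhattan heuristic are simplifying assumptions:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = blocked)
    with a Manhattan-distance heuristic. A sketch of the global planner
    idea, not tuned for real maps."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                     # already expanded with lower cost
        came_from[node] = parent
        if node == goal:                 # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    heapq.heappush(open_set,
                                   (new_cost + h((nr, nc)), new_cost, (nr, nc), node))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the blocked row
```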
Logistics and service robots most commonly use a two-wheel differential drive for walking and turning. The local planner sends a linear velocity and an angular velocity to the motors in real time to drive the robot along its path.
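The conversion from the planner's (v, ω) command to individual wheel speeds is straightforward differential-drive kinematics; the wheel base and radius below are hypothetical:

```python
def diff_drive_wheel_speeds(v, omega, wheel_base, wheel_radius):
    """Convert a (linear velocity v, angular velocity omega) command into
    left/right wheel angular speeds for a two-wheel differential drive.
    wheel_base is the distance between the wheels; dimensions here are
    hypothetical."""
    v_left = v - omega * wheel_base / 2.0
    v_right = v + omega * wheel_base / 2.0
    return v_left / wheel_radius, v_right / wheel_radius

# 0.5 m/s forward while turning at 0.2 rad/s, 0.4 m wheel base, 8 cm wheels
print(diff_drive_wheel_speeds(0.5, 0.2, wheel_base=0.4, wheel_radius=0.08))
```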
Obstacle avoidance
The obstacle avoidance system is the guarantee of the robot's safe operation. Common obstacle avoidance schemes include lidar, depth cameras, ultrasonic sensors, and infrared sensors. Each scheme has its own advantages and disadvantages, and the robot should choose the appropriate one according to the application and its particulars. An ultrasonic sensor detects obstacles ahead by emitting a sound cone; it is generally suited to parking radar rather than to an autonomous navigation robot. Infrared sensors are mostly applied to cleaning robots. Robots used in the industrial and service sectors rely extensively on lidar for obstacle avoidance.
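As one plausible building block of a lidar-based scheme, the sketch below checks a planar scan for anything inside a forward safety zone; the stop distance and field of view are hypothetical tuning values:

```python
import math

def emergency_stop(ranges, angle_min, angle_increment,
                   stop_distance=0.35, fov=math.radians(60)):
    """Check a planar lidar scan for obstacles inside a forward safety
    zone (0 rad = straight ahead). stop_distance and fov are hypothetical
    tuning values, not from any specific robot."""
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        if abs(angle) <= fov / 2 and 0.0 < r < stop_distance:
            return True  # something too close in front: stop
    return False

# Hypothetical 5-beam scan covering -30 to +30 degrees
print(emergency_stop([1.2, 0.9, 0.3, 1.0, 2.0],
                     angle_min=-math.radians(30),
                     angle_increment=math.radians(15)))  # True
```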
Dispatch system
The dispatching system is responsible for task assignment, scheduling, and the operation and maintenance of the robots, including coordinating multiple concurrent tasks and mutual obstacle avoidance between robots. Operation and maintenance covers automatic charging, fault detection and alarms, and so on. Multi-robot scheduling plays a key role when the deployment scenario is large and the number of robots grows: the scheduler must first provide a clean interface to the user, while also maintaining scheduling efficiency when tasks are numerous and selecting a good scheduling scheme on the user's behalf.
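A deliberately simple greedy nearest-robot assignment conveys the task-allocation idea; real dispatchers also weigh battery level, traffic, and deadlines, and all names and coordinates here are invented:

```python
import math

def assign_tasks(robots, tasks):
    """Greedy nearest-robot task assignment: each task goes to the
    closest still-free robot. robots and tasks are {name: (x, y)} dicts;
    returns {task: robot}."""
    free = dict(robots)
    plan = {}
    for task, t_pos in tasks.items():
        if not free:
            break  # more tasks than robots: the rest wait for the next round
        best = min(free, key=lambda r: math.dist(free[r], t_pos))
        plan[task] = best
        del free[best]
    return plan

robots = {"amr-1": (0.0, 0.0), "amr-2": (5.0, 5.0)}
tasks = {"pick-A": (4.0, 4.5), "pick-B": (1.0, 0.0)}
print(assign_tasks(robots, tasks))  # {'pick-A': 'amr-2', 'pick-B': 'amr-1'}
```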