Chung-Ang University researchers have developed a high-precision, efficient method for estimating the orientation of a vehicle's built-in camera during driving.
Gasgoo News: Because self-driving cars typically rely on built-in cameras to monitor the road ahead, it is critical to know the camera's orientation accurately while the vehicle is moving. According to foreign media reports, Chung-Ang University researchers have developed a high-precision, efficient camera orientation estimation method that will help ensure the safe driving of autonomous vehicles.
With the advancement of automotive technology, a large number of vehicles will deploy autonomous driving systems in the near future. To this end, scientists have developed cameras and image sensing technologies that will enable autonomous vehicles to reliably perceive and 'see' the surrounding environment.
In developing this technology, researchers face various challenges. One of them is keeping track of the built-in camera's orientation while driving. Self-driving cars use a built-in camera for navigation and distance measurement, but the camera's orientation often shifts during dynamic driving. Professor Joonki Paik of Chung-Ang University explained, 'Camera calibration is critical for future vehicle systems, especially autonomous driving systems, because camera parameters such as focal length, rotation angle, and translation vector are essential for analyzing 3D information in the real world.'
Over the years, researchers have continuously developed and improved orientation estimation methods for in-vehicle cameras, using approaches such as voting algorithms, Gaussian spheres, and deep learning and machine learning. However, under real-world conditions, none of these methods is fast enough to estimate the camera's orientation accurately in real time.
To solve this speed problem, a group of researchers from Chung-Ang University, led by Professor Paik, built on previously developed methods and proposed a new, more accurate and efficient algorithm designed for forward-facing, fixed-focal-length cameras.
The algorithm consists of three steps. First, the camera captures an image of the environment in front of the vehicle; parallel lines on the objects in the image, aligned with the three orthogonal coordinate axes, are projected onto a Gaussian sphere, and the plane normals of these parallel lines are extracted. Second, the Hough transform, a feature extraction technique, is used to determine the 'vanishing point' in the driving direction (a vanishing point is the point where parallel lines appear to intersect in a perspective image, such as the two rails of a railroad track converging in the distance). Third, circular histograms are used to determine the vanishing points on the two perpendicular Cartesian planes.
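The geometry behind the vanishing-point step can be sketched in a few lines of code. This is a hypothetical illustration only: the function names, the least-squares line intersection, and the simple pinhole back-projection are assumptions for clarity, not the team's actual Hough-transform implementation.

```python
import numpy as np

def vanishing_point(lines):
    # Estimate the vanishing point as the least-squares intersection of
    # 2D image lines, each given as (a, b, c) with a*x + b*y + c = 0.
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    rhs = np.array([-c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return vp

def rotation_from_vp(vp, focal, principal_point):
    # Back-project the driving-direction vanishing point through a
    # pinhole camera with known focal length (pixels) to recover the
    # camera's yaw and pitch (radians) relative to the road direction.
    x = (vp[0] - principal_point[0]) / focal
    y = (vp[1] - principal_point[1]) / focal
    yaw = np.arctan2(x, 1.0)                    # rotation about the vertical axis
    pitch = np.arctan2(-y, np.hypot(1.0, x))    # rotation about the lateral axis
    return yaw, pitch
```

If the vanishing point sits exactly at the principal point, the camera is aligned with the road and both angles come out zero; an offset vanishing point directly encodes how far the camera has rotated.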
The team tested the method in experiments on real roads in Manhattan. They captured three driving environments in three videos and documented the accuracy and efficiency of the method in each environment. In two scenarios, the method estimated the camera orientation accurately and stably. In the third scenario, it performed worse because there were many trees and bushes in the camera's field of view. Overall, however, the method performed well under actual driving conditions. Professor Paik and his team stated that their method achieves high-speed estimation because the 3D voting space is transformed into a 2D plane at each step of the estimation process.
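The speed-up attributed to collapsing the 3D voting space onto 2D planes can be illustrated with a simple circular-histogram vote. This sketch is an assumption for illustration (the function name and uniform binning scheme are not from the published algorithm): instead of searching a full 3D accumulator over the Gaussian sphere, angle votes are binned on a circle and the dominant direction is read off the peak.

```python
import numpy as np

def circular_histogram_peak(angles, n_bins=360):
    # Accumulate angle votes (radians) into a 1D circular histogram and
    # return the dominant direction. Voting over angle bins like this is
    # far cheaper than peak-finding on a dense 3D accumulator.
    bins = np.round(np.asarray(angles) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins)
    return np.argmax(hist) * 2 * np.pi / n_bins
```

Because the histogram is circular, votes near 0 and near 2π land in neighboring bins rather than at opposite ends, which matters when the dominant direction is close to the wrap-around point.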
More importantly, Professor Paik said that their method 'can be used immediately in ADAS (advanced driver assistance) systems.' In the future, it could also be applied to collision avoidance, parking assistance, and 3D mapping to prevent accidents and ensure driving safety.