The Vehicle Industry Research Center of Széchenyi István University gave an exciting presentation to the audience at the Digital Welfare Program Exhibition of the Artificial Intelligence Coalition. The center's colleagues presented the environmental perception capabilities of their autonomous vehicle live to the audience.
Artificial intelligence and data-driven solutions offer innovative efficiency gains in many areas and in almost all industries. The professional day and exhibition organized by the Artificial Intelligence Coalition drew attention to this.
The autonomous car of the SZTAKI System and Control Theory Research Laboratory and the Vehicle Industry Research Center of the Széchenyi István University of Győr was on display at the exhibition. "The autonomous vehicle, built on the basis of a Nissan Leaf, is equipped with several sensors that allow the car to orient itself, even in traffic," explained Dr. Alexandros Soumelidis, Senior Research Fellow at SZTAKI.
Dr. Ferenc Szauter, head of the Vehicle Industry Research Center, said that the exhibition focused on presenting artificial intelligence and machine-vision-based environmental sensing methods for autonomous vehicles.
During the exhibition, the research center set up an interactive demonstration in which the vehicle sensed passers-by walking in front of it and showed live on a monitor how it recognizes human figures in the camera's field of view, while also displaying the recognized people in three-dimensional space based on LIDAR data. "We have developed an environmental perception method that is capable of fusing the camera and LIDAR, so the two sensors can be used as a single system," emphasized Norbert Markó, a research engineer at the Vehicle Industry Research Center. "Based on the camera's image, the vehicle, like human vision, is able to recognize what objects are in the field of view. We are currently focusing on recognizing the lane area, passers-by and means of transport such as cars, trucks or even bicycles," continued János Hollósi, a research engineer at the center. "The machine-vision-based artificial intelligence system is able to determine which pixels of the camera image belong to each object type," added Dr. Ernő Horváth, research group leader at the center.
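The kind of camera-LIDAR fusion described above can be sketched in a few lines: 3D LIDAR points are projected into the camera image with a pinhole camera model, and each point inherits the semantic class of the pixel it lands on. This is only a minimal illustration of the general technique, not the center's actual pipeline; the intrinsic matrix, image size, and class labels below are all made-up values.

```python
import numpy as np

def project_points(points_xyz, K):
    """Project 3D points (camera frame, Z forward) onto the image plane
    with a pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    pts = np.asarray(points_xyz, dtype=float)
    z = pts[:, 2]
    u = K[0, 0] * pts[:, 0] / z + K[0, 2]
    v = K[1, 1] * pts[:, 1] / z + K[1, 2]
    return np.stack([u, v], axis=1), z

def label_points(points_xyz, K, class_map, classes):
    """Assign each LIDAR point the semantic class of the pixel it
    projects to; points outside the image (or behind the camera)
    are labeled 'unknown'."""
    h, w = class_map.shape
    uv, z = project_points(points_xyz, K)
    labels = []
    for (u, v), depth in zip(uv, z):
        ui, vi = int(round(u)), int(round(v))
        if depth > 0 and 0 <= ui < w and 0 <= vi < h:
            labels.append(classes[class_map[vi, ui]])
        else:
            labels.append("unknown")
    return labels

# Illustrative intrinsics: 640x480 image, 500 px focal length.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
classes = ["road", "pedestrian", "vehicle"]
class_map = np.zeros((480, 640), dtype=int)   # per-pixel class IDs: all "road"...
class_map[200:300, 300:400] = 1               # ...except a "pedestrian" region

points = [(0.0, 0.0, 10.0),   # projects to the image center -> pedestrian
          (5.0, 0.0, 10.0),   # projects to (570, 240) -> road
          (0.0, 0.0, -5.0)]   # behind the camera -> unknown
print(label_points(points, K, class_map, classes))  # ['pedestrian', 'road', 'unknown']
```

In a real system the per-pixel class map would come from a segmentation network and the LIDAR points would first be transformed into the camera frame with a calibrated extrinsic transform, which this sketch omits.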
Based on the distance information of the detected objects and their individual points, the software system creates a three-dimensional replica of the vehicle's environment, which is necessary for safe autonomous driving. This information is available in a form that can be interpreted and processed by further algorithms, such as route planning and obstacle avoidance. The camera image alone, however, determines the position of each object only in the image plane: it tells where the object is in the image, but not where it is in real space.
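The step from an image-plane position to a real-space position can be illustrated by inverting the pinhole projection: once a depth value (here imagined as coming from LIDAR) is attached to a pixel, the 3D point is recoverable. The intrinsics and the detected pixel below are illustrative assumptions, not values from the vehicle.

```python
import numpy as np

# Illustrative pinhole intrinsics: 500 px focal length, principal point (320, 240).
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def back_project(u, v, depth):
    """Recover a 3D point (camera frame, meters) from a pixel and a depth:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])

# A pedestrian detected at pixel (420, 240), with LIDAR reporting 12.5 m depth:
p = back_project(420.0, 240.0, 12.5)
print(p)  # x = 2.5 m to the right, 12.5 m ahead
```

Without the depth term, every point along the ray through that pixel produces the same image coordinates, which is exactly why the camera image alone cannot place the object in space.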
According to Péter Kőrös, professional head of the Autonomous Transport Systems Center of the Széchenyi István University, the current goal is not to determine the distance of a detected object from the vehicle on the basis of the camera image alone. Human vision is not capable of this task either; it only gives an estimate that is sufficient for a person to travel with relative safety. In the case of the vehicle, it is not the camera but the LIDAR sensors that determine the distance to surrounding objects with centimeter accuracy. LIDAR generates a so-called 3D point cloud, in which the vehicle's environment is represented as spatial points at a given density.
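A point cloud of this kind makes distance queries straightforward: each point is an (x, y, z) coordinate in meters relative to the sensor, so the distance to the nearest obstacle is just the smallest Euclidean norm in the cloud. The tiny synthetic cloud below is for illustration only; real LIDAR scans contain tens of thousands of points per sweep.

```python
import numpy as np

def nearest_obstacle_distance(point_cloud):
    """Euclidean distance from the sensor origin to the closest point
    in a LIDAR point cloud (an N x 3 array of x, y, z in meters)."""
    pts = np.asarray(point_cloud, dtype=float)
    return float(np.linalg.norm(pts, axis=1).min())

# A tiny synthetic cloud: three points around the vehicle.
cloud = np.array([[3.0, 4.0, 0.0],    # 5.0 m away
                  [0.0, 2.0, 0.0],    # 2.0 m away
                  [6.0, 8.0, 0.0]])   # 10.0 m away
print(nearest_obstacle_distance(cloud))  # 2.0
```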