LiDAR sensors are the current standard for giving self-driving cars safe and accurate perception, but what if something better and more cost-effective were available? That is what recent studies aim to find out, by developing a way to give self-driving cars stereo vision.
When it comes to perception, autonomous cars have a tough job. They have to distinguish other vehicles, pedestrians, and riders, as well as objects such as stop signs. And they must be able to judge distances, so that they can avoid these obstacles once they detect them. Light Detection and Ranging (LiDAR) sensors measure distance by emitting pulses of light and timing how long each beam takes to reflect off distant objects.
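The time-of-flight principle behind LiDAR reduces to a single formula: the pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative only, not any vendor's API; the function name is hypothetical):

```python
# Speed of light in a vacuum, in meters per second.
C = 299_792_458.0

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to a target, given the round-trip time of a light pulse.

    The pulse covers the distance twice (out and back), hence the divide by 2.
    """
    return C * t_seconds / 2.0

# A pulse that returns after about 200 nanoseconds hit something roughly 30 m away.
d = distance_from_round_trip(200e-9)
```

The nanosecond scale of these measurements is part of why the hardware is so expensive: the sensor's timing electronics must resolve tiny fractions of a second to achieve centimeter-level accuracy.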
According to Kilian Q. Weinberger of Cornell's Computer Science department, LiDAR is a safe but very expensive technology. Together with researchers from other departments and universities, Weinberger is designing algorithms that help self-driving cars detect objects in 3D, with results similar to LiDAR, but at a much lower cost.
The concept of simulating stereo vision with two cameras is not new, but this team was the first to show exactly how it can be done, in a series of papers published since 2018. Weinberger and his colleagues designed algorithms called neural network architectures that use the data generated by the two cameras to actually detect objects.
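The geometric idea behind two-camera depth estimation is standard pinhole stereo: an object appears at slightly different horizontal positions in the left and right images, and that shift (the disparity) shrinks as the object gets farther away. A minimal sketch of the textbook relation, with example numbers chosen for illustration (the function and values are assumptions, not the team's actual pipeline):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from stereo disparity.

    Pinhole stereo geometry: depth = focal_length * baseline / disparity,
    where focal_length and disparity are in pixels and the baseline is the
    distance between the two cameras in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity or mismatched)")
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and cameras 0.54 m apart, a shift of
# 18.9 pixels between the two images places the object 20 m away.
z = depth_from_disparity(700.0, 0.54, 18.9)
```

The hard part, and what the neural network architectures handle, is finding which pixel in the left image matches which pixel in the right one; once that correspondence is known, recovering depth is just this arithmetic.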
This was done using machine learning, which essentially enables artificial intelligence to learn from experience. Instead of writing a program that tells a computer exactly what to do, machine learning gives it examples of what it should do, so that it can establish patterns.
But there's a catch: apparently, even AI can become overconfident. Once it stops making mistakes on the examples it was given, it acts as if it is always right, and in the real world that can lead to serious errors. So Weinberger and his collaborators are also working on making AI capable of estimating its own level of accuracy, an algorithm that would support self-driving systems and have many other applications as well.
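One common way this overconfidence shows up is in a classifier's softmax output: large raw scores get squashed into probabilities close to 1. A standard remedy from the calibration literature (not necessarily the team's own algorithm) is temperature scaling, which softens the scores so the reported confidence better reflects real-world accuracy. A minimal sketch:

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert raw scores to probabilities; higher temperature = softer, less confident output."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw scores for three classes, e.g. car / pedestrian / cyclist.
logits = [4.0, 1.0, 0.5]

confident = max(softmax(logits))          # about 0.93: near-certain at temperature 1
calibrated = max(softmax(logits, 3.0))    # about 0.60: same prediction, humbler confidence
```

The predicted class is unchanged; only the stated confidence is tempered, which is exactly the property a self-driving system needs when deciding whether to trust a detection or slow down.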