There are several things holding back the mass proliferation of driverless vehicles, and the legislative environment is only part of it. Whether we like it or not, the technology isn’t yet ready to face the multitude of challenges posed by the open world.
One of the key aspects of autonomous car development is giving these vehicles their own set of eyes. Instead of the two receptors humans use to perceive the world around them, these cars will use a whole array of sensors, from standard video cameras to 3D laser scanning devices. All of these put together will help the smart cars of the future map their surroundings, while clever programming will enable them to actually make sense of all the data being received.
Last month, a team from the New York Times and ScanLAB Projects - a London-based design studio founded by two British architects fascinated by the world of laser scanning - mounted a scanning device on a Honda CR-V and drove it through the old streets of Great Britain’s capital city. Their purpose? To show people how these autonomous vehicles we hear so much about see what’s around them - basically, to allow everyone to see a parallel universe that exists all around, but escapes our grasp because of the very efficient sensors mother nature gave us.
But 3D scanning isn’t as straightforward as you and I might think. Working essentially like sonar does under water, only with pulses of light instead of sound, its main problem is that the laser beams it fires can easily be thrown off by shiny surfaces, creating anomalies that the car’s software will have difficulty reading and interpreting. Something as simple as a light drizzle can have a devastating effect on the accuracy of these laser scanners. While these little errors are exactly what make the images captured by ScanLAB so fascinating, they’re also the reason fully autonomous cars aren’t expected to become the norm before 2030.
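The sonar comparison comes down to a simple time-of-flight calculation: the scanner times how long a laser pulse takes to bounce back, then converts that delay into a distance. Here is a toy sketch of that idea in Python (the constants and the filtering threshold are illustrative assumptions, not how any real LiDAR unit is configured), including a crude filter for the kind of anomalous returns that rain or mirror-like surfaces produce:

```python
# Toy illustration of the time-of-flight principle behind laser scanning.
# Real LiDAR hardware does this with dedicated timing circuits; the
# max_range cutoff below is a made-up value for demonstration only.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(seconds):
    """The pulse travels to the target and back, so halve the path."""
    return SPEED_OF_LIGHT * seconds / 2

def filter_returns(distances, max_range=120.0):
    """Discard physically implausible readings, such as pulses scattered
    by raindrops or deflected by shiny surfaces so they never return
    along a straight path."""
    return [d for d in distances if 0.0 < d <= max_range]

# A pulse returning after 0.4 microseconds hit something roughly 60 m away.
echo = distance_from_round_trip(4.0e-7)
print(round(echo, 2))  # ≈ 59.96

# Negative or absurdly distant readings get dropped as noise.
print(filter_returns([echo, -1.0, 350.0]))
```

A real scanner fires hundreds of thousands of such pulses per second, and it is exactly when this clean geometry breaks down (wet roads, glass, chrome) that the ghostly artefacts in ScanLAB's images appear.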
Until then, though, we have plenty of time to enjoy this interesting insight into how the artificial intelligence we’re creating sees the world.