autoevolution
Roborace Car Crashes Right Off the Line - Do You Still Believe Elon?

In case you missed it, three days ago one of the cars taking part in the Roborace Season Beta Event 1.1 Round 1 smashed into the concrete barrier in the most hysterically funny way.
Photos: Acronis SIT Autonomous Team race car, its crash, and its presentation
The AI-controlled vehicle was lined up on the starting line, ready to start the race. Or so it seemed. Once it got the green light, however, it didn't seem interested at all in the sinuous stretch of asphalt in front of it. Instead, it found the concrete barrier on its right-hand side extremely appealing, so it wasted no time and drove straight into it.

Why do we find that funny? Well, for the same reason we think a lot of technical malfunctions are amusing. Robots are supposed to be a better version of humans. Thanks to their sensors and their millimetric precision, they should be capable of performing tasks with near perfection, unlike us with our clumsy hands and dubious eyesight.

Well, the truth is a robot is only as good as the imperfect human who programmed it, which is why robots fail so often. And, yes, the car run by the Acronis SIT Autonomous Roborace team, the one that smacked into the barrier three days ago, was also the victim of a programming lapse.

A Reddit user called Grouchy-Big9198 made a post describing everything that went on. He says he's an engineer with the team, and he offered enough insight into the whole shenanigans to leave no doubt over what had happened. We'll keep this short since it's pretty technical; those interested in more details can go have a look at the full Reddit post.

Briefly, the car reported an issue during the initialization lap - the lap that takes the car from the boxes to the start line in the hands of a human driver. The failure caused the steering control signal to go to NaN, which stands for Not a Number. Think zero divided by zero or infinity minus infinity - the kind of operations that don't have an actual number as a result.

This problem caused the steering to lock to its maximum value to the right. Once the go signal was given, the vehicle accelerated as it should have, except the wheels were pointing in the right direction, which, in this case, was the wrong direction.

The engineer goes into more detail as to how the failure managed to squeeze through all the fail-safes undetected, but the important part to take away from this incident is that some things can't be foreseen and will only rear their ugly heads out in the field.
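To see how a NaN can sneak past a range check, it helps to know one quirk of floating-point math: every ordered comparison involving NaN evaluates to false. A rough sketch of the idea, in Python, is below - the function names are ours, purely hypothetical, and not taken from the team's actual code:

```python
import math

def clamp_steering_naive(cmd: float, limit: float = 1.0) -> float:
    """Naive fail-safe: center the wheels if the command is out of range."""
    if cmd < -limit or cmd > limit:
        return 0.0  # out of range -> treat as invalid
    # NaN reaches this line: both comparisons above are False for NaN
    return cmd

def clamp_steering_safe(cmd: float, limit: float = 1.0) -> float:
    """Same check, but NaN is caught explicitly before the range test."""
    if math.isnan(cmd) or not -limit <= cmd <= limit:
        return 0.0
    return cmd

print(clamp_steering_naive(math.nan))  # nan - slips straight through the guard
print(clamp_steering_safe(math.nan))   # 0.0 - caught
```

If a NaN like that reaches the actuator layer and gets interpreted as a saturated value, the result could plausibly look exactly like what happened here: steering pinned to full lock.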

We're not entirely sure why the vehicle's LIDAR system didn't jump in to stop the car when it detected the hard object approaching. We're no programmers, but in our book, when something like that threatens to happen, the equivalent of a preservation instinct should kick in and override everything, stopping the car dead in its tracks. Or at least try to. Was the collision prevention system switched off because the car was alone on the track? Sadly, we're not that familiar with the Roborace environment, but it sounds like a plausible explanation.

Yes, this was an unmanned racecar on an empty racetrack, but imagine something similar happening out on a busy road. To think it's impossible means to ignore the nearly infinite variables the real world can throw at an AI - an AI that was programmed by humans, a species so famous for its ability to err that we put it in an idiom. No matter what people say - even if they own car companies and are a hundred times smarter and a million times wealthier than us - autonomous driving is not ready, and it won't be fully ready for years to come. This is just one more piece of evidence, one that fortunately resulted in no casualties. Not all of them will be like this.
