Tesla Patents Show Bird's Eye View Will Be Offered via Vision-Based Machine Learning

Tesla thinks its FSD Beta software should interpret the world as humans do, based on vision only. The EV maker claimed other sensors could add inconsistencies and started a war against radar, LiDAR, and even ultrasonic sensors. We can now better understand how Tesla Vision works thanks to new patent filings published on February 23.
Patent filings are a fascinating window into future technologies, allowing us to learn about them long before they influence our lives. Many patents never make it into production, while others are only building blocks for technologies still in development. They are nonetheless interesting to study, offering a glimpse into the future. Speaking of the future, Tesla is probably the one company poised to change our lives in ways we couldn't even fathom. That's why studying Tesla's patents is perhaps more rewarding than studying those of other companies.

This time, we won't discuss some crazy new windshield or a new take on the Cybertruck's Gigawiper. Instead, we'll focus on two patent filings describing technologies already affecting people's lives. Filed in August 2022, the two patents describe how Tesla Vision maps and interprets the surroundings using neural networks. This is interesting because other carmakers have claimed that you cannot reliably interpret the environment without using every sensor available, including expensive LiDAR scanners.

Tesla, on the other hand, wants to use only vision information to train its neural networks and interpret data for autonomous driving. Tesla argues that humans navigate the world and drive their cars based solely on what they see with their two eyes. A car has many more eyes, so putting them to good use should result in much safer driving than humans are capable of. That's why the EV maker has started stripping sensors from its vehicles, claiming they add clutter and inconsistency, feeding the machine-learning model redundant and sometimes confusing data.

Based on the two patents published by the USPTO on February 23, Tesla wants the car to become more human, at least as far as seeing the world is concerned. The patents describe a vision-based machine-learning model that relies on software to reduce sensor-based hardware complexity while enhancing accuracy. The model uses images generated by cameras to simulate human-like vision-based driving.

We're no AI experts, but the two patents describe methods of combining the images generated by the car's eight external cameras into a vector space for the neural networks to chew on. One of the patents describes a "vision-based machine learning model for aggregation of static objects and systems for autonomous driving." It shows how the software can generate a "bird's eye view" of the surroundings based on the machine-learning model. This is useful for depicting static objects.
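Tesla's actual network architecture isn't published as code, but the general idea of pooling per-camera observations into one top-down map can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: we pretend the cameras already yield object positions in the vehicle frame (the hard fusion work the patent assigns to neural networks) and simply rasterize them into a bird's-eye-view grid.

```python
import numpy as np

def rasterize_bev(detections, grid_size=200, cell_m=0.5):
    """Accumulate ground-plane detections, pooled from all cameras,
    into one bird's-eye-view occupancy grid centered on the vehicle.

    detections: list of (x_m, y_m) positions in the vehicle frame;
    x_m is meters forward, y_m is meters to the left.
    """
    bev = np.zeros((grid_size, grid_size), dtype=np.uint8)
    half = grid_size // 2
    for x_m, y_m in detections:
        col = half + int(round(x_m / cell_m))  # forward maps to the right
        row = half - int(round(y_m / cell_m))  # left maps upward
        if 0 <= row < grid_size and 0 <= col < grid_size:
            bev[row, col] = 1
    return bev

# Example: one static object 5 m ahead, another 3 m to the left
grid = rasterize_bev([(5.0, 0.0), (0.0, 3.0)])
```

With a 0.5 m cell size, the 200x200 grid covers roughly 50 m in every direction around the car, which is the kind of local map a parking or maneuvering feature would consume.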

The other patent is a "vision-based machine learning model for autonomous driving with adjustable virtual camera." The patent describes how the software uses the images acquired by the car's cameras to output several views of the surroundings, as viewed from different angles and heights. This pertains to moving objects, like cars and pedestrians.

For instance, vulnerable road users (VRUs) are projected by the machine-learning model at lower heights, comparable to human height. In contrast, non-VRUs (mainly other vehicles) are projected into the vector space as if the virtual camera were positioned much higher, around 20-30 meters (65-100 feet). This is intended to reduce object occlusions while preserving a long maximum detection range.
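The per-class choice of virtual-camera height can be sketched in a few lines of Python. The class names and the exact heights below are illustrative assumptions; only the roughly 20-30 meter figure for vehicles comes from the patent description.

```python
# Illustrative sketch: pick a virtual-camera height per object class,
# echoing the patent's low viewpoint for VRUs and high viewpoint for
# vehicles. Values are assumptions, not taken from the patent text.
VRU_CLASSES = {"pedestrian", "cyclist", "motorcyclist"}

def virtual_camera_height(object_class: str) -> float:
    """Return the virtual-camera height (in meters) used to render an object."""
    if object_class in VRU_CLASSES:
        return 1.8   # roughly eye level: keeps nearby VRUs large and unoccluded
    return 25.0      # high vantage point: fewer occlusions, longer range

heights = {c: virtual_camera_height(c) for c in ("pedestrian", "car", "truck")}
```

The design point is simply that the rendering viewpoint is a free parameter of the learned vector space, so different object types can be viewed from whichever angle best avoids occlusion.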

The new patent applications indicate that Tesla's Vision system is already well advanced, although not advanced enough to get Park Assist working on vehicles without ultrasonic sensors. Tesla may have jumped the gun by removing the parking sensors before Vision could offer the same functionality. Nevertheless, the bird's eye view and virtual cameras seem able to compensate for the missing sensors and provide similar performance. Fingers crossed.

 Download: Vision-based machine learning model (PDF)

About the author: Cristian Agatie

After his childhood dream of becoming a "tractor operator" didn't pan out, Cristian turned to journalism, first in print and later moving to online media. His top interests are electric vehicles and new energy solutions.

