We are living at the dawn of the age of autonomous driving. It will still be a while before a vehicle capable of driving itself under all conditions takes the road. That would be Level 5, Full Automation. Below it:
- Level 4, High Automation – the vehicle can handle all driving functions under certain conditions. In Levels 4 and 5, the driver may still have the option to control the vehicle.
- Level 3, Conditional Automation – a driver is necessary but is not required to monitor the environment. The driver must be ready to take control of the vehicle whenever notified.
- Level 2, Partial Automation – functions like steering and braking are automated, but the driver must stay engaged with the driving task and monitor the environment at all times.
We are somewhere between Level 2 and Level 3 now.
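The levels above can be captured as a small lookup table. This is an illustrative sketch of the driver's obligations at each level; the field names are my own, not an official SAE schema:

```python
# Illustrative sketch of the driving-automation levels discussed above.
# Field names are my own shorthand, not an official SAE encoding.
SAE_LEVELS = {
    2: {"name": "Partial Automation",
        "driver_monitors_environment": True,
        "driver_must_be_ready": True},
    3: {"name": "Conditional Automation",
        "driver_monitors_environment": False,
        "driver_must_be_ready": True},   # must take over when notified
    4: {"name": "High Automation",
        "driver_monitors_environment": False,
        "driver_must_be_ready": False},  # only within certain conditions
    5: {"name": "Full Automation",
        "driver_monitors_environment": False,
        "driver_must_be_ready": False},  # all conditions
}

def driver_obligations(level: int) -> str:
    """Summarize what the human must do at a given automation level."""
    info = SAE_LEVELS[level]
    if info["driver_monitors_environment"]:
        return "Stay engaged and monitor the environment at all times."
    if info["driver_must_be_ready"]:
        return "Be ready to take control when the vehicle requests it."
    return "No driver intervention required while automation is active."

print(driver_obligations(3))
```

The key jump is between Levels 2 and 3: the moment the car, not the human, monitors the environment.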
Autonomous Vehicles (AVs) promise safety. Imagine the number of accidents caused today by human error that could be prevented. That would be a huge win. AVs would enable people with disabilities to lead fuller lives. They would improve productivity by letting drivers work or do something else useful while on the road. Family summer trips could be much more fun.
On the flip side, AVs won’t be accepted until they are at least as safe as human drivers. Humans have become extremely good drivers, and vehicles are safer than ever. The technology to develop AVs is incredibly expensive. On top of that, a connected car can be hacked.
Tesla’s Elon Musk said on April 22, 2019: “Next year for sure, we will have over 1 million robotaxis on the road.” We are well past that deadline, but the roads are quiet, for a different reason.
Technology-wise, bad weather, uneven terrain, and the difficulty of identifying and anticipating the movement of objects remain significant hurdles. Many legal and regulatory standards remain to be crafted. Insurance liability is one key hurdle. How does an AV decide when confronted with the choice between hitting a pedestrian and crashing itself, potentially injuring its occupants? Consumer distrust and cybersecurity fears round out the challenges.
How does an AV see and navigate the world around it? Several technologies work together:
- LiDAR (Light Detection and Ranging) – Fires millions of laser pulses per second and uses the reflections to scan the surroundings and build high-resolution 3D maps of the vehicle’s environment. It can distinguish a bicycle from a motorcycle, or tell whether a pedestrian is facing forward or backward. The drawbacks are that it is very expensive and does not work well in bad weather.
- Cameras – Used for traffic sign recognition, side and rear surround view, and parking assistance. They provide the highest-resolution images and can capture both wide-angle and narrower views of what’s ahead. Again, weather can interfere with proper functioning.
- Radar – Sends out radio waves that bounce off distant surfaces. Common uses include adaptive cruise control, automatic emergency braking, blind spot detection, parking assistance, collision avoidance, and obstacle detection. Radar reliably sees hundreds of yards out and detects an object’s size and speed. However, it cannot see detail, so it is not able to identify objects.
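Because each sensor has different strengths, an AV combines their readings. Here is a toy sketch of fusing radar and camera distance estimates for the same object; the confidence weights are made up for illustration, and a real stack would use Kalman filtering and object association rather than a simple weighted average:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "radar", "camera", or "lidar"
    distance_m: float  # estimated distance to the object
    confidence: float  # 0..1, the sensor's self-reported confidence

def fuse_distance(detections: list[Detection]) -> float:
    """Confidence-weighted average of distance estimates.

    Illustrative only: production systems use Kalman filters and
    track association, but the idea of combining complementary
    sensors is the same.
    """
    total_weight = sum(d.confidence for d in detections)
    return sum(d.distance_m * d.confidence for d in detections) / total_weight

# Radar is strong on range and speed; the camera confirms what
# the object actually is but is weaker at estimating distance.
readings = [
    Detection("radar", 48.0, 0.9),
    Detection("camera", 45.0, 0.6),
]
print(f"fused distance: {fuse_distance(readings):.1f} m")
```

The fused estimate leans toward the radar reading because radar reports higher confidence at range, which mirrors the division of labor described in the list above.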
Another cool aspect of AVs is the development of V2X – Vehicle to Everything. This means passing information to any entity or object that may affect the vehicle. A vehicle may communicate with another vehicle, with infrastructure (traffic signals, buildings), or with pedestrians. All these scenarios are likely to come alive with 5G technology. In reality, this will take a long time.
An AV is really a data center on wheels. It represents the truest form of computing at the edge.
Fascinating developments in this area await us. I am really looking forward to a Level 5 AV hitting the road within the next 10 years. May the best technology win.