Mrunal Manmay Dash

Every year, car accidents claim the lives of more than 1,55,600 people in India alone. Self-driving cars promise to bring that number down significantly by increasing vehicle safety: they can theoretically react faster than human drivers, they don't get fatigued, and they don't text or drive under the influence of alcohol. They also promise to boost the mobility and independence of the elderly and of others who find it difficult to drive.

First, let’s define what we mean by “self-driving”. SAE International (formerly the Society of Automotive Engineers, USA) has defined six levels of driving automation, as described below:

Level 0 - The human driver performs all driving functions

Level 1 - Some level of driver assistance (either adaptive cruise control or lane keeping / centering)

Level 2 - Partial driver automation (both adaptive cruise control and lane centering) but driver must maintain awareness (hands on wheel, eyes on road, or both)

Level 3 - Conditional driver automation (human may take hands off the wheel and read a book under specified conditions but must intervene when instructed to do so by the vehicle)

Level 4 - High automation (human never needs to intervene as long as the car is operating under specific conditions, such as on designated city streets or campus shuttle routes)

Level 5 - Full automation (human never needs to intervene)

Levels 3-5 are considered Automated Driving Systems (ADSs), in which the driver does not need to pay attention to the road. At Level 3, the driver can read a book or watch a movie but must be able to take over control within 10-60 seconds if asked to do so by the vehicle. One big issue for Level 3 vehicles is that a crash might occur during the seconds the driver spends taking over, so Level 3 vehicles will probably be restricted to an Operational Design Domain (ODD) where a 10-second handover is reasonably safe (e.g. low-speed highway traffic jams).

Level 4 vehicles are constrained to an ODD, which typically means a constrained geography (for example, a small number of streets in a city) and may also include restrictions based on weather, time of day, road grade and curvature, and other factors. This is what distinguishes Level 4 vehicles from Level 5 vehicles: in theory, Level 5 vehicles can fully replace passenger cars and commercial trucks because they are unrestricted and can travel anywhere.
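The taxonomy above can be sketched as a simple lookup. The class, field, and function names below are illustrative only, not drawn from any real ADS codebase, and the ODD check is deliberately reduced to two toy conditions:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (illustrative mapping)."""
    NO_AUTOMATION = 0           # human performs all driving functions
    DRIVER_ASSISTANCE = 1       # adaptive cruise control OR lane centering
    PARTIAL_AUTOMATION = 2      # both, but driver must stay attentive
    CONDITIONAL_AUTOMATION = 3  # hands off, but must take over when asked
    HIGH_AUTOMATION = 4         # no intervention needed within the ODD
    FULL_AUTOMATION = 5         # no intervention needed anywhere

def is_ads(level: SAELevel) -> bool:
    """Levels 3-5 count as Automated Driving Systems (ADSs)."""
    return level >= SAELevel.CONDITIONAL_AUTOMATION

def may_self_drive(level: SAELevel, in_geofence: bool, good_weather: bool) -> bool:
    """Toy ODD check: a Level 4 vehicle may self-drive only under its
    design conditions; a Level 5 vehicle has no such restriction."""
    if level == SAELevel.FULL_AUTOMATION:
        return True
    if level == SAELevel.HIGH_AUTOMATION:
        return in_geofence and good_weather
    return False
```

A real ODD would of course encode far more than two booleans (road type, speed range, lighting, precipitation), but the structure is the same: a set of conditions that must all hold before the automation is allowed to operate.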

Today's consumer vehicles, including Teslas, offer a wide variety of driver-assistance features. They can automatically accelerate and brake as well as keep the car centred in its lane. In India, manufacturers such as Mahindra and MG have already introduced ADAS in some of their latest SUVs.

However, reading a book or watching a movie in these vehicles is not safe. They are Level 2 systems, not ADSs: the driver must keep a steady eye on the road and be prepared to take control at any time.

But will self-driving vehicles actually be safer? The biggest issue for the automotive industry revolves around handling unexpected situations that arise from edge cases. There are good reasons to believe that some types of autonomous vehicles may not be capable of handling these edge cases.

When something unexpected happens while driving, like a stray dog darting onto the road, people utilise common sense and reasoning to deal with it. Routes may be impassable or difficult to navigate during a flood or on a ghat road.

Driving school does not cover these potential edge scenarios. Instead, we anticipate events and act using everyday common sense reasoning. When a ball rolls onto the road, we know to watch out for children chasing it. When we notice the car in front of us swerving, we adjust our driving because we assume the driver may be inebriated or texting.

Moreover, computer vision systems can be tricked in ways that people cannot, which is another problem for ADSs. American researchers demonstrated, for instance, that small alterations to a speed limit sign could trick a machine learning system into believing it reads 85 kmph rather than 35 kmph. Similarly, hackers created a fake lane using brightly colored stickers to deceive Tesla's Autopilot into changing lanes. In both cases, the modifications fooled the cars but not people.
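To make the idea of such an attack concrete, here is a toy sketch, not the researchers' actual method. A nearest-centroid "sign classifier" works on a hypothetical 3-number feature vector, and a small sticker-like perturbation pushes the input just past the decision boundary, flipping the predicted label even though the change is tiny:

```python
# Toy illustration of an adversarial perturbation. Real attacks target
# deep networks with carefully crafted stickers; this only shows the
# underlying geometry: a small input change crossing a decision boundary.

def classify(features, centroids):
    """Return the label whose centroid is nearest to the feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Hypothetical feature templates for two speed-limit signs.
centroids = {
    "35": [0.2, 0.5, 0.2],
    "85": [0.2, 0.5, 0.4],
}

clean_sign = [0.2, 0.5, 0.25]   # clearly closer to the "35" template
sticker = [0.0, 0.0, 0.1]       # small, localised perturbation
tampered_sign = [c + s for c, s in zip(clean_sign, sticker)]

print(classify(clean_sign, centroids))     # prints 35
print(classify(tampered_sign, centroids))  # prints 85
```

A human glancing at the tampered input would see almost no difference, yet the classifier's answer changes completely; that asymmetry is exactly what the sign and lane attacks exploit.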

Unfortunately, no one knows how to build commonsense and reasoning into cars, or into computers in general. In lieu of commonsense reasoning capabilities, ADS developers must anticipate and code every possible situation.

There are millions, maybe billions, of these edge cases. Everyone has at least one unusual driving story, and there are 1.4 billion drivers in the world. If there are 1.4 billion such edge cases, how can they possibly all be identified, much less coded?

Split-second decisions, rapidly changing weather conditions, being able to look into another driver’s eyes at a crossroads – these are real-life conditions best left for an engaged driver. Technology can undoubtedly be enormously helpful; in some instances, some of the new automotive assist technologies can be lifesaving when properly used. But driving is complicated; roads, lanes, and conditions vary, and the same actions aren’t always the best under all circumstances.

And if ADSs cannot perform the common sense reasoning needed to handle all these edge cases, are they really safer than human drivers? That is a question that seems unlikely to be answered in the near future.