You have probably heard that self-driving cars are coming soon. But people have been saying that for at least a decade, and I still can’t buy a car that will drive me to work while I nap in the passenger seat.
Some cars already come with partial autonomy: systems like Tesla’s Autopilot that assist drivers or sometimes even take control. But they still need a human driver who can grab the wheel and pedals on short notice if things get unpredictable, which is why Bhavesh Patel from London was arrested in April 2018 for trying the passenger-seat thing. Some fully driverless vehicles might be released in the next few years, but they are only meant for very specific uses, like long-haul trucking or taxis confined to certain streets and neighbourhoods.
General-purpose driving is hard, because the software has to work out a lot of really tricky questions to turn information from its sensors into commands for the steering and pedals. Despite all the money and brainpower being poured into research, there are still major challenges at every step along that path.
The first thing a self-driving car has to do is figure out what’s in its surroundings. This is called the PERCEPTION stage. Humans can do it at a glance, but a car needs data from a whole suite of sensors: cameras, radar, ultrasonic sensors and lidar (basically a detailed 3D radar that uses lasers instead of radio waves). Today’s autonomous vehicles do pretty well at interpreting all that data to build a 3D digital model of their surroundings: the lanes, cars, traffic lights and so on. But it’s not always easy to figure out what is what. For example, if lots of objects are close together, like people in a dense crowd, it is hard for the software to separate them. So to work properly in pedestrian-packed areas like major cities, the car might have to consider not just the current image but also the past few moments of context. That way, it can group a smaller blob of points moving together into a distinct pedestrian about to step into the street. But this has not been fully worked out yet.
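To make the idea concrete, here is a toy sketch of how motion over consecutive frames can split one spatial blob of sensor points into two separate pedestrians. This is not any real perception stack; the function, thresholds and data are all invented for illustration.

```python
def cluster_points(points, prev_points, pos_eps=1.5, vel_eps=0.3):
    """Greedy clustering: a point joins a cluster only if it is close
    to that cluster's first point in space AND moving similarly.
    Velocity is estimated from the previous frame's positions."""
    velocities = [(x - px, y - py)
                  for (x, y), (px, py) in zip(points, prev_points)]
    clusters = []  # each cluster is a list of point indices
    for i in range(len(points)):
        for cluster in clusters:
            j = cluster[0]  # compare against the cluster's seed point
            dp = ((points[i][0] - points[j][0]) ** 2 +
                  (points[i][1] - points[j][1]) ** 2) ** 0.5
            dv = ((velocities[i][0] - velocities[j][0]) ** 2 +
                  (velocities[i][1] - velocities[j][1]) ** 2) ** 0.5
            if dp < pos_eps and dv < vel_eps:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# One spatial blob of four points; the first two were moving right,
# the last two were standing still.
points = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0)]
prev   = [(-0.5, 0.0), (0.0, 0.0), (1.0, 0.0), (1.5, 0.0)]
clusters = cluster_points(points, prev)
print(clusters)  # [[0, 1], [2, 3]] -- split by motion, not distance
```

With positions alone, all four points would fall within the distance threshold and merge into one object; adding the velocity check is what separates the walker from the bystanders.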
Some things are just inherently hard for computers to identify. Think of a plastic bag drifting on the air: it can look as solid to the sensors as a heavier, more dangerous bag full of trash. That particular mix-up would just lead to unnecessary braking, but mistaken identities can be fatal: in a deadly Tesla crash in 2016, Autopilot’s cameras mistook the white side of a truck for washed-out sky. You also need the system to be dependable even when there are surprises. If a camera goes haywire, for example, the car has to be able to fall back on overlapping sources of information. It also needs enough experience to handle dead skunks, conference bikes, backhoes sliding off trucks and all the other weird situations that might show up on the road.
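That fallback on overlapping sensors can be sketched very simply. This is a hypothetical example, not how any real vehicle fuses its sensors: each sensor reports an obstacle distance, or None if it has failed its self-check, and the fused estimate conservatively trusts the closest remaining reading.

```python
def fused_distance(readings):
    """Conservatively fuse overlapping sensors: take the minimum
    (closest obstacle) among the sensors still reporting."""
    valid = [d for d in readings.values() if d is not None]
    if not valid:
        # No redundancy left at all: the system must degrade safely,
        # e.g. slow down and hand control back to the human.
        raise RuntimeError("no working sensors")
    return min(valid)

# The camera has gone haywire, but radar and lidar still overlap
# its field of view, so the car keeps a distance estimate.
readings = {"camera": None, "radar": 23.5, "lidar": 24.1}
print(fused_distance(readings))  # 23.5
```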
Academics often resort to running simulations in Grand Theft Auto (yes, really). Some companies have more sophisticated simulators, but even those are limited by their designers’ imaginations. So there are still cases where perception is tricky. The really stubborn problem, though, comes with the next stage: PREDICTION.
It is not enough to know where the pedestrians and other drivers are right now. The car has to predict where they are going next before it can move on to stage three: PLANNING its own moves. Sometimes prediction is straightforward: a car’s right blinker suggests it’s about to merge right, and then planning is easy. But sometimes computers just don’t get their human overlords. For example, say an oncoming car slows down and flashes its lights as you wait to turn left. It’s probably safe to turn, but that’s a subtle thing for a computer to realize. What makes prediction really complicated, though, is that the safety of the turn is not something you just recognize; it’s a negotiation. If you edge forward like you are about to make the left, the other driver will react. So there is a feedback loop between prediction and planning. In fact, researchers have found that when you are merging onto the highway, if you don’t rely on other people to react to you, you might never be able to proceed safely. So if a self-driving car is not assertive enough, it can get stuck. This is called the freezing robot problem.
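A toy merge scenario shows why non-interactive prediction freezes the car. All the numbers and the "yielding" model below are made up for illustration; real planners reason over full trajectories, not single gap sizes.

```python
def safe_to_merge(gaps, required_gap=4.0):
    """Merge only if some predicted gap (in seconds) is big enough."""
    return any(g >= required_gap for g in gaps)

# Dense traffic: predicted gaps if we assume nobody ever reacts to us.
predicted_gaps = [2.1, 1.8, 2.5, 1.9]
print(safe_to_merge(predicted_gaps))  # False -> the car freezes forever

def gaps_if_we_edge_forward(gaps, yielding=2.5):
    """Interactive prediction (invented model): edging forward makes
    the trailing driver ease off, widening the gap ahead of them."""
    return [gaps[0] + yielding] + gaps[1:]

print(safe_to_merge(gaps_if_we_edge_forward(predicted_gaps)))  # True
```

The first prediction treats other drivers as unmovable obstacles, so no gap is ever acceptable; the second closes the feedback loop by modelling how the ego car’s own move changes everyone else’s behaviour.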
The freezing robot problem is more than a technical puzzle; it’s a nightmare for consumers and researchers alike. There are two main ways programmers try to work around it. One option is to have the car treat everyone else’s actions as dependent on its own. But that can lead to overly aggressive behaviour, which is also dangerous; people who drive that way end up swerving all over the highway, weaving between cars. Another option is to have the car predict everyone’s actions collectively, treating itself as just one more car interacting like all the rest, and then pick whichever plan fits that situation best. The problem with that approach is that you have to oversimplify things to decide quickly.
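The contrast between the two strategies can be caricatured in a few lines. This is a deliberately cartoonish sketch with invented numbers: real planners score whole trajectories, and the single-number "joint prediction" here is exactly the kind of oversimplification the collective approach has to make to decide quickly.

```python
candidates = [0.0, 5.0, 10.0]  # candidate ego speeds, m/s

# Option 1: assume everyone else reacts to us, so any plan "works"
# and the planner simply maximises its own progress -> aggressive.
option1 = max(candidates)

# Option 2: predict all cars jointly, ego included, then pick the
# candidate closest to what a typical car in this spot would do.
joint_prediction = 6.0  # m/s, traffic collapsed to one made-up number
option2 = min(candidates, key=lambda s: abs(s - joint_prediction))

print(option1, option2)  # 10.0 5.0
```

Option 1 never freezes but barrels ahead; Option 2 blends in with traffic, at the cost of compressing everyone’s behaviour into a crude summary.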
Finding a better solution to prediction and planning is one of the biggest unsolved problems in autonomous driving. So between identifying what’s around them, interpreting what other drivers will do and figuring out how to respond, there are a lot of scenarios self-driving cars are not totally prepared for yet.
That does not mean driverless cars won’t hit some roads soon. There are plenty of more straightforward situations where these kinds of problems just don’t come up.