Robotics Expert Says Tesla’s Self-Driving Tech Is a Danger to Cyclists

Tesla’s autonomous or self-driving car technology has helped make the company a multibillion-dollar success story. But the system has flaws, according to a Stanford University robotics researcher, and one of them is a disturbing inability to recognize cyclists.

Heather Knight is an expert in human-robot interfaces who holds a PhD from the Robotics Institute at Carnegie Mellon University and is doing postdoctoral research in social robotics at Stanford. She published her concerns about Tesla’s self-driving tech in an essay on the blogging platform Medium, titled “Tesla Autopilot Review: Bikers Will Die.”

Knight described how she and a colleague from Stanford took a Tesla for a drive in order to see how its self-driving technology matched their expectations about human-robot interaction.

Knight said the Tesla’s autopilot worked flawlessly in most situations as the two drove the streets and highways of southern California, but she was “concerned that some will ignore its limitations and put biker lives at risk; we found the Autopilot’s agnostic behavior around bicyclists to be frightening.”

In particular, Knight said that the Tesla’s “situation awareness display”—which helps human drivers understand what the car’s self-driving technology can “see” in the environment around it—was fairly accurate with cars, but not even close to accurate with cyclists.

Although she gave the display an A+ rating overall, the Stanford researcher said: “I’d estimate that Autopilot classified ~30% of other cars, and 1% of bicyclists. Not being able to classify objects doesn’t mean the tesla doesn’t see that something is there, but given the lives at stake, we recommend that people NEVER USE TESLA AUTOPILOT AROUND BICYCLISTS!”

There appear to have been no reported incidents in which Tesla cars driving on autopilot have collided with cyclists, although in a 2016 accident in Norway, authorities said a Tesla failed to properly recognize a motorcycle and injured its rider. In a Wall Street Journal article, one Tesla driver credited the autopilot with preventing a collision with a cyclist.

In a number of responses to readers of her piece, both on Medium and on Twitter, Knight said that she likes Tesla, but was trying to point out that its system is still what she called a “human in the loop” technology because of such inaccuracies in detection.

Tesla makes it clear to drivers that its autopilot system is not fully autonomous, and that they should keep their hands on the steering wheel and pay attention to their surroundings at all times. If a driver takes their hands off the wheel for a sustained period of time, the autopilot disengages and can’t be re-enabled unless the car is stopped.

Despite its flaws, Knight said that the Tesla’s Situation Awareness Display was her favorite feature of the car, because “it helps the driver understand shortcomings of the car, i.e., its perception sucks.” Robot-based systems in general, she added, “would benefit from communicating their limitations to people.”

Some critics of Knight’s essay said it didn’t provide enough evidence of flaws in the detection system, and that its headline was unnecessarily inflammatory about the risks autonomous driving poses to cyclists.

https://twitter.com/Lee_Ars/status/869183198974152709

Knight said that her concern is that people will see the so-called “autopilot” feature as being fully autonomous at all times, and will fail to pay attention to their surroundings, “as we see in the fatal crashes so far.” Even if detection rates improve, she said, it’s important that human drivers “have the correct mental model of the car, complete with its shortcomings.”

Fortune has asked Tesla for a comment on Knight’s conclusions, and will update this post if and when one is provided.
