Some would say it was bound to happen eventually. In March, the ridesharing platform Uber made the news in spectacular fashion when one of its self-driving prototype vehicles struck and killed a pedestrian in Tempe, Arizona. In theory, at least, autonomous cars are supposed to make us safer because human error gets removed from the equation. This accident underscores that developers may still be far from realizing that goal, and critics have already seized on the incident to call for eliminating autonomous vehicles entirely.
So if autonomous (i.e., self-driving) vehicles are programmed properly, why would they ever be involved in an accident? What causes autonomous cars to crash? The most common answer might surprise you—or maybe not:
Humans.
It sounds incredibly ironic, doesn’t it? Cars designed to eliminate human error crash due to…human error. To explain: in the vast majority of autonomous vehicle mishaps, the cars behaved exactly as designed. The problem is that the cars aren’t operating in a vacuum; they’re operating in a world of humans, and humans don’t always act predictably.
How does human error contribute to autonomous car crashes? Most commonly, it happens in one of two ways:

1. Other drivers hit the autonomous car. Autonomous vehicles obey traffic laws strictly and brake cautiously, and the human drivers around them often fail to anticipate that behavior; many reported autonomous vehicle crashes are rear-end collisions caused by a human driver in another car.

2. The human behind the wheel makes a mistake. Today’s prototypes still depend on a human operator to monitor the road and take over when needed, and a distracted or overconfident backup driver may fail to intervene in time, as appears to have happened in the Tempe crash.
In many applications, it feels like technology is running way ahead of us—but in the context of autonomous vehicles, the tech actually has some catching up to do. The story is far from over where autonomous vehicle crashes are concerned.
If you are injured in an accident due to someone else’s negligence, we may be able to help. Call our Washington, D.C. personal injury attorneys today.