July 6, 2016
From Los Angeles Times

by Paresh Dave

By rolling out self-driving technology to consumers more aggressively than its competitors, Tesla Motors has secured a spot at the forefront of a coming industry.

But that strategy could expose the company to a risk it has sought to avoid: liability in crashes.

Tesla activated its Autopilot mode in 2015, automating steering, braking and lane changes. The company asserts the technology doesn’t shift blame for accidents from the driver to the company.

But Google, Zoox and other firms seeking to develop autonomous driving software say it’s dangerous to expect people in the driver’s seat to serve as responsible backups. Drivers get lulled into acting like passengers after a few minutes of the car doing most of the work, the companies say, so relying on them to brake suddenly when their cars fail to spot a hazard isn’t a safe bet.

Such a concern could undermine Tesla, whose Autopilot feature is central to a fatal-accident investigation launched last week by federal regulators.

The National Highway Traffic Safety Administration is considering the role Autopilot technology played in a Florida collision between a Tesla Model S and a big rig. Tesla said Autopilot’s sensors failed to detect the white truck as it turned in front of the Model S against a bright May sky; the crash killed the car’s driver, 40-year-old Joshua Brown.

Were the victim’s family to sue Tesla over an accident caused -- or not avoided -- by autopilot, one of several arguments they might make is that Tesla acted negligently by not doing what a reasonable manufacturer would do, said Stephen Nichols, an attorney in the Los Angeles office of law firm Polsinelli. The fact that others have developed similar technology, but have chosen not to release it or have branded it in ways that don’t suggest automation, could leave Tesla vulnerable.

"You could say, 'Tesla, you're not doing what these other companies are doing, so you're being unreasonable,'" Nichols said.
