At the annual Code Conference in 2016, Elon Musk, then arguably at the height of his self-aggrandizing, hyperbolic powers, made an assertion that would ripple into the present. “Autonomous driving is a solved problem,” he declared, onstage with Kara Swisher and Walt Mossberg. Highways were now “relatively easy” to navigate using the technology, he continued. Incredibly, he claimed, the Tesla Model X “can drive autonomously with greater safety than a person right now.”
Of course, seven years later, neither the Model X nor any other Tesla vehicle can truly drive autonomously, let alone more safely than a human. In that time, Tesla’s “Autopilot” software has been involved in hundreds of crashes, some of them fatal. Now, Musk’s over-the-top salesmanship may finally come back to haunt him. On Thursday, Tesla’s legal team relented and agreed to allow Musk to be deposed in a lawsuit over one of those fatalities, a case that may impact the future development of A.I., deepfakes, ChatGPT, and more.
Like many headaches involving Musk, the deposition, which Tesla vehemently fought for 18 months, is largely the consequence of his own epic hubris. On April 26, 2019, Tesla was sued by the family of Walter Huang, an Apple engineer who had been relying on the company’s Autopilot system when his Model X crashed into a highway barrier. Huang, who had been engrossed in a video game on his phone at the time, died from his injuries. According to the family, Huang trusted Tesla’s marketing of the Model X as a “state-of-the-art” vehicle and believed it was safer than a human-operated car because of the company’s claims of technical superiority. The suit doesn’t specify the damages sought.