self-driving

tesla-says-model-3-that-burst-into-flames-in-fatal-tree-crash-wasn’t-defective

Tesla says Model 3 that burst into flames in fatal tree crash wasn’t defective

Tesla says Model 3 that burst into flames in fatal tree crash wasn’t defective

Tesla has denied that “any defect in the Autopilot system caused or contributed” to the 2022 death of a Tesla employee, Hans von Ohain, whose Tesla Model 3 burst into flames after the car suddenly veered off a road and crashed into a tree.

“Von Ohain fought to regain control of the vehicle, but, to his surprise and horror, his efforts were prevented by the vehicle’s Autopilot features, leaving him helpless and unable to steer back on course,” a wrongful death lawsuit filed in May by von Ohain’s wife, Nora Bass, alleged.

In Tesla’s response to the lawsuit filed Thursday, the carmaker also denied that the 2021 vehicle had any defects, contradicting Bass’ claims that Tesla knew that the car should have been recalled but chose to “prioritize profits over consumer safety.”

As detailed in her complaint, initially filed in a Colorado state court, Bass believes the Tesla Model 3 was defective in that it “did not perform as safely as an ordinary consumer would have expected it to perform” and “the benefits of the vehicle’s design did not outweigh the risks.”

Instead of acknowledging alleged defects and exploring alternative designs, Tesla marketed the car as being engineered “to be the safest” car “built to date,” Bass’ complaint said.

Von Ohain was particularly susceptible to this marketing, Bass has said, because he considered Tesla CEO Elon Musk to be a “brilliant man,” The Washington Post reported. “We knew the technology had to learn, and we were willing to be part of that,” Bass said, but the couple didn’t realize how allegedly dangerous it could be to help train “futuristic technology,” The Post reported.

In Tesla’s response, the carmaker defended its marketing of the Tesla Model 3, denying that the company “engaged in unfair and deceptive acts or practices.”

“The product in question was not defective or unreasonably dangerous,” Tesla’s filing said.

Insisting in its response that the vehicle was safe when it was sold, Tesla again disputed Bass’ complaint, which claimed that “at no time after the purchase of the 2021 Tesla Model 3 did any person alter, modify, or change any aspect or component of the vehicle’s design or manufacture.” Contradicting this, Tesla suggested that the car “may not have been in the same condition at the time of the crash as it was at the time when it left Tesla’s custody.”

The Washington Post broke the story about von Ohain’s fatal crash, reporting that it may be “the first documented fatality linked to the most advanced driver assistance technology offered” by Tesla. In response to Tesla’s filing, Bass’ attorney, Jonathan Michaels, told The Post that his team is “committed to advocating fiercely for the von Ohain family, ensuring they receive the justice they deserve.”

Michaels told The Post that perhaps as significant as alleged autonomous driving flaws, the Tesla Model 3 was also allegedly defective “because of the intensity of the fire that ensued after von Ohain hit the tree, which ultimately caused his death.” According to the Colorado police officer looking into the crash, Robert Madden, the vehicle fire was among “the most intense” he’d ever investigated, The Post reported.

Lawyers for Bass and Tesla did not immediately respond to Ars’ request for comment.

Tesla says Model 3 that burst into flames in fatal tree crash wasn’t defective Read More »

what-happens-when-chatgpt-tries-to-solve-50,000-trolley-problems?

What happens when ChatGPT tries to solve 50,000 trolley problems?

Images of cars on a freeway with green folder icons superimposed on each vehicle.

There’s a puppy on the road. The car is going too fast to stop in time, but swerving means the car will hit an old man on the sidewalk instead.

What choice would you make? Perhaps more importantly, what choice would ChatGPT make?

Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use one to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrian’s safety. In November, one startup called Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment.

But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check if chatbots could make the same moral decisions when driving as humans. His results showed that LLMs and humans have roughly the same priorities, but some showed clear deviations.

The Moral Machine

After ChatGPT was released in November 2022, it didn’t take long for researchers to ask it to tackle the Trolley Problem, a classic moral dilemma. This problem asks people to decide whether it is right to let a runaway trolley run over and kill five humans on a track or switch it to a different track where it kills only one person. (ChatGPT usually chose one person.)

But Takemoto wanted to ask LLMs more nuanced questions. “While dilemmas like the classic trolley problem offer binary choices, real-life decisions are rarely so black and white,” he wrote in his study, recently published in the journal Proceedings of the Royal Society.

Instead, he turned to an online initiative called the Moral Machine experiment. This platform shows humans two decisions that a driverless car may face. They must then decide which decision is more morally acceptable. For example, a user might be asked if, during a brake failure, a self-driving car should collide with an obstacle (killing the passenger) or swerve (killing a pedestrian crossing the road).

But the Moral Machine is also programmed to ask more complicated questions. For example, what if the passengers were an adult man, an adult woman, and a boy, and the pedestrians were two elderly men and an elderly woman walking against a “do not cross” signal?

The Moral Machine can generate randomized scenarios using factors like age, gender, species (saving humans or animals), social value (pregnant women or criminals), and actions (swerving, breaking the law, etc.). Even the fitness level of passengers and pedestrians can change.

In the study, Takemoto took four popular LLMs (GPT-3.5, GPT-4, PaLM 2, and Llama 2) and asked them to decide on over 50,000 scenarios created by the Moral Machine. More scenarios could have been tested, but the computational costs became too high. Nonetheless, these responses meant he could then compare how similar LLM decisions were to human decisions.

What happens when ChatGPT tries to solve 50,000 trolley problems? Read More »

cruise-failed-to-disclose-disturbing-details-of-self-driving-car-crash

Cruise failed to disclose disturbing details of self-driving car crash

full disclosure —

Company did not share all it knew about the accident with regulators.

A Cruise robotaxi test vehicle in San Francisco.

Enlarge / A Cruise robotaxi test vehicle in San Francisco.

Cruise

A law firm hired by the General Motors’ self-driving subsidiary Cruise to investigate the company’s response to a gruesome San Francisco crash last year found that the company failed to fully disclose disturbing details to regulators, the tech company said today in a blog post. The incident in October led California regulators to suspend Cruise’s license to operate driverless vehicles in San Francisco.

The new report by law firm Quinn Emanuel says that Cruise failed to tell California’s Department of Motor Vehicles that after striking a pedestrian knocked into its path by a human-driven vehicle, the autonomous car pulled out of traffic—dragging her some 20 feet. Cruise said it had accepted the firm’s version of events, as well as its recommendations.

The investigators found that when Cruise played a video of the crash taken from its autonomous vehicle for government officials, it did not “verbally point out” the vehicle’s pullover maneuver. Internet connectivity issues that occurred when the company tried to share video of the incident “likely precluded or hampered” regulators from seeing the full video, the report concluded.

Cruise executives are singled out in the report for failing to properly communicate with regulators. Company leaders assumed that regulators would ask questions that would lead the company to provide more information about the pedestrian dragging, the report says. And Cruise leadership is described as “fixated” on demonstrating to the media that it was a human-driven car, not its autonomous vehicle, that first struck the pedestrian. That “myopic focus,” the law firm concludes, led Cruise to “omit other important information” about the incident.

“The reasons for Cruise’s failings in this instance are numerous,” the law firm concluded, “poor leadership, mistakes in judgment, lack of coordination, an ‘us versus them’ mentality with regulators, and a fundamental misapprehension of Cruise’s obligations of accountability and transparency to the government and the public.” It said the company must take “decisive steps” to restore public trust.

Another third-party report on the crash released by Cruise today, by the engineering consulting firm Exponent, found that technical issues contributed to the autonomous vehicle’s dangerous pullover maneuver. Although the self-driving car’s software correctly detected, perceived, and tracked the pedestrian and the human-driven car, it classified the crash as a side-impact collision, which led it to pull over and drag the woman underneath it. Cruise says its technical issues were corrected when it recalled its software in November.

Cruise has paused its self-driving operations across the US since late October. Nine executives, plus CEO and cofounder Kyle Vogt, left in the fallout from the crash. In late 2023, the company laid off almost a quarter of its employees. General Motors says it will cut spending on the tech company by hundreds of millions of dollars this year compared to last.

This story originally appeared on wired.com.

Cruise failed to disclose disturbing details of self-driving car crash Read More »