

It seems the FAA office overseeing SpaceX’s Starship probe still has some bite


The political winds have shifted in Washington, but the FAA hasn’t yet changed its tune on Starship.

Liftoff of SpaceX’s seventh full-scale test flight of the Super Heavy/Starship launch vehicle on January 16. Credit: SpaceX

The seventh test flight of SpaceX’s gigantic Starship rocket came to a disappointing end a little more than two weeks ago. The in-flight failure of the rocket’s upper stage, or ship, about eight minutes after launch on January 16 rained debris over the Turks and Caicos Islands and the Atlantic Ocean.

Amateur videos recorded from land, sea, and air showed fiery debris trails streaming overhead at twilight, appearing like a fireworks display gone wrong. Within hours, posts on social media showed small pieces of debris recovered by residents and tourists in the Turks and Caicos. Most of these items were modest in size, and many appeared to be chunks of tiles from Starship’s heat shield.

Unsurprisingly, the Federal Aviation Administration grounded Starship and ordered an investigation into the accident on the day after the launch. This decision came three days before the inauguration of President Donald Trump. Elon Musk’s close relationship with Trump, coupled with the new administration’s appetite for cutting regulations and reducing the size of government, led some industry watchers to question whether Musk’s influence might change the FAA’s stance on SpaceX.

So far, the FAA hasn’t budged on its requirement for an investigation, an agency spokesperson told Ars on Friday. After a preliminary assessment of flight data, SpaceX officials said a fire appeared to develop in the aft section of the ship before it broke apart and fell to Earth.

“The FAA has directed SpaceX to lead an investigation of the Starship Super Heavy Flight 7 mishap with FAA oversight,” the spokesperson said. “Based on the investigation findings for root cause and corrective actions, the FAA may require a company to modify its license.”

This is much the same language the FAA used two weeks ago, when it first ordered the investigation.

Damage report

The FAA’s Office of Commercial Space Transportation is charged with ensuring commercial space launches and reentries don’t endanger the public, and requires launch operators obtain liability insurance or demonstrate financial ability to cover any third-party property damages.

For each Starship launch, the FAA requires SpaceX maintain liability insurance policies worth at least $500 million for such claims. It’s rare for debris from US rockets to fall over land during a launch. This would typically only happen if a launch failed at certain points in the flight. And there’s no public record of any claims of third-party property damage in the era of commercial spaceflight. Under federal law, the US government would cover damages up to a much higher amount if any claims exceeded a launch company’s insurance policies.

Here’s a piece of Starship 33 @SpaceX @elonmusk found in Turks and Caicos! 🚀🏝️ pic.twitter.com/HPZDCqA9MV

— @maximzavet (@MaximZavet) January 17, 2025

The good news is there were no injuries or reports of significant damage from the wreckage that fell over the Turks and Caicos. “The FAA confirmed one report of minor damage to a vehicle located in South Caicos,” an FAA spokesperson told Ars on Friday. “To date, there are no other reports of damage.”

It’s not clear if the vehicle owner in South Caicos will file a claim against SpaceX for the damage. It would be the first time anyone has made such a claim related to an accident with a commercial rocket overseen by the FAA. Last year, a Florida homeowner submitted a claim to NASA for damage to his house from a piece of debris that fell from the International Space Station.

Nevertheless, the Turks and Caicos government said local officials met with representatives from SpaceX and the UK Air Accident Investigations Branch on January 25 to develop a recovery plan for debris that fell on the islands, which are a British Overseas Territory.

A prickly relationship

Musk often bristled at the FAA last year, especially after regulators proposed fines of more than $600,000 alleging that SpaceX violated terms of its launch licenses during two Falcon 9 missions. The alleged violations involved the relocation of a propellant farm at one of SpaceX’s launch pads in Florida, and the use of a new launch control center without FAA approval.

In a post on X, Musk said the FAA was conducting “lawfare” against his company. “SpaceX will be filing suit against the FAA for regulatory overreach,” Musk wrote.

There was no such lawsuit, and the issue may now be moot. Sean Duffy, Trump’s new secretary of transportation, vowed to review the FAA fines during his confirmation hearing in the Senate. It is rare for the FAA to fine launch companies, and the fines last year made up the largest civil penalty ever imposed by the FAA’s commercial spaceflight division.

SpaceX also criticized delays in licensing Starship test flights last year. The FAA cited environmental issues and concerns about the extent of the sonic boom from Starship’s 23-story-tall Super Heavy booster returning to its launch pad in South Texas. SpaceX successfully caught the returning first stage booster at the launch pad for the first time in October, and repeated the feat after the January 16 test flight.

What separates the FAA’s ongoing oversight of Starship’s recent launch failure from these previous regulatory squabbles is that debris fell over populated areas. This would appear to be directly in line with the FAA’s responsibility for public safety.

During last month’s test flight, Starship did not deviate from its planned ground track, which took the rocket over the Gulf of Mexico, the waters between Florida and Cuba, and then the Atlantic Ocean. But the debris field extended beyond the standard airspace closure for the launch. After the accident, FAA air traffic controllers cleared additional airspace over the debris zone for more than an hour, rerouting, diverting, and delaying dozens of commercial aircraft.

These actions followed pre-established protocols. However, they highlighted the small but non-zero risk of rocket debris falling to Earth after a launch failure. “The potential for a bad day downrange just got real,” Lori Garver, a former NASA deputy administrator, posted on X.

Public safety is not the sole mandate of the FAA’s commercial space office. It is also chartered to “encourage, facilitate, and promote commercial space launches and reentries by the private sector,” according to an FAA website. There’s a balance to strike.

Lawmakers last year urged the FAA to speed up its launch approvals, primarily because Starship is central to strategic national objectives. NASA has contracts with SpaceX to develop a variant of Starship to land astronauts on the Moon, and Starship’s unmatched ability to deliver more than 100 tons of cargo to low-Earth orbit is attractive to the Pentagon.

While Musk criticized the FAA in 2024, SpaceX officials in 2023 took a different tone, calling for Congress to increase the budget for the FAA’s Office of Commercial Space Transportation and for the regulator to double the space division’s workforce. This change, SpaceX officials argued, would allow the FAA to more rapidly assess and approve a fast-growing number of commercial launch and reentry applications.

In September, SpaceX released a statement accusing the former administrator of the FAA, Michael Whitaker, of making inaccurate statements about SpaceX to a congressional subcommittee. In a different post on X, Musk directly called for Whitaker’s resignation.

He needs to resign https://t.co/pG8htfTYHb

— Elon Musk (@elonmusk) September 25, 2024

That’s exactly what happened. Whitaker, who took over the FAA’s top job in 2023 under the Biden administration, announced in December that he would resign on Inauguration Day. Since the agency’s establishment in 1958, three FAA administrators have similarly resigned when a new administration took power, but the office has been largely immune from presidential politics in recent decades: from 1993 until now, FAA administrators had stayed in their posts through every presidential transition.

There’s no evidence Whitaker’s resignation had any role in the mid-air collision of an American Eagle passenger jet and a US Army helicopter Wednesday night near Ronald Reagan Washington National Airport. But his departure from the FAA less than two years into a five-year term on January 20 left the agency without a leader. Trump named Chris Rocheleau as the FAA’s acting administrator Thursday.

Next flight, next month?

SpaceX has not released an official schedule for the next Starship test flight or outlined its precise objectives. However, it will likely repeat many of the goals planned for the previous flight, which ended before SpaceX could accomplish some of its test goals. These missed objectives included the release of satellite mockups in space for the first demonstration of Starship’s payload deployment mechanism, and a reentry over the Indian Ocean to test new, more durable heat shield materials.

The January 16 test flight was the first launch of an upgraded, slightly taller Starship, known as Version 2 or Block 2. The next flight will use the same upgraded version.

A SpaceX filing with the Federal Communications Commission suggests the next Starship flight could launch as soon as February 24. Sources told Ars that SpaceX teams believe a launch before the end of February is realistic.

But SpaceX has more to do before Flight 8. These tasks include completing the FAA-mandated investigation and the installation of all 39 Raptor engines on the rocket. Then, SpaceX will likely test-fire the booster and ship before stacking the two elements together to complete assembly of the 404-foot-tall (123.1-meter) rocket.

SpaceX is also awaiting a new FAA launch license, pending its completion of the investigation into what happened on Flight 7.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



To help AIs understand the world, researchers put them in a robot


There’s a difference between knowing a word and knowing a concept.

Large language models like ChatGPT display conversational skills, but the problem is they don’t really understand the words they use. They are primarily systems that interact with data derived from the real world, not with the real world itself. Humans, on the other hand, associate language with experiences. We know what the word “hot” means because we’ve been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited—it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

“The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language,” says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.

The idea of teaching AIs the same way we teach little babies is not new; researchers have applied it to standard neural nets that associate words with visuals. Researchers have also tried teaching an AI using a video feed from a GoPro strapped to a human baby. The problem is that babies do far more than just associate items with words when they learn. They touch everything: they grasp things, manipulate them, and throw stuff around, and this way they learn to think and plan their actions in language. An abstract AI model couldn’t do any of that, so Vijayaraghavan’s team gave one an embodied experience: their AI was trained in an actual robot that could interact with the world.

Vijayaraghavan’s robot was a fairly simple system with an arm and a gripper that could pick objects up and move them around. Vision was provided by a simple RGB camera feeding video at a somewhat crude 64×64-pixel resolution.

The robot and the camera were placed in a workspace, in front of a white table with blocks painted green, yellow, red, purple, and blue. The robot’s task was to manipulate those blocks in response to simple prompts like “move red left,” “move blue right,” or “put red on blue.” All that didn’t seem particularly challenging. What was challenging, though, was building an AI that could process all those words and movements in a manner similar to humans. “I don’t want to say we tried to make the system biologically plausible,” Vijayaraghavan told Ars. “Let’s say we tried to draw inspiration from the human brain.”

Chasing free energy

The starting point for Vijayaraghavan’s team was the free energy principle, a hypothesis that the brain constantly makes predictions about the world based on internal models, then updates these predictions based on sensory input. The idea is that we first think of an action plan to achieve a desired goal, and then this plan is updated in real time based on what we experience during execution. This goal-directed planning scheme, if the hypothesis is correct, governs everything we do, from picking up a cup of coffee to landing a dream job.
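To make that loop concrete, here is a minimal Python sketch of the predict-act-update cycle the free energy principle describes, for a toy one-dimensional task. The state, noise level, gains, and learning rate are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch of the predict-then-update loop: an agent holds an internal
# estimate of the world, predicts what it should sense next, and corrects the
# estimate in proportion to the prediction error (its "surprise").
rng = np.random.default_rng(0)

true_position = 0.0       # where the block actually is (hidden from the agent)
believed_position = 1.0   # the agent's internal model of where it is
goal = 2.0                # desired position ("move red right")
learning_rate = 0.5

for step in range(10):
    # Act toward the goal using the internal model (goal-directed plan).
    action = 0.3 * (goal - believed_position)
    true_position += action

    # Sense the world (noisy observation) and form a prediction from the model.
    observation = true_position + rng.normal(scale=0.05)
    prediction = believed_position + action

    # Update the internal model to reduce the prediction error.
    prediction_error = observation - prediction
    believed_position = prediction + learning_rate * prediction_error

    print(f"step {step}: belief={believed_position:.2f}, error={prediction_error:+.3f}")
```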

All that is closely intertwined with language. Neuroscientists at the University of Parma found that motor areas in the brain got activated when the participants in their study listened to action-related sentences. To emulate that in a robot, Vijayaraghavan used four neural networks working in a closely interconnected system. The first was responsible for processing visual data coming from the camera. It was tightly integrated with a second neural net that handled proprioception: all the processes that ensured the robot was aware of its position and the movement of its body. This second neural net also built internal models of actions necessary to manipulate blocks on the table. Those two neural nets were additionally hooked up to visual memory and attention modules that enabled them to reliably focus on the chosen object and separate it from the image’s background.

The third neural net was relatively simple and processed language using vectorized representations of those “move red right” sentences. Finally, the fourth neural net worked as an associative layer and predicted the output of the previous three at every time step. “When we do an action, we don’t always have to verbalize it, but we have this verbalization in our minds at some point,” Vijayaraghavan says. The AI he and his team built was meant to do just that: seamlessly connect language, proprioception, action planning, and vision.
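For readers who think in code, here is a rough PyTorch-flavored sketch of how four such modules might be wired together. The layer sizes, the GRU and convolution choices, and the fusion scheme are assumptions for illustration only, and the visual memory and attention modules mentioned above are omitted; this does not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class VisionNet(nn.Module):
    """Encodes the 64x64 RGB camera image into a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(feat_dim),
        )

    def forward(self, image):
        return self.encoder(image)

class ProprioceptionNet(nn.Module):
    """Tracks joint angles / gripper state over time."""
    def __init__(self, joint_dim=7, feat_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(joint_dim, feat_dim)

    def forward(self, joints, hidden):
        return self.rnn(joints, hidden)

class LanguageNet(nn.Module):
    """Encodes a short command such as 'move red left' from token IDs."""
    def __init__(self, vocab_size=16, feat_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)

    def forward(self, token_ids):
        _, hidden = self.rnn(self.embed(token_ids))
        return hidden[-1]

class AssociativeNet(nn.Module):
    """Fuses the three streams and predicts each modality at the next time step."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(3 * feat_dim, 128), nn.ReLU())
        self.predict_next = nn.Linear(128, 3 * feat_dim)

    def forward(self, vision_feat, proprio_feat, lang_feat):
        fused = self.fuse(torch.cat([vision_feat, proprio_feat, lang_feat], dim=-1))
        return self.predict_next(fused)
```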

When the robotic brain was up and running, they started teaching it some of the possible combinations of commands and sequences of movements. But they didn’t teach it all of them.

The birth of compositionality

In 2016, Brenden Lake, a professor of psychology and data science, published a paper in which his team named a set of competencies machines need to master to truly learn and think like humans. One of them was compositionality: the ability to compose or decompose a whole into parts that can be reused. This reuse lets them generalize acquired knowledge to new tasks and situations. “The compositionality phase is when children learn to combine words to explain things. They [initially] learn the names of objects, the names of actions, but those are just single words. When they learn this compositionality concept, their ability to communicate kind of explodes,” Vijayaraghavan explains.

The AI his team built was made for this exact purpose: to see if it would develop compositionality. And it did.

Once the robot learned how certain commands and actions were connected, it also learned to generalize that knowledge to execute commands it had never heard before: recognizing the names of actions it had not performed and then performing them on combinations of blocks it had never seen. Vijayaraghavan’s AI figured out the concept of moving something to the right or the left or putting an item on top of something. It could also combine words to name previously unseen actions, like putting a blue block on a red one.

While teaching robots to extract concepts from language has been done before, those efforts were focused on making them understand how words were used to describe visuals. Vijayaraghavan built on that to include proprioception and action planning, basically adding a layer that integrated sense and movement to the way his robot made sense of the world.

But some issues are yet to be overcome. The AI had a very limited workspace. There were only a few objects, and all of them had a single, cubical shape. The vocabulary included only the names of colors and actions, so no modifiers, adjectives, or adverbs. Finally, the robot had to learn around 80 percent of all possible combinations of nouns and verbs before it could generalize well to the remaining 20 percent. Its performance was worse when those ratios dropped to 60/40 and 40/60.
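As a rough illustration of that evaluation setup, the sketch below enumerates noun-verb command combinations and holds a fraction out for testing. The word lists and the split procedure are placeholder assumptions; the study did use five nouns, eight verbs, and roughly 80/20, 60/40, and 40/60 splits.

```python
import itertools
import random

# Placeholder vocabulary: five nouns and eight verbs, as in the study,
# but the actual words used there may differ.
nouns = ["red", "blue", "green", "yellow", "purple"]
verbs = ["move left", "move right", "put on", "pick up",
         "push", "pull", "lift", "lower"]

all_commands = [f"{verb} {noun}" for verb, noun in itertools.product(verbs, nouns)]

random.seed(42)
random.shuffle(all_commands)

train_fraction = 0.8                   # also try 0.6 or 0.4, as in the article
split = int(len(all_commands) * train_fraction)
train_commands = all_commands[:split]  # combinations the robot is taught
test_commands = all_commands[split:]   # unseen combinations it must generalize to

print(f"{len(train_commands)} training commands, {len(test_commands)} held out")
```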

But it’s possible that just a bit more computing power could fix this. “What we had for this study was a single RTX 3090 GPU, so with the latest generation GPU, we could solve a lot of those issues,” Vijayaraghavan argued. That’s because the team hopes that adding more words and more actions won’t result in a dramatic need for computing power. “We want to scale the system up. We have a humanoid robot with cameras in its head and two hands that can do way more than a single robotic arm. So that’s the next step: using it in the real world with real world robots,” Vijayaraghavan said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adp0751


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



FDA approves first non-opioid pain medicine in more than 20 years

The approval “is an important public health milestone in acute pain management,” Jacqueline Corrigan-Curay, J.D., M.D., acting director of the FDA’s Center for Drug Evaluation and Research, said in a statement. “A new non-opioid analgesic therapeutic class for acute pain offers an opportunity to mitigate certain risks associated with using an opioid for pain and provides patients with another treatment option.”

The company behind the drug, Vertex, said a 50 mg pill that works for 12 hours will have a wholesale cost of $15.50, making the daily cost $31 and the weekly cost $217. That cost is higher than that of cheap, generic opioids. But a report from the Institute for Clinical and Economic Review in December estimated that suzetrigine would be “slightly cost-saving” relative to opioids if the price were set at $420 per week, given the drug’s ability to avert opioid addiction cases.

In a statement, Reshma Kewalramani, the CEO and President of Vertex, trumpeted the approval as a “historic milestone for the 80 million people in America who are prescribed a medicine for moderate-to-severe acute pain each year … [W]e have the opportunity to change the paradigm of acute pain management and establish a new standard of care.”



FCC demands CBS provide unedited transcript of Kamala Harris interview

The Federal Communications Commission demanded that CBS provide the unedited transcript of a 60 Minutes interview with Kamala Harris that is the subject of a complaint to the FCC and a lawsuit filed by President Donald Trump.

CBS News on Wednesday received a letter of inquiry in which the FCC requested “the full, unedited transcript and camera feeds” of the Harris interview, The New York Times reported today. “We are working to comply with that inquiry as we are legally compelled to do,” a CBS News spokesperson told media outlets.

FCC Chairman Brendan Carr repeatedly echoed Trump’s complaints about alleged media bias before the election and has taken steps to punish news broadcasters since Trump promoted him to the chairmanship. Complaints against CBS, ABC, and NBC stations were dismissed under former Chairwoman Jessica Rosenworcel, but Carr reversed those dismissals in his first week as chair. Carr also ordered investigations into NPR and CBS.

FCC Commissioner Anna Gomez, a Democrat, criticized what she called Carr’s “latest action to weaponize our broadcast licensing authority.”

“This is a retaliatory move by the government against broadcasters whose content or coverage is perceived to be unfavorable,” Gomez said today. “It is designed to instill fear in broadcast stations and influence a network’s editorial decisions. The Communications Act clearly prohibits the Commission from censoring broadcasters and the First Amendment protects journalistic decisions against government intimidation. We must respect the rule of law, uphold the Constitution, and safeguard public trust in our oversight of broadcasters.”

CBS considers settling Trump lawsuit

Trump sued CBS over the Harris interview, and executives at CBS owner Paramount Global have held settlement talks with Trump representatives. “A settlement would be an extraordinary concession by a major U.S. media company to a sitting president, especially in a case in which there is no evidence that the network got facts wrong or damaged the plaintiff’s reputation,” The New York Times wrote.



Dell risks employee retention by forcing all teams back into offices full-time

In a statement to Ars, Dell’s PR team said:

“We continually evolve our business so we’re set up to deliver the best innovation, value, and service to our customers and partners. That includes more in-person connections to drive market leadership.”

The road to full RTO

After Dell allowed employees to work from home two days per week, Dell’s sales team in March became the first department to order employees back into offices full-time. At the time, Dell said it had data showing that salespeople are more productive on site. Dell corporate strategy SVP Vivek Mohindra said last month that sales’ RTO brought “huge benefits” in “learning from each other, training, and mentorship.”

The company’s “manufacturing teams, engineers in the labs, onsite team members, and leaders” had also previously been called into offices full-time, Business Insider reported today.

Dell has been among the organizations pushing for more in-person work since pandemic restrictions lifted, and since February, its reported efforts have included VPN and badge tracking.

Risking personnel

Like other organizations, Dell risks losing employees by implementing a divisive mandate. For Dell specifically, internal tracking data reportedly found that nearly half of its workers had already opted for remote work, even though doing so made them ineligible for promotions or new roles, according to a September Business Insider report.

Research has suggested that companies that issue RTO mandates subsequently lose some of their best talent. A November research paper (PDF) from researchers at the University of Pittsburgh, Baylor University, The Chinese University of Hong Kong, and Cheung Kong Graduate School of Business, which drew on LinkedIn data, found this to be particularly true for “high-tech” and financial firms. The researchers concluded that turnover rates increased by 14 percent on average after companies issued RTO policies. This research, in addition to other studies, has also found that companies with in-office work mandates are at particular risk of losing senior-level employees.

Some analysts don’t believe Dell is in danger of a mass exodus, though. Bob O’Donnell, president and chief analyst at Technalysis Research, told Business Insider in December, “It’s not like I think Dell’s going to lose a whole bunch of people to HP or Lenovo.”

Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy, said he believes RTO would be particularly beneficial to Dell’s product development.

Still, some workers have accused Dell of using RTO policies to try to reduce headcount. There’s no proof of this, but broader research, including commentary from various company executives outside of Dell, has shown that some companies have used RTO policies to try to get people to quit.

Dell declined to comment about potential employee blowback to Ars Technica.



“Just give me the f***ing links!”—Cursing disables Google’s AI overviews

If you search Google for a way to turn off the company’s AI-powered search results, you may well get an AI Overview telling you that AI Overviews can’t be directly disabled in Google Search. But if you instead ask Google how to turn off “fucking Google AI results,” you’ll get a standard set of useful web suggestions without any AI Overview at the top.

The existence of this “curse to disable Google AI” trick has been making the rounds on social media in recent days, and it holds up in Ars’ own testing. For instance, when searching for “how do you turn off [adjective] Google AI results,” a variety of curse word adjectives reliably disabled the AI Overviews, while adjectives like “dumb” or “lousy” did not. Inserting curse words randomly at any point in the search query seems to have a similar effect.

There’s long been evidence that Google’s Gemini AI system tries to avoid swearing if at all possible, which might help explain why AI Overviews balk at queries that contain curses. Users should also keep in mind, though, that the actual web link results to a query can change significantly when curse words are inserted, especially if SafeSearch is turned off.



Here’s why the tech industry gets excited about sports car racing


It would take IMSA 700 years to drive to Mars

Racing has always been used to improve the breed, but now mostly with software.

NASA worm logo with race imagery over a backdrop of Mars. Credit: Aurich Lawson | Getty Images | NASA

DAYTONA BEACH—Last week, ahead of the annual Rolex 24 at Daytona and the start of the North American road racing season, IMSA (the sport’s organizers) held a tech symposium across the road from the vast speedway at Embry-Riddle Aeronautical University. Last year, panelists, including CrowdStrike’s CSO, explained the draw of racing to their employers; this time, organizations represented included NASA, Michelin, AMD, and Microsoft. And while they were all there to talk about racing, it seems everyone was also there to talk about simulation and AI.

I’ve long maintained that endurance racing, where grids of prototypes and road car-based racers compete over long durations—24 hours, for example—is the most relevant form of motorsport, the one that makes road cars better. Formula 1 has budgets and an audience to dwarf all others, and there’s no doubt about the level of talent and commitment required to triumph in that arena. The Indy 500 might have more history. And rallying looks like the hardest challenge for both humans and machines.

But your car owes its disc brakes to endurance racing, plus its dual-clutch transmission, if it’s one of the increasing number of cars fitted with one. But let’s not overblow it. Over the years, budgets have had to be reined in for the health of the sport. That—plus a desire for parity among the teams so that no one clever idea runs away with the series—means there are plenty of spec or controlled components on a current endurance racer. Direct technology transfer, then, happens less and less often—at least in terms of new mechanical bits or bobs you might find inside your next car.

Software has become a new competitive advantage for the teams that race hybrid sports prototypes from Acura, BMW, Cadillac, Porsche, and Lamborghini, just as it is between teams in Formula E.

But this year’s symposium shone a light on a different area of tech transfer, where Microsoft or NASA can use the vast streams of data that pour out of a 60-car, 24-hour race to build more accurate simulations and AI tools—maybe even ones that will babysit a crewed mission to Mars.

Sorry, did you say Mars?

“Critically, it takes light 20 minutes to make that trip, which has some really unfortunate operational impacts,” said Ian Maddox of NASA’s Marshall Space Flight Center’s Habitation office. A 40-minute delay between asking a question and getting an answer wouldn’t work for a team trying to win the Rolex 24, and “it certainly isn’t going to work for us,” he said.

“And so we’re placed in—I’ll be frank—the really uncomfortable position of having to figure out how to build AI tools to help the crew on board a Mars ship diagnose and respond to their own problems. So to be their own crew, to be their own engineering teams, at least for the subset of problems that can get really bad in the course of 45 minutes to an hour,” Maddox said.

Building those kinds of tools will require a “giant bucket of really good data,” Maddox said, “and that’s why we’ve come to IMSA.”

Individually, the hybrid prototypes and GT cars in an IMSA race are obviously far less complicated than a Mars-bound spacecraft. But when you get that data from all the cars in the race together, the size starts to become comparable.

“And fundamentally, you guys have things that roll and we have things that rotate, and you have things that get hot and cold, and so do we,” Maddox said. “When you get down to the actual measurement level, there are a lot of similarities between the stuff that you guys use to understand vehicle performance and the stuff we use to understand vehicle performance.”

Not just Mars

Other speakers pointed to areas of technology development—like tire development—that you may have read about recently here on Ars Technica. “[A tire is] a composite material made with more than 200 components with very non-linear behavior. It’s pressure-sensitive, it’s temperature-sensitive. It changes with wear… and actually, the ground interaction is also one of the worst mechanisms to try to anticipate and to understand,” said Phillippe Tramond, head of research of motorsport at Michelin.

For the past four years, Michelin has been crunching data gathered from cars racing on its rubber (and the other 199 components). “And eventually, we are able to build and develop a thermomechanical tire model able to mimic and simulate tire behavior, tire performance, whatever the specification is,” Tramond said.

That tool has been quite valuable to the teams racing in the GTP class of hybrid prototypes, as it means that their driver-in-the-loop simulators are now even more faithful to real life. But Michelin has also started using the tire model when developing road tires for specific cars with individual OEMs.

For Sid Siddhartha, a principal researcher at Microsoft Research, the data is again the draw. Siddhartha has been using AI to study human behavior, including in the game Rocket League. “We were able to actually show that we can really understand and home in on individual human behavior in a very granular way, to the point where if I just observe you for two or three seconds, or if I look at some of your games, I can tell you who played it,” Siddhartha said.

That led to a new approach by the Alpine F1 team, which wanted to use Siddhartha’s AI to improve its simulation tools. F1 teams will run entirely virtual simulations on upgraded cars long before they fire those changes up in the big simulator and let their human drivers have a go (as described above). In Alpine’s case, they wanted something more realistic than a lap time simulator that just assumed perfect behavior.

The dreaded BoP

“Eventually, we are connected to IMSA, and IMSA is interested in a whole host of questions that are very interesting to us at Microsoft Research,” Siddhartha said. “They’re interested in what are the limits of driver and car? How do you balance that performance across different classes? How do you anticipate what might happen when people make different strategic decisions during the race? And how do you communicate all of this to a fan base, which has really blown me away, as John was saying, who are interested in following the sport and understanding what’s going on.”

“Sports car racing is inherently complex,” said Matt Kurdock, IMSA’s managing director of engineering. “We’ve got four different classes. We have, in each car, four different drivers. And IMSA’s challenge is to extract from this race data that’s being collected and figure out how to get an appropriate balance so that manufacturers stay engaged in the sport,” Kurdock said.

IMSA has the cars put through wind tunnels and runs CFD simulations on them as well. “We then plug all this information into one of Michelin’s tools, which is their canopy vehicle dynamic simulation, which runs in the cloud, and from this, we start generating a picture of where we believe the optimized performance of each platform is,” Kurdock said.

That’s something to think about the next time your favorite team gets the short end of the stick in the latest balance of performance—better known as BoP—update.


Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.



In Apple’s first-quarter earnings, the Mac leads the way in sales growth

Apple fell slightly short of investor expectations when it reported its first-quarter earnings today. While sales were up 4 percent overall, the iPhone showed signs of weakness, and sales in the Chinese market slipped by just over 11 percent.

CEO Tim Cook told CNBC that the iPhone performed better in countries where Apple Intelligence was available, like the US—seemingly suggesting that the slip was partially because Chinese consumers do not see enough reason to buy new phones without Apple Intelligence. (He also said, “Half of the decline is due to a change in channel inventory.”) iPhone sales also slipped in China during this same quarter last year; this was the first full quarter during which the iPhone 16 was available.

In any case, Cook said the company plans to roll out Apple Intelligence in additional languages, including Mandarin, this spring.

Apple’s wearables category also declined slightly, but only by 2 percent.

Despite the trends that worried investors, Apple reported $36.33 billion in net income for the first quarter. That’s 7.1 percent more than last year’s Q1. This growth was driven by the Mac, the iPad, and Services (which includes everything from Apple Music to iCloud)—all of which saw upticks in sales. Services was up 14 percent, continuing a strong streak for that business, while the Mac and the iPad both jumped up 15 percent.

The uptick in Mac and iPad sales was likely helped by several new Mac models and a new iPad mini starting shipments last October.

Cook shared some other interesting numbers in the earnings call with investors and the press: The company has an active base of 2.35 billion devices, and it has more than 1 billion active subscriptions.



How one YouTuber is trying to poison the AI bots stealing her content

If you’ve been paying careful attention to YouTube recently, you may have noticed the rising trend of so-called “faceless YouTube channels” that never feature a visible human talking in the video frame. While some of these channels are simply authored by camera-shy humans, many more are fully automated through AI-powered tools to craft everything from the scripts and voiceovers to the imagery and music. Unsurprisingly, this is often sold as a way to make a quick buck off the YouTube algorithm with minimal human effort.

It’s not hard to find YouTubers complaining about a flood of these faceless channels stealing their embedded transcript files and running them through AI summarizers to generate their own instant knock-offs. But one YouTuber is trying to fight back, seeding her transcripts with junk data that is invisible to humans but poisonous to any AI that dares to try to work from a poached transcript file.

The power of the .ass

YouTuber F4mi, who creates some excellent deep dives on obscure technology, recently detailed her efforts “to poison any AI summarizers that were trying to steal my content to make slop.” The key to F4mi’s method is the .ass subtitle format, created decades ago as part of fansubbing software Advanced SubStation Alpha. Unlike simpler and more popular subtitle formats, .ass supports fancy features like fonts, colors, positioning, bold, italic, underline, and more.

It’s these fancy features that let F4mi hide AI-confounding garbage in her YouTube transcripts without impacting the subtitle experience for her human viewers. For each chunk of actual text in her subtitle file, she also inserted “two chunks of text out of bounds using the positioning feature of the .ass format, with their size and transparency set to zero so they are completely invisible.”

In those “invisible” subtitle boxes, F4mi added text from public domain works (with certain words replaced with synonyms to avoid detection) or her own LLM-generated scripts full of completely made-up facts. When those transcript files were fed into popular AI summarizer sites, that junk text ended up overwhelming the actual content, creating a totally unrelated script that would be useless to any faceless channel trying to exploit it.
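A minimal sketch of what such poisoned subtitle events might look like, assuming the off-screen positioning, zero-size, and full-transparency override tags described above. The timings, decoy sentences, and helper function are hypothetical and do not reproduce F4mi’s actual script.

```python
# Off-screen position, tiny font, fully transparent alpha (standard .ass override tags).
INVISIBLE = r"{\pos(-2000,-2000)\fs1\alpha&HFF&}"

def poisoned_events(captions, decoys):
    """Build .ass Dialogue lines: one visible caption plus two hidden decoys each."""
    lines = []
    for i, text in enumerate(captions):
        start, end = f"0:00:{4 * i:02d}.00", f"0:00:{4 * i + 3:02d}.00"
        lines.append(f"Dialogue: 0,{start},{end},Default,,0,0,0,,{text}")
        for decoy in (decoys[2 * i], decoys[2 * i + 1]):
            lines.append(
                f"Dialogue: 0,{start},{end},Default,,0,0,0,,{INVISIBLE}{decoy}"
            )
    return "\n".join(lines)

captions = ["Welcome back to the channel.", "Today we look at a forgotten format."]
decoys = [
    "It is a truth universally acknowledged that toasters dream of the sea.",
    "The moon is widely agreed to be a large commemorative button.",
    "Chapter one: in which nothing whatsoever occurs, twice.",
    "All statistics in this video were measured in units of soup.",
]

print(poisoned_events(captions, decoys))
```

A human viewer sees only the visible lines, while a scraper that dumps the raw transcript text picks up three times as much material, most of it nonsense.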



Copyright Office suggests AI copyright debate was settled in 1965


Most people think purely AI-generated works shouldn’t be copyrighted, report says.

Ars used Copilot to generate this AI image using the precise prompt the Copyright Office used to determine that prompting alone isn’t authorship. Credit: AI image generated by Copilot

The US Copyright Office issued AI guidance this week that declared no laws need to be clarified when it comes to protecting authorship rights of humans producing AI-assisted works.

“Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change,” the Copyright Office said.

More than 10,000 commenters weighed in on the guidance, with some hoping to convince the Copyright Office to guarantee more protections for artists as AI technologies advance and the line between human- and AI-created works seems to increasingly blur.

But the Copyright Office insisted that the AI copyright debate was settled in 1965 after commercial computer technology started advancing quickly and “difficult questions of authorship” were first raised. That was the first time officials had to ponder how much involvement human creators had in works created using computers.

Back then, the Register of Copyrights, Abraham Kaminstein—who was also instrumental in codifying fair use—suggested that “there is no one-size-fits-all answer” to copyright questions about computer-assisted human authorship. And the Copyright Office agrees that’s still the case today.

“Very few bright-line rules are possible,” the Copyright Office said, with one obvious exception. Because of “insufficient human control over the expressive elements” of resulting works, “if content is entirely generated by AI, it cannot be protected by copyright.”

The office further clarified that doesn’t mean that works assisted by AI can never be copyrighted.

“Where AI merely assists an author in the creative process, its use does not change the copyrightability of the output,” the Copyright Office said.

Following Kaminstein’s advice, officials plan to continue reviewing AI disclosures and weighing, on a case-by-case basis, what parts of each work are AI-authored and which parts are human-authored. Any human-authored expressive element can be copyrighted, the office said, but any aspect of the work deemed to have been generated purely by AI cannot.

Prompting alone isn’t authorship, Copyright Office says

After doing some testing on whether the same exact prompt can generate widely varied outputs, even from the same AI tool, the Copyright Office further concluded that “prompts do not alone provide sufficient control” over outputs to allow creators to copyright purely AI-generated works based on highly intelligent or creative prompting.

That decision could change, the Copyright Office said, if AI technologies provide more human control over outputs through prompting.

New guidance noted, for example, that some AI tools allow prompts or other inputs “to be substantially retained as part of the output.” Consider an artist uploading an original drawing, the Copyright Office suggested, and prompting AI to modify colors, or an author uploading an original piece and using AI to translate it. And “other generative AI systems also offer tools that similarly allow users to exert control over the selection, arrangement, and content of the final output.”

The Copyright Office drafted this prompt to test artists’ control over expressive inputs that are retained in AI outputs. Credit: Copyright Office

“Where a human inputs their own copyrightable work and that work is perceptible in the output, they will be the author of at least that portion of the output,” the guidelines said.

But if officials conclude that even the most iterative prompting doesn’t perfectly control the resulting outputs—even slowly, repeatedly prompting AI to produce the exact vision in an artist’s head—some artists are sure to be disappointed. One artist behind a controversial prize-winning AI-generated artwork has staunchly defended his rigorous AI prompting as authorship.

However, if “even expert researchers are limited in their ability to understand or predict the behavior of specific models,” the Copyright Office said it struggled to see how artists could. To further prove their point, officials drafted a lengthy, quirky prompt about a cat reading a Sunday newspaper to compare different outputs from the same AI image generator.

Copyright Office drafted a quirky, lengthy prompt to test creative control over AI outputs. Credit: Copyright Office

Officials apparently agreed with Adobe, which submitted a comment advising the Copyright Office that any output is “based solely on the AI’s interpretation of that prompt.” Academics further warned that copyrighting outputs based only on prompting could “effectively vest” AI adopters with “rights in ideas.”

“The Office concludes that, given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectable ideas,” the guidance said. “While highly detailed prompts could contain the user’s desired expressive elements, at present they do not control how the AI system processes them in generating the output.”

Hundreds of AI artworks are copyrighted, officials say

The Copyright Office repeatedly emphasized that most commenters agreed with the majority of their conclusions. Officials also stressed that hundreds of AI artworks submitted for registration, under existing law, have been approved to copyright the human-authored elements of their works. Rejections are apparently expected to be less common.

“In most cases,” the Copyright Office said, “humans will be involved in the creation process, and the work will be copyrightable to the extent that their contributions qualify as authorship.”

For stakeholders who have been awaiting this guidance for months, the Copyright Office report may not change the law, but it offers some clarity.

For some artists who hoped to push the Copyright Office to adapt laws, the guidelines may disappoint, leaving many questions about a world of possible creative AI uses unanswered. But while a case-by-case approach may leave some artists unsure about which parts of their works are copyrightable, seemingly common cases are being resolved more readily. According to the Copyright Office, after each decision, it gets easier to register AI works that meet similar standards for copyrightability. Perhaps over time, artists will grow more secure in how they use AI and whether it will impact their exclusive rights to distribute works.

That’s likely cold comfort for the artist advocating for prompting alone to constitute authorship. One AI artist told Ars in October that being denied a copyright has meant enduring mockery and watching his award-winning work used freely anywhere online without his permission and without payment. But in the end, the Copyright Office was apparently more sympathetic to other commenters who warned that humanity’s progress in the arts could be hampered if a flood of easily generated, copyrightable AI works drowned too many humans out of the market.

“We share the concerns expressed about the impact of AI-generated material on human authors and the value that their creative expression provides to society. If a flood of easily and rapidly AI-generated content drowns out human-authored works in the marketplace, additional legal protection would undermine rather than advance the goals of the copyright system. The availability of vastly more works to choose from could actually make it harder to find inspiring or enlightening content.”

New guidance likely a big yawn for AI companies

For AI companies, the copyright guidance may mean very little. According to AI company Hugging Face’s comments to the Copyright Office, no changes in the law were needed to ensure the US continued leading in AI innovation, because “very little to no innovation in generative AI is driven by the hope of obtaining copyright protection for model outputs.”

Hugging Face’s Head of ML & Society, Yacine Jernite, told Ars that the Copyright Office seemed to “take a constructive approach” to answering some of artists’ biggest questions about AI.

“We believe AI should support, not replace, artists,” Jernite told Ars. “For that to happen, the value of creative work must remain in its human contribution, regardless of the tools used.”

Although the Copyright Office suggested that this week’s report might be the most highly anticipated, Jernite said that Hugging Face is eager to see the next report, which officials said would focus on “the legal implications of training AI models on copyrighted works, including licensing considerations and the allocation of any potential liability.”

“As a platform that supports broader participation in AI, we see more value in distributing its benefits than in concentrating all control with a few large model providers,” Jernite said. “We’re looking forward to the next part of the Copyright Office’s Report, particularly on training data, licensing, and liability, key questions especially for some types of output, like code.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



OpenAI teases “new era” of AI in US, deepens ties with government

On Thursday, OpenAI announced that it is deepening its ties with the US government through a partnership with the National Laboratories and expects to use AI to “supercharge” research across a wide range of fields to better serve the public.

“This is the beginning of a new era, where AI will advance science, strengthen national security, and support US government initiatives,” OpenAI said.

The deal ensures that “approximately 15,000 scientists working across a wide range of disciplines to advance our understanding of nature and the universe” will have access to OpenAI’s latest reasoning models, the announcement said.

For researchers from Los Alamos, Lawrence Livermore, and Sandia National Labs, access to “o1 or another o-series model” will be available on Venado—an Nvidia supercomputer at Los Alamos that will become a “shared resource.” Microsoft will help deploy the model, OpenAI noted.

OpenAI suggested this access could propel major “breakthroughs in materials science, renewable energy, astrophysics,” and other areas that Venado was “specifically designed” to advance.

Key areas of focus for Venado’s deployment of OpenAI’s model include accelerating US global tech leadership, finding ways to treat and prevent disease, strengthening cybersecurity, protecting the US power grid, detecting natural and man-made threats “before they emerge,” and “deepening our understanding of the forces that govern the universe,” OpenAI said.

Perhaps among OpenAI’s flashiest promises for the partnership, though, is helping the US achieve “a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation’s energy infrastructure.” That is urgently needed, as officials have warned that America’s aging energy infrastructure is becoming increasingly unstable, threatening the country’s health and welfare, and that without efforts to stabilize it, the US economy could tank.

But possibly the most “highly consequential” government use case for OpenAI’s models will be supercharging research safeguarding national security, OpenAI indicated.



Microsoft now hosts AI model accused of copying OpenAI data

Fresh on the heels of a controversy in which ChatGPT-maker OpenAI accused the Chinese company behind DeepSeek R1 of using its AI model outputs against its terms of service, OpenAI’s largest investor, Microsoft, announced on Wednesday that it will now host DeepSeek R1 on its Azure cloud service.

DeepSeek R1 has been the talk of the AI world for the past week because it is a freely available simulated reasoning model that reportedly matches OpenAI’s o1 in performance—while allegedly being trained for a fraction of the cost.

Azure allows software developers to rent computing muscle from machines hosted in Microsoft-owned data centers, as well as rent access to software that runs on them.

“R1 offers a powerful, cost-efficient model that allows more users to harness state-of-the-art AI capabilities with minimal infrastructure investment,” wrote Microsoft Corporate Vice President Asha Sharma in a news release.

DeepSeek R1 runs at a fraction of the cost of o1, at least through each company’s own services. Comparative prices for R1 and o1 were not immediately available on Azure, but DeepSeek lists R1’s API cost as $2.19 per million output tokens, while OpenAI’s o1 costs $60 per million output tokens. That’s a massive discount for a model that performs similarly to o1-pro in various tasks.
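For a back-of-the-envelope sense of that gap, here is a quick sketch using only the two listed per-million-token prices; the sample workload size is an arbitrary assumption.

```python
# Published list prices, USD per million output tokens (from the article).
R1_PRICE_PER_M = 2.19   # DeepSeek R1
O1_PRICE_PER_M = 60.00  # OpenAI o1

output_tokens = 50_000_000  # hypothetical monthly workload

r1_cost = output_tokens / 1_000_000 * R1_PRICE_PER_M
o1_cost = output_tokens / 1_000_000 * O1_PRICE_PER_M

print(f"R1: ${r1_cost:,.2f}  o1: ${o1_cost:,.2f}  "
      f"(~{O1_PRICE_PER_M / R1_PRICE_PER_M:.0f}x cheaper)")
```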

Promoting a controversial AI model

On its face, the decision to host R1 on Microsoft servers is not unusual: The company offers access to over 1,800 models on its Azure AI Foundry service with the hopes of allowing software developers to experiment with various AI models and integrate them into their products. In some ways, whatever model they choose, Microsoft still wins because it’s being hosted on the company’s cloud service.
