Author name: Paul Patrick


Pay up or stop scraping: Cloudflare program charges bots for each crawl

“Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho—and then giving that agent a budget to spend to acquire the best and most relevant content,” Cloudflare said, promising that “we enable a future where intelligent agents can programmatically negotiate access to digital resources.”

AI crawlers now blocked by default

Cloudflare’s announcement comes after the company rolled out a feature last September that allows website owners to block AI crawlers with a single click. According to Cloudflare, more than 1 million customers have chosen to block AI crawlers, signaling that people want more control over their content at a time when, as Cloudflare observed, writing instructions for AI crawlers in robots.txt files was widely “underutilized.”

To protect more customers moving forward, Cloudflare will set the domains of any new customers who sign up for its services (including anyone on a free plan) to block all known AI crawlers by default.

This marks Cloudflare’s transition away from the dreaded opt-out models of AI scraping to a permission-based model, which a Cloudflare spokesperson told Ars is expected to “fundamentally change how AI companies access web content going forward.”

In a world where some website owners have grown sick and tired of attempting and failing to block AI scraping through robots.txt—including some trapping AI crawlers in tarpits to punish them for ignoring robots.txt—Cloudflare’s feature allows users to choose granular settings to prevent blocks on AI bots from impacting bots that drive search engine traffic. That’s critical for small content creators who want their sites to still be discoverable but not digested by AI bots.

“AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source—depriving content creators of revenue, and the satisfaction of knowing someone is reading their content,” Cloudflare’s blog said. “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Disclosure: Condé Nast, which owns Ars Technica, is a partner involved in Cloudflare’s beta test.

This story was corrected on July 1 to remove publishers incorrectly listed as participating in Cloudflare’s pay-per-crawl beta.



A mammoth tusk boomerang from Poland is 40,000 years old

A boomerang carved from a mammoth tusk is one of the oldest in the world, and it may be even older than archaeologists originally thought, according to a recent round of radiocarbon dating.

Archaeologists unearthed the mammoth-tusk boomerang in Poland’s Oblazowa Cave in the 1990s, and they originally dated it to around 18,000 years old, which made it one of the world’s oldest intact boomerangs. But according to recent analysis by University of Bologna researcher Sahra Talamo and her colleagues, the boomerang may have been made around 40,000 years ago. If they’re right, it offers tantalizing clues about how people lived on the harsh tundra of what’s now Poland during the last Ice Age.

A boomerang carved from mammoth tusk

The mammoth-tusk boomerang is about 72 centimeters long, gently curved, and shaped so that one end is slightly more rounded than the other. It still bears scratches and scuffs from the mammoth’s life, along with fine, parallel grooves that mark where some ancient craftsperson shaped and smoothed the boomerang. On the rounded end, a series of diagonal marks would have made the weapon easier to grip. It’s smoothed and worn from frequent handling: the last traces of the life of some Paleolithic hunter.

Based on experiments with a replica, the Polish mammoth boomerang flies smoothly but doesn’t return, similar to certain types of Aboriginal Australian boomerangs. In fact, it looks a lot like a style used by Aboriginal people from Queensland, Australia, but that’s a case of people in different times and places coming up with very similar designs to fit similar needs.

But critically, according to Talamo and her colleagues, the boomerang is about 40,000 years old.

That’s a huge leap from the original radiocarbon date, made in 1996, which was based on a sample of material from the boomerang itself and estimated an age of 18,000 years. But Talamo and her colleagues claim that original date didn’t line up well with the ages of other nearby artifacts from the same layer of the cave floor. That made them suspect that the boomerang sample may have gotten contaminated by modern carbon somewhere along the way, making it look younger. To test the idea, the archaeologists radiocarbon dated samples from 13 animal bones—plus one from a human thumb—unearthed from the same layer of cave floor sediment as the boomerang.
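
A rough back-of-the-envelope calculation (ours, not the paper’s) shows why contamination matters so much at these ages. Conventional radiocarbon ages follow the relation below, where F is the measured fraction of modern carbon-14; a genuinely 40,000-year-old sample retains well under 1 percent of its original carbon-14, so even a small admixture of modern carbon swamps the signal:

$$
t = -8033\,\ln F, \qquad F_{40\,\mathrm{ka}} = e^{-40000/8033} \approx 0.007
$$

$$
F' \approx 0.9 \times 0.007 + 0.1 \times 1 \approx 0.106, \qquad t' = -8033\,\ln(0.106) \approx 18{,}000 \ \text{years}
$$

In other words, if roughly 10 percent of the carbon in the dated sample were modern, a 40,000-year-old boomerang would return an apparent age of about 18,000 years, which is the kind of shift the team suspected.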



Research roundup: 6 cool science stories we almost missed


Final Muon g-2 results, an ultrasonic mobile brain imaging helmet, re-creating Egyptian blue, and more.

The “world’s smallest violin” created by Loughborough University physicists. Credit: Loughborough University

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. June’s list includes the final results from the Muon g-2 experiment, re-creating the recipe for Egyptian blue, embedding coded messages in ice bubbles, and why cats seem to have a marked preference for sleeping on their left sides.

Re-creating Egyptian blues


Close-up image of an ancient wooden Egyptian falcon. Researchers have found a way to reproduce the blue pigment visible on the artifact. Credit: Matt Unger, Carnegie Museum of Natural History

Artists in ancient Egypt were particularly fond of the color known as Egyptian blue—deemed the world’s oldest synthetic pigment—since it was a cheap substitute for pricier materials like lapis lazuli or turquoise. But archaeologists have puzzled over exactly how it was made, particularly given the wide range of hues, from deep blue to gray or green. That knowledge had long been forgotten. However, scientists at Washington State University have finally succeeded in recreating the recipe, according to a paper published in the journal npj Heritage Science.

The interdisciplinary team came up with 12 different potential recipes using varying percentages of silicon dioxide, copper, calcium, and sodium carbonate. They heated the samples to 1,000° Celsius (about what ancient artists could have achieved), varying the time between one and 11 hours. They also cooled the samples at different rates. Then they analyzed the samples using microscopy and other modern techniques and compared them to the Egyptian blue on actual Egyptian artifacts to find the best match.
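
Conceptually, that matching step is a small grid search over synthesis parameters. Here is a minimal sketch of the workflow in Python; the percentages, hold times, and distance measure are hypothetical placeholders, not the actual recipes or analysis from the paper:

```python
import itertools

# Hypothetical parameter grid; the study tested 12 specific mixtures of
# silicon dioxide, copper, calcium, and sodium carbonate.
copper_percent = [10, 15, 20]
hold_hours = [1, 6, 11]          # time held at roughly 1,000 degrees Celsius
cooling_rate = ["fast", "slow"]

def spectral_distance(sample_spectrum, artifact_spectrum):
    """Sum of squared differences between two measured spectra."""
    return sum((s - a) ** 2 for s, a in zip(sample_spectrum, artifact_spectrum))

def best_match(measured, artifact_spectrum):
    """`measured` maps each recipe tuple to the spectrum of its fired sample."""
    return min(measured.items(),
               key=lambda item: spectral_distance(item[1], artifact_spectrum))

# Each combination would be synthesized, analyzed with microscopy and
# spectroscopy, and compared against the pigment on a real artifact.
candidate_recipes = list(itertools.product(copper_percent, hold_hours, cooling_rate))
```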

Their samples are now on display at the Carnegie Museum of Natural History in Pittsburgh. Apart from its historical interest, Egyptian blue also has fascinating optical, magnetic, and biological properties that could prove useful in practical applications today, per the authors. For instance, it might be used for counterfeit-proof inks, since it emits light in the near-infrared, and its chemistry is similar to high-temperature superconductors.

npj Heritage Science, 2025. DOI: 10.1038/s40494-025-01699-7  (About DOIs).

World’s smallest violin

It’s an old joke, possibly dating back to the 1970s. Whenever someone is complaining about an issue that seems trivial in the grand scheme of things, it’s tradition to rub one’s thumb and forefinger together and declare, “This is the world’s smallest violin playing just for you.” (In my snarky circles we used to say the violin was “playing ‘My Heart Bleeds for You.’”) Physicists at Loughborough University have now made what they claim really is the world’s smallest violin, just 35 microns long and 13 microns wide.

There are various lithographic methods for creating patterned electronic devices, such as photolithography, which can be used either with a mask or without. The authors relied on scanning probe thermal lithography instead, specifically a cutting-edge nano-sculpting machine they dubbed the NanoFrazor. The first step was to coat a small chip with two layers of a gel material and then place it under the NanoFrazor. The instrument’s heated tip burned the violin pattern into the gel. Then they “developed” the gel by dissolving the underlayer so that only a violin-shaped cavity remained.

Next, they poured on a thin layer of platinum and rinsed off the chip with acetone. The resulting violin is a microscopic image rather than a playable tiny instrument—you can’t even see it without a microscope—but it’s still an impressive achievement that demonstrates the capabilities of the lab’s new nano lithography system. And the whole process can take as little as three hours.

Muon g-2 anomaly no more?


Overhead view of the Muon g-2 experiment at Fermilab. Credit: Fermilab

The Muon g-2 experiment (pronounced “gee minus two”) is designed to look for tantalizing hints of physics beyond the Standard Model of particle physics. It does this by measuring the magnetic field (aka the magnetic moment) generated by a subatomic particle known as the muon. Back in 2001, an earlier run of the experiment at Brookhaven National Laboratory found a slight discrepancy, hinting at possible new physics, but that controversial result fell short of the critical threshold required to claim discovery.
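
For context (our gloss, not the collaboration’s wording): the experiment’s name refers to the muon’s gyromagnetic factor g. A pointlike muon obeying the Dirac equation would have exactly g = 2; quantum corrections, plus any particles not in the Standard Model, nudge it slightly higher, and it is that tiny excess, the anomalous magnetic moment, that the experiment measures:

$$
a_\mu \equiv \frac{g_\mu - 2}{2} \approx 0.00116592
$$

The “anomaly” is the gap between the measured value of a_mu and the value the Standard Model predicts for it.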

Physicists have been making new measurements ever since in hopes of resolving this anomaly. For instance, in 2021, we reported on data from the updated Muon g-2 experiment that showed “excellent agreement” with the discrepancy Brookhaven recorded. They improved on their measurement precision in 2023. And now it seems the anomaly is very close to being resolved, according to a preprint posted to the physics arXiv based on analysis of a data set triple the size of the one used for the 2023 analysis. (You can watch a video explanation here.)

The final Muon g-2 result is in agreement with the 2021 and 2023 results, but much more precise, with error bars four times smaller than those of the original Brookhaven experiment. Combine that with new predictions by the related Muon g-2 Theory Initiative using a new means of calculating the muon’s magnetic moment, and the discrepancy between theoretical prediction and experiment narrows even further.

While some have declared victory now that the Muon g-2 experiment is complete, theorists are still sounding a note of caution as they seek to further refine their models. Meanwhile, Fermilab is building a new experiment designed to hunt for muon-to-electron conversions. If it finds any, that would definitely constitute new physics beyond the Standard Model.

arXiv, 2025. DOI: 10.48550/arXiv.2506.03069 (About DOIs).

Message in a bubble


Physicists have embedded Morse code messages in ice bubbles. Credit: Keke Shao et al., 2025

Forget sending messages in a bottle. Scientists have figured out how to encode messages in both binary and Morse code in air bubbles trapped in ice, according to a paper published in the journal Cell Reports Physical Science. Trapped air bubbles are usually shaped like eggs or needles, and the authors discovered that they could manipulate the sizes, shapes, and distribution of those ice bubbles by varying the freezing rate. (Faster rates produce egg-shaped bubbles, slower rates produce needle-shaped ones, for example.)

To encode messages, the researchers assigned different bubble sizes, shapes, and orientations to Morse code and binary characters and used their freezing method to produce ice bubbles representing the desired characters. Next, they took a photograph of the ice layer and converted it to gray scale, training a computer to identify the position and the size of the bubbles and decode the message into English letters and Arabic numerals. The team found that binary coding could store messages 10 times longer than Morse code.
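
As a toy illustration of how such a scheme can work (the sketch below is ours; the paper’s actual symbol-to-bubble assignments and image-processing pipeline differ), encoding is a lookup from characters to Morse symbols and then to bubble shapes set by the freezing rate, and decoding simply reverses it:

```python
# Abbreviated Morse table; the real system also supports a binary encoding,
# which the authors found could store messages about 10 times longer.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}
MORSE_TO_CHAR = {code: ch for ch, code in MORSE.items()}

# Hypothetical assignment: fast freezing -> egg-shaped bubble for a dot,
# slow freezing -> needle-shaped bubble for a dash.
SYMBOL_TO_BUBBLE = {".": "egg", "-": "needle"}
BUBBLE_TO_SYMBOL = {shape: sym for sym, shape in SYMBOL_TO_BUBBLE.items()}

def encode(message):
    """Return, for each character, the sequence of bubble shapes to freeze."""
    return [[SYMBOL_TO_BUBBLE[sym] for sym in MORSE[ch]] for ch in message.upper()]

def decode(bubble_rows):
    """Recover the text from bubble shapes identified in a grayscale photo."""
    return "".join(
        MORSE_TO_CHAR["".join(BUBBLE_TO_SYMBOL[shape] for shape in row)]
        for row in bubble_rows
    )

print(encode("SOS"))          # [['egg', 'egg', 'egg'], ['needle', 'needle', 'needle'], ...]
print(decode(encode("SOS")))  # SOS
```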

Someday, this freezing method could be used for short message storage in Antarctica and similar very cold regions where traditional information storage methods are difficult and/or too costly, per the authors. However, Qiang Tang of the University of Australia, who was not involved in the research, told New Scientist that he did not see much practical application for the breakthrough in cryptography or security, “unless a polar bear may want to tell someone something.”

Cell Reports Physical Science, 2025. DOI: 10.1016/j.xcrp.2025.102622 (About DOIs).

Cats prefer to sleep on left side


Caliban marches to his own drum and prefers to nap on his right side. Credit: Sean Carroll

The Internet was made for cats, especially YouTube, which features millions of videos of varying quality, documenting the crazy antics of our furry feline friends. Those videos can also serve the interests of science, as evidenced by the international team of researchers who analyzed 408 publicly available videos of sleeping cats to study whether the kitties showed any preference for sleeping on their right or left sides. According to a paper published in the journal Current Biology, two-thirds of those videos showed cats sleeping on their left sides.

Why should this behavioral asymmetry be the case? There are likely various reasons, but the authors hypothesize that it has something to do with kitty perception and their vulnerability to predators while asleep (usually between 12 and 16 hours a day). The right hemisphere of the brain dominates in spatial attention, while the right amygdala is dominant for processing threats. That’s why most species react more quickly when a predator approaches from the left. Because a cat’s left visual field is processed in the dominant right hemisphere of their brains, “sleeping on the left side can therefore be a survival strategy,” the authors concluded.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.04.043 (About DOIs).

A mobile ultrasonic brain imaging helmet


A personalized 3D-printed helmet for mobile functional ultrasound brain imaging. Credit: Sadaf Soloukey et al., 2025

Brain imaging is a powerful tool for both medical diagnosis and neuroscience research, from noninvasive methods like EEG, MRI, fMRI, and diffuse optical tomography, to more invasive techniques like intracranial EEG. But the dream is to be able to capture the human brain functioning in real-world scenarios instead of in the lab. Dutch scientists are one step closer to achieving that goal with a specially designed 3D-printed helmet that relies upon functional ultrasound imaging (fUSi) to enable high-quality 2D imaging, according to a paper published in the journal Science Advances.

Unlike fMRI, which requires subjects to remain stationary, the helmet monitors the brain as subjects are walking and talking (accompanied by a custom mobile fUSi acquisition cart). The team recruited two 30-something male subjects who had undergone cranioplasty to embed an implant made of polyetheretherketone (PEEK). While wearing the helmet, the subjects were asked to perform stationary motor and sensory tasks: pouting or brushing their lips, for example. Then the subjects walked in a straight line, pushing the cart for a minute and covering up to 30 meters while licking their lips to demonstrate multitasking. The sessions ran over a 20-month period, thereby demonstrating that the helmet is suitable for long-term use. The next step is to improve the technology to enable mobile 3D imaging of the brain.

Science Advances, 2025. DOI: 10.1126/sciadv.adu9133  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Senate GOP budget bill has little-noticed provision that could hurt your Wi-Fi


Cruz bill could take 6 GHz spectrum away from Wi-Fi, give it to mobile carriers.

Credit: Getty Images | BlackJack3D

Sen. Ted Cruz (R-Texas) has a plan for spectrum auctions that could take frequencies away from Wi-Fi and reallocate them for the exclusive use of wireless carriers. The plan would benefit AT&T, which is based in Cruz’s home state, along with Verizon and T-Mobile.

Cruz’s proposal revives a years-old controversy over whether the entire 6 GHz band should be devoted to Wi-Fi, which can use the large spectrum band for faster speeds than networks that rely solely on the 2.4 and 5 GHz bands. Congress is on the verge of passing legislation that would require spectrum to be auctioned off for full-power, commercially licensed use, and the question is where that spectrum will come from.

When the House of Representatives passed its so-called “One Big Beautiful Bill,” it excluded all of the frequencies between 5.925 and 7.125 gigahertz from the planned spectrum auctions. But Cruz’s version of the budget reconciliation bill, which is moving quickly toward a final vote, removed the 6 GHz band’s protection from spectrum auctions. The Cruz bill is also controversial because it would penalize states that regulate artificial intelligence.

Instead of excluding the 6 GHz band from auctions, Cruz’s bill would exclude the 7.4–8.4 GHz band used by the military. Under conditions set by the bill, it could be hard for the Commerce Department and Federal Communications Commission to fulfill the congressional mandate without taking some spectrum away from Wi-Fi.

The agencies will have to take spectrum “from somebody who you can take it away from,” Harold Feld, senior VP of consumer advocacy group Public Knowledge, told Ars.

“The most vulnerable non-federal bands”

The Cruz plan could take 200 MHz or more away from the 1,200 MHz currently allocated to Wi-Fi between 5.925 and 7.125 GHz. It could also take spectrum from the Citizens Broadband Radio Service (CBRS), which goes from 3.55 to 3.7 GHz. (See this previous article for a much longer discussion of CBRS.)

Michael Calabrese of New America’s Open Technology Institute told Ars that 6 GHz and CBRS “are the most vulnerable non-federal bands for reallocation and auction.” While the spectrum for auctions is to come from frequencies between 1.3 and 10.5 GHz, much of that spectrum will be off-limits either because it’s specifically excluded or because it would be more difficult to reallocate.

“About half the spectrum in that range is federal, and then the rest has already been auctioned for cellular mobile use or is assigned to other critical users such as aviation and satellites,” said Calabrese, who directs the Open Technology Institute’s Wireless Future Project.

Another factor cited by Calabrese is that the FCC, under Chairman Brendan Carr, is looking to make new spectrum available to low-Earth orbit satellites like those used by Elon Musk’s Starlink network. Carr is also “the leading champion of 5G in the mobile industry” and inclined to devote more frequencies to mobile carriers, Calabrese said.

Wi-Fi bottleneck

Feld said the 6 GHz Wi-Fi spectrum would be a likely target because deployments in the band are just starting. By contrast, the 2.4 GHz and 5 GHz bands have been allocated to Wi-Fi for a long time and are heavily used, and modifying existing devices to stop using parts of those bands would be impractical.

Arguing that 6 GHz is crucial for Wi-Fi’s future, Calabrese said that “the bottleneck limiting home and business broadband capacity is no longer the Internet connection, but the quality of the Wi-Fi. Most Wi-Fi still relies on a much smaller amount of unlicensed spectrum at 2.4 and 5 GHz, which limits throughput to about 400Mbps and connects fewer devices to the same access point.”

The Wi-Fi 6E standard adds support for 6 GHz spectrum, and the in-development Wi-Fi 7 will take full advantage of the band, Calabrese said. “By leveraging access to the entire 6 GHz band, Wi-Fi 7 can nearly double speeds, support hundreds of devices in a location, prioritize lag-sensitive applications like real-time video, and support emerging future apps such as virtual reality and telepresence that will be used almost entirely indoors,” he said.

We contacted Cruz’s office last week about his bill’s potential impact on Wi-Fi in the 6 GHz band but did not receive a response.

Ajit Pai’s FCC allocated 6 GHz to Wi-Fi

The 6 GHz band was allocated to Wi-Fi in April 2020 under then-FCC Chairman Ajit Pai, during the first Trump administration. CTIA-The Wireless Association, the major lobby group representing mobile carriers seeking more exclusive licenses, argued that Wi-Fi didn’t need the entire band. The CTIA called it a “6 GHz giveaway,” saying that “cable, Facebook, and Google are demanding more than double the 6 GHz spectrum that other nations are considering making available for services like Wi-Fi.”

Pai—who is now the president and CEO of CTIA—rejected the group’s arguments in the April 2020 decision. The Pai FCC’s order said that “providing new opportunities for unlicensed operations across the entire 6 GHz band can help address the critical need for providing additional spectrum resources for unlicensed operations,” and enable use of “several 160-megahertz channels as well as 320-megahertz channels.”

Making the whole band available for Wi-Fi “promotes more efficient and productive use of the spectrum,” whereas “repurposing large portions of the 6 GHz band for new licensed services would diminish the benefits of such use to the American public,” the Pai FCC said. With home Internet services providing gigabit speeds, Wi-Fi needed more spectrum to avoid becoming “the bottleneck for faster speeds at home,” the FCC said.

Now that he’s CEO of the CTIA, Pai is leading the primary group that is pushing for 6 GHz to be partially reallocated to mobile carriers. When contacted by Ars, a CTIA spokesperson said last week that the “upper 6 GHz band is the next global 5G band,” and that many countries are “using or planning to use at least the upper part of the band (6.425–7.125 GHz) for licensed commercial use.”

CTIA also said that Wi-Fi adoption in 6 GHz “is moving very slowly,” citing OpenSignal research, and that the Trump administration and FCC should “consider all possible options to address our spectrum shortfall.”

While CTIA has repeatedly claimed that US carriers are facing a spectrum shortfall, executives at the major telecoms have told investors the opposite. AT&T CFO Pascal Desroches said this month that the company has “no pressing need” to “acquire spectrum in the next 12, 24, even 36 months.” Verizon Consumer Group CEO Sowmyanarayan Sampath said in May 2024 that the company has “almost unlimited spectrum.” T-Mobile CEO Mike Sievert said in December that “we have lots of spectrum we haven’t put into the fight yet,” as the carrier had only deployed 60 percent of its midband spectrum for 5G.

Divvying up spectrum

The 6 GHz band is not just for Wi-Fi; it is also used for fixed microwave links, satellite services, and certain types of mobile operations. Wi-Fi devices operating in 6 GHz must do so at low power to avoid interfering with incumbent services, and in most of the band they must operate indoors only. Currently, Wi-Fi is allowed to use the entire 1,200 MHz band indoors at low power. Outdoor, higher-power use is allowed in 850 MHz of the 1,200 MHz.

While Wi-Fi’s access to 6 GHz is limited, Feld said the band is extremely important. He said that Wi-Fi in 6 GHz needs bigger channels than traditional Wi-Fi had, and that taking part of the band away from Wi-Fi would reduce the number of large channels and require “crowding a lot more devices into a much smaller space.”
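
Rough channel arithmetic (ours, based on the standard 6 GHz channelization rather than anything in the bill) illustrates the point. The full 1,200 MHz holds seven 160 MHz channels or three 320 MHz channels; if the upper portion that carriers covet (6.425–7.125 GHz) were auctioned off, the remaining 500 MHz would hold only three 160 MHz channels and a single 320 MHz channel:

$$
\left\lfloor \tfrac{1200}{160} \right\rfloor = 7, \quad \left\lfloor \tfrac{1200}{320} \right\rfloor = 3; \qquad \left\lfloor \tfrac{500}{160} \right\rfloor = 3, \quad \left\lfloor \tfrac{500}{320} \right\rfloor = 1
$$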

The House-approved spectrum plan pertains to frequencies between 1.3 and 10 GHz, while Cruz’s Senate plan is for frequencies between 1.3 and 10.5 GHz. The House would require at least 600 MHz to be auctioned from the entire band. Cruz calls for at least 800 MHz to be auctioned, of which 500 MHz would be taken from federal users. The House and Cruz auction plans both exclude 3.1 to 3.45 GHz, which is used by the military.

For non-federal spectrum, Cruz’s plan says that “not less than 300 megahertz” must be auctioned. This must include at least 100 MHz from 3.98 to 4.2 GHz, but the plan doesn’t specify where the rest of the 300 MHz or more would be taken from.

Because of the “not less than” language, more than 200 MHz could be taken from sources that include the current Wi-Fi and CBRS allocations. Calabrese said he worries that the amount taken from Wi-Fi could be significantly higher than 200 MHz, as “the mobile industry wants much more.”

Big venues need better Wi-Fi

Calabrese said he expects the biggest impact of reducing Wi-Fi’s use of 6 GHz to be felt at “busy venues such as schools, airports, sporting arenas, shopping malls, all the different places where many people gather together and try to get on the same access points and unlicensed spectrum through Wi-Fi.”

Calabrese said that enterprise use of Internet of Things (IoT) technologies would also be affected. He gave the example of Amazon using indoor Wi-Fi to operate thousands of robots in fulfillment centers. Extending Wi-Fi to 6 GHz is “about connecting the dozens of in-home devices that we can expect in the future as well as supporting the extremely high-bandwidth applications that are emerging for indoor use,” he said.

Calabrese argued that Wi-Fi can make better use of the spectrum than mobile carriers because cellular signals have trouble penetrating walls, and most Internet traffic on mobile devices travels over Wi-Fi instead of cellular networks.

“All the new applications envisioned for both 5G and 6G are inherently indoor applications, and mobile signals don’t penetrate well indoors… Wi-Fi would use the band ubiquitously, indoors and outdoors,” he said.

Taking spectrum from federal users has also fueled concerns about military operations. Senator Maria Cantwell (D-Wash.), ranking member of the Senate Committee on Commerce, Science and Transportation, said in a speech last night that the “auction will fundamentally compromise our defense capabilities, while endangering aviation and important federal capabilities like weather forecasting and scientific research.” Drone operations are among the uses that would be compromised, she said.

Feld: People downplaying risk “kidding themselves”

Cable companies are deploying Wi-Fi 7 routers and supporting continued use of the 6 GHz band for Wi-Fi. The CableLabs industry group said the band is particularly crucial in high-density environments, that “any proposals to reduce or repurpose 6 GHz unlicensed spectrum would be devastating to Wi-Fi performance,” and that policymakers should allocate more spectrum for unlicensed use to support the growth of Wi-Fi instead of reallocating spectrum from Wi-Fi to mobile carriers.

Comcast and Charter joined tech companies and advocacy groups in a June 2 letter, organized by the Wi-Fi Alliance industry group, that urged Cruz and other congressional leaders to preserve 6 GHz for Wi-Fi. (Disclosure: The Advance/Newhouse Partnership, which owns 12 percent of Charter, is part of Advance Publications, which owns Ars Technica parent Condé Nast.) Tech companies that signed the letter include HP, Cisco, Broadcom, Juniper, Apple, Amazon, and Meta.

The 6 GHz band is “perfectly suited to indoor networking that is the hallmark of Wi-Fi, while being flexible enough to support targeted outdoor uses… Shipments of 6 GHz-enabled consumer devices in North America, totaling 95 million last year, are expected to reach nearly 370 million per year by 2029,” the letter said.

Aside from that letter, Feld said that cable and tech companies haven’t been particularly active in opposing the potential reallocation of 6 GHz frequencies. “Amazon and the other companies that signed onto this letter, they’re like, ‘well we have a lot of things that we want as part of this bill. We want the tax break. We want other stuff. We’re not willing to get out there and make a big deal about it for fear of pissing off Cruz,'” Feld said.

Feld also speculated that some people think that lawmakers “can’t possibly be serious about pulling back Wi-Fi now that we’re deploying in the band.” In Feld’s opinion, “they’re kidding themselves.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



The second launch of New Glenn will aim for Mars

Notably, the company plans to launch each new rocket as soon as it is ready to fly in order to gather data about the vehicle’s performance, attempt to catch and reuse first stages, and move closer to a rapid launch cadence. In case a customer payload is not ready, the company has also developed an inspirational mission called “Cube for the Future,” which appears to be part of the company’s initiative to inspire future generations to pursue careers in science. This may also fly as a rideshare on one of the launches listed above.

All eyes on the Moon

Among these missions, the payload likely to spark the most interest is the Blue Moon MK1 lander, which is part of the company’s plans to develop a large, reusable lander capable of landing humans on the Moon.

Blue Origin shared a snippet of video last week on social media showing the mid-section of the MK1 lander arriving at the company’s assembly facilities in Rocket Park, Florida. This will be the tallest vehicle ever landed on the Moon. It is eight meters (26.4 feet) tall, which is 1 meter taller than the Lunar Module NASA landed humans in during the Apollo Program.

MK1 is a cargo version of a larger vehicle, MK2, that Blue Origin is developing for humans. The cargo version is rated to carry about 3 metric tons to the lunar surface, about 10 times the capacity of commercial landers currently available to NASA.

Barring a major setback, it now appears highly likely that Blue Origin will beat SpaceX in landing a vehicle on the lunar surface. Due to the struggles with development of the Starship vehicle—whether on the ground or in space, the last four Starship upper stages have been lost before achieving a nominal success—some industry officials believe Blue Origin now has a realistic chance to compete with SpaceX in the effort to land NASA astronauts on the Moon as part of the Artemis Program.

Both companies are developing large, ambitious vehicles—SpaceX with Starship, and Blue Origin with its MK2 lander—but Blue Origin’s vehicle is somewhat less technically challenging. Blue Origin founder Jeff Bezos is also far more committed to a lunar program than is SpaceX founder Elon Musk, sources said, and if he sees an opportunity to finally best his rival in space, he may go for it.



Ars reflects on Apollo 13 turning 30


Ron Howard’s 1995 love letter to NASA’s Apollo program takes a few historical liberties, but it still inspires awe.

Credit: Universal Pictures

This year marks the 30th anniversary of the 1995 Oscar-winning film, Apollo 13, director Ron Howard’s masterful love letter to NASA’s Apollo program in general and the eponymous space mission in particular. So we’re taking the opportunity to revisit this riveting homage to American science, ingenuity, and daring.

(Spoilers below.)

Apollo 13 is a fictionalized retelling of the aborted 1970 lunar mission that became a “successful failure” for NASA because all three astronauts made it back to Earth alive against some pretty steep odds. The film opens with astronaut Jim Lovell (Tom Hanks) hosting a watch party in July 1969 for Neil Armstrong’s historic first walk on the Moon. He is slated to command the Apollo 14 mission and is ecstatic when he and his crew—Ken Mattingly (Gary Sinise) and Fred Haise (Bill Paxton)—are bumped to Apollo 13 instead. His wife, Marilyn (Kathleen Quinlan), is more superstitious and hence less thrilled: “It had to be 13.” To which her pragmatic husband replies, “It comes after 12.”

A few days before launch, Mattingly is grounded because he was exposed to the measles and replaced with backup Jack Swigert (Kevin Bacon), who is the only one happy about the situation. But Lovell and Haise rebound from the disappointment and the launch goes off without a hitch. The public, alas, just isn’t interested in what they think has become routine. But the mission is about to become anything but that.

During a maintenance task to stir the oxygen tanks, an electrical short causes one of the tanks to explode, with the other rapidly venting its oxygen into space. The crew has less than an hour to evacuate the command module Odyssey and move into the lunar module Aquarius, using it as a lifeboat. There is no longer any chance of landing on the Moon; the new mission is to keep the astronauts alive long enough to figure out how to bring them safely home. That means overcoming interpersonal tensions, freezing conditions, dwindling rations, and unhealthy CO2 levels, among other challenges, as well as taking on a pulse-pounding manual course correction with no navigational computer. (Spoiler alert: they make it!)

The Apollo 13 crew: Jim Lovell (Tom Hanks), Jack Swigert (Kevin Bacon), and Fred Haise (Bill Paxton). Universal Pictures

The film is loosely based on Lovell’s 1994 memoir, Lost Moon. While Lovell initially hoped Kevin Costner would portray him, Howard ultimately cast Hanks in the role, in part because the latter already had extensive knowledge of the Apollo program and space history. Hanks, Paxton, and Bacon all went to US Space Camp to prepare for their roles, participating in astronaut training exercises and flying on the infamous “Vomit Comet” (the KC-135) to experience simulated weightlessness. Howard ultimately shot most of the weightless scenes aboard the KC-135 since recreating those conditions on a soundstage and with CGI would have been prohibitively expensive.

In fact, Howard didn’t rely on archival mission footage at all, insisting on shooting his own footage. That meant constructing realistic spacecraft interiors—incorporating some original Apollo materials—and reproducing exactly the pressure suits worn by astronauts. (The actors, once locked in, breathed air pumped into the suits just like the original Apollo astronauts.) The Mission Control set at Universal Studios was so realistic that one NASA consultant kept looking for the elevator when he left each day, only to remember he was on a movie set.

The launch sequence was filmed using miniature models augmented with digital image stitching. Ditto for the splashdown, in which actual parachutes and a prop capsule were tossed out of a helicopter to shoot the scene. Only the exhaust from the attitude control thrusters was generated with CGI. A failed attempt at using CGI for the in-space urine dump was scrapped in favor of just spraying droplets from an Evian bottle.

It all paid off in the end. Apollo 13 premiered on June 30, 1995, to critical acclaim and racked up over $355 million globally at the box office. It was nominated for nine Oscars and won two—Best Film Editing and Best Sound—although it lost Best Picture to Braveheart. (We can’t quite believe it either.) And the film has stood the test of time, capturing the essence of America’s early space program for posterity. A few Ars staffers shared their thoughts on Apollo 13’s enduring legacy.

Failure should be an option

White Team Flight Director Gene Kranz (Ed Harris) insists, “We are not losing those men!” Universal Pictures

The tagline for Apollo 13 is “Failure is not an option.” But this is a bit of Hollywood magic. It turns out that NASA Flight Director Gene Kranz never said the line during the actual Apollo 13 mission to the Moon, or the subsequent efforts to save the crew.

Instead, the line was conceived after the scriptwriters, Al Reinert and Bill Broyles, interviewed Kranz at his home in Texas, south of Johnson Space Center. They were so taken with the notion that it became synonymous with the film and with Kranz himself, one of NASA’s most storied flight directors. He has lived with the line in the decades since and embraced it, using it as the title of his autobiography. Ever since, the public has associated the space agency with the idea that NASA would never accept failure.

Of course, it is great that the public believes so strongly in NASA. But the line also turned out to be a millstone around the agency’s neck. That is not really Kranz’s fault. However, as the public grew less accepting of failure, so did Congress, and NASA’s large programs became intolerant of it. This is one of the reasons why the timelines and costs of NASA’s rockets, spacecraft, and interplanetary missions have ballooned. There are so many people looking for things that could possibly go wrong that the people actually trying to build hardware and fly missions are swamped by requirements.

This is why companies like SpaceX, with an iterative design methodology that accepts some level of failure in order to go more quickly, have thrived. They have moved faster, and at significantly less cost, than the government. I asked Kranz about this a few years ago, the idea that NASA (and its Congressional paymasters) should probably be a little more tolerant of failure.

“Space involves risk, and I think that’s the one thing about Elon Musk and all the various space entrepreneurs: they’re willing to risk their future in order to accomplish the objective that they have decided on,” he told me. “I think we as a nation have to learn that, as an important part of this, to step forward and accept risk.”

Eric Berger

The perfect gateway drug

“Gentlemen, that’s not good enough.” Universal Pictures

Technically I am a child of the ’60s (early Gen-X), but I was far too young to grasp the significance of the Apollo 11 moon landing in 1969, or just how impressive NASA’s achievement really was. The adults made us sit around the TV in our PJs and seemed very excited about the grainy picture. That’s it. That’s all I remember. My conscious knowledge of space exploration was more influenced by Star Wars and the 1986 Challenger explosion. So going to see Apollo 13 in 1995 as a young science writer was a revelation. I walked out of the theater practically vibrating with excitement, turned to my friends and exclaimed, “Oh my god, we went to the Moon in a souped-up Buick!”

Apollo 13 makes space exploration visceral, makes the audience feel like they are right there in the capsule with the crew battling the odds to get back home. It perfectly conveys the huge risks and stalwart courage of everyone involved in the face of unimaginable pressure. Nerds are the heroes and physics and math are critical: I love the scene where Lovell has to calculate gimbal conversions by hand and asks mission control to check his work. A line of men with slide rules feverishly make their own calculations and one by one give the thumbs up.

Then there’s the pragmatic ingenuity of the engineers who had to come up with a way to fit square air filters into a round hole using nothing but items already onboard the spacecraft. There’s a reason I rewatch Apollo 13 every couple of years when I’m in the mood for a “let’s work the problem, people” pick-me-up. (Shoutout to Lovell’s mother, Blanche—played by Howard’s mother, the late Jean Speegle Howard—and her classic line: “If they could get a washing machine to fly, my Jimmy could land it.”)

Naturally, Howard had to sacrifice some historical accuracy in the name of artistic license, sparking the inevitable disgruntled griping among hardcore space nerds. For instance, the mission’s original commander, Alan Shepard, wasn’t grounded by an ear infection but by Ménière’s disease (an inner ear issue that can cause dizziness). Mission control didn’t order the shutdown of the fuel cells; they were already dead. Swigert and Haise didn’t really argue about who was to blame for the accident. And the film ignores the critical role of Flight Director Glynn Lunney and his Black Team (among others), choosing to focus on Kranz’s White Team to keep the story streamlined.

Look, I get it: nobody wants to see a topic they’re passionate about misrepresented in a movie. But there’s no question that thanks to Howard’s narrative instincts, the film continues to resonate with the general public in ways that a by-the-book docudrama obsessing over the tiniest technical details never could.

In the grand scheme of things, that matters far more than whether Lovell really said, “Houston, we have a problem” in those exact words.  If you want the public to support space exploration and—crucially—for Congress to fund it, you need to spark their imaginations and invite them to share in the dream. Apollo 13 is the perfect gateway drug for future space fans, who might find themselves also vibrating with excitement afterward, so inspired by the film that they decide they want to learn more—say, by watching the 12-part Emmy-winning docuseries From the Earth to the Moon that Howard and Hanks co-produced (which is historically accurate). And who knows? They might even decide they want to be space explorers themselves one day.

Jennifer Ouellette

A common touchstone

Lift-off! Universal Pictures

My relationship with Apollo 13 is somewhat different from most folks’: I volunteer as a docent at Space Center Houston, the visitor’s center for Houston’s Johnson Space Center. Specifically, I’m an interpretive guide for the center’s Saturn V exhibit—the only one of the three remaining Saturn V exhibits in the world composed, tip to tip, of flight stages.

I reference Apollo 13 constantly during guide shifts because it’s a common touchstone that I can count on most folks visiting SCH to have seen, and it visually explicates so many of the more technical aspects of the Apollo program. If I’m explaining that the near-avalanche of white stuff one sees falling off a Saturn V at launch is actually ice (the rocket’s cryogenic fuels are fantastically cold, and the launch pad in Florida is usually warm and humid, so ice forms on the rocket’s outer skin over the liquid oxygen and liquid hydrogen tanks as it sits on the pad), I reference the launch scene in the movie. If I’m explaining the transposition and docking maneuver by which the Apollo command module docked with and extracted the lunar module from its little garage, I reference the T&D scene in the movie.

Questions about breathing and carbon dioxide? Movie scene. The well-known tension between the astronaut corps and the flight surgeons? Movie scene. And the list goes on. It’s the most amazing reference material I could possibly have.

The film has its detractors, of course, and most geeks wanting to take issue with it will fire shots at the film’s historical accuracy. (Apollo EECOM Sy Liebergot, played in the film by director Ron Howard’s brother Clint, griped once to me that the movie had the audacity to depict the Apollo spacecraft’s trans-lunar injection burn as occurring with the Moon visible in the windows instead of on the far side of the planet—an apparently unforgivable astronavigational sin.) The movie amps up the drama in all respects, adds dialog no astronaut or controller would say, mashes people together into composite characters, compresses or expands the timelines of many of the events in the mission, shows many of those same events happening out of order, and puts people (like Gary Sinise’s Ken Mattingly) in places and roles they were never in.

All these things are true—but they’re also necessary additions in order to get one’s arms around a messy historical event (an event, like all events, that was basically just a whole bunch of stuff all happening at the same time) and fit it into a three-act structure that preserves the important things and that non-technical non-astronaut audiences can follow and understand. And the film succeeds brilliantly, telling a tale that both honors the historicity and technical details of the mission, and that also continues to function as a powerful interpretive tool that teaches people even 30 years after release.

Is every button pressed in the right way? No. Does it bug the crap out of me every time Kevin Bacon answers Tom Hanks’ “How’s the alignment?” question by nonsensically saying “GDC align” and pressing the GDC align button, which is neither what Lovell was asking nor the proper procedure to get the answer Lovell was looking for? Yes. But it’s also pure competence porn—an amazing love letter to the space program and the 400,000 men and women who put humans on the Moon.

And like Lovell says: “It’s not a miracle. We just decided to go.”

Lee Hutchinson


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



A neural brain implant provides near instantaneous speech


Focusing on sound production instead of word choice makes for a flexible system.

The participant’s implant gets hooked up for testing. Credit: UC Regents

Stephen Hawking, a British physicist and arguably the most famous man suffering from amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. Once he typed a full sentence at a rate of roughly one word per minute, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic, robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer-interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems had significant latency, often limited the user to a predefined vocabulary, and did not handle nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis ticking all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And they had quite a lot of problems.

The first issue was moving beyond text. Most successful neural prostheses developed so far have translated brain signals into text: the words a patient with an implanted prosthesis wanted to say simply appeared on a screen. Francis R. Willett led a team at Stanford University that achieved brain-to-text translation with around a 25 percent error rate. “When a woman with ALS was trying to speak, they could decode the words. Three out of four words were correct. That was super exciting but not enough for daily communication,” says Sergey Stavisky, a neuroscientist at UC Davis and a senior author of the study.

Delays and dictionaries

One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.

In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient finished stringing the words together in their mind. The speech synthesis part usually happened after the text was ready, which caused even more delay. Brain-to-text solutions also suffered from a limited vocabulary. The latest system of this kind supported a dictionary of roughly 1,300 words. When you tried to speak a different language, use more elaborate vocabulary, or even say the unusual name of a café just around the corner, the systems failed.

So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words—and do it in real time.

Extracting sound

The patient who agreed to participate in Wairagkar’s study was codenamed T15 and was a 46-year-old man suffering from ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” says David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.

To use an early version of Stavisky’s brain-to-text system, the patient had 256 microelectrodes implanted into his ventral precentral gyrus, an area of the brain responsible for controlling vocal tract muscles.

For the new brain-to-speech system, Wairagkar and her colleagues relied on the same 256 electrodes. “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing. In the next step, these features were fed into a vocoder, a speech synthesizing algorithm designed to sound like the voice that T15 had when he was still able to speak normally. The entire system worked with latency down to around 10 milliseconds—the conversion of brain signals into sounds was effectively instantaneous.
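
A minimal sketch of that kind of streaming loop appears below. The function and model interfaces (decoder.predict, vocoder.generate, play) are hypothetical stand-ins rather than the architectures described in the paper; the point is simply that each ~10 ms frame of neural data is decoded and synthesized immediately instead of waiting for a finished sentence of text:

```python
FRAME_MS = 10  # process neural activity in ~10 ms frames for near-instant audio

def decode_frame(neural_frame, decoder):
    """Map one frame of multi-electrode activity to acoustic features such as
    pitch, voicing, and spectral envelope (hypothetical decoder interface)."""
    return decoder.predict(neural_frame)

def synthesize_frame(features, vocoder):
    """Turn acoustic features into a short audio chunk using a vocoder trained
    to resemble the participant's pre-ALS voice (hypothetical interface)."""
    return vocoder.generate(features)

def stream_speech(electrode_stream, decoder, vocoder, play):
    """Decode and synthesize frame by frame instead of sentence by sentence."""
    for neural_frame in electrode_stream:   # one frame arrives every FRAME_MS
        features = decode_frame(neural_frame, decoder)
        audio_chunk = synthesize_frame(features, vocoder)
        play(audio_chunk)                   # output lags input by roughly FRAME_MS
```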

Because Wairagkar’s neural prosthesis converted brain signals into sounds, it didn’t come with a limited selection of supported words. The patient could say anything he wanted, including pseudo-words that weren’t in a dictionary and interjections like “um,” “hmm,” or “uh.” Because the system was sensitive to features like pitch or prosody, he could also vocalize questions saying the last word in a sentence with a slightly higher pitch and even sing a short melody.

But Wairagkar’s prosthesis had its limits.

Intelligibility improvements

To test the prosthesis’s performance, Wairagkar’s team first asked human listeners to match a recording of some synthesized speech by the T15 patient with one transcript from a set of six candidate sentences of similar length. Here, the results were perfect, with the system achieving 100 percent intelligibility.

The issues began when the team tried something a bit harder: an open transcription test where listeners had to work without any candidate transcripts. In this second test, the word error rate was 43.75 percent, meaning participants identified a bit more than half of the recorded words correctly. This was certainly an improvement over the intelligibility of T15’s unaided speech, where the word error rate in the same test with the same group of listeners was 96.43 percent. But the prosthesis, while promising, was not yet reliable enough to use for day-to-day communication.
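
For readers unfamiliar with the metric: word error rate is the standard speech-recognition measure, the minimum number of word substitutions, insertions, and deletions needed to turn what a listener wrote down into the reference sentence, divided by the number of words in the reference. A minimal implementation (ours, for illustration) looks like this:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("we went to the moon", "we want to the moon"))  # 0.2
```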

“We’re not at the point where it could be used in open-ended conversations. I think of this as a proof of concept,” Stavisky says. He suggested that one way to improve future designs would be to use more electrodes. “There are a lot of startups right now building BCIs that are going to have over a thousand electrodes. If you think about what we’ve achieved with just 250 electrodes versus what could be done with a thousand or two thousand—I think it would just work,” he argued. And the work to make that happen is already underway.

Paradromics, a BCI-focused startup based in Austin, Texas, wants to go ahead with clinical trials of a speech neural prosthesis and is already seeking FDA approval. “They have a 1,600 electrode system, and they publicly stated they are going to do speech,” Stavisky says. “David Brandman, our co-author, is going to be the lead principal investigator for these trials, and we’re going to do it here at UC Davis.”

Nature, 2025.  DOI: 10.1038/s41586-025-09127-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



In a wild time for copyright law, the US Copyright Office has no leader


Rudderless Copyright Office has taken on new prominence during the AI boom.

It’s a tumultuous time for copyright in the United States, with dozens of potentially economy-shaking AI copyright lawsuits winding through the courts. It’s also the most turbulent moment in the US Copyright Office’s history. Described as “sleepy” in the past, the Copyright Office has taken on new prominence during the AI boom, issuing key rulings about AI and copyright. It also hasn’t had a leader in more than a month.

In May, Register of Copyrights Shira Perlmutter was abruptly fired by email by the White House’s deputy director of personnel. Perlmutter is now suing the Trump administration, alleging that her firing was invalid; the government maintains that the executive branch has the authority to dismiss her. As the legality of the ouster is debated, the reality within the office is this: There’s effectively nobody in charge. And without a leader actually showing up at work, the Copyright Office is not totally business-as-usual; in fact, there’s debate over whether the copyright certificates it’s issuing could be challenged.

The firing followed a pattern. The USCO is part of the Library of Congress; Perlmutter had been appointed to her role by Librarian of Congress Carla Hayden. A few days before Perlmutter’s dismissal, Hayden, who had been in her role since 2016, was also fired by the White House via email. The White House appointed Deputy Attorney General Todd Blanche, who had previously served as President Trump’s defense attorney, as the new acting Librarian of Congress.

Two days after Perlmutter’s firing, Justice Department official Paul Perkins showed up at the Copyright Office, along with his colleague Brian Nieves. According to an affidavit from Perlmutter, they were carrying “printed versions of emails” from Blanche indicating that they had been appointed to new roles within the Copyright Office. Perkins, the email said, was designated as Acting Register of Copyrights. In other words, he was Perlmutter’s replacement.

But was Blanche actually the acting Librarian, and thus able to appoint Perkins as such? Within the Library of Congress, someone else had already assumed the role—Robert Newlen, Hayden’s former second-in-command, who has worked at the LOC since the 1970s. Following Hayden’s ouster, Newlen emailed LOC staff asserting that he was the acting Librarian—never mentioning Blanche—and noting that “Congress is engaged with the White House” on how to proceed.

In her lawsuit, Perlmutter argues that only the Librarian of Congress can fire and appoint a new Register. In a filing on Tuesday, defendants argued that the president does indeed have the authority to fire and appoint the Librarian of Congress and that his appointees then have the ability to choose a new Copyright Register.

Neither the Department of Justice nor the White House responded to requests for comment on this issue; the Library of Congress declined to comment.

Perkins and Nieves did not enter the USCO office or assume the roles they purported to fill the day they showed up. And since they left, sources within the Library of Congress tell WIRED, they have never returned, nor have they assumed any of the duties associated with the roles. These sources say that Congress is in talks with the White House to reach an agreement over these personnel disputes.

A congressional aide familiar with the situation told WIRED that Blanche, Perkins, and Nieves had not shown up for work “because they don’t have jobs to show up to.” The aide continued: “As we’ve always maintained, the President has no authority to appoint them. Robert Newlen has always been the Acting Librarian of Congress.”

If talks are happening, they remain out of public view. But Perlmutter does have some members of Congress openly on her side. “The president has no authority to remove the Register of Copyrights. That power lies solely with the Librarian of Congress. I’m relieved that the situation at the Library and Copyright Office has stabilized following the administration’s unconstitutional attempt to seize control for the executive branch. I look forward to quickly resolving this matter in a bipartisan way,” Senator Alex Padilla tells WIRED in a statement.

In the meantime, the Copyright Office is in the odd position of attempting to carry on as though it wasn’t missing its head. Immediately after Perlmutter’s dismissal, the Copyright Office paused issuing registration certificates “out of an abundance of caution,” according to USCO spokesperson Lisa Berardi Marflak, who says the pause impacted around 20,000 registrations. It resumed activities on May 29 but is now sending out registration certificates with a blank spot where Perlmutter’s signature would ordinarily be.

This unusual change has prompted discussion amongst copyright experts as to whether the registrations are now more vulnerable to legal challenges. The Copyright Office maintains that they are valid: “There is no requirement that the Register’s signature must appear on registration certificates,” says Berardi Marflak.

In a motion related to her lawsuit, though, Perlmutter alleges that sending out the registrations without a signature opens them up to “challenges in litigation,” something outside copyright experts have also pointed out. “It’s true the law doesn’t explicitly require a signature,” IP lawyer Rachael Dickson says. “However, the law really explicitly says that it’s the Register of Copyright determining whether the material submitted for the application is copyrightable subject matter.”

Without anyone acting as Register, Dickson thinks it would be reasonable to argue that the statutory requirements are not being met. “If you take them completely out of the equation, you have a really big problem,” she says. “Litigators who are trying to challenge a copyright registration’s validity will jump on this.”

Perlmutter’s lawyers have argued that leaving the Copyright Office without an active boss will cause dysfunction beyond the registration certificate issue, as the Register performs a variety of tasks, from advising Congress on copyright to recertifying organizations like the Mechanical Licensing Collective (MLC), the nonprofit in charge of administering royalties for streaming and downloaded music in the United States. Since the MLC’s certification is currently up for renewal, Perlmutter would ordinarily be moving forward with recertifying the organization; as her lawsuit notes, that process has stalled.

The MLC may not be as impacted by Perlmutter’s absence as the complaint suggests. A source close to the MLC told WIRED that the organization does indeed need to be recertified but that the law doesn’t require the recertification process to be completed within a specific time frame, so it will be able to continue operating as usual.

Still, there are other ways in which the lack of a boss is a clear liability. The Copyright Claims Board, a three-person tribunal that resolves some copyright disputes, needs to replace one of its members this year, as a current board member, who did not reply to a request for comment, is leaving. The job posting is already live and says applications are being reviewed, but because the replacement is supposed to be appointed by the Librarian of Congress with the guidance of the Copyright Register, it’s unclear exactly how the position will be filled. A source familiar with the situation at the Library of Congress tells WIRED that Newlen could make the appointment if necessary, but they “expect there to be some kind of greater resolution by then.”

As they wait for the resolution, it remains an especially inopportune time for a headless Copyright Office. Perlmutter was fired just days after the office released a hotly contested report on generative AI training and fair use. That report has already been heavily cited in a new class action lawsuit against AI tools Suno and Udio, even though it was technically a “prepublication” version and not finalized. But everyone looking to see what a final report will say—or what guidance the office will issue next—can only keep waiting.

This story originally appeared on wired.com.

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

In a wild time for copyright law, the US Copyright Office has no leader Read More »

robotic-sucker-can-adapt-to-surroundings-like-an-actual-octopus

Robotic sucker can adapt to surroundings like an actual octopus

This isn’t the first time suction cups were inspired by highly adaptive octopus suckers. Some models have used pressurized chambers meant to push against a surface and conform to it. Others have focused more on matching the morphology of a biological sucker. This has included giving the suckers microdenticles, the tiny tooth-like projections on octopus suckers that give them a stronger grip.

Previous methods of artificial conformation have had some success, but they could be prone to leakage from gaps between the sucker and the surface it is trying to stick to, and they often needed vacuum pumps to operate. Yue and his team created a sucker that was morphologically and mechanically similar to that of an octopus.

Suckers are muscular structures with an extreme flexibility that helps them conform to objects without leakage, contract when gripping objects, and release tension when letting them go. This inspired the researchers to create suckers from a silicone sponge material on the inside and a soft silicone pad on the outside.

For the ultimate biomimicry, Yue thought that the answer to the problems experienced with previous models was to come up with a sucker that simulated the mucus secretion of octopus suckers.

This really sucks

Cephalopod suction was previously thought to be a product of these creatures’ soft, flexible bodies, which can deform easily to adapt to whatever surface they need to grip. Mucus secretion was mostly overlooked until Yue decided to incorporate it into his robo-suckers.

Mollusk mucus is known to be five times more viscous than water. For Yue’s suckers, an artificial fluidic system, designed to mimic the secretions released by glands on a biological sucker, creates a liquid seal between the sucker and the surface it is adhering to, all but eliminating gaps. The water-based seal might not have the strength of octopus slime, but water is the next best option for a robot that is going to be immersed in water when it goes exploring, possibly in underwater caves or at the bottom of the ocean.

Robotic sucker can adapt to surroundings like an actual octopus Read More »

scotus-upholds-part-of-aca-that-makes-preventive-care-fully-covered

SCOTUS upholds part of ACA that makes preventive care fully covered

The USPSTF is made up of 16 medical experts who carefully review scientific data and run models to assess which preventive health interventions work best, assigning each a letter grade that runs from A and B (recommended) down to D (recommended against), with an I statement when evidence is insufficient. Any recommendation graded A or B by the task force must, under the ACA, be covered by health plans at no cost to patients.

The US health department argued that the task force members are, in fact, appointed, and under control of the health secretary, a role currently filled by anti-vaccine advocate Robert F. Kennedy Jr.

Two lower courts in Texas sided with the Christian group, saying that the government violated the appointments clause.

But today, in a 6–3 ruling, the Supreme Court disagreed. Chief Justice John Roberts and Justices Amy Coney Barrett, Brett Kavanaugh, Elena Kagan, Ketanji Brown Jackson, and Sonia Sotomayor made up the majority.

Writing on their behalf, Kavanaugh explained: “Task Force members are supervised and directed by the Secretary, who in turn answers to the President, preserving the chain of command in Article II.”

While the decision means that coverage of preventive health care is no longer under threat, it also clarifies that the health secretary has direct authority over the USPSTF. That clarification raises concern that the current secretary, Kennedy, could remove task force members or undo recommendations to suit his personal ideology, as he is now doing with the vaccine advisory board at the Centers for Disease Control and Prevention.

SCOTUS upholds part of ACA that makes preventive care fully covered Read More »

man-eats-dubious-street-food—ends-up-blowing-apart-his-gi-tract

Man eats dubious street food—ends up blowing apart his GI tract

Bilious blowout

Doctors noted that his breathing was fast and shallow, with crackling in his neck, but breathing sounds from the base of his right lung were quiet. A computed tomography (CT) scan revealed the problems: air had leaked into his chest cavity and up into his neck, fluid was building up around his lungs, and his right lung was collapsing. The scan also showed a perforation in his esophagus.

The doctors inserted a chest tube to remove the fluid, which did not include gastric contents, suggesting the fluid build-up was from chest inflammation.

The doctors then did an additional X-ray exam of the esophagus using a water-soluble contrast agent. This clearly revealed a large gash in the man’s esophagus resulting from the robust eruption. The imaging also showed the contrast agent leaking out into the man’s chest.

The doctors quickly sent the man into emergency surgery to repair his esophagus. He spent the next 35 days in the hospital recovering. When he was discharged, he still had a feeding tube that passed through his nose and into his small intestine. It took an additional three months for the perforation to completely heal, at which point doctors could finally remove the feeding tube.

It’s not entirely clear what causes Boerhaave syndrome. Researchers hypothesize that it occurs from a loss of neuromuscular coordination, which, in particular, causes the upper sphincter in the esophagus—the cricopharyngeus—to fail to relax at the onset of vomiting. The rapid rise of internal pressure overwhelms the esophagus, typically causing a lengthwise tear in the lower third of the tube, which is the weakest portion. On average, tears can be up to 8 centimeters (about 3 inches) long.

Though researchers expect that cases are underreported, the estimated incidence based on reports is about three cases per million people globally each year.
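For a rough sense of scale, the minimal sketch below converts that reported incidence into an approximate global case count. The world-population figure is an assumption added for illustration, not something from the case report, and given the suspected underreporting, the true number is presumably higher.

```python
# Back-of-the-envelope scale check for the incidence figure above.
# Assumption (not from the case report): a world population of roughly 8.1 billion.
incidence_per_million = 3           # reported Boerhaave syndrome cases per million people per year
world_population = 8_100_000_000    # assumed global population

cases_per_year = incidence_per_million * world_population / 1_000_000
print(f"~{cases_per_year:,.0f} reported cases worldwide per year")  # ~24,300
```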

Man eats dubious street food—ends up blowing apart his GI tract Read More »

judge:-pirate-libraries-may-have-profited-from-meta-torrenting-80tb-of-books

Judge: Pirate libraries may have profited from Meta torrenting 80TB of books

It could certainly look worse for Meta if authors manage to present evidence supporting the second way that torrenting could be relevant to the case, Chhabria suggested.

“Meta downloading copyrighted material from shadow libraries” would also be relevant to the character of the use, “if it benefitted those who created the libraries and thus supported and perpetuated their unauthorized copying and distribution of copyrighted works,” Chhabria wrote.

Counting potential strikes against Meta, Chhabria pointed out that the “vast majority of cases” involving “this sort of peer-to-peer file-sharing” are found to “constitute copyright infringement.” And it likely doesn’t help Meta’s case that “some of the libraries Meta used have themselves been found liable for infringement.”

However, Meta may overcome this argument, too, since book authors “have not submitted any evidence” showing that Meta’s downloading might be “propping up” or financially benefiting pirate libraries.

Finally, Chhabria noted that the “last issue relating to the character of Meta’s use” of books with regard to its torrenting is “the relationship between Meta’s downloading of the plaintiffs’ books and Meta’s use of the books to train Llama.”

Authors had tried to argue that these elements were distinct. But Chhabria said there’s no separating the fact that Meta downloaded the books to serve the “highly transformative” purpose of training Llama.

“Because Meta’s ultimate use of the plaintiffs’ books was transformative, so too was Meta’s downloading of those books,” Chhabria wrote.

AI training rulings may get more authors paid

Authors only learned of Meta’s torrenting through discovery in the lawsuit, and because of that, Chhabria noted that “the record on Meta’s alleged distribution is incomplete.”

It’s possible that authors may be able to show evidence that Meta “contributed to the BitTorrent network” by providing significant computing power that could’ve meaningfully assisted shadow libraries, Chhabria said in a footnote.

Judge: Pirate libraries may have profited from Meta torrenting 80TB of books Read More »