Author name: Kris Guyer

OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

Adventures in optics —

As Altman cements power, OpenAI announces three new board members—and a returning one.

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco.

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company’s board of directors and released the results of an independent review of the events surrounding CEO Sam Altman’s surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.

The newly appointed board members are Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former EVP and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart. These additions notably bring three women to the board after OpenAI drew criticism about its restructured board composition last year.

The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman’s abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. “WilmerHale… found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Instead, the review determined that the prior board’s actions stemmed from a breakdown in trust between the board and Altman.

After reportedly interviewing dozens of people and reviewing over 30,000 documents, WilmerHale found that while the prior board acted within its purview, Altman’s termination was unwarranted. “WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman,” OpenAI wrote, “but also found that his conduct did not mandate removal.”

Additionally, the law firm found that the decision to fire Altman was made in undue haste: “The prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns.”

Altman’s surprise firing occurred after he attempted to remove Helen Toner from OpenAI’s board due to disagreements over her criticism of OpenAI’s approach to AI safety and hype. Some board members saw his actions as deceptive and manipulative. After Altman returned to OpenAI, Toner resigned from the OpenAI board on November 29.

In a statement posted on X, Altman wrote, “i learned a lot from this experience. one think [sic] i’ll say now: when i believed a former board member was harming openai through some of their actions, i should have handled that situation with more grace and care. i apologize for this, and i wish i had done it differently.”

A tweet from Sam Altman posted on March 8, 2024.

Following the review’s findings, the Special Committee of the OpenAI Board recommended endorsing the November 21 decision to rehire Altman and Brockman. The board also announced several enhancements to its governance structure, including new corporate governance guidelines, a strengthened Conflict of Interest Policy, a whistleblower hotline, and additional board committees focused on advancing OpenAI’s mission.

After OpenAI’s announcements on Friday, resigned OpenAI board members Toner and Tasha McCauley released a joint statement on X. “Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI,” they wrote. “We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.”

Apple and Tesla feel the pain as China opts for homegrown products

Domestically made smartphones were much in evidence at the National People’s Congress in Beijing

Wang Zhao/AFP/Getty Images

Apple and Tesla cracked China, but now the two largest US consumer companies in the country are experiencing cracks in their own strategies as domestic rivals gain ground and patriotic buying often trumps their allure.

Falling market share and sales figures reported this month indicate the two groups face rising competition and the whiplash of US-China geopolitical tensions. Both have turned to discounting to try to maintain their appeal.

A shift away from Apple, in particular, has been sharp, spurred on by a top-down campaign to reduce iPhone usage among state employees and the triumphant return of Chinese national champion Huawei, which last year overcame US sanctions to roll out a homegrown smartphone capable of near 5G speeds.

Apple’s troubles were on full display at China’s annual Communist Party bash in Beijing this month, where a dozen participants told the Financial Times they were using phones from Chinese brands.

“For people coming here, they encourage us to use domestic phones, because phones like Apple are not safe,” said Zhan Wenlong, a nuclear physicist and party delegate. “[Apple phones] are made in China, but we don’t know if the chips have back doors.”

Wang Chunru, a member of China’s top political advisory body, the Chinese People’s Political Consultative Conference, said he was using a Huawei device. “We all know Apple has eavesdropping capabilities,” he said.

Delegate Li Yanfeng from Guangxi said her phone was manufactured by Huawei. “I trust domestic brands, using them was a uniform request.”

Outside of the US, China is both Apple and Tesla’s single-largest market, respectively contributing 19 percent and 22 percent of total revenues during their most recent fiscal years. Their mounting challenges in the country have caught Wall Street’s attention, contributing to Apple’s 9 percent share price slide this year and Tesla’s 28 percent fall, making them the poorest performers among the so-called Magnificent Seven tech stocks.

Apple and Tesla are the latest foreign companies to feel the pain of China’s shift toward local brands. Sales of Nike and Adidas clothing have yet to return to their 2021 peak. A recent McKinsey report showed a growing preference among Chinese consumers for local brands.

Here’s Porsche’s newest EV, the 1,000-horsepower Taycan Turbo GT

Turbo, Turbo S, Turboest —

The new EV has set lap records at Laguna Seca and the Nürburgring.

Porsche’s midlife refresh for the Taycan arrives this summer. Among the changes is a new model called the Taycan Turbo GT.

Porsche

Porsche is adding another new variant to its highly competent Taycan electric sedan. As part of the electric Porsche’s midlife refresh, there’s a new Taycan that sits atop the performance tree, appropriately named the Taycan Turbo GT. It’s the most powerful Taycan you can buy, with a peak output of 1,092 hp (815 kW), and the boffins in Germany have even cut about 157 lbs (72 kg) from the curb weight.

The eagle-eyed Porsche observer probably knew a faster Taycan was on the way. In January, wearing manufacturer’s camo and described as an “unnamed pre-production prototype,” the Taycan Turbo GT lapped the Nürburgring Nordschleife in 7:05.55, 20 seconds faster than Tesla’s best time with its Model S Plaid (and 26 seconds faster than the Taycan Turbo S, previously the fastest variant).

Now, shorn of its camouflage and proudly bearing the Turbo GT name—as well as sporting the Taycan’s new facelifted look—it has set a new record at Laguna Seca in California. With development driver Lars Kern behind the wheel, the Taycan Turbo GT lapped the 2.2-mile (3.6 km) circuit in 1:27.87, which is a track record for a production EV at that track.

Power output is 777 hp (580 kW) in normal operation, but engage launch control, and that overboosts to 1,019 hp (760 kW), with peak power of 1,092 hp for a couple of seconds. And Porsche’s Formula E program has had some influence here: While driving, you can engage “attack mode,” which boosts power by an additional 160 hp (120 kW) for 10-second bursts. Porsche has set up attack mode so it can be engaged or deactivated by a paddle on the steering wheel for ease of use while on track.

Among the changes under the skin are a new rear drive unit, common to the facelifted Taycans. But for the Turbo GT there’s a new silicon carbide pulse inverter that’s capable of up to 900 V, rather than the 600 V inverter in the Taycan Turbo S. The two-speed transmission has also been upgraded to cope with the Turbo GT’s peak torque of 988 lb-ft (1,340 Nm).

  • The purple car with the big rear wing is the Turbo GT with the Weissach package.

    Porsche

  • Lars Kern set laps in a pair of Taycan Turbo GTs—the purple car and this black one.

    Porsche

  • The Taycan Turbo GT negotiates the corkscrew.

    Porsche

  • The “purple sky metallic” paint is one of two new colors for the Turbo GT for this year.

    Porsche

  • The Turbo GT interior.

    Porsche

  • A rather happy-looking Lars Kern next to his record-setting rides.

    Porsche

You’ll need to specify the Turbo GT with the Weissach package—named for Porsche’s R&D base near Stuttgart—if you want to save weight as well as add power. The package replaces heavier trim pieces with carbon fiber, fits thinner glass and a lighter sound system, and ditches the automatic closing of the rear trunk, one of the two charge ports, and even the rear speakers in order to save the aforementioned 157 lbs.

The Weissach package also cuts the 0-60 mph time from 2.2 seconds down to 2.1 seconds and allows for a 190 mph (306 km/h) top speed instead of topping out at 180 mph (290 km/h).

“The track at Laguna Seca pushes the Taycan Turbo GT to the limit. It’s the overall package that makes the difference,” said Kern. “The Turbo GT with Weissach package sets new standards in almost every metric. These include acceleration and braking, an “Attack Mode” that’s intuitive to use, and a powertrain designed for maximum traction and performance. And the cornering grip levels are just as impressive. The controllability and light-footedness are unbelievable. The tires work very well, and you have the right balance in every driving situation. It is incredibly good fun to drive this car around the undulating track at Laguna Seca.”

None of this comes cheap, however. The 2025 Taycan Turbo GT will start at $230,000 when it arrives in showrooms this summer.

Op-ed: Charges against journalist Tim Burke are a hack job

Permission required? —

Burke was indicted after sharing outtakes of a Fox News interview.

Caitlin Vogus is the deputy director of advocacy at Freedom of the Press Foundation and a First Amendment lawyer. Jennifer Stisa Granick is the surveillance and cybersecurity counsel with the ACLU’s Speech, Privacy, and Technology Project. The opinions in this piece do not necessarily reflect the views of Ars Technica.

Imagine a journalist finds a folder on a park bench, opens it, and sees a telephone number inside. She dials the number. A famous rapper answers and spews a racist rant. If no one gave her permission to open the folder and the rapper’s telephone number was unlisted, should the reporter go to jail for publishing what she heard?

If that sounds ridiculous, it’s because it is. And yet, add in a computer and the Internet, and that’s basically what a newly unsealed federal indictment accuses Florida journalist Tim Burke of doing when he found and disseminated outtakes of Tucker Carlson’s Fox News interview with Ye, the artist formerly known as Kanye West, in which Ye went on the first of many antisemitic diatribes.

The vast majority of the charges against Burke are under the Computer Fraud and Abuse Act (CFAA), a law that the ACLU and Freedom of the Press Foundation have long argued is vague and subject to abuse. Now, in a new and troubling move, the government suggests in the Burke indictment that journalists violate the CFAA if they don’t ask for permission to use information they find publicly posted on the Internet.

According to news reports and statements from Burke’s lawyer, the charges are, in part, related to the unaired segments of the interview between Carlson and Ye. After Burke gave the video to news sites to publish, Ye’s disturbing remarks, and Fox’s decision to edit them out of the interview when broadcast, quickly made national news.

According to Burke, the video of Carlson’s interview with Ye was streamed via a publicly available, unencrypted URL that anyone could access by typing the address into a browser. Those URLs were not listed in any search engine, but Burke says that a source pointed him to a website on the Internet Archive where a radio station had posted “demo credentials” that gave access to a page where the URLs were listed.

The credentials were for a webpage created by LiveU, a company that provides video streaming services to broadcasters. Using the demo username and password, Burke logged into the website, and, Burke’s lawyer claims, the list of URLs for video streams automatically downloaded to his computer.

And that, the government says, is a crime. It charges Burke with violating the CFAA’s prohibition on intentionally accessing a computer “without authorization” because he accessed the LiveU website and URLs without having been authorized by Fox or LiveU. In other words, because Burke didn’t ask Fox or LiveU for permission to use the demo account or view the URLs, the indictment alleges, he acted without authorization.

But there’s a difference between LiveU and Fox’s subjective wishes about what journalists or others would find, and what the services and websites they maintained and used permitted people to find. The relevant question should be the latter. Generally, it is both a First Amendment and a due process problem to allow a private party’s desire to control information to form the basis of criminal prosecutions.

The CFAA charges against Burke take advantage of the vagueness of the statutory term “without authorization.” The law doesn’t define the term, and its murkiness has enabled plenty of ill-advised prosecutions over the years. In Burke’s case, because the list of unencrypted URLs was password protected and the company didn’t want outsiders to access the URLs, the government claims that Burke acted “without authorization.”

Using a published demo password to get a list of URLs, which anyone could have used a software program to guess and access, isn’t that big of a deal. What was a big deal is that Burke’s research embarrassed Fox News. But that’s what journalists are supposed to do—uncover questionable practices of powerful entities.

Journalists need never ask corporations for permission to investigate or embarrass them, and the law shouldn’t encourage or force them to. Just because someone doesn’t like what a reporter does online doesn’t mean that it’s without authorization and that what he did is therefore a crime.

Still, this isn’t the first time that prosecutors have abused computer hacking laws to go after journalists and others, like security researchers. Until a 2021 Supreme Court ruling, researchers and journalists worried that their good faith investigations of algorithmic discrimination could expose them to CFAA liability for exceeding sites’ terms of service.

Even now, the CFAA and similarly vague state computer crime laws continue to threaten press freedom. Just last year, in August, police raided the newsroom of the Marion County Record and accused its journalists of breaking state computer hacking laws by using a government website to confirm a tip from a source. Police dropped the case after a national outcry.

The White House seemed concerned about the Marion ordeal. But now the same administration is using an overly broad interpretation of a hacking law to target a journalist. Merely filing charges against Burke sends a chilling message that the government will attempt to penalize journalists for engaging in investigative reporting it dislikes.

Even worse, if the Burke prosecution succeeds, it will encourage the powerful to use the CFAA as a veto over news reporting based on online sources just because it is embarrassing or exposes their wrongdoing. These charges were also an excuse for the government to seize Burke’s computer equipment and digital work—and demand to keep it permanently. This seizure interferes with Burke’s ongoing reporting, a tactic the government could repeat in other investigations.

If journalists must seek permission to publish information they find online from the very people they’re exposing, as the government’s indictment of Burke suggests, it’s a good bet that most information from the obscure but public corners of the Internet will never see the light of day. That would endanger both journalism and public access to important truths. The court reviewing Burke’s case should dismiss the charges.

Shields up: New ideas might make active shielding viable

Aurich Lawson | Getty Images | NASA

On October 19, 1989, at 12:29 UT, a monstrous X13-class solar flare triggered a geomagnetic storm so strong that auroras lit up the skies in Japan, America, Australia, and even Germany the following day. Had you been flying around the Moon at that time, you would have absorbed well over 6 sieverts of radiation—a dose that would most likely kill you within a month or so.

This is why the Orion spacecraft that is supposed to take humans on a Moon fly-by mission this year has a heavily shielded storm shelter for the crew. But shelters like that aren’t sufficient for a flight to Mars—Orion’s shield is designed for a 30-day mission.

To obtain protection comparable to what we enjoy on Earth would require hundreds of tons of material, and that’s simply not possible in orbit. The primary alternative—using active shields that deflect charged particles just like the Earth’s magnetic field does—was first proposed in the 1960s. Today, we’re finally close to making it work.

Deep-space radiation

Space radiation comes in two different flavors. Solar events like flares or coronal mass ejections can cause very high fluxes of charged particles (mostly protons). They’re nasty when you have no shelter but are relatively easy to shield against, since solar protons are mostly low energy. The majority of the flux in solar particle events falls between 30 and 100 mega-electronvolts (MeV) and could be stopped by Orion-like shelters.

Then there are galactic cosmic rays: particles coming from outside the Solar System, set in motion by faraway supernovas or neutron stars. These are relatively rare but are coming at you all the time from all directions. They also have high energies, starting at 200 MeV and going up to several giga-electronvolts (GeV), which makes them extremely penetrating. Even thick masses of material don’t provide much shielding against them. When high-energy cosmic ray particles hit thin shields, they produce showers of many lower-energy particles—you’d be better off with no shield at all.

The particles with energies between 70 MeV and 500 MeV are responsible for 95 percent of the radiation dose that astronauts get in space. On short flights, solar storms are the main concern because they can be quite violent and do lots of damage very quickly. The longer you fly, though, the more GCRs become an issue, because their dose accumulates over time and they can go through pretty much everything we try to put in their way.

What keeps us safe at home

The reason nearly none of this radiation can reach us is that Earth has a natural, multi-stage shielding system. It begins with its magnetic field, which deflects most of the incoming particles toward the poles. A charged particle in a magnetic field follows a curve—the stronger the field, the tighter the curve. Earth’s magnetic field is very weak and barely bends incoming particles, but it is huge, extending thousands of kilometers into space.
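The curve described here is the particle’s gyroradius, r = p/(qB): relativistic momentum divided by charge times field strength. A quick sketch with illustrative numbers—the 1 GeV proton energy and 30-microtesla field are assumptions for the example, not figures from the article:

```python
import math

# Relativistic gyroradius r = p / (qB) for a proton in a magnetic field.
# Illustrative inputs, not from the article: a 1 GeV cosmic-ray proton
# in a ~30 microtesla field (roughly Earth's surface field strength).

M_P_MEV = 938.272       # proton rest energy, MeV
E_CHARGE = 1.602e-19    # elementary charge, C
MEV_TO_J = 1.602e-13    # MeV -> joules
C = 2.998e8             # speed of light, m/s

def gyroradius_m(kinetic_mev: float, b_tesla: float) -> float:
    """Radius of the circular path of a proton in a uniform field."""
    total = kinetic_mev + M_P_MEV              # total energy, MeV
    pc = math.sqrt(total**2 - M_P_MEV**2)      # momentum times c, MeV
    p = pc * MEV_TO_J / C                      # momentum, kg m/s
    return p / (E_CHARGE * b_tesla)

r = gyroradius_m(1000.0, 30e-6)
print(f"1 GeV proton, 30 uT field: gyroradius ~ {r / 1000:.0f} km")
```

The result is on the order of a couple of hundred kilometers—tiny next to a magnetosphere extending thousands of kilometers, which is why even Earth’s weak field has room to bend incoming particles away.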

Anything that makes it through the magnetic field runs into the atmosphere, which, when it comes to shielding, is the equivalent of an aluminum wall that’s 3 meters thick. Finally, there is the planet itself, which essentially cuts the radiation in half since you always have 6.5 billion trillion tons of rock shielding you from the bottom.

To put that in perspective, the Apollo crew module had on average 5 grams of mass per square centimeter standing between the crew and radiation. A typical ISS module has twice that, about 10 g/cm2. The Orion shelter has 35–45 g/cm2, depending on exactly where you sit, and it weighs 36 tons. On Earth, the atmosphere alone gives you 810 g/cm2—roughly 20 times more than our best-shielded spaceships.
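These areal densities are easy to sanity-check: areal density is just thickness times material density, which is how the atmosphere works out to the equivalent of a 3-meter aluminum wall. A minimal sketch, where aluminum’s density of 2.70 g/cm³ is the only number not taken from the article:

```python
# Sanity check on the article's shielding numbers: areal density
# (grams per square centimeter) equals thickness times density.
AL_DENSITY = 2.70      # aluminum, g/cm^3 (standard handbook value)
wall_cm = 300          # the "3-meter aluminum wall" equivalent

atmosphere = AL_DENSITY * wall_cm    # 810 g/cm^2, matching the article
orion_max = 45                       # Orion storm shelter, g/cm^2

print(f"atmosphere equivalent: {atmosphere:.0f} g/cm^2")
print(f"ratio vs. Orion shelter: {atmosphere / orion_max:.0f}x")
```

The two figures in the article are consistent: 2.70 g/cm³ times 300 cm gives exactly 810 g/cm², about 18 times the thickest part of the Orion shelter.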

The two options are to add more mass—which gets expensive quickly—or to shorten the length of the mission, which isn’t always possible. So solving radiation with passive mass won’t cut it for longer missions, even using the best shielding materials like polyethylene or water. This is why making a miniaturized, portable version of the Earth’s magnetic field was on the table from the first days of space exploration. Unfortunately, we discovered it was far easier said than done.

Florida middle-schoolers charged with making deepfake nudes of classmates

no consent —

AI tool was used to create nudes of 12- to 13-year-old classmates.

Jacqui VanLiew; Getty Images

Two teenage boys from Miami, Florida, were arrested in December for allegedly creating and sharing AI-generated nude images of male and female classmates without consent, according to police reports obtained by WIRED via public record request.

The arrest reports say the boys, aged 13 and 14, created the images of the students who were “between the ages of 12 and 13.”

The Florida case appears to mark the first arrests and criminal charges to come to light as a result of the alleged sharing of AI-generated nude images. The boys were charged with third-degree felonies—the same level of crime as grand theft auto or false imprisonment—under a state law passed in 2022 that makes it a felony to share “any altered sexual depiction” of a person without their consent.

The parent of one of the boys arrested did not respond to a request for comment in time for publication. The parent of the other boy said that he had “no comment.” The detective assigned to the case and the state attorney handling it did not respond to requests for comment in time for publication.

As AI image-making tools have become more widely available, there have been several high-profile incidents in which minors allegedly created AI-generated nude images of classmates and shared them without consent. No arrests have been disclosed in the publicly reported cases—at Issaquah High School in Washington, Westfield High School in New Jersey, and Beverly Vista Middle School in California—even though police reports were filed. At Issaquah High School, police opted not to press charges.

The first media reports of the Florida case appeared in December, saying that the two boys were suspended from Pinecrest Cove Academy in Miami for 10 days after school administrators learned of allegations that they created and shared fake nude images without consent. After parents of the victims learned about the incident, several began publicly urging the school to expel the boys.

Nadia Khan-Roberts, the mother of one of the victims, told NBC Miami in December that the incident was traumatizing for all of the families whose children were victimized. “Our daughters do not feel comfortable walking the same hallways with these boys,” she said. “It makes me feel violated, I feel taken advantage [of] and I feel used,” one victim, who asked to remain anonymous, told the TV station.

WIRED obtained arrest records this week that say the incident was reported to police on December 6, 2023, and that the two boys were arrested on December 22. The records accuse the pair of using “an artificial intelligence application” to make the fake explicit images. The name of the app was not specified, and the reports claim the boys shared the pictures with each other.

“The incident was reported to a school administrator,” the reports say, without specifying who reported it or how that person found out about the images. After the school administrator “obtained copies of the altered images,” the administrator interviewed the victims depicted in them, the reports say, and the victims said they did not consent to the images being created.

After their arrest, the two boys accused of making the images were transported to the Juvenile Service Department “without incident,” the reports say.

A handful of states have laws on the books that target fake, nonconsensual nude images. There’s no federal law targeting the practice, but a group of US senators recently introduced a bill to combat the problem after fake nude images of Taylor Swift were created and distributed widely on X.

The boys were charged under a Florida law passed in 2022 that state legislators designed to curb harassment involving deepfake images made using AI-powered tools.

Stephanie Cagnet Myron, a Florida lawyer who represents victims of nonconsensually shared nude images, tells WIRED that anyone who creates fake nude images of a minor would be in possession of child sexual abuse material, or CSAM. However, she claims it’s likely that the two boys accused of making and sharing the material were not charged with CSAM possession due to their age.

“There’s specifically several crimes that you can charge in a case, and you really have to evaluate what’s the strongest chance of winning, what has the highest likelihood of success, and if you include too many charges, is it just going to confuse the jury?” Cagnet Myron added.

Mary Anne Franks, a professor at the George Washington University School of Law and a lawyer who has studied the problem of nonconsensual explicit imagery, says it’s “odd” that Florida’s revenge porn law, which predates the 2022 statute under which the boys were charged, only makes the offense a misdemeanor, while this situation represented a felony.

“It is really strange to me that you impose heftier penalties for fake nude photos than for real ones,” she says.

Franks adds that although she believes distributing nonconsensual fake explicit images should be a criminal offense, thus creating a deterrent effect, she doesn’t believe offenders should be incarcerated, especially not juveniles.

“The first thing I think about is how young the victims are and worried about the kind of impact on them,” Franks says. “But then [I] also question whether or not throwing the book at kids is actually going to be effective here.”

This story originally appeared on wired.com.

A hunk of junk from the International Space Station hurtles back to Earth

In March 2021, the International Space Station’s robotic arm released a cargo pallet with nine expended batteries.

NASA

A bundle of depleted batteries from the International Space Station careened around Earth for almost three years before falling out of orbit and plunging back into the atmosphere Friday. Most of the trash likely burned up during reentry, but it’s possible some fragments may have reached Earth’s surface intact.

Larger pieces of space junk regularly fall to Earth on unguided trajectories, but they’re usually derelict satellites or spent rocket stages. This object, though, was a pallet of batteries from the space station with a mass of more than 2.6 metric tons (5,800 pounds). NASA intentionally sent the space junk on a path toward an unguided reentry.

Naturally self-cleaning

Sandra Jones, a NASA spokesperson, said the agency “conducted a thorough debris analysis assessment on the pallet and has determined it will harmlessly reenter the Earth’s atmosphere.” This was, by far, the most massive object ever tossed overboard from the International Space Station.

The batteries reentered the atmosphere at 2:29 pm EST (1929 UTC), according to US Space Command. At that time, the pallet would have been flying between Mexico and Cuba. “We do not expect any portion to have survived reentry,” Jones told Ars.

The European Space Agency (ESA) also monitored the trajectory of the battery pallet. In a statement this week, the ESA said the risk of a person being hit by a piece of the pallet was “very low” but said “some parts may reach the ground.” Jonathan McDowell, an astrophysicist who closely tracks spaceflight activity, estimated about 500 kilograms (1,100 pounds) of debris would hit the Earth’s surface.

“The general rule of thumb is that 20 to 40 percent of the mass of a large object will reach the ground, though it depends on the design of the object,” the Aerospace Corporation says.

A dead ESA satellite reentered the atmosphere in a similar uncontrolled manner February 21. At 2.3 metric tons, this satellite was similar in mass to the discarded battery pallet. ESA, which has positioned itself as a global leader in space sustainability, set up a website that provided daily tracking updates on the satellite’s deteriorating orbit.

This map shows the track of the unguided cargo pallet around the Earth over the course of six hours Friday. It reentered the atmosphere near Cuba on a southwest-to-northeast heading.

As NASA and ESA officials have said, the risk of injury or death from a spacecraft reentry is quite low. Falling space debris has never killed anyone. According to ESA, the risk of a person getting hit by a piece of space junk is about 65,000 times lower than the risk of being struck by lightning.

What is unusual here is the type and origin of the space debris, which is why NASA purposely cast it away on an uncontrolled trajectory back to Earth.

The space station’s robotic arm released the battery cargo pallet on March 11, 2021. Since then, the batteries have been adrift in orbit, circling the planet about every 90 minutes. Over a span of months and years, low-Earth orbit is self-cleaning thanks to the influence of aerodynamic drag. The resistance of rarefied air molecules in low-Earth orbit gradually slowed the pallet’s velocity until, finally, gravity pulled it back into the atmosphere Friday.
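The roughly 90-minute orbit follows directly from Kepler’s third law for a circular orbit, T = 2π√(a³/μ). A small sketch, assuming an ISS-like altitude of about 400 km (the article doesn’t give the pallet’s exact altitude):

```python
import math

# Orbital period of a circular low-Earth orbit: T = 2*pi*sqrt(a^3 / mu).
MU_EARTH = 3.986e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean Earth radius, m

def orbital_period_min(altitude_km: float) -> float:
    """Period in minutes for a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km * 1e3    # semi-major axis, m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# ISS-like altitude of ~400 km, an assumption for illustration
print(f"period at 400 km: {orbital_period_min(400):.1f} minutes")
```

At 400 km the period comes out to a little over 92 minutes, consistent with the article’s “about every 90 minutes”; as drag lowers the orbit, the period shrinks further.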

The cargo pallet, which launched inside a Japanese HTV cargo ship in 2020, carried six new lithium-ion batteries to the International Space Station. The station’s two-armed Dextre robot, assisted by astronauts on spacewalks, swapped out aging nickel-hydrogen batteries for the upgraded units. Nine of the old batteries were installed on the HTV cargo pallet before its release from the station’s robotic arm.

A hunk of junk from the International Space Station hurtles back to Earth Read More »

study-finds-that-we-could-lose-science-if-publishers-go-bankrupt

Study finds that we could lose science if publishers go bankrupt

Need backups —

A scan of archives shows that lots of scientific papers aren’t backed up.

Back when scientific publications came in paper form, libraries played a key role in ensuring that knowledge didn’t disappear. Copies went out to so many libraries that any failure—a publisher going bankrupt, a library getting closed—wouldn’t put us at risk of losing information. But, as with anything else, scientific content has gone digital, which has changed what’s involved with preservation.

Organizations have devised systems that should provide options for preserving digital material. But, according to a recently published survey, lots of digital documents aren’t consistently showing up in the archives that are meant to preserve them. And that puts us at risk of losing academic research—including science paid for with taxpayer money.

Tracking down references

The work was done by Martin Eve, a developer at Crossref. That’s the organization that administers the DOI system, which provides a permanent pointer to digital documents, including almost every scientific publication. If updates are done properly, a DOI will always resolve to a document, even if that document gets shifted to a new URL.
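That resolution step is just an HTTP redirect served by the doi.org resolver. As a minimal sketch, here's how a raw DOI string maps to its resolver URL—the prefix-stripping rules below are my own simplification for illustration, not Crossref's normalization logic:

```python
def doi_url(doi):
    """Build a doi.org resolver URL from a raw DOI string.
    Tolerates a few common prefixes; a simplification for illustration."""
    doi = doi.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    if not doi.startswith("10."):  # all DOIs begin with the "10." directory code
        raise ValueError(f"doesn't look like a DOI: {doi!r}")
    return "https://doi.org/" + doi

print(doi_url("doi:10.31274/jlsc.16288"))
# https://doi.org/10.31274/jlsc.16288
```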

But the system also has a way of handling documents that disappear from their expected locations, as might happen if a publisher went bankrupt. There is a set of what are called “dark archives” that the public doesn’t have access to but that should contain copies of anything that has had a DOI assigned. If anything goes wrong with a DOI, that should trigger the dark archives to open access, with the DOI updated to point to the copy in the dark archive.

For that to work, however, copies of everything published have to be in the archives. So Eve decided to check whether that’s the case.

Using the Crossref database, Eve got a list of over 7 million DOIs and then checked whether the documents could be found in archives. He included well-known ones, like the Internet Archive at archive.org, as well as some dedicated to academic works, like LOCKSS (Lots of Copies Keep Stuff Safe) and CLOCKSS (Controlled Lots of Copies Keep Stuff Safe).

Not well-preserved

The results were… not great.

When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all.

At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn’t appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)
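The per-publication tally Eve performed amounts to counting archive memberships per DOI. A toy version, with made-up DOIs and membership data purely for illustration:

```python
# Hypothetical results: each DOI mapped to the set of archives it was found in
found_in = {
    "10.1000/aaa": {"Internet Archive", "LOCKSS", "CLOCKSS"},
    "10.1000/bbb": {"LOCKSS"},
    "10.1000/ccc": set(),  # not archived anywhere
}

in_any = sum(1 for archives in found_in.values() if archives)
in_three_plus = sum(1 for archives in found_in.values() if len(archives) >= 3)
print(in_any, in_three_plus)  # 2 1
```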

The good news is that large academic publishers appear to be reasonably good about getting things into archives; most of the unarchived issues stem from smaller publishers.

Eve acknowledges that the study has limits, primarily in that there may be additional archives he hasn’t checked. There are some prominent dark archives that he didn’t have access to, as well as things like Sci-Hub, which violates copyright in order to make material from for-profit publishers available to the public. Finally, individual publishers may have their own archiving systems in place that could keep publications from disappearing.

Should we be worried?

The risk here is that, ultimately, we may lose access to some academic research. As Eve phrases it, knowledge gets expanded because we’re able to build upon a foundation of facts that we can trace back through a chain of references. If we start losing those links, then the foundation gets shakier. Archiving comes with its own set of challenges: It costs money, it has to be organized, consistent means of accessing the archived material need to be established, and so on.

But, to an extent, we’re failing at the first step. “An important point to make,” Eve writes, “is that there is no consensus over who should be responsible for archiving scholarship in the digital age.”

A somewhat related issue is ensuring that people can find the archived material—the issue that DOIs were designed to solve. In many cases, the authors of a manuscript place copies in repositories like arXiv, bioRxiv, or the NIH’s PubMed Central (this sort of archiving is increasingly being made a requirement by funding bodies). The problem here is that the archived copies may not include the DOI that’s meant to ensure they can be located. That doesn’t mean they can’t be identified through other means, but it definitely makes finding the right document much more difficult.

Put differently, if you can’t find a paper or can’t be certain you’re looking at the right version of it, it can be just as bad as not having a copy of the paper at all.

None of this is to say that we’ve already lost important research documents. But Eve’s paper serves a valuable function by highlighting that the risk is real. We’re well into the era where print copies of journals are irrelevant to most academics, and digital-only academic journals have proliferated. It’s long past time for us to have clear standards in place to ensure that digital versions of research have the endurance that print works have enjoyed.

Journal of Librarianship and Scholarly Communication, 2024. DOI: 10.31274/jlsc.16288  (About DOIs).

Study finds that we could lose science if publishers go bankrupt Read More »

matrix-multiplication-breakthrough-could-lead-to-faster,-more-efficient-ai-models

Matrix multiplication breakthrough could lead to faster, more efficient AI models

The Matrix Revolutions —

At the heart of AI, matrix math has just seen its biggest boost “in more than a decade.”

Enlarge / When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course.

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today’s AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel at matrix multiplication because of their ability to process many calculations at once. They break down large matrix problems into smaller segments and solve those segments concurrently, following whichever multiplication algorithm the software uses.

Perfecting that algorithm has been the key to breakthroughs in matrix multiplication efficiency over the past century—even before computers entered the picture. In October 2022, we covered a new technique discovered by a Google DeepMind AI model called AlphaTensor, focusing on practical algorithmic improvements for specific matrix sizes, such as 4×4 matrices.

By contrast, the new research, conducted by Ran Duan and Renfei Zhou of Tsinghua University, Hongxun Wu of the University of California, Berkeley, and by Virginia Vassilevska Williams, Yinzhan Xu, and Zixuan Xu of the Massachusetts Institute of Technology (in a second paper), seeks theoretical enhancements by aiming to lower the complexity exponent, ω, for a broad efficiency gain across all sizes of matrices. Instead of finding immediate, practical solutions like AlphaTensor, the new technique addresses foundational improvements that could transform the efficiency of matrix multiplication on a more general scale.

Approaching the ideal value

The traditional method for multiplying two n-by-n matrices requires n³ separate multiplications. However, the new technique, which improves upon the “laser method” introduced by Volker Strassen in 1986, has reduced the upper bound of the exponent (denoted as the aforementioned ω), bringing it closer to the ideal value of 2, which represents the theoretical minimum number of operations needed.
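For reference, the traditional schoolbook algorithm that all of these methods improve on looks like this—a plain Python sketch, doing n³ scalar multiplications for two n-by-n inputs:

```python
def matmul(A, B):
    """Schoolbook matrix multiplication: n^3 scalar multiplies for n x n inputs."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Every fast algorithm since Strassen's has beaten this triple loop by trading some of those multiplications for extra additions.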

The traditional way of multiplying two grids full of numbers requires up to 27 multiplications for a grid that’s 3×3, because the operation count grows as the cube of the grid’s side length, n³. The new advancements accelerate the process by significantly reducing the multiplication steps required: the operation count now scales as the side length raised to the power 2.371552. That’s a big deal because it edges toward the theoretical ideal exponent of 2—the cost of merely reading the two grids—which is the fastest we could ever hope to do it.

Here’s a brief recap of events. In 2020, Josh Alman and Williams introduced a significant improvement in matrix multiplication efficiency by establishing a new upper bound for ω at approximately 2.3728596. In November 2023, Duan and Zhou revealed a method that addressed an inefficiency within the laser method, setting a new upper bound for ω at approximately 2.371866. The achievement marked the most substantial progress in the field since 2010. But just two months later, Williams and her team published a second paper that detailed optimizations that reduced the upper bound for ω to 2.371552.
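These exponents are asymptotic scaling bounds, not literal operation counts, but comparing n^ω for a fixed n shows why each digit of progress matters (the choice of n = 10,000 below is arbitrary):

```python
# Upper bounds on omega mentioned in the text; constants and lower-order
# terms are ignored, so these are scaling comparisons only
bounds = {
    "schoolbook": 3.0,
    "2020 (Alman/Williams)": 2.3728596,
    "Nov 2023 (Duan/Zhou)": 2.371866,
    "Jan 2024 (Williams et al.)": 2.371552,
}
n = 10_000
for name, omega in bounds.items():
    print(f"{name}: n^{omega} ~ {n ** omega:.4g}")
```

Even the tiny November-to-January drop in ω shaves a measurable factor off the scaling term at this size.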

The 2023 breakthrough stemmed from the discovery of a “hidden loss” in the laser method, where useful blocks of data were unintentionally discarded. In the context of matrix multiplication, “blocks” refer to smaller segments that a large matrix is divided into for easier processing, and “block labeling” is the technique of categorizing these segments to identify which ones to keep and which to discard, optimizing the multiplication process for speed and efficiency. By modifying the way the laser method labels blocks, the researchers were able to reduce waste and improve efficiency significantly.

While the reduction of the omega constant might appear minor at first glance—reducing the 2020 record value by 0.0013076—the cumulative work of Duan, Zhou, and Williams represents the most substantial progress in the field observed since 2010.

“This is a major technical breakthrough,” said William Kuszmaul, a theoretical computer scientist at Harvard University, as quoted by Quanta Magazine. “It is the biggest improvement in matrix multiplication we’ve seen in more than a decade.”

While further progress is expected, there are limitations to the current approach. Researchers believe that understanding the problem more deeply will lead to the development of even better algorithms. As Zhou stated in the Quanta report, “People are still in the very early stages of understanding this age-old problem.”

So what are the practical applications? For AI models, a reduction in computational steps for matrix math could translate into faster training times and more efficient execution of tasks. It could enable more complex models to be trained more quickly, potentially leading to advancements in AI capabilities and the development of more sophisticated AI applications. Additionally, efficiency improvement could make AI technologies more accessible by lowering the computational power and energy consumption required for these tasks. That would also reduce AI’s environmental impact.

The exact impact on the speed of AI models depends on the specific architecture of the AI system and how heavily its tasks rely on matrix multiplication. Advancements in algorithmic efficiency often need to be coupled with hardware optimizations to fully realize potential speed gains. But still, as improvements in algorithmic techniques add up over time, AI will get faster.

Matrix multiplication breakthrough could lead to faster, more efficient AI models Read More »

amd-stops-certifying-monitors,-tvs-under-144-hz-for-freesync

AMD stops certifying monitors, TVs under 144 Hz for FreeSync

60 Hz is so 2015 —

120 Hz is good enough for consoles, but not for FreeSync.

AMD’s depiction of a game playing without FreeSync (left) and with FreeSync (right).

AMD announced this week that it has ceased FreeSync certification for monitors or TVs whose maximum refresh rates are under 144 Hz. Previously, FreeSync monitors and TVs could have refresh rates as low as 60 Hz, allowing for screens with lower price tags and ones not targeted at serious gaming to carry the variable refresh-rate technology.

AMD also boosted the refresh-rate requirements for its higher FreeSync tiers, FreeSync Premium and FreeSync Premium Pro, from 120 Hz to 200 Hz.

Here are the new minimum refresh-rate requirements for FreeSync, which haven’t changed for laptops.

FreeSync
- Laptops: max refresh rate 40–60 Hz
- Monitors and TVs (horizontal resolution < 3440): max refresh rate ≥ 144 Hz

FreeSync Premium
- Laptops: max refresh rate ≥ 120 Hz
- Monitors and TVs (horizontal resolution < 3440): max refresh rate ≥ 200 Hz
- Monitors and TVs (horizontal resolution ≥ 3440): max refresh rate ≥ 120 Hz

FreeSync Premium Pro
- Laptops: FreeSync Premium requirements, plus FreeSync support with HDR
- Monitors and TVs: FreeSync Premium requirements, plus FreeSync support with HDR
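As a sketch, the monitor/TV minimums can be encoded in a small helper. This is an illustrative reading of the requirements, not an official AMD tool, and it ignores other criteria (such as refresh-rate ranges) the certification may also impose:

```python
def monitor_tier(max_hz, horiz_res, hdr=False):
    """Illustrative reading of AMD's new monitor/TV minimums (not an AMD tool)."""
    if horiz_res < 3440:
        premium = max_hz >= 200
    else:
        premium = max_hz >= 120
    if premium:
        return "FreeSync Premium Pro" if hdr else "FreeSync Premium"
    # The requirements only list a base tier for sub-3440 displays
    if horiz_res < 3440 and max_hz >= 144:
        return "FreeSync"
    return None  # doesn't meet any listed minimum

print(monitor_tier(165, 2560))            # FreeSync
print(monitor_tier(240, 2560, hdr=True))  # FreeSync Premium Pro
```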

AMD will continue supporting already-certified FreeSync displays even if they don’t meet the above requirements.

Interestingly, AMD’s minimum refresh-rate requirement for TVs goes beyond 120 Hz—the maximum that many premium TVs currently support, since the current-generation Xbox and PlayStation top out at 120 frames per second (FPS).

Announcing the changes this week in a blog post, Oguzhan Andic, AMD FreeSync and Radeon product marketing manager, claimed the changes were necessary because 60 Hz is no longer “considered great for gaming.” Andic wrote that the majority of gaming monitors are now 144 Hz or higher, compared to 2015, when FreeSync debuted and even 120 Hz was “a rarity.”

Since 2015, refresh rates have climbed ever higher, with the latest displays targeting competitive players hitting 500 Hz, and display stakeholders showing no signs of ending the push for more speed. Meanwhile, FreeSync cemented itself as a more accessible flavor of adaptive sync than Nvidia’s G-Sync, which for a long time required specific hardware to run, elevating the costs of supporting products.

AMD’s announcement didn’t address requirements for refresh-rate ranges. Hopefully, OEMs will continue making FreeSync displays, especially monitors, that can still fight screen tears when framerates drop to the double digits.

The changes should also raise the future price of entry for a monitor or TV with FreeSync. Sometimes the inclusion of FreeSync served as a differentiator for people seeking an affordable display for occasional light gaming or for other media with fast-paced video playback. By committing itself to 144 Hz and faster screens, FreeSync could become a certification aligned more closely with serious gaming.

Meanwhile, there is still hope for future, slower screens to get certification for variable refresh rates. In 2022, the Video Electronics Standards Association (VESA) released its MediaSync Display certification for video playback and AdaptiveSync Display for gaming, both of which have minimum refresh-rate requirements of 60 Hz. VESA developed the lengthy, detailed certifications with its dozens of members, including AMD (a display can be MediaSync/AdaptiveSync and/or FreeSync and/or G-Sync certified). In addition to trying to appeal to core gamers, it’s possible that AMD sees the VESA certifications as more appropriate for slower displays.

AMD stops certifying monitors, TVs under 144 Hz for FreeSync Read More »

thousands-of-us-kids-are-overdosing-on-melatonin-gummies,-er-study-finds

Thousands of US kids are overdosing on melatonin gummies, ER study finds

treats or treatment? —

In the majority of cases, the excessive amounts led to only minimal side effects.

In this photo illustration, melatonin gummies are displayed on April 26, 2023, in Miami, Florida.

Federal regulators have long decried drug-containing products that appeal to kids—like nicotine-containing e-cigarette products with fruity and dessert-themed flavors or edible cannabis products sold to look exactly like name-brand candies.

But a less-expected candy-like product has been sending thousands of kids to emergency departments in the US in recent years: melatonin, particularly in gummy form. According to a new report from researchers at the Centers for Disease Control and Prevention, use of the over-the-counter sleep-aid supplement has skyrocketed in recent years—and so have calls to poison control centers and visits to emergency departments.

Melatonin, a neurohormone that regulates the sleep-wake cycle, has become very popular for self-managing conditions like sleep disorders and jet lag—even in children. Use of melatonin in adults rose from 0.4 percent in 1999–2000 to 2.1 percent in 2017–2018. But the more people have these tempting, often candy-like supplements in their homes, the more risk that children will get ahold of them unsupervised. Indeed, the rise in use led to a 530 percent increase in poison control center calls and a 420 percent increase in emergency department visits for accidental melatonin ingestion in infants and kids between 2009 and 2020.

And the problem is ongoing. In the new study, researchers estimate that between 2019 and 2022, nearly 11,000 kids went to the emergency department after accidentally gulping down melatonin supplements. Nearly all the cases involved a solid form of melatonin, with about 47 percent identified specifically as gummies and 49 percent listed as an unspecified solid form, likely to include gummies. These melatonin-based emergency visits made up 7 percent of all emergency visits by infants and kids who ingested medications unsupervised.

The candy-like appeal of melatonin products seems evident in the ages of kids rushed to emergency departments. Most emergency department visits for unsupervised medicine exposures are in infants and toddlers ages 1 to 2, but for melatonin-related visits, half were ages 3 to 5. The researchers noted that among the emergency visits with documentation, about three-quarters of the melatonin products involved came out of bottles, suggesting that the young kids managed to open the bottles themselves or that the bottles weren’t properly closed. Manufacturers are not required to use child-resistant packaging on melatonin supplements.

Luckily, most of the cases had only mild to no effects. Still, about 6.5 percent of the cases—a little over 700 children—were hospitalized from their melatonin binge. A 2022 study led by researchers in Michigan found that among poison control center calls for children consuming melatonin, the reported symptoms involved gastrointestinal, cardiovascular, or central nervous systems. For children who are given supervised doses of melatonin to improve sleep, known side effects include drowsiness, increased bedwetting or urination in the evening, headache, dizziness, and agitation.
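The hospitalization figure follows directly from the study's percentages; a quick cross-check, using the roughly 11,000 total visits quoted above:

```python
# ~11,000 ER visits (2019-2022), about 6.5 percent of which led to hospitalization
visits = 11_000
hospitalized = round(0.065 * visits)
print(hospitalized)  # 715 -- "a little over 700 children"
```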

According to the National Center for Complementary and Integrative Health—part of the National Institutes of Health—supervised use of melatonin in children appears to be safe for short-term use. But there’s simply not much data on use in children, and the long-term effects of regular use or acute exposures are unknown. The NCCIH cautions: “Because melatonin is a hormone, it’s possible that melatonin supplements could affect hormonal development, including puberty, menstrual cycles, and overproduction of the hormone prolactin, but we don’t know for sure.”

For now, the authors of the new study say the data “highlights the continued need to educate parents and other caregivers about the importance of keeping all medications and supplements (including gummies) out of children’s reach and sight.”

Thousands of US kids are overdosing on melatonin gummies, ER study finds Read More »

gm-starts-selling-chevy-blazer-evs-again,-cuts-prices-up-to-$6,250

GM starts selling Chevy Blazer EVs again, cuts prices up to $6,250

still no carplay, though —

Bad software, charging problems led Chevy to stop selling the Blazer EV in December.

Enlarge / Chevrolet suspended sales of the Blazer EV for three months as a result of software and charging problems.

Jonathan Gitlin

The Chevrolet Blazer EV is back on sale today, more than three months after the automaker took its newest electric vehicle off the market due to a litany of software problems. Chevrolet has also cut Blazer EV prices, and the company says that the EV now qualifies for the full $7,500 IRS clean vehicle tax credit.

The Blazer EV was the first Chevrolet-badged EV to use General Motors’ new Ultium battery platform. Introduced at CES in 2022, the Blazer EV didn’t actually start reaching customers until last August—and not very many of them did: by the end of the year, only 463 Blazer EVs had found homes.

Ars drove the Blazer EV in December and came away unconvinced. On the road, it suffered from lots of noise, vibration, and harshness (NVH), and we ran into software bugs with the Ultifi infotainment system. We weren’t the only ones to experience problems—charging bugs left another journalist stranded on a road trip with a Blazer EV that refused to charge.

Three days later, GM issued a stop-sale order for the car.

Now, the Blazer EV is on sale again and with what Chevrolet is calling “significant software updates.” Some of these are to the UI, but those charging problems should be solved now as well.

The other problem with the Blazer EV, beyond quality issues, was its rather high price. The promised $44,995 version was ditched before the car was launched; instead, the cheapest Blazer EV on sale was the $56,715 all-wheel drive LT, and at $60,215 the all-wheel drive Blazer EV RS was more expensive than the closely related Cadillac Lyriq.

To help tempt customers back into the showroom, Chevrolet has also given the Blazer EV a hefty price cut. The Blazer EV LT now costs $50,195, $6,520 less than at launch. The Blazer EV RS got $5,620 cheaper and now starts at $54,595.

GM starts selling Chevy Blazer EVs again, cuts prices up to $6,250 Read More »