Author name: Kris Guyer

Op-ed: Charges against journalist Tim Burke are a hack job

Permission required? —

Burke was indicted after sharing outtakes of a Fox News interview.

Caitlin Vogus is the deputy director of advocacy at Freedom of the Press Foundation and a First Amendment lawyer. Jennifer Stisa Granick is the surveillance and cybersecurity counsel with the ACLU’s Speech, Privacy, and Technology Project. The opinions in this piece do not necessarily reflect the views of Ars Technica.

Imagine a journalist finds a folder on a park bench, opens it, and sees a telephone number inside. She dials the number. A famous rapper answers and spews a racist rant. If no one gave her permission to open the folder and the rapper’s telephone number was unlisted, should the reporter go to jail for publishing what she heard?

If that sounds ridiculous, it’s because it is. And yet, add in a computer and the Internet, and that’s basically what a newly unsealed federal indictment accuses Florida journalist Tim Burke of doing when he found and disseminated outtakes of Tucker Carlson’s Fox News interview with Ye, the artist formerly known as Kanye West, in which Ye went on the first of many antisemitic diatribes.

The vast majority of the charges against Burke are under the Computer Fraud and Abuse Act (CFAA), a law that the ACLU and Freedom of the Press Foundation have long argued is vague and subject to abuse. Now, in a new and troubling move, the government suggests in the Burke indictment that journalists violate the CFAA if they don’t ask for permission to use information they find publicly posted on the Internet.

According to news reports and statements from Burke’s lawyer, the charges are, in part, related to the unaired segments of the interview between Carlson and Ye. After Burke gave the video to news sites to publish, Ye’s disturbing remarks, and Fox’s decision to edit them out of the interview when broadcast, quickly made national news.

According to Burke, the video of Carlson’s interview with Ye was streamed via a publicly available, unencrypted URL that anyone could access by typing the address into a browser. Those URLs were not listed in any search engine, but Burke says that a source pointed him to a website on the Internet Archive where a radio station had posted “demo credentials” that gave access to a page where the URLs were listed.

The credentials were for a webpage created by LiveU, a company that provides video streaming services to broadcasters. Using the demo username and password, Burke logged into the website, and, Burke’s lawyer claims, the list of URLs for video streams automatically downloaded to his computer.

And that, the government says, is a crime. It charges Burke with violating the CFAA’s prohibition on intentionally accessing a computer “without authorization” because he accessed the LiveU website and URLs without having been authorized by Fox or LiveU. In other words, because Burke didn’t ask Fox or LiveU for permission to use the demo account or view the URLs, the indictment alleges, he acted without authorization.

But there’s a difference between LiveU and Fox’s subjective wishes about what journalists or others would find, and what the services and websites they maintained and used permitted people to find. The relevant question should be the latter. Generally, it is both a First Amendment and a due process problem to allow a private party’s desire to control information to form the basis of criminal prosecutions.

The CFAA charges against Burke take advantage of the vagueness of the statutory term “without authorization.” The law doesn’t define the term, and its murkiness has enabled plenty of ill-advised prosecutions over the years. In Burke’s case, because the list of unencrypted URLs was password protected and the company didn’t want outsiders to access the URLs, the government claims that Burke acted “without authorization.”

Using a published demo password to get a list of URLs, which anyone could have used a software program to guess and access, isn’t that big of a deal. What was a big deal is that Burke’s research embarrassed Fox News. But that’s what journalists are supposed to do—uncover questionable practices of powerful entities.

Journalists need never ask corporations for permission to investigate or embarrass them, and the law shouldn’t encourage or force them to. Just because someone doesn’t like what a reporter does online doesn’t mean that it’s without authorization and that what he did is therefore a crime.

Unfortunately, this isn’t the first time that prosecutors have abused computer hacking laws to go after journalists and others, like security researchers. Until a 2021 Supreme Court ruling, researchers and journalists worried that their good faith investigations of algorithmic discrimination could expose them to CFAA liability for exceeding sites’ terms of service.

Even now, the CFAA and similarly vague state computer crime laws continue to threaten press freedom. Just last year, in August, police raided the newsroom of the Marion County Record and accused its journalists of breaking state computer hacking laws by using a government website to confirm a tip from a source. Police dropped the case after a national outcry.

The White House seemed concerned about the Marion ordeal. But now the same administration is using an overly broad interpretation of a hacking law to target a journalist. Merely filing charges against Burke sends a chilling message that the government will attempt to penalize journalists for engaging in investigative reporting it dislikes.

Even worse, if the Burke prosecution succeeds, it will encourage the powerful to use the CFAA as a veto over news reporting based on online sources just because it is embarrassing or exposes their wrongdoing. These charges were also an excuse for the government to seize Burke’s computer equipment and digital work—and demand to keep it permanently. This seizure interferes with Burke’s ongoing reporting, a tactic that it could repeat in other investigations.

If journalists must seek permission to publish information they find online from the very people they’re exposing, as the government’s indictment of Burke suggests, it’s a good bet that most information from the obscure but public corners of the Internet will never see the light of day. That would endanger both journalism and public access to important truths. The court reviewing Burke’s case should dismiss the charges.

Shields up: New ideas might make active shielding viable

On October 19, 1989, at 12:29 UT, a monstrous X13-class solar flare triggered a geomagnetic storm so strong that auroras lit up the skies in Japan, America, Australia, and even Germany the following day. Had you been flying around the Moon at that time, you would have absorbed well over 6 sieverts of radiation—a dose that would most likely kill you within a month or so.

This is why the Orion spacecraft that is supposed to take humans on a Moon fly-by mission this year has a heavily shielded storm shelter for the crew. But shelters like that aren’t sufficient for a flight to Mars—Orion’s shield is designed for a 30-day mission.

To obtain protection comparable to what we enjoy on Earth would require hundreds of tons of material, and that’s simply not possible in orbit. The primary alternative—using active shields that deflect charged particles just like the Earth’s magnetic field does—was first proposed in the 1960s. Today, we’re finally close to making it work.

Deep-space radiation

Space radiation comes in two different flavors. Solar events like flares or coronal mass ejections can produce very high fluxes of charged particles (mostly protons). They’re nasty when you have no shelter but are relatively easy to shield against, since solar protons are mostly low energy. The majority of the flux in solar particle events falls between 30 and 100 mega-electron volts (MeV), and particles in that range could be stopped by Orion-like shelters.

Then there are galactic cosmic rays: particles coming from outside the Solar System, set in motion by faraway supernovae or neutron stars. These are relatively rare but arrive all the time from all directions. They also have high energies, starting at 200 MeV and extending to several GeV, which makes them extremely penetrating. Even thick masses don’t provide much shielding against them. Worse, when high-energy cosmic ray particles hit thin shields, they shatter into showers of lower-energy secondary particles—you’d be better off with no shield at all.

The particles with energies between 70 MeV and 500 MeV are responsible for 95 percent of the radiation dose that astronauts get in space. On short flights, solar storms are the main concern because they can be quite violent and do lots of damage very quickly. The longer you fly, though, the more of an issue GCRs become, because their dose accumulates over time and they can get through pretty much everything we put in their way.

What keeps us safe at home

The reason almost none of this radiation reaches us is that Earth has a natural, multi-stage shielding system. It begins with the planet’s magnetic field, which deflects most of the incoming particles toward the poles. A charged particle in a magnetic field follows a curve—the stronger the field, the tighter the curve. Earth’s magnetic field is very weak and barely bends incoming particles, but it is huge, extending thousands of kilometers into space.
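That curvature can be put into numbers. Here is a minimal sketch of the relativistic gyroradius r = p/(qB) for a proton; the 100 MeV energy and 30-microtesla field below are illustrative assumptions, not figures from the article:

```python
# Relativistic gyroradius r = p/(qB) for a proton in a uniform magnetic field.
import math

E_REST_PROTON_MEV = 938.272   # proton rest energy, MeV
C = 299_792_458.0             # speed of light, m/s
Q = 1.602176634e-19           # elementary charge, C
MEV_TO_J = 1.602176634e-13    # joules per MeV

def gyroradius_m(kinetic_energy_mev: float, b_tesla: float) -> float:
    """Radius of curvature of a proton's path, in meters."""
    e_total = kinetic_energy_mev + E_REST_PROTON_MEV
    p_mev_c = math.sqrt(e_total**2 - E_REST_PROTON_MEV**2)  # momentum, MeV/c
    p_si = p_mev_c * MEV_TO_J / C                            # momentum, kg*m/s
    return p_si / (Q * b_tesla)

# A 100 MeV solar proton in a 30-microtesla field (roughly Earth's surface
# field strength) curves on a scale of tens of kilometers:
r = gyroradius_m(100.0, 30e-6)
```

Doubling the field strength halves the radius, which is the sense in which a stronger field bends particles into tighter curves.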

Anything that makes it through the magnetic field runs into the atmosphere, which, when it comes to shielding, is the equivalent of an aluminum wall that’s 3 meters thick. Finally, there is the planet itself, which essentially cuts the radiation in half since you always have 6.5 billion trillion tons of rock shielding you from the bottom.

To put that in perspective, the Apollo crew module had on average 5 grams of mass per square centimeter standing between the crew and radiation. A typical ISS module has twice that, about 10 g/cm2. The Orion shelter has 35–45 g/cm2, depending on exactly where you sit, and it weighs 36 tons. On Earth, the atmosphere alone gives you 810 g/cm2—roughly 20 times more than our best-shielded spaceships.
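As a rough consistency check on those figures, dividing the atmosphere’s areal density by the density of aluminum (about 2.7 g/cm³, a standard value, not one given in the article) recovers the 3-meter aluminum wall equivalence mentioned earlier:

```python
# Convert areal shielding density (g/cm^2) to an equivalent aluminum wall thickness.
ALUMINUM_DENSITY_G_CM3 = 2.7  # typical density of aluminum

def equivalent_wall_cm(areal_density_g_cm2: float) -> float:
    """Thickness of an aluminum slab with the given mass per unit area."""
    return areal_density_g_cm2 / ALUMINUM_DENSITY_G_CM3

thickness_m = equivalent_wall_cm(810) / 100  # the atmosphere: ~3 meters of aluminum
```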

The two options are to add more mass—which gets expensive quickly—or to shorten the length of the mission, which isn’t always possible. So solving radiation with passive mass won’t cut it for longer missions, even using the best shielding materials like polyethylene or water. This is why making a miniaturized, portable version of the Earth’s magnetic field was on the table from the first days of space exploration. Unfortunately, we discovered it was far easier said than done.

Florida middle-schoolers charged with making deepfake nudes of classmates

no consent —

AI tool was used to create nudes of 12- to 13-year-old classmates.

Two teenage boys from Miami, Florida, were arrested in December for allegedly creating and sharing AI-generated nude images of male and female classmates without consent, according to police reports obtained by WIRED via public record request.

The arrest reports say the boys, aged 13 and 14, created the images of the students who were “between the ages of 12 and 13.”

The Florida case appears to be the first to come to light in which arrests and criminal charges resulted from the alleged sharing of AI-generated nude images. The boys were charged with third-degree felonies—the same level of crime as grand theft auto or false imprisonment—under a state law passed in 2022 that makes it a felony to share “any altered sexual depiction” of a person without their consent.

The parent of one of the boys arrested did not respond to a request for comment in time for publication. The parent of the other boy said that he had “no comment.” The detective assigned to the case and the state attorney handling it did not respond to requests for comment in time for publication.

As AI image-making tools have become more widely available, there have been several high-profile incidents in which minors allegedly created AI-generated nude images of classmates and shared them without consent. No arrests have been disclosed in the publicly reported cases—at Issaquah High School in Washington, Westfield High School in New Jersey, and Beverly Vista Middle School in California—even though police reports were filed. At Issaquah High School, police opted not to press charges.

The first media reports of the Florida case appeared in December, saying that the two boys were suspended from Pinecrest Cove Academy in Miami for 10 days after school administrators learned of allegations that they created and shared fake nude images without consent. After parents of the victims learned about the incident, several began publicly urging the school to expel the boys.

Nadia Khan-Roberts, the mother of one of the victims, told NBC Miami in December that for all of the families whose children were victimized the incident was traumatizing. “Our daughters do not feel comfortable walking the same hallways with these boys,” she said. “It makes me feel violated, I feel taken advantage [of] and I feel used,” one victim, who asked to remain anonymous, told the TV station.

WIRED obtained arrest records this week that say the incident was reported to police on December 6, 2023, and that the two boys were arrested on December 22. The records accuse the pair of using “an artificial intelligence application” to make the fake explicit images. The name of the app was not specified and the reports claim the boys shared the pictures between each other.

“The incident was reported to a school administrator,” the reports say, without specifying who reported it, or how that person found out about the images. After the school administrator “obtained copies of the altered images” the administrator interviewed the victims depicted in them, the reports say, who said that they did not consent to the images being created.

After their arrest, the two boys accused of making the images were transported to the Juvenile Service Department “without incident,” the reports say.

A handful of states have laws on the books that target fake, nonconsensual nude images. There’s no federal law targeting the practice, but a group of US senators recently introduced a bill to combat the problem after fake nude images of Taylor Swift were created and distributed widely on X.

The boys were charged under a Florida law passed in 2022 that state legislators designed to curb harassment involving deepfake images made using AI-powered tools.

Stephanie Cagnet Myron, a Florida lawyer who represents victims of nonconsensually shared nude images, tells WIRED that anyone who creates fake nude images of a minor would be in possession of child sexual abuse material, or CSAM. However, she claims it’s likely that the two boys accused of making and sharing the material were not charged with CSAM possession due to their age.

“There’s specifically several crimes that you can charge in a case, and you really have to evaluate what’s the strongest chance of winning, what has the highest likelihood of success, and if you include too many charges, is it just going to confuse the jury?” Cagnet Myron added.

Mary Anne Franks, a professor at the George Washington University School of Law and a lawyer who has studied the problem of nonconsensual explicit imagery, says it’s “odd” that Florida’s revenge porn law, which predates the 2022 statute under which the boys were charged, makes sharing real nonconsensual images only a misdemeanor, while the fake images in this case supported a felony charge.

“It is really strange to me that you impose heftier penalties for fake nude photos than for real ones,” she says.

Franks adds that although she believes distributing nonconsensual fake explicit images should be a criminal offense, thus creating a deterrent effect, she doesn’t believe offenders should be incarcerated, especially not juveniles.

“The first thing I think about is how young the victims are and worried about the kind of impact on them,” Franks says. “But then [I] also question whether or not throwing the book at kids is actually going to be effective here.”

This story originally appeared on wired.com.

A hunk of junk from the International Space Station hurtles back to Earth

In March 2021, the International Space Station's robotic arm released a cargo pallet with nine expended batteries.

A bundle of depleted batteries from the International Space Station careened around Earth for almost three years before falling out of orbit and plunging back into the atmosphere Friday. Most of the trash likely burned up during reentry, but it’s possible some fragments may have reached Earth’s surface intact.

Larger pieces of space junk regularly fall to Earth on unguided trajectories, but they’re usually derelict satellites or spent rocket stages. This time, the object was a pallet of batteries from the space station with a mass of more than 2.6 metric tons (5,800 pounds). NASA intentionally sent the space junk on a path toward an unguided reentry.

Naturally self-cleaning

Sandra Jones, a NASA spokesperson, said the agency “conducted a thorough debris analysis assessment on the pallet and has determined it will harmlessly reenter the Earth’s atmosphere.” This was, by far, the most massive object ever tossed overboard from the International Space Station.

The batteries reentered the atmosphere at 2:29 pm EST (1929 UTC), according to US Space Command. At that time, the pallet would have been flying between Mexico and Cuba. “We do not expect any portion to have survived reentry,” Jones told Ars.

The European Space Agency (ESA) also monitored the trajectory of the battery pallet. In a statement this week, the ESA said the risk of a person being hit by a piece of the pallet was “very low” but said “some parts may reach the ground.” Jonathan McDowell, an astrophysicist who closely tracks spaceflight activity, estimated about 500 kilograms (1,100 pounds) of debris would hit the Earth’s surface.

“The general rule of thumb is that 20 to 40 percent of the mass of a large object will reach the ground, though it depends on the design of the object,” the Aerospace Corporation says.

A dead ESA satellite reentered the atmosphere in a similarly uncontrolled manner on February 21. At 2.3 metric tons, this satellite was similar in mass to the discarded battery pallet. ESA, which has positioned itself as a global leader in space sustainability, set up a website that provided daily tracking updates on the satellite’s deteriorating orbit.

This map shows the track of the unguided cargo pallet around the Earth over the course of six hours Friday. It reentered the atmosphere near Cuba on a southwest-to-northeast heading.

As NASA and ESA officials have said, the risk of injury or death from a spacecraft reentry is quite low. Falling space debris has never killed anyone. According to ESA, the risk of a person getting hit by a piece of space junk is about 65,000 times lower than the risk of being struck by lightning.

This circumstance is unique in the type and origin of the space debris, which is why NASA purposely cast it away on an uncontrolled trajectory back to Earth.

The space station’s robotic arm released the battery cargo pallet on March 11, 2021. Since then, the batteries have been adrift in orbit, circling the planet about every 90 minutes. Over a span of months and years, low-Earth orbit is self-cleaning thanks to the influence of aerodynamic drag. The resistance of rarefied air molecules in low-Earth orbit gradually slowed the pallet’s velocity until, finally, gravity pulled it back into the atmosphere Friday.
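That drag-driven decay can be sketched with a toy model. Everything below is an illustrative assumption (a crude exponential atmosphere and a guessed ballistic coefficient for the pallet), not NASA’s actual analysis:

```python
# Toy orbital-decay model: per-orbit shrinkage of the semi-major axis from
# drag, da ~ -2*pi * (Cd*A/m) * rho(h) * a^2, with a crude exponential atmosphere.
import math

R_EARTH = 6_371_000.0  # mean Earth radius, m

def density(h_m: float) -> float:
    """Crude exponential atmosphere; the values are assumptions, not a real model."""
    return 3e-12 * math.exp(-(h_m - 400_000.0) / 60_000.0)  # kg/m^3

def orbits_until_reentry(h0_m: float, cd_a_over_m: float,
                         floor_m: float = 120_000.0) -> int:
    """Count orbits until altitude falls below `floor_m`, where reentry is rapid."""
    a = R_EARTH + h0_m
    orbits = 0
    while a - R_EARTH > floor_m and orbits < 1_000_000:
        a -= 2.0 * math.pi * cd_a_over_m * density(a - R_EARTH) * a * a
        orbits += 1
    return orbits

# Guessed ballistic coefficient for the pallet: Cd ~ 2.2, A ~ 10 m^2, m ~ 2,600 kg.
orbits = orbits_until_reentry(400_000.0, cd_a_over_m=0.0085)
```

The qualitative behavior matches the story: decay starts slowly, then accelerates as the object sinks into denser air, ending in reentry after thousands of 90-minute orbits.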

The cargo pallet, which launched inside a Japanese HTV cargo ship in 2020, carried six new lithium-ion batteries to the International Space Station. The station’s two-armed Dextre robot, assisted by astronauts on spacewalks, swapped out aging nickel-hydrogen batteries for the upgraded units. Nine of the old batteries were installed on the HTV cargo pallet before its release from the station’s robotic arm.

Study finds that we could lose science if publishers go bankrupt

Need backups —

A scan of archives shows that lots of scientific papers aren’t backed up.

Back when scientific publications came in paper form, libraries played a key role in ensuring that knowledge didn’t disappear. Copies went out to so many libraries that any failure—a publisher going bankrupt, a library getting closed—wouldn’t put us at risk of losing information. But, as with anything else, scientific content has gone digital, which has changed what’s involved with preservation.

Organizations have devised systems that should provide options for preserving digital material. But, according to a recently published survey, lots of digital documents aren’t consistently showing up in the archives that are meant to preserve them. And that puts us at risk of losing academic research—including science paid for with taxpayer money.

Tracking down references

The work was done by Martin Eve, a developer at Crossref. That’s the organization that administers the DOI system, which provides permanent pointers to digital documents, including almost every scientific publication. If updates are done properly, a DOI will always resolve to a document, even if that document gets shifted to a new URL.

But the system also has a way of handling documents that disappear from their expected location, as might happen if a publisher went bankrupt. There is a set of what are called “dark archives” that the public doesn’t have access to but that should contain copies of anything that’s had a DOI assigned. If anything goes wrong with a DOI, the dark archives should be triggered to open access, and the DOI updated to point to the copy in the dark archive.

For that to work, however, copies of everything published have to be in the archives. So Eve decided to check whether that’s the case.

Using the Crossref database, Eve got a list of over 7 million DOIs and then checked whether the documents could be found in archives. He included well-known ones, like the Internet Archive at archive.org, as well as some dedicated to academic works, like LOCKSS (Lots of Copies Keeps Stuff Safe) and CLOCKSS (Controlled Lots of Copies Keeps Stuff Safe).
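The tally itself is conceptually simple. Here is a hypothetical sketch of the kind of per-publisher coverage calculation involved; all DOIs, the archive sets, and the publisher mapping are invented for illustration:

```python
# For each publisher, compute the fraction of its DOIs found in at least
# one archive, given a DOI -> set-of-archives lookup.
from collections import defaultdict

def coverage_by_publisher(records: dict[str, set[str]],
                          publisher_of: dict[str, str]) -> dict[str, float]:
    """Fraction of each publisher's DOIs preserved in one or more archives."""
    total = defaultdict(int)
    archived = defaultdict(int)
    for doi, archives in records.items():
        pub = publisher_of[doi]
        total[pub] += 1
        if archives:
            archived[pub] += 1
    return {pub: archived[pub] / total[pub] for pub in total}

# Invented example data:
records = {
    "10.1000/alpha": {"CLOCKSS", "Internet Archive"},
    "10.1000/beta": set(),
    "10.2000/gamma": {"LOCKSS"},
}
publisher_of = {
    "10.1000/alpha": "Publisher A",
    "10.1000/beta": "Publisher A",
    "10.2000/gamma": "Publisher B",
}
coverage = coverage_by_publisher(records, publisher_of)
# Publisher A has preserved half of its two DOIs; Publisher B, all of its one.
```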

Not well-preserved

The results were… not great.

When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all.

At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn’t appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)

The good news is that large academic publishers appear to be reasonably good about getting things into archives; most of the unarchived issues stem from smaller publishers.

Eve acknowledges that the study has limits, primarily in that there may be additional archives he hasn’t checked. There are some prominent dark archives that he didn’t have access to, as well as things like Sci-hub, which violates copyright in order to make material from for-profit publishers available to the public. Finally, individual publishers may have their own archiving system in place that could keep publications from disappearing.

Should we be worried?

The risk here is that, ultimately, we may lose access to some academic research. As Eve phrases it, knowledge gets expanded because we’re able to build upon a foundation of facts that we can trace back through a chain of references. If we start losing those links, then the foundation gets shakier. Archiving comes with its own set of challenges: It costs money, it has to be organized, consistent means of accessing the archived material need to be established, and so on.

But, to an extent, we’re failing at the first step. “An important point to make,” Eve writes, “is that there is no consensus over who should be responsible for archiving scholarship in the digital age.”

A somewhat related issue is ensuring that people can find the archived material—the problem that DOIs were designed to solve. In many cases, the authors of a manuscript deposit copies in repositories like arXiv/bioRxiv or the NIH’s PubMed Central (this sort of archiving is increasingly being made a requirement by funding bodies). The problem here is that the archived copies may not include the DOI that’s meant to ensure they can be located. That doesn’t mean they can’t be identified through other means, but it definitely makes finding the right document much more difficult.

Put differently, if you can’t find a paper or can’t be certain you’re looking at the right version of it, it can be just as bad as not having a copy of the paper at all.

None of this is to say that we’ve already lost important research documents. But Eve’s paper serves a valuable function by highlighting that the risk is real. We’re well into the era where print copies of journals are irrelevant to most academics, and digital-only academic journals have proliferated. It’s long past time for us to have clear standards in place to ensure that digital versions of research have the endurance that print works have enjoyed.

Journal of Librarianship and Scholarly Communication, 2024. DOI: 10.31274/jlsc.16288  (About DOIs).

Matrix multiplication breakthrough could lead to faster, more efficient AI models

The Matrix Revolutions —

At the heart of AI, matrix math has just seen its biggest boost “in more than a decade.”

When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course.

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today’s AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel in handling matrix multiplication tasks because of their ability to process many calculations at once. They break down large matrix problems into smaller segments and solve them concurrently using an algorithm.
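The decomposition itself can be illustrated in a few lines. This sequential Python sketch shows only the tiling idea; real GPU kernels execute the tiles concurrently in optimized hardware:

```python
# Blocked (tiled) matrix multiplication: split the product into small
# block-sized subproblems, the same decomposition GPUs exploit in parallel.
def blocked_matmul(a, b, block=2):
    """Multiply two square matrices given as lists of lists, tile by tile."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, block):            # tile rows of the result
        for j0 in range(0, n, block):        # tile columns of the result
            for k0 in range(0, n, block):    # tiles along the shared dimension
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, n)):
                        s = c[i][j]
                        for k in range(k0, min(k0 + block, n)):
                            s += a[i][k] * b[k][j]
                        c[i][j] = s
    return c
```

Each (i0, j0, k0) triple is an independent chunk of work on small tiles, which is what makes the problem easy to parallelize.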

Perfecting that algorithm has been the key to breakthroughs in matrix multiplication efficiency over the past century—even before computers entered the picture. In October 2022, we covered a new technique discovered by a Google DeepMind AI model called AlphaTensor, focusing on practical algorithmic improvements for specific matrix sizes, such as 4×4 matrices.

By contrast, the new research, conducted by Ran Duan and Renfei Zhou of Tsinghua University, Hongxun Wu of the University of California, Berkeley, and by Virginia Vassilevska Williams, Yinzhan Xu, and Zixuan Xu of the Massachusetts Institute of Technology (in a second paper), seeks theoretical enhancements by aiming to lower the complexity exponent, ω, for a broad efficiency gain across all sizes of matrices. Instead of finding immediate, practical solutions like AlphaTensor, the new technique addresses foundational improvements that could transform the efficiency of matrix multiplication on a more general scale.

Approaching the ideal value

The traditional method for multiplying two n-by-n matrices requires n³ separate multiplications. However, the new technique, which improves upon the “laser method” introduced by Volker Strassen in 1986, has reduced the upper bound of the exponent (denoted as the aforementioned ω), bringing it closer to the ideal value of 2, which represents the theoretical minimum number of operations needed.

The traditional way of multiplying two grids full of numbers could require doing the math up to 27 times for a grid that’s 3×3; in general, an n-by-n grid takes n³ multiplications. With these advancements, the number of operations needed grows only as the grid’s side length raised to the power 2.371552. That is a big deal because it comes close to the theoretical floor of 2, the exponent you’d need just to read the n² entries of each grid, which is the fastest we could ever hope to do it.
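To get a feel for what a smaller exponent buys, here is a purely illustrative comparison of asymptotic operation counts; the exponents describe limiting growth rates, not literal step counts for any real implementation:

```python
# Compare n^3 (schoolbook) with n^2.371552 (the new upper bound on omega).
def ops(n: float, omega: float) -> float:
    """Asymptotic operation count n**omega, ignoring hidden constant factors."""
    return n ** omega

n = 10_000                  # a 10,000 x 10,000 matrix
naive = ops(n, 3.0)         # ~1e12 multiplications for the schoolbook method
bound = ops(n, 2.371552)    # the new theoretical bound
speedup = naive / bound     # equals n ** (3 - 2.371552), a few hundred here
```

In practice, the hidden constant factors in these theoretical algorithms are enormous, which is why the result is a milestone for complexity theory rather than an immediate software upgrade.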

Here’s a brief recap of events. In 2020, Josh Alman and Williams introduced a significant improvement in matrix multiplication efficiency by establishing a new upper bound for ω at approximately 2.3728596. In November 2023, Duan and Zhou revealed a method that addressed an inefficiency within the laser method, setting a new upper bound for ω at approximately 2.371866. The achievement marked the most substantial progress in the field since 2010. But just two months later, Williams and her team published a second paper that detailed optimizations that reduced the upper bound for ω to 2.371552.

The 2023 breakthrough stemmed from the discovery of a “hidden loss” in the laser method, where useful blocks of data were unintentionally discarded. In the context of matrix multiplication, “blocks” refer to smaller segments that a large matrix is divided into for easier processing, and “block labeling” is the technique of categorizing these segments to identify which ones to keep and which to discard, optimizing the multiplication process for speed and efficiency. By modifying the way the laser method labels blocks, the researchers were able to reduce waste and improve efficiency significantly.

While the reduction of the omega constant might appear minor at first glance—reducing the 2020 record value by 0.0013076—the cumulative work of Duan, Zhou, and Williams represents the most substantial progress in the field observed since 2010.

“This is a major technical breakthrough,” said William Kuszmaul, a theoretical computer scientist at Harvard University, as quoted by Quanta Magazine. “It is the biggest improvement in matrix multiplication we’ve seen in more than a decade.”

While further progress is expected, there are limitations to the current approach. Researchers believe that understanding the problem more deeply will lead to the development of even better algorithms. As Zhou stated in the Quanta report, “People are still in the very early stages of understanding this age-old problem.”

So what are the practical applications? For AI models, a reduction in computational steps for matrix math could translate into faster training times and more efficient execution of tasks. It could enable more complex models to be trained more quickly, potentially leading to advancements in AI capabilities and the development of more sophisticated AI applications. Additionally, efficiency improvement could make AI technologies more accessible by lowering the computational power and energy consumption required for these tasks. That would also reduce AI’s environmental impact.

The exact impact on the speed of AI models depends on the specific architecture of the AI system and how heavily its tasks rely on matrix multiplication. Advancements in algorithmic efficiency often need to be coupled with hardware optimizations to fully realize potential speed gains. But still, as improvements in algorithmic techniques add up over time, AI will get faster.

Matrix multiplication breakthrough could lead to faster, more efficient AI models

AMD stops certifying monitors, TVs under 144 Hz for FreeSync

60 Hz is so 2015 —

120 Hz is good enough for consoles, but not for FreeSync.

AMD’s depiction of a game playing without FreeSync (left) and with FreeSync (right).

AMD announced this week that it has ceased FreeSync certification for monitors or TVs whose maximum refresh rates are under 144 Hz. Previously, FreeSync monitors and TVs could have refresh rates as low as 60 Hz, allowing for screens with lower price tags and ones not targeted at serious gaming to carry the variable refresh-rate technology.

AMD also boosted the refresh-rate requirements for its higher AdaptiveSync tiers, FreeSync Premium and FreeSync Premium Pro, from 120 Hz to 200 Hz.

Here are the new minimum refresh-rate requirements for FreeSync, which haven’t changed for laptops.

FreeSync
Laptops: max refresh rate 40-60 Hz
Monitors and TVs: horizontal resolution < 3440: max refresh rate ≥ 144 Hz

FreeSync Premium
Laptops: max refresh rate ≥ 120 Hz
Monitors and TVs: horizontal resolution < 3440: max refresh rate ≥ 200 Hz; horizontal resolution ≥ 3440: max refresh rate ≥ 120 Hz

FreeSync Premium Pro
Laptops: FreeSync Premium requirements, plus FreeSync support with HDR
Monitors and TVs: FreeSync Premium requirements, plus FreeSync support with HDR

AMD will continue supporting already-certified FreeSync displays even if they don’t meet the above requirements.

Interestingly, AMD’s minimum refresh-rate requirement for TVs now exceeds 120 Hz, the ceiling for many premium TVs, which typically top out there because the current-generation Xbox and PlayStation support maximum refresh rates of 120 frames per second (FPS).

Announcing the changes this week in a blog post, Oguzhan Andic, AMD FreeSync and Radeon product marketing manager, claimed that the changes were necessary, noting that 60 Hz is no longer “considered great for gaming.” Andic wrote that the majority of gaming monitors today are 144 Hz or higher, whereas in 2015, when FreeSync debuted, even 120 Hz was “a rarity.”

Since 2015, refresh rates have climbed ever higher, with the latest monitors targeting competitive players hitting 500 Hz, and display makers show no signs of ending the push for more speed. Meanwhile, FreeSync cemented itself as a more accessible flavor of Adaptive Sync than Nvidia’s G-Sync, which for a long time required specific hardware to run, elevating the costs of supporting products.

AMD’s announcement didn’t address requirements for refresh-rate ranges. Hopefully, OEMs will continue making FreeSync displays, especially monitors, that can still fight screen tears when framerates drop to the double digits.

The changes should also elevate the future price of entry for a monitor or TV with FreeSync. The inclusion of FreeSync has sometimes served as a differentiator for people seeking an affordable display for occasional light gaming or other media with fast-paced video playback. By committing to 144 Hz and faster screens, FreeSync could align the certification more closely with serious gaming.

Meanwhile, there is still hope for future, slower screens to get certification for variable refresh rates. In 2022, the Video Electronics Standards Association (VESA) released its MediaSync Display certification for video playback and AdaptiveSync Display certification for gaming, both with minimum refresh-rate requirements of 60 Hz. VESA developed the lengthy, detailed certifications with its dozens of members, including AMD (a display can be MediaSync/AdaptiveSync certified and/or FreeSync and/or G-Sync certified). In addition to trying to appeal to core gamers, it’s possible that AMD also sees the VESA certifications as more appropriate for slower displays.


Thousands of US kids are overdosing on melatonin gummies, ER study finds

treats or treatment? —

In the majority of cases, the excessive amounts led to only minimal side effects.

In this photo illustration, melatonin gummies are displayed on April 26, 2023, in Miami, Florida.

Federal regulators have long decried drug-containing products that appeal to kids—like nicotine-containing e-cigarette products with fruity and dessert-themed flavors or edible cannabis products sold to look exactly like name-brand candies.

But a less-expected candy-like product is sending thousands of kids to emergency departments in the US in recent years: melatonin, particularly in gummy form. According to a new report from researchers at the Centers for Disease Control and Prevention, use of the over-the-counter sleep-aid supplement has skyrocketed in recent years—and so have calls to poison control centers and visits to emergency departments.

Melatonin, a neurohormone that regulates the sleep-wake cycle, has become very popular for self-managing conditions like sleep disorders and jet lag—even in children. Use of melatonin in adults rose from 0.4 percent in 1999–2000 to 2.1 percent in 2017–2018. But the more people have these tempting, often candy-like supplements in their homes, the more risk that children will get ahold of them unsupervised. Indeed, the rise in use led to a 530 percent increase in poison control center calls and a 420 percent increase in emergency department visits for accidental melatonin ingestion in infants and kids between 2009 and 2020.

And the problem is ongoing. In the new study, researchers estimate that between 2019 and 2022, nearly 11,000 kids went to the emergency department after accidentally gulping down melatonin supplements. Nearly all the cases involved a solid form of melatonin, with about 47 percent identified specifically as gummies and 49 percent listed as an unspecified solid form that likely includes gummies. These melatonin-related emergency visits made up 7 percent of all emergency visits by infants and kids who ingested medications unsupervised.

The candy-like appeal of melatonin products seems evident in the ages of kids rushed to emergency departments. Most emergency department visits for unsupervised medicine exposures are in infants and toddlers ages 1 to 2, but for melatonin-related visits, half were ages 3 to 5. The researchers noted that among the emergency visits with documentation, about three-quarters of the melatonin products involved came out of bottles, suggesting that the young kids managed to open the bottles themselves or that the bottles weren’t properly closed. Manufacturers are not required to use child-resistant packaging on melatonin supplements.

Luckily, most of the cases had only mild to no effects. Still, about 6.5 percent of the cases—a little over 700 children—were hospitalized from their melatonin binge. A 2022 study led by researchers in Michigan found that among poison control center calls for children consuming melatonin, the reported symptoms involved the gastrointestinal, cardiovascular, or central nervous systems. For children who are given supervised doses of melatonin to improve sleep, known side effects include drowsiness, increased bedwetting or urination in the evening, headache, dizziness, and agitation.

According to the National Center for Complementary and Integrative Health—part of the National Institutes of Health—supervised use of melatonin in children appears to be safe for short-term use. But there’s simply not much data on use in children, and the long-term effects of regular use or acute exposures are unknown. The NCCIH cautions: “Because melatonin is a hormone, it’s possible that melatonin supplements could affect hormonal development, including puberty, menstrual cycles, and overproduction of the hormone prolactin, but we don’t know for sure.”

For now, the authors of the new study say the data “highlights the continued need to educate parents and other caregivers about the importance of keeping all medications and supplements (including gummies) out of children’s reach and sight.”


GM starts selling Chevy Blazer EVs again, cuts prices up to $6,520

still no carplay, though —

Bad software, charging problems led Chevy to stop selling the Blazer EV in December.

Chevrolet suspended sales of the Blazer EV for three months as a result of software and charging problems.

Jonathan Gitlin

The Chevrolet Blazer EV is back on sale today, more than three months after the automaker took its newest electric vehicle off the market due to a litany of software problems. It has also cut Blazer EV prices, and the company says that the EV now qualifies for the full $7,500 IRS clean vehicle tax credit.

The Blazer EV was the first Chevrolet-badged EV using General Motors’ new Ultium battery platform. Introduced at CES in 2022, Chevrolet actually started delivering the first Blazer EVs last August. But not very many of them—by the end of the year, only 463 Blazer EVs had found homes.

Ars drove the Blazer EV in December and came away unconvinced. On the road, it suffered from excessive noise, vibration, and harshness (NVH), and we ran into software bugs with the Ultifi infotainment system. We weren’t the only ones to experience problems—charging bugs left another journalist stranded on a road trip with a Blazer EV that refused to charge.

Three days later, GM issued a stop-sale order for the car.

Now, the Blazer EV is on sale again and with what Chevrolet is calling “significant software updates.” Some of these are to the UI, but those charging problems should be solved now as well.

The other problem with the Blazer EV, beyond quality issues, was its rather high price. The promised $44,995 version was ditched before the car was launched; instead, the cheapest Blazer EV on sale was the $56,715 all-wheel drive LT, and at $60,215 the all-wheel drive Blazer EV RS was more expensive than the closely related Cadillac Lyriq.

To help tempt customers back into the showroom, Chevrolet has also given the Blazer EV a hefty price cut. The Blazer EV LT now costs $50,195, $6,520 less than at launch. The Blazer EV RS got $5,620 cheaper and now starts at $54,595.


Google says the AI-focused Pixel 8 can’t run its latest smartphone AI models

we’re all trying to find the guy who did this —

Gemini Nano can’t run on the smaller Pixel 8 due to mysterious “hardware limitations.”

The bigger Pixel 8 Pro gets the latest AI features. The smaller model does not.

Google

If you believe Google’s marketing hype, AI in a phone is really, really important, the best AI is Google’s, and the best place to get that AI is Google’s flagship smartphone, the Pixel 8. We’re five months removed from the launch of the Pixel 8, and that doesn’t seem like a justifiable position anymore: Google says its latest AI models can’t run on the Pixel 8.

Google dropped that news in a Mobile World Congress wrap-up video that was spotted by Mishaal Rahman. At the end of the show in a Q&A session, Googler Terence Zhang, a member of the Gemini-on-Android team, said “[Gemini] Nano will not be coming to the Pixel 8 because of some hardware limitations. It’s currently on the Pixel 8 Pro and very recently available on the Samsung S24 family. It’ll be coming to more high-end devices in the near future.”

That is a wild statement. Gemini is Google’s latest AI model, and the company made a big deal of its launch last month. Gemini comes in a few different sizes, and the smallest “Nano” size is specifically designed to run on smartphones as a much-hyped “on-device AI.” The Pixel 8 and Pixel 8 Pro are Google’s flagship smartphones. Google designed the phone, the chip, and the AI model, and somehow it can’t make these things play nice together?

Adding to the weirdness is that Gemini Nano can run on the Pixel 8 Pro but not the smaller Pixel 8 due to “hardware limitations.” What limitations would those be, exactly? The two phones have the exact same Google Tensor SoC. They run the same software. The main differences between the two phones are screen size (6.7 inches versus 6.2), battery size, a different camera loadout, and 8GB of RAM versus 12GB. RAM is the only known difference you can point to that could create a processing limitation, but Gemini Nano also runs on the Galaxy S24 series, where the base model has 8GB of RAM. RAM being the issue would mean Samsung phones are somehow more RAM efficient than Pixel phones, which is hard to believe. If the Pixel 8 Pro Tensor 3 and Pixel 8 Tensor 3 are different somehow, that’s not on the spec sheet.

Five months ago at the Pixel 8 launch event, Google painted a very different picture of the Pixel 8 series: “I’m excited to introduce you to the next evolution of AI in your hand, Google Pixel 8 Pro and Google Pixel 8. Our latest phones bring together so many technologies from across Google. They’re the first phones to use our latest Google Tensor chip. They include the very best Android experience, first-of-its-kind camera experiences, and the latest AI advancements from Google.” Both devices feature the custom Google Tensor 3 SoC that Google claimed was “designed specifically to bring Google’s AI breakthroughs directly to Pixel users and show the world what’s possible.” This custom Google AI-focused design was supposed to deliver “unbelievably helpful experiences that no other phone can.”

Google’s “Compare” page does not clearly communicate to customers what they’re buying.

Google

When you launch two phones at once, it’s always hard to distinguish the actual differences between the two models. Sometimes the devices get talked about in the plural, while other times “Pixel 8” is used to represent both. Sometimes the more expensive device is singled out for no reason other than it’s the flagship. Between the hour-long presentation and the private press pre-briefing Ars took part in, “What’s the difference?” was a well-worn question, and you would expect it to be answered clearly. Usually the go-to delineator is the spec sheet, which should spell out in clear language what you’re actually buying. The Google Store has a compare page where you can pit the Pixel 8 and Pixel 8 Pro directly against each other, and nothing there spells out a difference in AI processing capabilities or in the Tensor chips.

In the case of the Pixel 8 and Pixel 8 Pro, Google wasn’t clear enough in its communication at launch. Re-watching the launch presentation today, armed with the knowledge that there is some dramatic difference in AI processing capability, you can pick up on language, like mentions of the “Pixel 8 Pro’s on-device LLM,” that now reads as a declaration of Pro-exclusive AI capabilities. But it wasn’t clear at the time.

As a consumer, it’s hard not to feel misled, and it’s embarrassing for Google, but to practically care about this, you’d need to know what the heck “Gemini Nano” actually does and why you should care about it. That’s a hard question to answer. Google has a page up here detailing some of the features Gemini Nano powers on the Pixel 8 Pro, but a feature could also be powered by different models on different devices. For what it’s worth, the rundown lists a “summarize” feature for the Google Recorder app and “smart reply” in Gboard. Plenty of Google apps already have a “smart reply” feature without Gemini Nano. Third-party developers can also plug into the onboard Gemini Nano model for their apps, but it’s hard to imagine anyone doing that with such limited device support.

The other option is to just forget about doing all of this AI stuff on-device and just do it in the cloud. As a great example of this, none of this Gemini Nano stuff has anything to do with the Google Gemini Chatbot, which all runs in the cloud. A big question is what this will mean for the smaller Google Pixel 8 going forward. Google promised seven years of OS updates for the new Pixels, and to already be stripping down features due to “hardware limitations” after five months is a disappointment.


Apple backtracks, reinstates Epic Games’ iOS developer account in Europe

Never mind —

After EU began investigation, Apple repaves path for Fortnite on European iOS.

Artist’s conception of Epic Games celebrating their impending return to iOS in Europe.

Epic Games

Apple has agreed to reinstate Epic Games’ Swedish iOS developer account just days after Epic publicized Apple’s decision to rescind it. The move once again paves the way for Epic’s plans to release a sideloadable version of the Epic Games Store and Fortnite on iOS devices in Europe.

“Following conversations with Epic, they have committed to follow the rules, including our DMA policies,” Apple said in a statement provided to Ars Technica. “As a result, Epic Sweden AB has been permitted to re-sign the developer agreement and accepted into the Apple Developer Program.”

Apple’s new statement is in stark contrast to its position earlier this week when it cited “Epic’s egregious breach of its contractual obligations to Apple” as a reason why it couldn’t trust Epic’s commitments to stand by any new developer agreement. In correspondence with Epic shared by the Fortnite maker Wednesday, Apple executive Phil Schiller put an even finer point on it:

Your colorful criticism of our [Digital Markets Act] compliance plan, coupled with Epic’s past practice of intentionally violating contractual provisions with which it disagrees, strongly suggest that Epic Sweden does not intend to follow the rules… Developers who are unable or unwilling to keep their promises can’t continue to participate in the Developer Program.

A new regulatory world

Apple’s quick turnaround comes just a day after the European Commission said it was opening an investigation into Apple’s conduct under the new Digital Markets Act and other potentially applicable European regulations. That investigation could have entailed hefty fines of up to “10 percent of the company’s total worldwide turnover” if Apple was found to be in violation.

“We have the DMA coming into compliance [Thursday], so the demand of compliance is… listen, you need to be able to carry another app store, for instance, and you cannot put in place a fee structure that sort of disables the benefits of the DMA for all the market participants,” European Commission Executive Vice President Margrethe Vestager told Bloomberg TV Tuesday.

In an update on its official blog, Epic linked Apple’s decision to “public backlash for retaliation” and said the whole affair “sends a strong signal to developers that the European Commission will act swiftly to enforce the Digital Markets Act and hold gatekeepers accountable. We are moving forward as planned to launch the Epic Games Store and bring Fortnite back to iOS in Europe. Onward!”

In a social media post celebrating Apple’s move, Epic CEO Tim Sweeney said that “the DMA just had its first major victory” and called the move “a big win for European rule of law, for the European Commission, and for the freedom of developers worldwide to speak up.”

Apple’s apparent retreat on the issue preempts what would have likely been a lengthy legal and public relations battle between the two corporate giants, much like the one resulting from Epic’s 2020 decision to violate Apple’s developer agreement by adding third-party payment options to Fortnite on iOS. But that battle, which played out primarily in a series of US courts, differed in many particulars from the new conflict that was developing under the new enforcement regime surrounding Europe’s DMA rules.

Epic said last month that it plans to launch the Epic Games Store on iOS sometime in 2024.


Microsoft says Kremlin-backed hackers accessed its source and internal systems

THE PLOT THICKENS —

Midnight Blizzard is now using stolen secrets in follow-on attacks against customers.


Microsoft said that Kremlin-backed hackers who breached its corporate network in January have expanded their access since then in follow-on attacks that are targeting customers and have compromised the company’s source code and internal systems.

The intrusion, which the software company disclosed in January, was carried out by Midnight Blizzard, the name used to track a hacking group widely attributed to the Federal Security Service, a Russian intelligence agency. Microsoft said at the time that Midnight Blizzard gained access to senior executives’ email accounts for months after first exploiting a weak password in a test device connected to the company’s network. Microsoft went on to say it had no indication any of its source code or production systems had been compromised.

Secrets sent in email

In an update published Friday, Microsoft said it uncovered evidence that Midnight Blizzard had used the information it gained initially to further push into its network and compromise both source code and internal systems. The hacking group—which is tracked under multiple other names, including APT29, Cozy Bear, CozyDuke, The Dukes, Dark Halo, and Nobelium—has been using the proprietary information in follow-on attacks, not only against Microsoft but also its customers.

“In recent weeks, we have seen evidence that Midnight Blizzard is using information initially exfiltrated from our corporate email systems to gain, or attempt to gain, unauthorized access,” Friday’s update said. “This has included access to some of the company’s source code repositories and internal systems. To date we have found no evidence that Microsoft-hosted customer-facing systems have been compromised.”

In January’s disclosure, Microsoft said Midnight Blizzard used a password-spraying attack to compromise a “legacy non-production test tenant account” on the company’s network. Those details meant that the account hadn’t been removed after it was decommissioned, a practice that’s considered essential for securing networks. They also meant that the password used to log in to the account was weak enough to be guessed by trying a steady stream of commonly used passwords against it, a technique known as password spraying.
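Password spraying works in the opposite direction from classic brute force: instead of hammering one account with many passwords, the attacker tries a few common passwords across many accounts, staying under per-account lockout thresholds. Here is a minimal, hypothetical sketch of how a defender might flag that pattern in authentication logs; the log format, addresses, and threshold are illustrative assumptions, not Microsoft's actual telemetry:

```python
from collections import defaultdict

# Hypothetical failed-login events: (source_ip, username)
failed_logins = [
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("203.0.113.7", "erin"),
    ("198.51.100.2", "alice"), ("198.51.100.2", "alice"),  # ordinary mistyped password
]

def spray_suspects(events, min_distinct_users=5):
    """Flag source IPs whose failed logins span many distinct accounts."""
    users_by_ip = defaultdict(set)
    for ip, user in events:
        users_by_ip[ip].add(user)
    return [ip for ip, users in users_by_ip.items()
            if len(users) >= min_distinct_users]

print(spray_suspects(failed_logins))  # ['203.0.113.7']
```

The one-account, repeated-failure pattern from the second address is what lockout policies catch; spraying evades them, which is why detection keys on the breadth of accounts per source rather than attempts per account.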

In the months since, Microsoft said Friday, Midnight Blizzard has been exploiting the information it obtained earlier in follow-on attacks that have stepped up an already high rate of password spraying.

Unprecedented global threat

Microsoft officials wrote:

It is apparent that Midnight Blizzard is attempting to use secrets of different types it has found. Some of these secrets were shared between customers and Microsoft in email, and as we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures. Midnight Blizzard has increased the volume of some aspects of the attack, such as password sprays, by as much as 10-fold in February, compared to the already large volume we saw in January 2024.

Midnight Blizzard’s ongoing attack is characterized by a sustained, significant commitment of the threat actor’s resources, coordination, and focus. It may be using the information it has obtained to accumulate a picture of areas to attack and enhance its ability to do so. This reflects what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.

The attack began in November and wasn’t detected until January. Microsoft said then that the breach allowed Midnight Blizzard to monitor the email accounts of senior executives and security personnel, raising the possibility that the group was able to read sensitive communications for as long as three months. Microsoft said one motivation for the attack was for Midnight Blizzard to learn what the company knew about the threat group. Microsoft said at the time and reiterated again Friday that it had no evidence the hackers gained access to customer-facing systems.

Midnight Blizzard is among the most prolific APTs, short for advanced persistent threats, the term used for skilled, well-funded hacking groups that are mostly backed by nation-states. The group was behind the SolarWinds supply-chain attack that led to the hacking of the US Departments of Energy, Commerce, Treasury, and Homeland Security and about 100 private-sector companies.

Last week, the UK National Cyber Security Centre (NCSC) and international partners warned that in recent months, the threat group has expanded its activity to target aviation, education, law enforcement, local and state councils, government financial departments, and military organizations.
