

OpenWrt, now 20 years old, is crafting its own future-proof reference hardware

It’s time for a new blue box —

There are, as you might expect, a few disagreements about what’s most important.

Linksys WRT54G

Lacking an image of the proposed reference hardware from the OpenWrt group, let us gaze upon where this all started: inside a device that tried to quietly use open source software without crediting or releasing it.

Jim Salter

OpenWrt, the open source firmware that sprang from Linksys’ use of open source code in its iconic WRT54G router and the company’s subsequent release of that code, is 20 years old this year. To keep the project going, lead developers have proposed creating a “fully upstream supported hardware design,” one that would eliminate the need to handle “binary blobs” in modern router hardware and let DIY router enthusiasts forge their own path.

OpenWrt project members, 13 of whom signed off on this hardware proposal, are keeping the “OpenWrt One” simple while including “some nice features we believe all OpenWrt supported platforms should have,” among them “almost unbrickable” low-level firmware, an on-board real-time clock with a battery backup, and USB-PD power. The price should be under $100, with the schematics and code publicly available.

But OpenWrt will not be producing or selling these boards itself, “for a ton of reasons.” The group is looking to the Banana Pi makers to build and distribute a fitting device, with every unit sold generating a donation to the Software Freedom Conservancy earmarked for OpenWrt. That money could then be used for hosting expenses, or “maybe an OpenWrt summit.”

The proposal tries to answer some questions about its design choices. There are two flash chips on the board to allow for both a main loader and a write-protected recovery image. There’s no USB 3.0 because all the USB and PCIe buses are shared on the board. And there’s such an emphasis on a battery-backed RTC because “we believe there are many things a Wi-Fi … device should have on-board by default.”
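That dual-flash arrangement is easiest to picture as boot-time fallback logic: try the writable main loader, and fall back to the write-protected recovery image if the main one fails validation. Here’s a minimal sketch of the idea in Python; the image contents, checksum scheme, and function names are hypothetical stand-ins for illustration, not OpenWrt One’s actual firmware:

```python
import hashlib

# Hypothetical contents of the two flash chips. The recovery chip would be
# write-protected in hardware, so a botched flash of "main" can never corrupt
# it -- which is what makes the design "almost unbrickable."
flash = {
    "main": {"image": b"corrupted main image", "sha256": "0" * 64},
    "recovery": {
        "image": b"known-good recovery image",
        "sha256": hashlib.sha256(b"known-good recovery image").hexdigest(),
    },
}

def image_valid(slot: str) -> bool:
    """Trust an image only if its checksum matches what was recorded."""
    entry = flash[slot]
    return hashlib.sha256(entry["image"]).hexdigest() == entry["sha256"]

def boot() -> str:
    # Prefer the writable main loader; fall back to the protected recovery image.
    for slot in ("main", "recovery"):
        if image_valid(slot):
            return f"booting from {slot}"
    return "no bootable image"  # unreachable if recovery is truly immutable

print(boot())  # -> "booting from recovery" (the main image above is deliberately bad)
```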

But members of the project’s forum have more questions, some of them beyond the scope of what OpenWrt is promising. Some want to see a device that resembles the blue boxes of old, with four or five Ethernet ports built in. Others are asking about the lack of PoE support, or of USB 3.0 for network-attached drives. Some are wondering why the proposed device includes NVMe storage at all. And quite a few are asking why the device pairs a 1Gbps port with a 2.5Gbps port, given that this means anyone with Internet faster than 1Gbps will be throttled, since the 2.5Gbps port will likely be used for wireless output.

There is no expected release date, though it’s noted that it’s the “first” community-driven reference hardware.

OpenWrt, which has existed in parallel with the DD-WRT project that sprang from the same firmware moment, powers a number of custom-made routers. It and other open source router firmware faced an uncertain future in the mid-2010s, when Federal Communications Commission rules, or at least manufacturers’ interpretations of them, made such firmware seem potentially illegal. Because open firmware often allowed for pushing wireless radios beyond their licensed radio frequency parameters, firms like TP-Link blocked it, while Linksys (at that point owned by Belkin) continued to allow it. In 2020, OpenWrt patched a code-execution exploit made possible by unencrypted update channels.



iOS 17.3 adds multiple features originally planned for iOS 17

New Features —

macOS 14.3, watchOS 10.3, and tvOS 17.3 were also released.

An iPhone sits on a wood table

The iPhone 15 Pro.

Samuel Axon

Apple yesterday released iOS and iPadOS 17.3 as well as watchOS 10.3, tvOS 17.3, and macOS Sonoma 14.3 for all supported devices.

iOS 17.3 primarily adds collaborative playlists in Apple Music, and what Apple calls “Stolen Device Protection.” Collaborative playlists have been on a bit of a journey; they were promised as part of iOS 17, then added in the beta of iOS 17.2, but removed before that update went live. Now they’re finally reaching all users.

When enabled, Stolen Device Protection requires Face ID or Touch ID authentication “with no passcode fallback” for some sensitive actions on the phone.

And a related feature called Security Delay requires one use of Face ID or Touch ID, then a full hour’s wait, then another biometric authentication before certain particularly important actions can be performed, like changing the device’s passcode.
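Described as a flow, the feature amounts to a small state machine: biometric check, mandatory delay, second biometric check. Here’s a rough sketch of that logic as described, with the hour shortened for demonstration; this is an illustration of the behavior, not Apple’s implementation:

```python
import time

SECURITY_DELAY_SECONDS = 2  # the real delay is a full hour (3600 s)

def biometric_check() -> bool:
    """Stand-in for Face ID / Touch ID; note there is no passcode fallback."""
    return True  # assume the device's owner is present

def perform_protected_action(action: str) -> bool:
    if not biometric_check():                 # first authentication
        return False
    print(f"{action}: first check passed; waiting out the security delay")
    time.sleep(SECURITY_DELAY_SECONDS)        # mandatory waiting period
    if not biometric_check():                 # second, post-delay authentication
        return False
    print(f"{action}: done")
    return True

perform_protected_action("change device passcode")
```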

Other iOS 17.3 additions include support for AirPlay in participating hotels, an improved view for seeing the warranty status of all your devices, a new Unity wallpaper honoring Black History Month, and “crash detection optimizations.”

As is so often the case for these simultaneous operating system updates from Apple, iOS is the most robust. macOS 14.3 also adds the collaborative playlist feature and the AppleCare & Warranty Settings panel, but that’s about it as far as user-facing additions.

watchOS 10.3 adds a new 2024 Black Unity face that is meant to pair with a new watchband by the same name. And tvOS 17.3 simply reintroduces the previously removed iTunes Movie and TV Show Wishlist feature.

iOS 17.3 release notes

Stolen Device Protection

  • Stolen Device Protection increases security of iPhone and Apple ID by requiring Face ID or Touch ID with no passcode fallback to perform certain actions
  • Security Delay requires Face ID or Touch ID, an hour wait, and then an additional successful biometric authentication before sensitive operations like changing device passcode or Apple ID password can be performed

Lock Screen

  • New Unity wallpaper honors Black history and culture in celebration of Black History Month

Music

  • Collaborate on playlists allows you to invite friends to join your playlist and everyone can add, reorder, and remove songs
  • Emoji reactions can be added to any track in a collaborative playlist

This update also includes the following improvements:

  • AirPlay hotel support lets you stream content directly to the TV in your room in select hotels
  • AppleCare & Warranty in Settings shows your coverage for all devices signed in with your Apple ID
  • Crash detection optimizations (all iPhone 14 and iPhone 15 models)

macOS 14.3 Sonoma release notes

  • Collaborate on playlists in Apple Music allows you to invite friends to join your playlist and everyone can add, reorder, and remove songs
  • Emoji reactions can be added to any track in a collaborative playlist in Apple Music
  • AppleCare & Warranty in Settings shows your coverage for all devices signed in with your Apple ID



Urban agriculture’s carbon footprint can be worse than that of large farms

Greening your greens —

Saving on the emissions associated with shipping doesn’t guarantee a lower footprint.

Lots of plants in the foreground, and dense urban buildings in the background

A few years back, the Internet was abuzz with the idea of vertical farms running down the sides of urban towers, the thinking being that growing crops where they’re actually consumed could eliminate the carbon emissions involved with shipping plant products long distances. But lifecycle analyses of those systems, which require a lot of infrastructure and energy, suggest they’d have a hard time doing better than more traditional agriculture.

But those systems represent only a small fraction of urban agriculture as it’s practiced. Most urban farming is a mix of local cooperative gardens and small-scale farms located within cities. And a lot less is known about the carbon footprint of this sort of farming. Now, a large international collaboration has worked with a number of these farms to get a handle on their emissions in order to compare those to large-scale agriculture.

The results suggest that urban farming can have a lower impact, but it requires choosing the right crops and a long-term commitment to sustainability.

Tracking crops

Figuring out the carbon footprint of urban farms is a challenge because it involves tracking all the inputs, from infrastructure to fertilizers, as well as the productivity of the farm. Many urban farms, however, are nonprofits, cooperatives, and/or staffed primarily by volunteers, so detailed record-keeping can be difficult. To get around this, the researchers worked with a number of individual farms in France, Germany, Poland, the UK, and the US in order to get accurate accounts of materials and practices.

Data from large-scale agriculture for comparison is widely available, and it includes factors like transport of the products to consumers. The researchers used data from the same countries as the urban farms.

On average, the results aren’t good for urban agriculture. An average serving from an urban farm was associated with 0.42 kg of carbon dioxide equivalents. By contrast, traditional produce resulted in emissions of about 0.07 kg per serving, a sixth of the urban figure.

But that average obscures a lot of nuance. Of the 73 urban farms studied, 17 outperformed traditional agriculture by this measure. And, if the single highest-emitting farm was excluded from the analysis, the median of the urban farms ended up right around that 0.07 kg per serving.

All of this suggests the details of urban farming practices make a big difference. One thing that matters is the crop. Tomatoes tend to be fairly resource-intensive to grow and need to be shipped quickly in order to be consumed while ripe. Here, urban farms came in at 0.17 kg of CO2 equivalents per serving, while conventional farming emitted 0.27 kg per serving.
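A quick check of those per-serving figures, using the numbers as reported above:

```python
# kg of CO2 equivalents per serving, as reported in the study
urban_avg, conventional_avg = 0.42, 0.07
urban_tomato, conventional_tomato = 0.17, 0.27

print(f"Urban average: {urban_avg / conventional_avg:.0f}x conventional")          # 6x
print(f"Urban tomatoes: {urban_tomato / conventional_tomato:.0%} of conventional")  # 63%
```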

Difference-makers

One clear finding was that the intentions of those running the farms didn’t matter much. Organizations that had a mission of reducing environmental impact, or had taken steps like installing solar panels, were no better at keeping their emissions low.

The researchers note two practical reasons for the differences they saw. One is infrastructure, the single largest source of carbon emissions at small sites. This includes things like buildings, raised beds, and compost handling. The best sites the researchers saw did a lot of upcycling of things like construction waste into structures like the surrounds for raised beds.

Infrastructure in urban sites is also a challenge because of the often intense pressure on land, which can mean gardens have to relocate. This can shorten the lifetime of infrastructure and increase its environmental impact.

Another major factor was the use of urban waste streams for the consumables involved with farming. Composting from urban waste essentially eliminated fertilizer use (it was only 5 percent of the rate of conventional farming). Here, practices matter a great deal, as some composting techniques allow the material to become oxygen-free, which results in the anaerobic production of methane. Rainwater use also made a difference; in one case, the carbon impact of water treatment and distribution accounted for over two-thirds of an urban farm’s emissions.

These findings suggest that careful planning could make urban farms effective at avoiding some of the carbon emissions of conventional agriculture. This would involve figuring out best practices for infrastructure and consumables, as well as targeting crops that carry high carbon emissions when grown on conventional farms.

But any negatives are softened by a couple of additional considerations. One is that even the worst-performing produce seen in this analysis is far better in terms of carbon emissions than eating meat. The researchers also point out that many of the cooperative gardens provide a lot of social functions—things like after-school programs or informal classes—that can be difficult to put an emissions price on. Maximizing these could definitely boost the societal value of the operations, even if it doesn’t have a clear impact on the environment.

Nature Cities, 2024. DOI: 10.1038/s44284-023-00023-3  (About DOIs).



Novel camera system lets us see the world through eyes of birds and bees

A fresh perspective —

It captures natural animal-view moving images with over 90 percent accuracy.

A new camera system and software package allows researchers and filmmakers to capture animal-view videos. Credit: Vasas et al., 2024.

Who among us hasn’t wondered how animals perceive the world, often quite differently than humans do? There are various methods by which scientists, photographers, filmmakers, and others attempt to reconstruct, say, the colors that a bee sees as it hunts for a flower ripe for pollinating. Now an interdisciplinary team has developed an innovative camera system that is faster and more flexible in terms of lighting conditions than existing systems, allowing it to capture moving images of animals in their natural setting, according to a new paper published in the journal PLoS Biology.

“We’ve long been fascinated by how animals see the world. Modern techniques in sensory ecology allow us to infer how static scenes might appear to an animal,” said co-author Daniel Hanley, a biologist at George Mason University in Fairfax, Virginia. “However, animals often make crucial decisions on moving targets (e.g., detecting food items, evaluating a potential mate’s display, etc.). Here, we introduce hardware and software tools for ecologists and filmmakers that can capture and display animal-perceived colors in motion.”

Per Hanley and his co-authors, different animal species possess unique sets of photoreceptors that are sensitive to a wide range of wavelengths, from ultraviolet to the infrared, depending on each animal’s specific ecological needs. Some animals can even detect polarized light. So every species will perceive color a bit differently. Honeybees and birds, for instance, are sensitive to UV light, which isn’t visible to human eyes. “As neither our eyes nor commercial cameras capture such variations in light, wide swaths of visual domains remain unexplored,” the authors wrote. “This makes false color imagery of animal vision powerful and compelling.”

However, the authors contend that current techniques for producing false color imagery can’t quantify the colors animals see while in motion, an important factor since movement is crucial to how different animals communicate and navigate the world around them via color appearance and signal detection. Traditional spectrophotometry, for instance, relies on object-reflected light to estimate how a given animal’s photoreceptors will process that light, but it’s a time-consuming method, and much spatial and temporal information is lost.

Peacock feathers through eyes of four different animals: (a) a peafowl; (b) humans; (c) honeybees; and (d) dogs. Credit: Vasas et al., 2024.

Multispectral photography takes a series of photos across various wavelengths (including UV and infrared) and stacks them into different color channels to derive camera-independent measurements of color. This method trades some accuracy for better spatial information and is well-suited for studying animal signals, for instance, but it only works on still objects, so temporal information is lacking.

That’s a shortcoming because “animals present and perceive signals from complex shapes that cast shadows and generate highlights,” the authors wrote. “These signals vary under continuously changing illumination and vantage points. Information on this interplay among background, illumination, and dynamic signals is scarce. Yet it forms a crucial aspect of the ways colors are used, and therefore perceived, by free-living organisms in natural settings.”

So Hanley and his co-authors set out to develop a camera system capable of producing high-precision animal-view videos that capture the full complexity of visual signals as they would be perceived by an animal in a natural setting. They combined existing methods of multispectral photography with new hardware and software designs. The camera records video in four color channels simultaneously (blue, green, red, and UV). Once that data has been processed into “perceptual units,” the result is an accurate video of how a colorful scene would be perceived by various animals, based on what we know about which photoreceptors they possess. The team’s system predicts the perceived colors with 92 percent accuracy. The cameras are commercially available, and the software is open source so that others can freely use and build on it.
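Conceptually, that processing step is a per-pixel linear mapping from the camera’s four channels onto a given species’ photoreceptor responses. A toy version of the idea in Python follows; the sensitivity matrix here is invented for illustration, whereas the real system derives its weights from measured photoreceptor spectral sensitivities:

```python
import numpy as np

# A frame from the camera: height x width x 4 channels (blue, green, red, UV);
# random data stands in for real footage here.
frame = np.random.rand(480, 640, 4)

# Hypothetical 3x4 matrix mapping camera channels onto a honeybee's three
# photoreceptor types (UV, blue, green); real values come from spectral data.
bee_sensitivity = np.array([
    [0.05, 0.00, 0.00, 0.95],  # UV receptor: driven mostly by the UV channel
    [0.80, 0.15, 0.00, 0.05],  # blue receptor
    [0.10, 0.85, 0.05, 0.00],  # green receptor
])

# Per-pixel linear transform into "perceptual units" for the modeled animal.
bee_view = frame @ bee_sensitivity.T   # -> height x width x 3
print(bee_view.shape)                  # (480, 640, 3), displayable as false color
```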

The video at the top of this article depicts the colors perceived by honeybees watching fellow bees foraging and interacting (even fighting) on flowers—an example of the camera system’s ability to capture behavior in a natural setting. Below, Hanley applies UV-blocking sunscreen in the field. His light-toned skin looks roughly the same in human vision and honeybee false color vision “because skin reflectance increases progressively at longer wavelengths,” the authors wrote.



OpenAI opens the door for military uses but maintains AI weapons ban

Skynet deferred —

Despite new Pentagon collab, OpenAI won’t allow customers to “develop or use weapons” with its tools.

The OpenAI logo over a camouflage background.

On Tuesday, ChatGPT developer OpenAI revealed that it is collaborating with the United States Defense Department on cybersecurity projects and exploring ways to prevent veteran suicide, reports Bloomberg. OpenAI disclosed the collaboration during an interview with the news outlet at the World Economic Forum in Davos. The AI company recently modified its policies to allow certain military applications of its technology while maintaining prohibitions against using it to develop weapons.

According to Anna Makanju, OpenAI’s vice president of global affairs, “many people thought that [a previous blanket prohibition on military applications] would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.” OpenAI removed terms from its service agreement that previously blocked AI use in “military and warfare” situations, but the company still upholds a ban on its technology being used to develop weapons or to cause harm or property damage.

Under the “Universal Policies” section of OpenAI’s Usage Policies document, section 2 says, “Don’t use our service to harm yourself or others.” The prohibition includes using its AI products to “develop or use weapons.” Changes to the terms that removed the “military and warfare” prohibitions appear to have been made by OpenAI on January 10.

The shift in policy appears to align OpenAI more closely with the needs of various governmental departments, including the possibility of preventing veteran suicides. “We’ve been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure,” Makanju said in the interview. “We’ve been exploring whether it can assist with (prevention of) veteran suicide.”

The efforts mark a significant change from OpenAI’s original stance on military partnerships, Bloomberg says. Meanwhile, Microsoft Corp., a large investor in OpenAI, already has an established relationship with the US military through various software contracts.



Harmonix is ending Rock Band DLC releases after 16 years, ~2,800 songs

Don’t look back in anger —

Previously purchased songs will still be playable via Rock Band 4.

After 16 (nearly unbroken) years of regular DLC releases, Rock Band’s avatars haven’t aged a day.

Here at Ars Technica, we remember covering Rock Band’s weekly DLC song releases way back in 2007, when such regular content drops were still a new concept for a rhythm game. Now, Harmonix has announced the last of the series’ roughly 2,800 downloadable releases will finally come on January 25, marking the end of a nearly 16-year era in music gaming history.

Previously purchased DLC songs will still be playable in Rock Band 4, Harmonix’s Daniel Sussman writes in an announcement post. Rock Band 4 live services, including online play, will also continue as normal, after online game modes for earlier Rock Band games were finally shut down in late 2022.

“Taking a longer look back, I see the Rock Band DLC catalog as a huge achievement in persistence and commitment,” Sussman writes. “Over the years we’ve cleared, authored and released nearly 3,000 songs as DLC and well over 3,000 if you include all the game soundtracks. That’s wild.”

A long-lasting content commitment

You’d be forgiven for not realizing that Harmonix has kept up its regular releases of downloadable playable Rock Band songs to this day. While we were big fans of 2015’s Rock Band 4, the Xbox One and PS4 release generally failed to reignite the ’00s mania for plastic instruments that made both Guitar Hero and Rock Band into billion-dollar franchises during their heyday.

Yet, Harmonix has still been quietly releasing one to three new downloadable Rock Band 4 tracks to faithful fans every single week since the game’s release over eight years ago. Before that, Harmonix had kept a similar weekly release schedule for earlier Rock Band titles going back to 2007, broken up only by a 21-month gap starting in April 2013.

Those regular releases were key to maintaining interest and longevity in the Rock Band titles beyond the dozens of songs on the game discs. For a couple of bucks per song, players could customize their in-game soundtracks with thousands of tracks spanning hundreds of indie and mainstream acts across all sorts of genres. And even after all that time, the last year of newly released DLC has still included some absolute bangers from major groups like Steely Dan, Linkin Park, and Foo Fighters.


A couple of folks absolutely getting down to Rock Band 2 at that game’s 2008 launch party at LA’s Orpheum Theatre.

Getty Images

Harmonix also deserves credit for making its DLC cross-compatible across multiple games and systems. That copy of The Police’s “Roxanne” that you bought to play on your Xbox 360 in 2007 can still be re-downloaded and played on Rock Band 4 via your Xbox Series X to this day (Switch and PlayStation 5 owners are less lucky, however). And for songs that were trapped on earlier game discs, Harmonix went out of its way to offer song export options that let you transfer that content forward to newer Rock Band titles (with the notable exception of The Beatles: Rock Band, whose songs remain trapped in that standalone game).

Compare that to the Guitar Hero franchise, which also relaunched in 2015 as the online-focused Guitar Hero Live. When Activision shut down the game’s “Guitar Hero TV” service in 2018, 92 percent of the new game’s playable songs became instantly inaccessible, leaving only 42 “on-disc” songs to play.

What’s next?

While official support for Rock Band DLC is finally ending, the community behind Clone Hero recently hit an official Version 1.0 release for its PC-based rhythm game, which is compatible with many guitars, drums, keyboards, gamepads, and adapters used in Rock Band and other console rhythm games (microphones excluded). While that game doesn’t come with anything like Rock Band‘s list of officially licensed song content, it’s not hard to find a bevy of downloadable, fan-made custom Clone Hero tracks with a little bit of searching.


We might not get any more Rock Band DLC, but we do get… this.

Epic Games

Since its acquisition by Epic in 2021, Harmonix has been working on “Fortnite Festival,” the incredibly Rock Band-esque mini-game embedded in Epic’s Fortnite “metaverse.” Sussman writes that a “rotating selection” of free-to-play songs will continue to cycle through that game mode and that support for Rock Band 4 instruments will be coming to Fortnite in the future as well (peripheral-maker PDP looks like it will be getting in on the Fortnite guitar act, too).

As for the last few weeks of Rock Band DLC offerings, Sussman writes that Harmonix is planning “some tear jerkers that sum up our feelings about this moment.” Here’s hoping we finally get an official Rock Band version of November Rain as part of that closeout; as Guns N’ Roses memorably said, “Nothing lasts forever, and we both know hearts can change.”



That’s never happened before: Games Done Quick video stars speedrunning dog

Personal Best Boy —

The Shiba Inu was trained to use a custom controller in a game meant for a robot.

Peanut Butter the dog speedruns Gyromite at Awesome Games Done Quick 2024.

The twice-a-year video game speedrunning and fundraising live event Games Done Quick has been a source of amazement and joy for years, but we’re still saying “that’s never happened before” even now, more than a decade after the first event.

Case in point: Awesome Games Done Quick 2024, which is streaming live 24 hours a day this week on Twitch, saw the very first speedrun performed by a dog.

A Shiba Inu named Peanut Butter (shortened to PB, also a speedrunner term for “personal best” finish time) completed a 30-minute speedrun of the 1985 Nintendo Entertainment System (NES) game Gyromite.

Gyromite was originally bundled with the nostalgic but failed ROB (Robotic Operating Buddy) accessory for the NES. It’s a platformer of sorts, but not a conventional fast-paced one. Rather, it’s a comparatively slow-moving game where you make inputs to raise and lower pipes to allow a character to pass through the level safely.

In the speedrun, PB took over for the robot in a category called B Game. PB didn’t set a world record or a personal best (most GDQ runners don’t at the event), as there were a couple of minor mistakes, but he still finished the game under the event’s estimate by dutifully sitting, pressing buttons, and holding down those buttons at the right moments at owner JSR_’s prompts. JSR_ also prompted PB to periodically bark “hello” to the stream’s tens of thousands of viewers. PB received numerous treats throughout the run, including bits of cheese and ham.

The final time was 26 minutes and 24 seconds, compared to PB’s personal best of 25 minutes, 29 seconds. The human record is 24 minutes and 39 seconds, by a runner named Octopuscal. PB’s PB is currently the world record among dogs, but of course, he’s the only runner in that particular category.

Speedrunner JSR_ adopted PB during the height of the pandemic and has spent a portion of every day training him to press and hold large buttons on a custom controller for treats in order to play the game. “This took years of training,” he said. “I wanted to train him to do something special, when I realized as a puppy that he was much smarter than most other dogs I’ve seen. Since I’m a speedrunner (and PB was literally named after, you know, getting a ‘PB’ in a speedrun) it only made sense to me.”

You can see the full video of the run above. Awesome Games Done Quick is an annual event benefiting the Prevent Cancer Foundation. A sister event called Summer Games Done Quick benefits Doctors Without Borders later in the year. You can watch and donate on the event website.



OpenAI must defend ChatGPT fabrications after failing to defeat libel suit

One false move —

ChatGPT users may soon learn whether false outputs will be allowed to ruin lives.


OpenAI may finally have to answer for ChatGPT’s “hallucinations” in court after a Georgia judge recently ruled against the tech company’s motion to dismiss a radio host’s defamation suit.

OpenAI had argued that ChatGPT’s output cannot be considered libel, partly because the chatbot output cannot be considered a “publication,” which is a key element of a defamation claim. In its motion to dismiss, OpenAI also argued that Georgia radio host Mark Walters could not prove that the company acted with actual malice or that anyone believed the allegedly libelous statements were true or that he was harmed by the alleged publication.

It’s too early to say whether Judge Tracie Cason found OpenAI’s arguments persuasive. In her order denying OpenAI’s motion to dismiss, which MediaPost shared here, Cason did not specify how she arrived at her decision, saying only that she had “carefully” considered arguments and applicable laws.

There may be some clues as to how Cason reached her decision in a court filing from John Monroe, attorney for Walters, when opposing the motion to dismiss last year.

Monroe had argued that OpenAI improperly moved to dismiss the lawsuit by arguing facts that have yet to be proven in court. If OpenAI intended the court to rule on those arguments, Monroe suggested that a motion for summary judgment would have been the proper step at this stage in the proceedings, not a motion to dismiss.

Had OpenAI gone that route, though, Walters would have had an opportunity to present additional evidence. To survive a motion to dismiss, all Walters had to do was show that his complaint was reasonably supported by facts, Monroe argued.

Having failed to convince the court that Walters had no case, OpenAI will now likely see its legal theories regarding liability for ChatGPT’s “hallucinations” face their first test in court.

“We are pleased the court denied the motion to dismiss so that the parties will have an opportunity to explore, and obtain a decision on, the merits of the case,” Monroe told Ars.

What’s the libel case against OpenAI?

Walters sued OpenAI after a journalist, Fred Riehl, warned him that, in response to a query, ChatGPT had fabricated an entire lawsuit. Generating a complete complaint with an erroneous case number, ChatGPT falsely claimed that Walters had been accused of defrauding and embezzling funds from the Second Amendment Foundation.

Walters is the host of Armed America Radio and has a reputation as the “Loudest Voice in America Fighting For Gun Rights.” He claimed that OpenAI “recklessly” disregarded whether ChatGPT’s outputs were false, alleging that OpenAI knew that “ChatGPT’s hallucinations were pervasive and severe” and did not work to prevent allegedly libelous outputs. As Walters saw it, the false statements were serious enough to be potentially career-damaging, “tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule.”

Monroe argued that Walters had “adequately stated a claim” of libel per se as a private citizen, “for which relief may be granted under Georgia law,” where “malice is inferred” in “all actions for defamation” but “may be rebutted” by OpenAI.

Pushing back, OpenAI argued that Walters was a public figure who must prove that OpenAI acted with “actual malice” when allowing ChatGPT to produce allegedly harmful outputs. But Monroe told the court that OpenAI “has not shown sufficient facts to establish that Walters is a general public figure.”

Whether or not Walters is a public figure could be another key question leading Cason to rule against OpenAI’s motion to dismiss.

Perhaps also frustrating the court, OpenAI introduced “a large amount of material” in its motion to dismiss that fell outside the scope of the complaint, Monroe argued. That included pointing to a disclaimer in ChatGPT’s terms of use that warns users that ChatGPT’s responses may not be accurate and should be verified before publishing. According to OpenAI, this disclaimer makes Riehl the “owner” of any libelous ChatGPT responses to his queries.

“A disclaimer does not make an otherwise libelous statement non-libelous,” Monroe argued. And even if the disclaimer made Riehl liable for publishing the ChatGPT output—an argument that may give some ChatGPT users pause before querying—”that responsibility does not have the effect of negating the responsibility of the original publisher of the material,” Monroe argued.

Additionally, OpenAI referenced a conversation between Walters and OpenAI, even though Monroe said that the complaint “does not allege that Walters ever had a chat” with OpenAI. And OpenAI also somewhat oddly argued that ChatGPT outputs could be considered “intra-corporate communications” rather than publications, suggesting that ChatGPT users could be considered private contractors when querying the chatbot.

With the lawsuit moving forward, curious chatbot users everywhere may finally get the answer to a question that has been unclear since ChatGPT quickly became the fastest-growing consumer application of all time after its launch in November 2022: Will ChatGPT’s hallucinations be allowed to ruin lives?

In the meantime, the FTC is seemingly still investigating potential harms caused by ChatGPT’s “false, misleading, or disparaging” generations.

An FTC spokesperson previously told Ars that the FTC does not generally comment on nonpublic investigations.

OpenAI did not immediately respond to Ars’ request to comment.



Watch Godzilla Minus One in dazzling black and white during limited US run

A masterful remastering —

“By eliminating color, a new sense of reality emerges.”


Toho Inc.

The critically acclaimed Godzilla Minus One hit US theaters in early December and racked up $51 million in the US alone and over $96 million globally, shooting past 2016’s Shin Godzilla as the most successful Japanese-produced Godzilla film to date. The film is winding down its theatrical run, but director, writer, and VFX supervisor Takashi Yamazaki has remastered a black-and-white version of the film, released in Japan last week, as an homage to the 1954 classic Godzilla. And now US audiences will have a chance to see that version when Godzilla Minus One/Minus Color arrives at AMC theaters in the US for a limited run from January 26 through February 1.

(Minor spoilers for Godzilla Minus One below.)

Yamazaki spent three years writing the script for Godzilla Minus One, drawing inspiration not just from the original 1954 film but also from Jaws (1975), Godzilla, Mothra and King Ghidorah (2001), Shin Godzilla, and the films of Hayao Miyazaki. He opted to set the film in postwar Japan, like the original, rather than around more recent events like the 2011 Fukushima nuclear accident, in order to explore themes of postwar trauma and emerging hope. The monster itself was designed to be horrifying, with spiky dorsal fins and a bellowing roar produced by recording an amplified roar in a large stadium.

The plot follows a former WWII kamikaze pilot named Kōichi Shikishima (Ryunosuke Kamiki) who encountered Godzilla in 1945 when the monster attacked a Japanese base on Odo Island, but failed to act to help save the garrison. His parents were killed when Tokyo was bombed, so Shikishima is grappling with serious survivor’s guilt a few years later as he struggles to rebuild his life with a woman named Noriko (Minami Hamabe) and a rescued orphaned baby. Then Godzilla mutates and re-emerges for a renewed attack on Japan, and Shikishima gets the chance to redeem himself by helping to destroy the kaiju.

Godzilla Minus One was received with almost universal critical acclaim, with some declaring it not just one of the best films released in 2023 but possibly one of the best Godzilla films ever made. (We didn’t include the film in our own year’s best list because no Ars staffers had yet seen the film when the list was compiled, but it absolutely merits inclusion.) Among other accolades, the film made the Oscar shortlist for Best Visual Effects.

It was a painstaking process to remaster Godzilla Minus One into black and white. “Rather than just making it monochrome, it is a cut-by-cut,” Yamazaki said in a statement last month. “I had them make adjustments while making full use of various mattes as if they were creating a new movie. What I was aiming for was a style that looked like it was taken by masters of monochrome photography. We were able to unearth the texture of the skin and the details of the scenery that were hidden in the photographed data. Then, a frightening Godzilla, just like the one in the documentary, appeared. By eliminating color, a new sense of reality emerges.”

Godzilla Minus One/Minus Color will have a limited run in US AMC theaters from January 26 through February 1, 2024.



Explaining why a black hole produces light when ripping apart a star

Image of a multi-colored curve, with two inset images of actual astronomical objects.

A model of a tidal disruption, along with some observations of one.

Supermassive black holes appear to be present at the core of nearly every galaxy. Every now and again, a star wanders too close to one of these monsters and experiences what’s called a tidal disruption event. The black hole’s gravity rips the star to shreds, resulting in a huge burst of radiation. We’ve observed this happening several times now.
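How close is “too close”? The standard back-of-the-envelope answer is the tidal radius, r_t ≈ R_star × (M_BH / M_star)^(1/3), the distance at which the black hole’s tides overwhelm the star’s self-gravity. This is textbook tidal disruption physics rather than anything specific to the new work discussed below, but a quick worked example sets the scale:

```python
# Tidal radius estimate: r_t ~ R_star * (M_bh / M_star)**(1/3)
# (standard order-of-magnitude formula, not taken from the paper itself)
R_SUN_M = 6.957e8   # solar radius in meters
AU_M = 1.496e11     # astronomical unit in meters

def tidal_radius_m(r_star_m: float, m_bh_over_m_star: float) -> float:
    return r_star_m * m_bh_over_m_star ** (1 / 3)

# A Sun-like star around a million-solar-mass black hole:
r_t = tidal_radius_m(R_SUN_M, 1e6)
print(f"{r_t:.2e} m, about {r_t / AU_M:.2f} AU")  # ~6.96e10 m, roughly 0.47 AU
```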

But we don’t entirely know why it happens—”it” specifically referring to the burst of radiation. After all, stars produce radiation through fusion, and the tidal disruption results in the spaghettification of the star, effectively pulling the plug on the fusion reactions. Black holes brighten when they’re feeding on material, but that process doesn’t look like the sudden burst of radiation from a tidal disruption event.

So we don’t entirely know how the radiation is produced. There are several competing ideas, but we’ve not been able to figure out which of them fits the data best. However, scientists have now taken advantage of an updated software package to model a tidal disruption event and show that their improved model fits our observations pretty well.

Spaghettification simulation

As mentioned above, we’re not entirely sure about the radiation source in tidal disruption events. Yes, they’re big and catastrophic, and so a bit of radiation isn’t much of a surprise. But explaining the details of that radiation—what wavelengths predominate, how quickly its intensity rises and falls, etc.—can tell us something about the physics that dominates these events.

Ideally, software should act as a bridge between the physics of a tidal disruption and our observations of the radiation they produce. If we simulate a realistic disruption and have the physics right, then the software should produce a burst of radiation that is a decent match for our observations of these events. Unfortunately, so far, the software has let us down; to keep things computationally manageable, we’ve had to take a lot of shortcuts that have raised questions about the realism of our simulations.

The new work, done by Elad Steinberg and Nicholas Stone of The Hebrew University, relies on a software package called RICH that can track the motion of fluids (technically called hydrodynamics). And, while a star’s remains aren’t fluid in the sense of the liquids we’re familiar with here on Earth, their behavior is primarily dictated by fluid mechanics. RICH was recently updated to better model radiation emission and absorption by the materials in the fluid, which made it a better fit for modeling tidal disruptions.

The researchers still had to take a few shortcuts to ensure that the computations could be completed in a realistic amount of time. The version of gravity used in the simulation isn’t fully relativistic, and it’s only approximated in the area closest to the black hole. But that sped up computations enough that the researchers could track the remains of the star from spaghettification to the peak of the event’s radiation output, a period of nearly 70 days.



Just 10 lines of code can steal AI secrets from Apple, AMD, and Qualcomm GPUs

massive leakage —

Patching all affected devices, which include some Macs and iPhones, may be tough.

ai brain

MEHAU KULYK/Getty Images

As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs—including Apple, Qualcomm, and AMD chips—that could allow an attacker to steal large quantities of data from a GPU’s memory.

The silicon industry has spent years refining the security of central processing units, or CPUs, so they don’t leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven’t been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern.

“There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data,” Heidy Khlaaf, Trail of Bits’ engineering director for AI and machine learning assurance, tells WIRED. “We’re looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal.”

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target’s device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each others’ data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn’t be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response.

In their proof of concept, as seen in the GIF below, the researchers demonstrate an attack where a target—shown on the left—asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker’s device—shown on the right—collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code.

An attacker (right) exploits the LeftoverLocals vulnerability to listen to LLM conversations.
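The story doesn’t reproduce the proof-of-concept code, but the general shape of such a “listener” is straightforward: a kernel that declares workgroup-local memory, never writes to it, and copies out whatever stale values it finds. The sketch below, using pyopencl, is our own illustration of that general technique, not Trail of Bits’ actual exploit; on patched or unaffected GPUs the buffer will simply come back zeroed or meaningless:

```python
import numpy as np
import pyopencl as cl

# Kernel: read *uninitialized* local memory and dump it to global memory.
# On a vulnerable GPU, lm[] may still hold values left over from whatever
# kernel ran on that compute unit previously (e.g., an LLM's activations).
KERNEL = """
__kernel void listener(__global float *out) {
    __local float lm[1024];
    for (uint i = get_local_id(0); i < 1024; i += get_local_size(0))
        out[i] = lm[i];   // never written here -- leftover values leak out
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

leaked = np.zeros(1024, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, leaked.nbytes)
prog.listener(queue, (256,), (256,), buf)   # one 256-thread workgroup
cl.enqueue_copy(queue, leaked, buf)
print(leaked[:8])  # nonzero values here would be another process's leftovers
```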

Last summer, the researchers tested 11 chips from seven GPU makers and multiple corresponding programming frameworks. They found the LeftoverLocals vulnerability in GPUs from Apple, AMD, and Qualcomm and launched a far-reaching coordinated disclosure of the vulnerability in September in collaboration with the US-CERT Coordination Center and the Khronos Group, a standards body focused on 3D graphics, machine learning, and virtual and augmented reality.

The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. This means that well-known chips like the AMD Radeon RX 7900 XT and devices like Apple’s iPhone 12 Pro and M2 MacBook Air are vulnerable. The researchers did not find the flaw in the Imagination GPUs they tested, but others may be vulnerable.



The Galaxy S24 gets seven years of updates, $1,300 Titanium “Ultra” model

Woo updates —

The new update plan on a Qualcomm SoC is a major ecosystem change.

Updated


The Galaxy S24 line.

Samsung

Samsung has unveiled its new flagship phones for 2024: the Galaxy S24, S24+, and S24 Ultra. Considering Samsung’s usually conservative year-to-year changes, there are a lot of differences this year.

The S24 Ultra now has a titanium body, just like the iPhone 15. It also has a “fully flat display,” ending years of Android’s weird curved OLED panel gimmick that only served to distort the sides of the display. Samsung says the new Ultra design has “42 percent slimmer bezels” and a front hole-punch camera cutout that is “11 percent smaller” than those on the S23 Ultra. The rest of the design looks like Ultra models of past years, with rounded edges and a flat top and bottom. The bottom still houses an S-Pen for handwriting and drawing.

All that titanium will cost you. The S24 Ultra is $100 more than last year, coming to an eye-popping $1,300. An iPhone 15 Pro Max is $1,200, and a Pixel 8 Pro is $1,000, so that’s a tough sell.

The smaller S24+ and S24 models are aluminum and feature a new design with a flat metal band that goes around the phone’s perimeter, making the devices look a lot like an iPhone 4 or 15. Both models have slimmer bezels and 120 Hz displays; Samsung says all the S24 displays can hit a peak brightness of 2,600 nits in sunlight mode. The S24 and S24+ prices are the same as last year: $800 for the S24 and $1,000 for the S24+.

Another big announcement is that Samsung is matching Google’s new update plan and offering “seven years of security updates and seven generations of OS upgrades.” Previously, it gave four years of updates. Apple doesn’t have a formal update policy, but with the iPhone X recently lasting from iOS 11 to iOS 16, Samsung can now credibly say the S24 offers more major OS updates than a typical iPhone. (Let’s not bring up the speed of those OS updates, though, which can still take months.)

  • The S24 Ultra, now made of titanium, is still packing an S-Pen.

    Samsung

  • The top and bottom of the Ultra model are flat.

    Samsung

  • Here you can see a lineup of all the phones and where the S-Pen goes.

    Samsung

  • The camera layout.

    Samsung

  • The display is now totally flat.

    Samsung

  • With a totally flat screen and square corners, the Ultra is a unique-looking phone.

    Samsung

  • Circle to Search, a contextual Google search feature that will also be on the Pixel 8.

    Samsung

Google announced seven years of updates for the Pixel 8, but as the maker of Android and with its own “Tensor” SoC, Google’s support system exists outside of the usual Android ecosystem that most OEMs have to deal with. Samsung has somehow gotten Qualcomm to commit to seven years of update support, which feels like a sea change in the industry. Previously, Qualcomm was very resistant to long chip life cycles, with Fairphone desperately sourcing an “industrial” Qualcomm chip just to get five years of support from the company in 2023. This change is what the Android ecosystem has needed for years, and we hope this level of support will be open to all companies in the future.

In the US, the Galaxy line is getting a Snapdragon 8 Gen 3. Last year, Samsung and Qualcomm signed a sweetheart deal to make the S23 line exclusively use Snapdragon chips worldwide, and with that came an exclusive up-clocked “Snapdragon 8 Gen 2 for Galaxy” chip. This year Qualcomm isn’t the exclusive chip provider, but the “For Galaxy” branding is back, according to this Qualcomm press release, so the S24 line ships with the “Snapdragon 8 Gen 3 Mobile Platform for Galaxy.” We don’t have any hard data on what exactly the difference is, but the Qualcomm press release promises a “30 percent faster GPU” than last year, while the standard Gen 3 product page says the GPU is “25 percent faster.” Exynos chips get an AMD Radeon GPU, so Qualcomm pumping up the GPU to compete makes sense.

And speaking of Exynos chips, they’re back! The S24 line gets Snapdragon chips in the US, while internationally, some models will go back to Samsung Exynos chips (specifically the Exynos 2400). Samsung only tells the US press about US specs, but an earlier SamMobile report claims that “the Exynos 2400 will power the Galaxy S24 and Galaxy S24+ in pretty much every country other than the US, Canada, Korea, China, and Japan.” Note that those are the two smaller models. If you’re in the market for an Ultra, the site says there is no Exynos Ultra model—they’re all Snapdragons. Qualcomm’s press release backs this up, saying Snapdragon powers “[the] Galaxy S24 Ultra globally and Galaxy S24 Plus and S24 in select regions.”
