Author name: Paul Patrick


Photobucket opted inactive users into privacy nightmare, lawsuit says

Photobucket was sued Wednesday after a recent privacy policy update revealed plans to sell users’ photos—including biometric identifiers like face and iris scans—to companies training generative AI models.

The proposed class action seeks to stop Photobucket from selling users’ data without first obtaining written consent, alleging that Photobucket either intentionally or negligently failed to comply with strict privacy laws in states like Illinois, New York, and California by claiming it can’t reliably determine users’ geolocation.

Two separate classes could be protected by the litigation. The first includes anyone who ever uploaded a photo between 2003—when Photobucket was founded—and May 1, 2024. Another potentially even larger class includes any non-users depicted in photographs uploaded to Photobucket, whose biometric data has also allegedly been sold without consent.

Photobucket risks huge fines if a jury agrees with Photobucket users that the photo-storing site unjustly enriched itself by breaching its user contracts and illegally seizing biometric data without consent. As many as 100 million users could be awarded untold punitive damages, as well as up to $5,000 per “willful or reckless violation” of various statutes.

If a substantial portion of Photobucket’s entire 13 billion-plus photo collection is found infringing, the fines could add up quickly. In October, Photobucket estimated that “about half of its 13 billion images are public and eligible for AI licensing,” Business Insider reported.

Users suing include a mother of a minor whose biometric data was collected and a professional photographer in Illinois who should have been protected by one of the country’s strongest biometric privacy laws.

So far, Photobucket has confirmed that at least one “alarmed” Illinois user’s data may have already been sold to train AI. The lawsuit alleged that most users eligible to join the class action similarly learned of the “conduct long after the date that Photobucket began selling, licensing, and/or otherwise disclosing Class Members’ biometric data to third parties.”



AMD’s trusted execution environment blown wide open by new BadRAM attack


Attack bypasses AMD protection promising security, even when a server is compromised.

One of the oldest maxims in hacking is that once an attacker has physical access to a device, it’s game over for its security. The basis is sound. It doesn’t matter how locked down a phone, computer, or other machine is; if someone intent on hacking it gains the ability to physically manipulate it, the chances of success are all but guaranteed.

In the age of cloud computing, this widely accepted principle is no longer universally true. Some of the world’s most sensitive information—health records, financial account information, sealed legal documents, and the like—now often resides on servers that receive day-to-day maintenance from unknown administrators working in cloud centers thousands of miles from the companies responsible for safeguarding it.

Bad (RAM) to the bone

In response, chipmakers have begun baking protections into their silicon to provide assurances that even if a server has been physically tampered with or infected with malware, sensitive data funneled through virtual machines can’t be accessed without an encryption key that’s known only to the VM administrator. Under this scenario, admins inside the cloud provider, law enforcement agencies with a court warrant, and hackers who manage to compromise the server are out of luck.

On Tuesday, an international team of researchers unveiled BadRAM, a proof-of-concept attack that completely undermines security assurances that chipmaker AMD makes to users of one of its most expensive and well-fortified microprocessor product lines. Starting with the AMD Epyc 7003 processor, a feature known as SEV-SNP—short for Secure Encrypted Virtualization and Secure Nested Paging—has provided the cryptographic means for certifying that a VM hasn’t been compromised by any sort of backdoor installed by someone with access to the physical machine running it.

If a VM has been backdoored, the cryptographic attestation will fail and immediately alert the VM admin of the compromise. Or at least that’s how SEV-SNP is designed to work. BadRAM is an attack that a server admin can carry out in minutes, using about $10 of hardware or, in some cases, software alone, to cause DDR4 or DDR5 memory modules to misreport their capacity during bootup. From then on, attestation can be permanently subverted: SEV-SNP will present the cryptographic hash of an intact VM even when the VM has been badly compromised.

“BadRAM completely undermines trust in AMD’s latest Secure Encrypted Virtualization (SEV-SNP) technology, which is widely deployed by major cloud providers, including Amazon AWS, Google Cloud, and Microsoft Azure,” members of the research team wrote in an email. “BadRAM for the first time studies the security risks of bad RAM—rogue memory modules that deliberately provide false information to the processor during startup. We show how BadRAM attackers can fake critical remote attestation reports and insert undetectable backdoors into _any_ SEV-protected VM.”

Compromising the AMD SEV ecosystem

On a website providing more information about the attack, the researchers wrote:

Modern computers increasingly use encryption to protect sensitive data in DRAM, especially in shared cloud environments with pervasive data breaches and insider threats. AMD’s Secure Encrypted Virtualization (SEV) is a cutting-edge technology that protects privacy and trust in cloud computing by encrypting a virtual machine’s (VM’s) memory and isolating it from advanced attackers, even those compromising critical infrastructure like the virtual machine manager or firmware.

We found that tampering with the embedded SPD chip on commercial DRAM modules allows attackers to bypass SEV protections—including AMD’s latest SEV-SNP version. For less than $10 in off-the-shelf equipment, we can trick the processor into allowing access to encrypted memory. We build on this BadRAM attack primitive to completely compromise the AMD SEV ecosystem, faking remote attestation reports and inserting backdoors into any SEV-protected VM.

In response to a vulnerability report filed by the researchers, AMD has already shipped patches to affected customers, a company spokesperson said. The researchers say there are no performance penalties, other than the possibility of additional time required during bootup. The BadRAM vulnerability is tracked in the industry as CVE-2024-21944 and by the chipmaker as AMD-SB-3015.

A stroll down memory lane

Modern dynamic random access memory for servers typically comes in the form of DIMMs, short for Dual In-Line Memory Modules. The basic building blocks of these rectangular sticks are capacitors, which, when charged, represent a binary 1 and, when discharged, represent a 0. The capacitors are organized into cells, which are organized into arrays of rows and columns, which are further arranged into ranks and banks. The more capacitors that are stuffed into a DIMM, the more capacity it has to store data. Servers usually have multiple DIMMs that are organized into channels that can be processed in parallel.

For a server to store or access a particular piece of data, it first must locate where the bits representing it are stored in this vast configuration of cells. Locations are tracked through addresses that map the channel, rank, bank, row, and column. For performance reasons, the task of translating these physical addresses to DRAM address bits—a job assigned to the memory controller—isn’t a one-to-one mapping. Rather, consecutive addresses are spread across different channels, ranks, and banks.

Before the server can map these locations, it must first know how many DIMMs are connected and the total capacity of memory they provide. This information is provided each time the server boots, when the BIOS queries the SPD—short for Serial Presence Detect—chip found on the surface of the DIMM. This chip is responsible for providing the BIOS basic information about available memory. BadRAM causes the SPD chip to report that its capacity is twice what it actually is. It does this by adding an extra addressing bit.

To do this, a server admin need only briefly connect a specially programmed Raspberry Pi to the SPD chip just once.

The researchers’ Raspberry Pi connected to the SPD chip of a DIMM. Credit: De Meulemeester et al.

Hacking by numbers, 1, 2, 3

In some cases, with certain DIMM models that don’t adequately lock down the chip, the modification can likely be done through software. In either case, the modification need only occur once. From then on, the SPD chip will falsify the memory capacity available.

Next, the server admin configures the operating system to ignore the newly created “ghost memory,” meaning the top half of the capacity reported by the compromised SPD chip, but continue to map to the lower half of the real memory. On Linux, this configuration can be done with the `memmap` kernel command-line parameter. The researchers’ paper, titled BadRAM: Practical Memory Aliasing Attacks on Trusted Execution Environments, provides many more details about the attack.
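To illustrate the idea (the values here are hypothetical; the exact region depends on the module and the attacker’s setup), a 16 GB DIMM whose tampered SPD reports 32 GB could have its ghost upper half set aside with a kernel boot parameter like:

```
memmap=16G$16G
```

The `memmap=size$address` form marks the region as reserved so the kernel never allocates from it; note that in GRUB configuration files the `$` typically needs escaping (`memmap=16G\$16G`).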

Next, a script developed as part of BadRAM allows the attacker to quickly find the memory locations of ghost memory bits. These aliases give the attacker access to memory regions that SEV-SNP is supposed to make inaccessible. This allows the attacker to read and write to these protected memory regions.

Access to this normally fortified region of memory allows the attacker to copy the cryptographic hash SEV-SNP creates to attest to the integrity of the VM. The access also permits the attacker to boot an SEV-compliant VM that has been backdoored. Normally, this malicious VM would fail attestation, producing a hash that flags the compromise. BadRAM allows the attacker to replace this failure hash with the success hash collected earlier.

The primary steps involved in BadRAM attacks are:

  1. Compromise the memory module to lie about its size and thus trick the CPU into accessing the nonexistent ghost addresses that have been silently mapped to existing memory regions.
  2. Find aliases. These addresses map to the same DRAM location.
  3. Bypass CPU Access Control. The aliases allow the attacker to bypass memory protections that are supposed to prevent the reading of and writing to regions storing sensitive data.

Beware of the ghost bit

For those looking for more technical details, Jesse De Meulemeester, who along with Luca Wilke was lead co-author of the paper, provided the following, which more casual readers can skip:

In our attack, there are two addresses that go to the same DRAM location; one is the original address, the other one is what we call the alias.

When we modify the SPD, we double its size. At a low level, this means all memory addresses now appear to have one extra bit. This extra bit is what we call the “ghost” bit, it is the address bit that is used by the CPU, but is not used (thus ignored) by the DIMM. The addresses for which this “ghost” bit is 0 are the original addresses, and the addresses for which this bit is 1 is the “ghost” memory.

This explains how we can access protected data like the launch digest. The launch digest is stored at an address with the ghost bit set to 0, and this address is protected; any attempt to access it is blocked by the CPU. However, if we try to access the same address with the ghost bit set to 1, the CPU treats it as a completely new address and allows access. On the DIMM side, the ghost bit is ignored, so both addresses (with ghost bit 0 or 1) point to the same physical memory location.

A small example to illustrate this:

Original SPD: 4 bit addresses:

CPU: address 1101 -> DIMM: address 1101

Modified SPD: Reports 5 bits even though it only has 4:

CPU: address 01101 -> DIMM: address 1101

CPU: address 11101 -> DIMM: address 1101

In this case 01101 is the protected address, 11101 is the alias. Even though to the CPU they seem like two different addresses, they go to the same DRAM location.
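The researchers’ worked example can be reproduced in a few lines of Python. This is an illustrative sketch, not the BadRAM tooling itself: the DIMM masks off the ghost bit, so the protected address and its alias collapse to the same physical location.

```python
DIMM_ADDR_BITS = 4          # real address width of the module (4 bits)
GHOST_BIT = DIMM_ADDR_BITS  # extra top bit reported by the tampered SPD

def dimm_address(cpu_addr):
    # The DIMM ignores the ghost bit, so it masks it off.
    return cpu_addr & ((1 << DIMM_ADDR_BITS) - 1)

def alias_of(cpu_addr):
    # The attacker's alias: the same address with the ghost bit set.
    return cpu_addr | (1 << GHOST_BIT)

protected = 0b01101           # the guarded location (ghost bit = 0)
alias = alias_of(protected)   # 0b11101 (ghost bit = 1)

# Two distinct CPU addresses, one DRAM location:
assert protected != alias
assert dimm_address(protected) == dimm_address(alias) == 0b1101
```

The CPU’s access-control checks key on the full (ghost-bit-included) address, which is why the alias slips past protections that block the original address.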

As noted earlier, some DIMM models don’t lock down the SPD chip, a failure that likely makes software-only modifications possible. Specifically, the researchers found that two DDR4 models made by Corsair contained this flaw.

In a statement, AMD officials wrote:

AMD believes exploiting the disclosed vulnerability requires an attacker either having physical access to the system, operating system kernel access on a system with unlocked memory modules, or installing a customized, malicious BIOS. AMD recommends utilizing memory modules that lock Serial Presence Detect (SPD), as well as following physical system security best practices. AMD has also released firmware updates to customers to mitigate the vulnerability.

Members of the research team are from KU Leuven, the University of Lübeck, and the University of Birmingham.

The researchers tested BadRAM against Intel SGX, a competing trusted execution environment from AMD’s much bigger rival that promises integrity assurances comparable to SEV-SNP’s. The classic, now-discontinued version of SGX did allow reading of protected regions, but not writing to them. Processors with the current Intel Scalable SGX and Intel TDX, however, allowed no reading or writing. Since a comparable Arm processor wasn’t available for testing, it’s unknown whether Arm’s designs are vulnerable.

Despite the lack of universality, the researchers warned that the design flaws underpinning the BadRAM vulnerability may creep into other systems, which should adopt the mitigations AMD has now put in place.

“Since our BadRAM primitive is generic, we argue that such countermeasures should be considered when designing a system against untrusted DRAM,” the researchers wrote in their paper. “While advanced hardware-level attacks could potentially circumvent the currently used countermeasures, further research is required to judge whether they can be carried out in an impactful attacker model.”


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.



Paleolithic deep-cave compound likely used for rituals

Archaeologists excavating a paleolithic cave site in Galilee, Israel, have found evidence that a deep-cave compound at the site may have been used for ritualistic gatherings, according to a new paper published in the Proceedings of the National Academy of Sciences (PNAS). That evidence includes the presence of a symbolically carved boulder in a prominent placement, as well as the remains of what may have been torches used to light the interior. And the acoustics would have been conducive to communal gatherings.

Dating back to the Early Upper Paleolithic period, Manot Cave was found accidentally when a bulldozer broke open its roof during construction in 2008. Archaeologists soon swooped in and recovered such artifacts as stone tools, bits of charcoal, remains of various animals, and a nearly complete human skull.

The latter proved to be especially significant, as subsequent analysis showed that the skull (dubbed Manot 1) had both Neanderthal and modern features and was estimated to be about 54,700 years old. That lent support to the hypothesis that modern humans co-existed and possibly interbred with Neanderthals during a crucial transition period in the region, further bolstered by genome sequencing.

The Manot Cave features an 80-meter-long hall connecting to two lower chambers from the north and south. The living section is near the entrance and was a hub for activities like flint-knapping, butchering animals, eating, and other aspects of daily life. But about eight stories below, there is a large cavern consisting of a high gallery and an adjoining smaller “hidden” chamber separated from the main area by a cluster of mineral deposits called speleothems.

That’s the area that is the subject of the new PNAS paper. Unlike the main living section, the authors found no evidence of daily human activities in this compound, suggesting it served another purpose—most likely ritual gatherings.



Ten months after first tease, OpenAI launches Sora video generation publicly

A music video by Canadian art collective Vallée Duhamel made with Sora-generated video. “[We] just shoot stuff and then use Sora to combine it with a more interesting, more surreal vision.”

During a livestream on Monday—Day 3 of OpenAI’s “12 days of OpenAI”—Sora’s developers showcased a new “Explore” interface that allows people to browse through videos generated by others to get prompting ideas. OpenAI says that anyone can enjoy viewing the “Explore” feed for free, but generating videos requires a subscription.

They also showed off a new feature called “Storyboard” that allows users to direct a video with multiple actions in a frame-by-frame manner.

Safety measures and limitations

In addition to the release, OpenAI also published Sora’s System Card for the first time. It includes technical details about how the model works and safety testing the company undertook prior to this release.

“Whereas LLMs have text tokens, Sora has visual patches,” OpenAI writes, describing the new training chunks as “an effective representation for models of visual data… At a high level, we turn videos into patches by first compressing videos into a lower-dimensional latent space, and subsequently decomposing the representation into spacetime patches.”
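For intuition, the patch-slicing idea can be sketched in a few lines of NumPy. The tensor shape and patch sizes below are invented for illustration, and the latent-space compression step OpenAI describes is skipped entirely:

```python
import numpy as np

# A tiny stand-in "video": 8 frames of 32x32 RGB pixels.
video = np.zeros((8, 32, 32, 3))   # (frames, height, width, channels)
pt, ph, pw = 2, 8, 8               # patch extents in time, height, width

T, H, W, C = video.shape
# Carve the video into non-overlapping spacetime patches:
patches = (
    video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
         .transpose(0, 2, 4, 1, 3, 5, 6)
         .reshape(-1, pt, ph, pw, C)
)
print(patches.shape)  # -> (64, 2, 8, 8, 3): 64 patches, each 2 frames of 8x8 pixels
```

Each patch then plays the role a text token plays for an LLM: a uniform unit the model can consume regardless of the source video’s resolution or duration.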

Sora also makes use of a “recaptioning technique”—similar to that used in the company’s DALL-E 3 image generator—to “generate highly descriptive captions for the visual training data.” That, in turn, lets Sora “follow the user’s text instructions in the generated video more faithfully,” OpenAI writes.

Sora-generated video provided by OpenAI, from the prompt: “Loop: a golden retriever puppy wearing a superhero outfit complete with a mask and cape stands perched on the top of the empire state building in winter, overlooking the nyc it protects at night. the back of the pup is visible to the camera; his attention faced to nyc”

OpenAI implemented several safety measures in the release. The platform embeds C2PA metadata in all generated videos for identification and origin verification. Videos display visible watermarks by default, and OpenAI developed an internal search tool to verify Sora-generated content.

The company acknowledged technical limitations in the current release. “This early version of Sora will make mistakes, it’s not perfect,” said one developer during the livestream launch. The model reportedly struggles with physics simulations and complex actions over extended durations.

In the past, we’ve seen that these types of limitations are based on what example videos were used to train AI models. This current generation of AI video-synthesis models has difficulty generating truly new things, since the underlying architecture excels at transforming existing concepts into new presentations, but so far typically fails at true originality. Still, it’s early in AI video generation, and the technology is improving all the time.



Innie rebellion is brewing in trippy Severance S2 trailer

Severance returns to Apple TV+ in January for its sophomore season.

Severance was one of the most talked-about TV series of 2022, receiving widespread critical acclaim. We loved the series so much that Ars staffers actually wrote a group review so that everyone could weigh in with their thoughts on the first season, pronouncing it “one of the best shows on TV.” Needless to say, we have been eagerly awaiting the second season next month. Apple TV+ just released the official trailer at CCXP24 in São Paulo, Brazil, and it does not disappoint.

(Spoilers for first season below.)

In the world of Severance, people can completely disconnect their work and personal lives. Thanks to a new procedure developed by Lumon Industries, workers can bifurcate themselves into “innies” (work selves) and “outies” (personal selves)—with no sharing of memories between them. This appeals to people like Mark (Adam Scott), who lost his wife in a car crash and has struggled to work through the grief. Why not forget all that pain for eight hours a day?

It’s no spoiler to say that things went… badly in S1 as a result of this process. As Ars Deputy Editor Nate Anderson noted at the time, “The show isn’t just bonkers—though it is that, too. It’s also about the lengths to which we will go to dull or avoid emotional pain, and the ways in which humans will reach out to connect with others even under the most unpromising of circumstances.” In the process, Severance brought out “the latent horror of fluorescent lights, baby goats, cubicles, waffles, middle managers, finger traps, and ‘work/life balance.’ Also cults. And vending machines. Plus corporate training manuals. And talk therapy. Oh, and ‘kind eyes.'”

The first season ended on quite the cliffhanger, with several Lumon employees activating an “overtime contingency” to escape the office confines to get a taste for how their “outies” live—and some pretty startling secrets were revealed. S2 will naturally grapple with the fallout from their brief mutiny. Per the official premise:



Two European satellites launch on mission to blot out the Sun—for science


This will all happen nearly 40,000 miles above the Earth, so you won’t need your eclipse glasses.

An infrared view of a test of the Proba-3 mission’s laser ranging system, which will allow two spacecraft to fly in formation with millimeter-scale precision. Credit: ESA – M. Pédoussaut / J. Versluys

Two spacecraft developed by the European Space Agency launched on top of an Indian rocket Thursday, kicking off a mission to test novel formation flying technologies and observe a rarely seen slice of the Sun’s ethereal corona.

ESA’s Proba-3 mission is purely experimental. The satellites are loaded with sophisticated sensors and ranging instruments to allow the two spacecraft to orbit the Earth in lockstep with one another. Proba-3 will attempt to achieve millimeter-scale precision, several orders of magnitude better than the requirements for a spacecraft closing in for docking at the International Space Station.

“In a nutshell, it’s an experiment in space to demonstrate a new concept, a new technology that is technically challenging,” said Damien Galano, Proba-3’s project manager.

The two Proba-3 satellites launched from India at 5:34 am EST (10:34 UTC) Thursday, riding a Polar Satellite Launch Vehicle (PSLV). The PSLV released Proba-3 into a stretched-out orbit with a low point of approximately 356 miles (573 kilometers), a high point of 37,632 miles (60,563 kilometers), and an inclination of 59 degrees to the equator.

India’s PSLV accelerates through the speed of sound shortly after liftoff with the Proba-3 mission Thursday. Credit: ISRO

After initial checkouts, the two Proba-3 satellites, each smaller than a compact car, will separate from one another to begin their tech demo experiments early next year. The larger of the two satellites, known as the Coronagraph spacecraft, carries a suite of science instruments to image the Sun’s corona, or outer atmosphere. The smaller spacecraft, named Occulter, hosts navigation sensors and low-impulse thrusters to help it maneuver into position less than 500 feet (150 meters) from its Coronagraph companion.

From the point of view of the Coronagraph spacecraft, this is just the right distance for a 4.6-foot (1.4-meter) disk mounted to Proba-3’s Occulter spacecraft to obscure the surface of the Sun. The occultation will block the Sun’s blinding glare and cast a shadow just 3 inches (8 centimeters) wide onto the Coronagraph satellite, revealing the wispy, super-heated gases that make up the solar corona.

Why do this?

The corona is normally hidden by the brightness of the Sun and is best observed from Earth during total solar eclipses, but these events only last a few minutes. Scientists devised a way to create artificial eclipses using devices known as coronagraphs, which have flown in space on several previous solar research missions. However, these coronagraphs were placed inside a single instrument on a single spacecraft, limiting their effectiveness due to complications from diffraction or vignetting, where sunlight encroaches around the edge of the occulting disk or misses the imaging detector entirely.

Ideally, scientists would like to place the occulting disk much farther from the camera taking images of the Sun. This would more closely mimic what the human eye sees during a solar eclipse. With Proba-3, ESA will attempt to do just that.

“There was simply no other way of reaching the optical performance Proba-3 requires than by having its occulting disk fly on a separate, carefully controlled spacecraft,” said Joe Zender, ESA’s Proba-3 mission scientist. “Any closer and unwanted stray light would spill over the edges of the disk, limiting our close-up views of the Sun’s surrounding corona.”

But deploying one enormous 150-meter-long spacecraft would be cost-prohibitive. With contributions from 14 member states and Canada, ESA developed the dual-spacecraft Proba-3 mission on a budget of approximately 200 million euros ($210 million) over 10 years. Spain and Belgium, which are not among ESA’s highest-spending member states, funded nearly three-quarters of Proba-3’s cost.

The Proba-3 satellites will use several sensors to keep station roughly 150 meters away from one another, including inter-satellite radio links, satellite navigation receivers, and cameras on the Occulter spacecraft to help determine its relative position by monitoring LEDs on the Coronagraph satellite.

For the most precise navigation, the Occulter satellite will shine a laser toward a reflector on the Coronagraph spacecraft. The laser light bounced back to the Occulter spacecraft will allow it to autonomously and continuously track the range to its companion and send signals to fire cold gas thrusters and make fine adjustments.

The laser will give Proba-3 the ability to control the distance between the two satellites with an error of less than a millimeter—around the thickness of an average fingernail—and hold position for up to six hours, 50 times longer than the maximum duration of a total solar eclipse. Proba-3 will create the eclipses while it is flying farthest from Earth in its nearly 20-hour orbit.

Scientists hope to achieve at least 1,000 hours of artificial totality during Proba-3’s two-year prime mission.

Proba-3’s Occulter spacecraft (top) and Coronagraph spacecraft (bottom) will hold position 150 meters away from one another. Credit: ESA-P. Carril

The corona extends millions of miles from the Sun’s convective surface, with temperatures as hot as 3.5 million degrees Fahrenheit. Still, the corona is easily outshined by the glare from the Sun itself. Scientists say it’s important to study this region to understand how the Sun generates the solar wind and drives geomagnetic storms that can affect the Earth.

NASA’s Parker Solar Probe, well-insulated from the scorching heat, became the first spacecraft to fly through the corona in 2021. It is collecting data on the actual conditions within the Sun’s atmosphere, while a network of other spacecraft monitor solar activity from afar to get the big picture.

Proba-3 is tasked with imaging a normally invisible part of the corona as close as 43,500 miles (70,000 kilometers) above the Sun’s surface. Extreme ultraviolet instruments are capable of observing the part of the corona closest to the Sun, while existing coronagraphs on other satellites are good at seeing the outermost portion of the corona.

“That leaves a significant observing gap, from about 3 solar radii down to 1.1 solar radii, that Proba-3 will be able to fill,” said Andrei Zhukov of the Royal Observatory of Belgium, principal investigator for Proba-3’s coronagraph instrument. “This will make it possible, for example, to follow the evolution of the colossal solar explosions called Coronal Mass Ejections as they rise from the solar surface and the outward acceleration of the solar wind.”

Proba-3’s coronagraph instrument will take images as often as once every two seconds, helping scientists search for small-scale fast-moving plasma waves that might be responsible for driving up the corona’s hellish temperatures. The mission will also hunt for the glow of plasma jets scientists believe have a role in accelerating the solar wind, a cloud of particles streaming away from the Sun at speeds of up to 1.2 million mph (2 million km/hr).

These are two of the core science objectives for the Proba-3 mission. But the project has a deeper purpose of proving two satellites can continually fly in tight formation. This level of precision could meet the exacting demands of future space missions, such as Mars Sample Return and the clearing of space junk from low-Earth orbit, according to ESA.

“Proba-3’s coronal observations will take place as part of a larger in-orbit demonstration of precise formation flying,” said Josef Aschbacher, ESA’s director general. “The best way to prove this new European technology works as intended is to produce novel science data that nobody has ever seen before.

“It is not practical today to fly a single 150-meter-long spacecraft in orbit, but if Proba-3 can indeed achieve an equivalent performance using two small spacecraft, the mission will open up new ways of working in space for the future,” Aschbacher said in a statement. “Imagine multiple small platforms working together as one to form far-seeing virtual telescopes or arrays.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Google’s Genie 2 “world model” reveal leaves more questions than answers


Making a command out of your wish?

Long-term persistence, real-time interactions remain huge hurdles for AI worlds.

A sample of some of the best-looking Genie 2 worlds Google wants to show off. Credit: Google Deepmind

In March, Google showed off its first Genie AI model. After training on thousands of hours of 2D run-and-jump video games, the model could generate halfway-passable, interactive impressions of those games based on generic images or text descriptions.

Nine months later, this week’s reveal of the Genie 2 model expands that idea into the realm of fully 3D worlds, complete with controllable third- or first-person avatars. Google’s announcement talks up Genie 2’s role as a “foundational world model” that can create a fully interactive internal representation of a virtual environment. That could allow AI agents to train themselves in synthetic but realistic environments, Google says, forming an important stepping stone on the way to artificial general intelligence.

But while Genie 2 shows just how much progress Google’s DeepMind team has achieved in the last nine months, the limited public information about the model thus far leaves a lot of questions about how close these foundational world models are to being useful for anything beyond some short but sweet demos.

How long is your memory?

Much like the original 2D Genie model, Genie 2 starts from a single image or text description and then generates subsequent frames of video based on both the previous frames and fresh input from the user (such as a movement direction or “jump”). Google says it trained on a “large-scale video dataset” to achieve this, but it doesn’t say just how much training data was necessary compared to the 30,000 hours of footage used to train the first Genie.

Short GIF demos on the Google DeepMind promotional page show Genie 2 being used to animate avatars ranging from wooden puppets to intricate robots to a boat on the water. Simple interactions shown in those GIFs demonstrate those avatars busting balloons, climbing ladders, and shooting exploding barrels without any explicit game engine describing those interactions.

Those Genie 2-generated pyramids will still be there in 30 seconds. But in five minutes? Credit: Google DeepMind

Perhaps the biggest advance claimed by Google here is Genie 2’s “long horizon memory.” This feature allows the model to remember parts of the world as they come out of view and then render them accurately as they come back into the frame based on avatar movement. This kind of persistence has proven to be a persistent problem for video generation models like Sora, which OpenAI said in February “do[es] not always yield correct changes in object state” and can develop “incoherencies… in long duration samples.”

The “long horizon” part of “long horizon memory” is perhaps a little overzealous here, though, as Genie 2 only “maintains a consistent world for up to a minute,” with “the majority of examples shown lasting [10 to 20 seconds].” Those are definitely impressive time horizons in the world of AI video consistency, but it’s pretty far from what you’d expect from any other real-time game engine. Imagine entering a town in a Skyrim-style RPG, then coming back five minutes later to find that the game engine had forgotten what that town looks like and generated a completely different town from scratch instead.

What are we prototyping, exactly?

Perhaps for this reason, Google suggests that Genie 2, as it stands, is less a tool for creating complete game experiences and more a way to “rapidly prototype diverse interactive experiences” or to turn “concept art and drawings… into fully interactive environments.”

The ability to transform static “concept art” into lightly interactive “concept videos” could definitely be useful for visual artists brainstorming ideas for new game worlds. However, these kinds of AI-generated samples might be less useful for prototyping actual game designs that go beyond the visual.

On Bluesky, British game designer Sam Barlow (Silent Hill: Shattered Memories, Her Story) points out how game designers often use a process called whiteboxing to lay out the structure of a game world as simple white boxes well before the artistic vision is set. The idea, he says, is to “prove out and create a gameplay-first version of the game that we can lock so that art can come in and add expensive visuals to the structure. We build in lo-fi because it allows us to focus on these issues and iterate on them cheaply before we are too far gone to correct.”

Generating elaborate visual worlds using a model like Genie 2 before designing that underlying structure feels a bit like putting the cart before the horse. The process almost seems designed to generate generic, “asset flip”-style worlds with AI-generated visuals papered over generic interactions and architecture.

As podcaster Ryan Zhao put it on Bluesky, “The design process has gone wrong when what you need to prototype is ‘what if there was a space.'”

Gotta go fast

When Google revealed the first version of Genie earlier this year, it also published a detailed research paper outlining the specific steps taken behind the scenes to train the model and how that model generated interactive videos. No such research paper has been published detailing Genie 2’s process, leaving us guessing at some important details.

One of the most important of these details is model speed. The first Genie model generated its world at roughly one frame per second, a rate that was orders of magnitude slower than would be tolerably playable in real time. For Genie 2, Google only says that “the samples in this blog post are generated by an undistilled base model, to show what is possible. We can play a distilled version in real-time with a reduction in quality of the outputs.”

Reading between the lines, it sounds like the full version of Genie 2 operates at something well below the real-time interactions implied by those flashy GIFs. It’s unclear how much “reduction in quality” is necessary to get a distilled version of the model running with real-time controls, but given the lack of examples presented by Google, we have to assume that reduction is significant.

Oasis’ AI-generated Minecraft clone shows great potential, but still has a lot of rough edges, so to speak. Credit: Oasis

Real-time, interactive AI video generation isn’t exactly a pipe dream. Earlier this year, AI model maker Decart and hardware maker Etched published the Oasis model, showing off a human-controllable, AI-generated video clone of Minecraft that runs at a full 20 frames per second. However, that 500 million parameter model was trained on millions of hours of footage of a single, relatively simple game, and focused exclusively on the limited set of actions and environmental designs inherent to that game.

When Oasis launched, its creators fully admitted the model “struggles with domain generalization,” showing how “realistic” starting scenes had to be reduced to simplistic Minecraft blocks to achieve good results. And even with those limitations, it’s not hard to find footage of Oasis degenerating into horrifying nightmare fuel after just a few minutes of play.

What started as a realistic-looking soldier in this Genie 2 demo degenerates into this blobby mess just seconds later. Credit: Google DeepMind

We can already see similar signs of degeneration in the extremely short GIFs shared by the Genie team, such as an avatar’s dream-like fuzz during high-speed movement or NPCs that quickly fade into undifferentiated blobs at a short distance. That’s not a great sign for a model whose “long horizon memory” is supposed to be a key feature.

A learning crèche for other AI agents?

From this image, Genie 2 could generate a useful training environment for an AI agent and a simple “pick a door” task. Credit: Google DeepMind

Genie 2 seems to be using individual game frames as the basis for the animations in its model. But it also seems able to infer some basic information about the objects in those frames and craft interactions with those objects in the way a game engine might.

Google’s blog post shows how a SIMA agent inserted into a Genie 2 scene can follow simple instructions like “enter the red door” or “enter the blue door,” controlling the avatar via simple keyboard and mouse inputs. That could potentially make Genie 2 environments a great test bed for AI agents in various synthetic worlds.

Google claims rather grandiosely that Genie 2 puts it on “the path to solving a structural problem of training embodied agents safely while achieving the breadth and generality required to progress towards [artificial general intelligence].” Whether or not that ends up being true, recent research shows that agent learning gained from foundational models can be effectively applied to real-world robotics.

Using this kind of AI model to create worlds for other AI models to learn in might be the ultimate use case for this kind of technology. But when it comes to the dream of an AI model that can create generic 3D worlds that a human player could explore in real time, we might not be as close as it seems.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.



After critics decry Orion heat shield decision, NASA reviewer says agency is correct


“If this isn’t raising red flags out there, I don’t know what will.”

NASA’s Orion spacecraft, consisting of a US-built crew module and European service module, is lifted during prelaunch processing at Kennedy Space Center in 2021. Credit: NASA/Amanda Stevenson

Within hours of NASA announcing its decision to fly the Artemis II mission aboard an Orion spacecraft with an unmodified heat shield, critics assailed the space agency, saying it had made the wrong decision.

“Expediency won over safety and good materials science and engineering. Sad day for NASA,” Ed Pope, an expert in advanced materials and heat shields, wrote on LinkedIn.

There is a lot riding on NASA’s decision, as the Artemis II mission involves four astronauts and the space agency’s first crewed mission into deep space in more than 50 years.

A former NASA astronaut, Charles Camarda, also expressed his frustrations on LinkedIn, saying the space agency and its leadership team should be “ashamed.” In an interview on Friday, Camarda, an aerospace engineer who spent two decades working on thermal protection for the space shuttle and hypersonic vehicles, said NASA is relying on flawed probabilistic risk assessments and Monte Carlo simulations to determine the safety of Orion’s existing heat shield.

“I worked at NASA for 45 years,” Camarda said. “I love NASA. I do not love the way NASA has become. I do not like that we have lost our research culture.”

NASA makes a decision

Pope, Camarda, and others—an official expected to help set space policy for the Trump administration told Ars on background, “It’s difficult to trust any of their findings”—note that NASA has spent two years assessing the char damage incurred by the Orion spacecraft during its first lunar flight in late 2022, with almost no transparency. Initially, agency officials downplayed the severity of the issue, and the full scope of the problem was not revealed until a report this May by NASA’s inspector general, which included photos of a heavily pock-marked heat shield.

This year, from April to August, NASA convened an independent review team (IRT) to assess its internal findings about the root cause of the charring on the Orion heat shield and determine whether its plan to proceed without modifications to the heat shield was the correct one. However, though this review team wrapped up its work in August and began briefing NASA officials in September, the space agency kept mostly silent about the problem until a news conference on Thursday.

The inspector general’s report on May 1 included new images of Orion’s heat shield. Credit: NASA Inspector General

“Based on the data, we have decided—NASA unanimously and our decision-makers—to move forward with the current Artemis II Orion capsule and heat shield, with a modified entry trajectory,” Bill Nelson, NASA’s administrator, said Thursday. The heat shield investigation and other issues with the Orion spacecraft will now delay the Artemis II launch until April 2026, a slip of seven months from the previous launch date in September 2025.

Notably, the chair of the IRT, a former NASA flight director named Paul Hill, was not present at Thursday’s news conference. Nor did the space agency release the IRT’s report on its recommendations to NASA.

In an interview, Camarda said he knew two people on the IRT who dissented from its conclusions that NASA’s plan to fly the Orion heat shield, without modifications to address the charring problem, was acceptable. He also criticized the agency for not publicly releasing the independent report. “NASA did not post the results of the IRT,” he said. “Why wouldn’t they post the results of what the IRT said? If this isn’t raising red flags out there, I don’t know what will.”

The view from the IRT

Ars took these concerns to NASA on Friday, and the agency responded by offering an interview with Paul Hill, the review team’s chair. He strongly denied there were any dissenting views.

“Every one of our conclusions, every one of our recommendations, was unanimously agreed to by our team,” Hill said. “We went through a lot of effort, arguing sentence by sentence, to make sure the entire team agreed. To get there we definitely had some robust and energetic discussions.”

Hill did acknowledge that, at the outset of the review team’s discussions, two people were opposed to NASA’s plan to fly the heat shield as is. “There was, early on, definitely a difference of opinion with a couple of people who felt strongly that Orion’s heat shield was not good enough to fly as built,” he said.

However, Hill said the IRT was won over by the depth of NASA’s testing and the openness of agency engineers who worked with them. He singled out Luis Saucedo, a NASA engineer at NASA’s Johnson Space Center who led the agency’s internal char loss investigation.

“The work that was done by NASA, it was nothing short of eye-watering, it was incredible,” Hill said.

At the base of Orion, which has a titanium shell, there are 186 blocks of a material called Avcoat individually attached to provide a protective layer that allows the spacecraft to survive the heating of atmospheric reentry. Returning from the Moon, Orion encounters temperatures of up to 5,000° Fahrenheit (2,760° Celsius). A char layer that builds up on the outer skin of the Avcoat material is supposed to ablate, or erode, in a predictable manner during reentry. Instead, during Artemis I, fragments fell off the heat shield and left cavities in the Avcoat material.
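For readers who want to check the quoted reentry temperature conversion, here is a quick sketch (the helper function is ours, for illustration only):

```python
# Convert the quoted peak reentry temperature from Fahrenheit to Celsius.
def f_to_c(f: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

print(f_to_c(5000))  # 2760.0, matching the Celsius figure in the text
```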

Work by Saucedo and others, including substantial testing in ground facilities, wind tunnels, and high-temperature arc jet chambers, allowed engineers to pin down the root cause: gases became trapped in the heat shield, leading to cracking. Hill said his team was convinced that NASA successfully recreated the conditions observed during reentry and replicated, in ground testing, the Avcoat cracking that occurred during Artemis I.

When he worked at the agency, Hill played a leading role during the investigation into the cause of the loss of space shuttle Columbia, in 2003. He said he could understand if NASA officials “circled the wagons” in response to the IRT’s work, but he said the agency could not have been more forthcoming. Every time the review team wanted more data or information, it was made available. Eventually, this made the entire IRT comfortable with NASA’s findings.

Publicly, NASA could have been more transparent

The stickiest point during the review team’s discussions involved the permeability of the heat shield. Counter-intuitively, the heat shield was not permeable enough during Artemis I. This led to gas buildup, higher pressures, and the cracking ultimately observed. The IRT was concerned because, as designed, the heat shield for Artemis II is actually more impermeable than the Artemis I vehicle.

Why is this? It has to do with the ultrasound testing that verifies the strength of the bond between the Avcoat blocks and the titanium skin of Orion. With a more permeable heat shield, it was difficult to complete this testing with the Artemis I vehicle. So the shield for Artemis II was made more impermeable to accommodate ultrasound testing. “That was a technical mistake, and when they made that decision they did not understand the ramifications,” Hill said.

However, Hill said NASA’s data convinced the IRT that modifying the entry profile for Artemis II, to minimize the duration of passage through the atmosphere, would offset the impermeability of the heat shield.

Hill said he did not have the authority to release the IRT report, but he did agree that the space agency has not been forthcoming with public information about their analyses before this week.

“This is a complex story to tell, and if you want everybody to come along with you, you’ve got to keep them informed,” he said of NASA. “I think they unintentionally did themselves a disservice by holding their cards too close.”


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



The 2025 BMW i5 M60 review: An EV that makes you want to drive and drive

In fact, I think the cheaper, less powerful i5 eDrive40 (or the all-wheel drive xDrive40) is the better i5, but BMW didn’t have one of those available in the press fleet, so a review of that version will have to wait for one to show up. As I often write, the most powerful version of any given EV is usually a worse deal, as they’re invariably fitted with big, range-sapping wheels, and it’s not like a 0–60 time of 5.7 seconds is particularly slow, even by 2024’s standards.

And those big wheels cause a range hit. The EPA rates the i5 M60 at 240 miles (386 km) on a full charge, although in Efficient mode that should be beatable; our test car averaged 3.2 miles/kWh (19.4 kWh/100 km) over 1,000 miles (1,609 km). Then again, if you stick it in Sport mode and hoof the throttle too often, it’s not hard to see that number plummet to 2.4 miles/kWh (25.9 kWh/100 km).
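Those efficiency figures convert cleanly between the US and metric conventions, as a quick sketch shows (the function name is our own):

```python
KM_PER_MILE = 1.609344  # exact international mile

def mi_per_kwh_to_kwh_per_100km(mi_per_kwh: float) -> float:
    """Convert miles-per-kWh efficiency into the kWh-per-100-km convention."""
    km_per_kwh = mi_per_kwh * KM_PER_MILE
    return 100 / km_per_kwh

print(round(mi_per_kwh_to_kwh_per_100km(3.2), 1))  # 19.4, as quoted
print(round(mi_per_kwh_to_kwh_per_100km(2.4), 1))  # 25.9, as quoted
```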

BMW i5 Hofmeister kink

In case you forgot which series BMW this is, the panel set into the Hofmeister kink reminds you it’s a 5. Credit: BMW

As the latest version of BMW’s fifth-gen EV powertrain, the i5 has its most up-to-date fast charging software, which uses a new control strategy to maintain higher levels of power for longer while plugged into a DC fast charger, even when starting at a state of charge as high as 50 percent. During our testing, we fast-charged the i5 from 19 to 91 percent, which took just over 37 minutes, delivering 62 kWh and peaking at an impressive 209 kW, although before long power delivery dropped to 150 kW.
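Some back-of-envelope arithmetic on that session (our own figuring, not a BMW spec): 62 kWh in roughly 37 minutes works out to about 100 kW average power, and dividing the energy by the 72-point state-of-charge swing gives only a rough ceiling on pack size, since the dispensed energy includes charging losses:

```python
# Charging-session arithmetic from the figures reported above.
energy_kwh = 62.0                # energy delivered by the charger
minutes = 37.0                   # approximate session length
soc_start, soc_end = 0.19, 0.91  # state of charge, 19 to 91 percent

avg_power_kw = energy_kwh / (minutes / 60)
# Dispensed energy includes losses, so this overstates usable pack capacity.
implied_pack_kwh = energy_kwh / (soc_end - soc_start)

print(round(avg_power_kw, 1))      # ~100.5 kW average
print(round(implied_pack_kwh, 1))  # ~86.1 kWh rough upper bound
```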

Software-defined emotions

Sport mode is fast and sounds good, accompanied as it is by Hans Zimmer-crafted powertrain sounds. And Efficient, which mostly just relies on the 335 hp (250 kW), 317 lb-ft (430 Nm) rear motor, is quiet and comfortable. But the i5 offers you some other choices, including Expressive, Relax, and Digital Art modes, which reconfigure the cabin lighting, the dynamic wallpaper on the curved display, and the powertrain sounds.



These spiders listen for prey before hurling webs like slingshots

Along came a spider

(A) Untensed web shown from front view. (B) Tensed web shown from side view. Credit: S.I. Han and T.A. Blackledge, 2024

The 19 spiders built 26 webs over the testing period. For the experiments, Han and Blackledge used a weighted tuning fork, with a frequency in the mid-range of the wingbeats of many North American mosquito species, as a control stimulus. They also attached actual mosquitoes to thin strips of black construction paper by dabbing a bit of superglue on their abdomens or hind legs. This ensured the mosquitoes could still beat their wings when approaching the webs. The experiments were recorded on high-speed video for analysis.

As expected, spiders released their webs when flapping mosquitoes drew near, but the video footage showed that the releases occurred before the mosquitoes ever touched the web. The spiders released their webs just as frequently when the tuning fork was brandished nearby. It wasn’t likely that they were relying on visual cues because the spiders were centered at the vertex of the web and anchor line, facing away from the cone. Ray spiders also don’t have well-developed eyes. And one spider did not respond to a motionless mosquito held within the capture cone but released its web only when the insect started flapping its wings.

“The decision to release a web is therefore likely based upon vibrational information,” the authors concluded, noting that ray spiders have sound-sensitive hairs on their back legs that could be detecting air currents or sound waves since those legs are typically closest to the cone. Static webs are known to vibrate in response to airborne sounds, so it seems likely that ray spiders can figure out an insect’s approach, its size, or maybe even its behavior before the prey ever makes contact with the web.

As for the web kinematics, Han and Blackledge determined that the webs can accelerate at up to 504 m/s², reaching speeds as high as 1 m/s, and hence can catch mosquitoes in 38 milliseconds or less. Even the speediest mosquitoes might struggle to outrun that.
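Treating those reported peaks as constants gives a simple uniform-acceleration sketch of the motion (our idealization, not the authors’ model):

```python
# Idealized slingshot-web kinematics from the reported peak figures.
a = 504.0        # m/s^2, peak web acceleration
v_max = 1.0      # m/s, peak web speed
t_catch = 0.038  # s, reported capture time

t_to_vmax = v_max / a          # time to reach top speed
d_accel = v_max**2 / (2 * a)   # distance covered while accelerating
d_catch_max = v_max * t_catch  # upper bound on travel within the catch time

print(f"{t_to_vmax * 1000:.1f} ms to top speed")      # ~2.0 ms
print(f"{d_accel * 1000:.2f} mm while accelerating")  # ~0.99 mm
print(f"{d_catch_max * 1000:.0f} mm max travel")      # 38 mm
```

In other words, the web hits top speed in about two milliseconds and within a millimeter of travel, which helps explain why a mosquito has so little time to react.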

Journal of Experimental Biology, 2024. DOI: 10.1242/jeb.249237  (About DOIs).



OpenAI teases 12 days of mystery product launches starting tomorrow

On Wednesday, OpenAI CEO Sam Altman announced a “12 days of OpenAI” event starting December 5, during which the company will unveil new AI features and products over 12 consecutive weekdays.

Altman did not specify the exact features or products OpenAI plans to unveil, but a report from The Verge about this “12 days of shipmas” event suggests the products may include a public release of the company’s text-to-video model Sora and a new “reasoning” AI model similar to o1-preview. Perhaps we may even see DALL-E 4 or a new image generator based on GPT-4o’s multimodal capabilities.

Altman’s full tweet included hints at releases both big and small:

🎄🎅starting tomorrow at 10 am pacific, we are doing 12 days of openai.

each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers.

we’ve got some great stuff to share, hope you enjoy! merry christmas.

If we’re reading the calendar correctly, 12 weekdays means a new announcement every day until December 20.
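That calendar math checks out, assuming one announcement per weekday on the 2024 calendar (the helper function is ours):

```python
from datetime import date, timedelta

def nth_weekday(start: date, n: int) -> date:
    """Return the nth weekday (Mon-Fri) on or after start, counting start itself."""
    d = start
    counted = 0
    while True:
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            counted += 1
            if counted == n:
                return d
        d += timedelta(days=1)

print(nth_weekday(date(2024, 12, 5), 12))  # 2024-12-20
```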



Amazon secretly slowed deliveries, deceived anyone who complained, lawsuit says

In a statement to Ars, Amazon spokesperson Kelly Nantel said that claims that Amazon’s “business practices are somehow discriminatory or deceptive” are “categorically false.”

Nantel said that Amazon started using third-party services to deliver to these areas to “put the safety of delivery drivers first.”

“In the ZIP codes in question, there have been specific and targeted acts against drivers delivering Amazon packages,” Nantel said. “We made the deliberate choice to adjust our operations, including delivery routes and times, for the sole reason of protecting the safety of drivers.”

Nantel also pushed back on claims that Amazon concealed this choice, claiming that the company is “always transparent with customers during the shopping journey and checkout process about when, exactly, they can expect their orders to arrive.”

But that doesn’t really gel with Schwalb’s finding that even customers using Amazon’s support chat were allegedly misled. During one chat, a frustrated user, pointing out delivery discrepancies between DC ZIP codes, asked if Amazon “is a waste of money in my zip code?” Instead of confirming that the ZIP code was excluded from in-house delivery services, the support team member unhelpfully suggested that the user delete and re-add their address to their account.

“Amazon has doubled down on its deception by refusing to disclose the fact of the delivery exclusion, and instead has deceptively implied that slower speeds are simply due to other circumstances, rather than an affirmative decision by Amazon,” Schwalb’s complaint said.

Schwalb takes no issue with Amazon diverting delivery drivers from perceived high-crime areas but insists that Amazon owes its subscribers in those regions an explanation for delivery delays and perhaps even cheaper subscription prices. He has asked for an injunction on Amazon’s allegedly deceptive advertising urging users to pay for fast shipments they rarely, if ever, receive. He also wants Amazon to refund subscribers seemingly cheated out of full subscription benefits and has asked a jury to award civil damages to deter future unfair business practices. Amazon could owe millions in a loss, with each delivery to almost 50,000 users since mid-2022 considered a potential violation.

Nantel said that Amazon has offered to “work together” with Schwalb’s office “to reduce crime and improve safety in these areas” but did not suggest Amazon would be changing how it advertises Prime delivery in the US. Instead, the e-commerce giant plans to fight the claims and prove that “providing fast and accurate delivery times and prioritizing the safety of customers and delivery partners are not mutually exclusive,” Nantel said.
