Author name: Tim Belzer


Blizzard’s pulling of Warcraft I & II tests GOG’s new Preservation Program

GOG’s version goes a bit beyond the classic versions that were on sale on Battle.net. Beyond the broad promise that “this is the best version of this game you can buy on any PC platform,” GOG has made specific tweaks to the networking code for Warcraft I and fixed up the DirectX wrapper for Warcraft II to improve its scaling on modern monitor resolutions.

It’s quite a novel commitment, keeping non-revenue-generating games playable for buyers, even after a publisher no longer makes them available for sale. The Warcraft titles certainly won’t be the only games for which publisher enthusiasm lags behind GOG and its classic gamers.

As noted at the Preservation Program’s launch, for some titles, GOG does not have the rights to modify a game’s build, and only its original developers can do so. So if GOG can’t make it work in, say, DOSBox, extraordinary efforts may be required.

A screenshot from Blizzard's Warcraft II: Remastered release, showing brick keeps, archers, foot soldiers, dragons around a roost, and knights on horseback.

Warcraft II: Remastered lets you switch back and forth between classic and remastered graphics and promises to offer better support for widescreen monitors and more units selected at once.

Credit: Blizzard


Beyond being tied to Blizzard’s Battle.net service in perpetuity, there are other reasons Warcraft fans might want to hold onto the originals. Blizzard’s 2020 release of Warcraft III Reforged was widely panned as uneven, unfinished, and in some ways unfair, as it, too, removed the original Warcraft III from stores. Reforged was still in rough shape a year later, leading Ars’ list of 2020’s most disappointing games. A 2.0 update promised a total reboot, but fans remain torn on the new art styles and are somewhat wary.

Then again, you can now select more units in the first two Warcraft games’ remasters, and you get “numerous visual updates for the UI.”



Supermassive black hole binary emits unexpected flares

“In addition to stars, gas clouds can also be disrupted by SMBHs and their binaries,” they said in the same study. “The key difference is that the clouds can be comparable to or even larger than the binary separation, unlike stars, which are always much smaller.”

The results of a previous study that numerically modeled this type of situation also pointed to a gas cloud. Just like the hypothetical supermassive black hole binary in the model, AT 2021hdr would accrete large amounts of material every time the black holes, halfway through each orbit, had to cross the cloud—their gravity tears away some of the cloud, which ends up in their accretion disks, on every crossing. The pair is now thought to take in anywhere between 3 and 30 percent of the cloud every few cycles. From a cloud so huge, that’s a lot of gas.

The supermassive black holes in AT 2021hdr are predicted to crash into each other and merge in another 70,000 years. They are also part of another merger, in which their host galaxy is gradually merging with a nearby galaxy, which was first discovered by the same team (this has no effect on the BSMBH tidal disruption of the gas cloud).

How the behavior of AT 2021hdr develops could tell us more about its nature and uphold or disprove the idea that it is eating away at a gaseous cloud instead of a star or something else. For now, it seems these black holes don’t just get gas from what they eat—they eat the gas itself.

Astronomy & Astrophysics, 2024. DOI: 10.1051/0004-6361/202451305



How should we treat beings that might be sentient?


Being aware of the maybe self-aware

A book argues that we’ve not thought enough about things that might think.

What rights should a creature with ambiguous self-awareness, like an octopus, be granted? Credit: A. Martin UW Photography

If you aren’t yet worried about the multitude of ways you inadvertently inflict suffering onto other living creatures, you will be after reading The Edge of Sentience by Jonathan Birch. And for good reason. Birch, a professor of philosophy at the London School of Economics and Political Science, was one of a team of experts chosen by the UK government to help establish the Animal Welfare (Sentience) Act 2022—a law that protects animals whose sentience status is unclear.

According to Birch, even insects may possess sentience, which he defines as the capacity to have valenced experiences, or experiences that feel good or bad. At the very least, Birch explains, insects (as well as all vertebrates and a selection of invertebrates) are sentience candidates: animals that may be conscious and, until proven otherwise, should be regarded as such.

Although it might be a stretch to wrap our mammalian minds around insect sentience, it is not difficult to imagine that fellow vertebrates have the capacity to experience life, nor does it come as a surprise that even some invertebrates, such as octopuses and other cephalopod mollusks (squid, cuttlefish, and nautilus) qualify for sentience candidature. In fact, one species of octopus, Octopus vulgaris, has been protected by the UK’s Animal Scientific Procedures Act (ASPA) since 1986, which illustrates how long we have been aware of the possibility that invertebrates might be capable of experiencing valenced states of awareness, such as contentment, fear, pleasure, and pain.

A framework for fence-sitters

Non-human animals, of course, are not the only beings with an ambiguous sentience status that poses complicated questions. Birch discusses people with disorders of consciousness, embryos and fetuses, neural organoids (brain tissue grown in a dish), and even “AI technologies that reproduce brain functions and/or mimic human behavior,” all of which share the unenviable position of being perched on the edge of sentience—a place where it is excruciatingly unclear whether or not these individuals are capable of conscious experience.

What’s needed, Birch argues, when faced with such staggering uncertainty about the sentience status of other beings, is a precautionary framework that outlines best practices for decision-making regarding their care. And in The Edge of Sentience, he provides exactly that, in meticulous, orderly detail.

Over more than 300 pages, he outlines three fundamental framework principles and 26 specific case proposals about how to handle complex situations related to the care and treatment of sentience-edgers. For example, Proposal 2 cautions that “a patient with a prolonged disorder of consciousness should not be assumed incapable of experience” and suggests that medical decisions made on their behalf cautiously presume they are capable of feeling pain. Proposal 16 warns about conflating brain size, intelligence, and sentience, and recommends decoupling the three so that we do not incorrectly assume that small-brained animals are incapable of conscious experience.

Surgeries and stem cells

Be forewarned, some topics in The Edge of Sentience are difficult. For example, Chapter 10 covers embryos and fetuses. In the 1980s, Birch shares, it was common practice to not use anesthesia on newborn babies or fetuses when performing surgery. Why? Because whether or not newborns and fetuses experience pain was up for debate. Rather than put newborns and fetuses through the risks associated with anesthesia, it was accepted practice to give them a paralytic (which prevents all movement) and carry on with invasive procedures, up to and including heart surgery.

After parents raised alarms over the devastating outcomes of this practice, such as infant mortality, it was eventually changed. Birch’s takeaway message is clear: When in doubt about the sentience status of a living being, we should probably assume it is capable of experiencing pain and take all necessary precautions to prevent it from suffering. To presume the opposite can be unethical.

This guidance is repeated throughout the book. Neural organoids, discussed in Chapter 11, are mini-models of brains developed from stem cells. The potential for scientists to use neural organoids to unravel the mechanisms of debilitating neurological conditions—and to avoid invasive animal research while doing so—is immense. It is also ethical, Birch posits, since studying organoids lessens the suffering of research animals. However, we don’t yet know whether or not neural tissue grown in a dish has the potential to develop sentience, so he argues that we need to develop a precautionary approach that balances the benefits of reduced animal research against the risk that neural organoids are capable of being sentient.

A four-pronged test

Along this same line, Birch says, all welfare decisions regarding sentience-edgers require an assessment of proportionality. We must balance the nature of a given proposed risk to a sentience candidate with potential harms that could result if nothing is done to minimize the risk. To do this, he suggests testing four criteria: permissibility-in-principle, adequacy, reasonable necessity, and consistency. Birch refers to this assessment process as PARC and dives deep into its implementation in Chapter 8.

When applying the PARC criteria, one begins by testing permissibility-in-principle: whether or not the proposed response to a risk is ethically permissible. To illustrate this, Birch poses a hypothetical question: would it be ethically permissible to mandate vaccination in response to a pandemic? If a panel of citizens were in charge of answering this question, they might say “no,” because forcing people to be vaccinated feels unethical. Yet, when faced with the same question, a panel of experts might say “yes,” because allowing people to die who could be saved by vaccination also feels unethical. Gauging permissibility-in-principle, therefore, entails careful consideration of the likely possible outcomes of a proposed response. If an outcome is deemed ethical, it is permissible.

Next, the adequacy of a proposed response must be tested. A proportionate response to a risk must do enough to lessen the risk. This means the risk must be reduced to “an acceptable level” or, if that’s not possible, a response should “deliver the best level of risk reduction that can be achieved” via an ethically permissible option.

The third test is reasonable necessity. A proposed response to a risk must not overshoot—it should not go beyond what is reasonably necessary to reduce risk, in terms of either cost or imposed harm. And last, consistency should be considered. The example Birch presents is animal welfare policy. He suggests we should always “aim for taxonomic consistency: our treatment of one group of animals (e.g., vertebrates) should be consistent with our treatment of another (e.g., invertebrates).”

The Edge of Sentience, as a whole, is a dense text overflowing with philosophical rhetoric. Yet this rhetoric plays a crucial role in the storytelling: it is the backbone for Birch’s clear and organized conclusions, and it serves as a jumping-off point for the logical progression of his arguments. Much like “I think, therefore I am” gave René Descartes a foundation upon which to build his idea of substance dualism, Birch uses the fundamental position that humans should not inflict gratuitous suffering onto fellow creatures as a base upon which to build his precautionary framework.

For curious readers who would prefer not to wade too deeply into meaty philosophical concepts, Birch generously provides a shortcut to his conclusions: a cheat sheet of his framework principles and special case proposals is presented at the front of the book.

Birch’s ultimate message in The Edge of Sentience is that a massive shift in how we view beings with a questionable sentience status should be made. And we should ideally make this change now, rather than waiting for scientific research to infallibly determine who and what is sentient. Birch argues that one way that citizens and policy-makers can begin this process is by adopting the following decision-making framework: always avoid inflicting gratuitous suffering on sentience candidates; take precautions when making decisions regarding a sentience candidate; and make proportional decisions about the care of sentience candidates that are “informed, democratic and inclusive.”

You might be tempted to shake your head at Birch’s confidence in humanity. No matter how deeply you agree with his stance of doing no harm, it’s hard to have confidence in humanity given our track record of not making big changes for the benefit of living creatures, even when said creatures include our own species (cue global warming here). It seems excruciatingly unlikely that the entire world will adopt Birch’s rational, thoughtful, comprehensive plan for reducing the suffering of all potentially sentient creatures. Yet Birch, a philosopher at heart, ignores human history and maintains a tone of articulate, patient optimism. He clearly believes in us—he knows we can do better—and he offers to hold our hands and walk us through the steps to do so.

Lindsey Laughlin is a science writer and freelance journalist who lives in Portland, Oregon, with her husband and four children. She earned her BS from UC Davis with majors in physics, neuroscience, and philosophy.



Code found online exploits LogoFAIL to install Bootkitty Linux backdoor

Normally, Secure Boot prevents the UEFI from running any subsequent files unless they bear a digital signature certifying those files are trusted by the device maker. The exploit bypasses this protection by injecting shell code stashed in a malicious bitmap image displayed by the UEFI during the boot-up process. The injected code installs a cryptographic key that digitally signs a malicious GRUB file along with a backdoored image of the Linux kernel, both of which run during later stages of the boot process on Linux machines.

The silent installation of this key induces the UEFI to treat the malicious GRUB and kernel image as trusted components, and thereby bypass Secure Boot protections. The final result is a backdoor slipped into the Linux kernel before any other security defenses are loaded.
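The trust logic at play can be sketched as a toy model (hypothetical Python, not real UEFI code; the key names are invented): Secure Boot runs only components signed by a key on the firmware's allow list, and the exploit's whole effect is to put an attacker-controlled key on that list without the usual confirmation step.

```python
# Toy model of the trust logic described above -- NOT real UEFI code.
# Key names ("oem-key", "attacker-key") are invented for illustration.

class Firmware:
    def __init__(self, trusted_keys):
        self.trusted_keys = set(trusted_keys)

    def enroll_key(self, key, user_confirmed):
        # The legitimate path: enrollment requires the user to confirm
        # the new key at the console on the next reboot.
        if user_confirmed:
            self.trusted_keys.add(key)

    def logofail_enroll(self, key):
        # The exploit path: shell code injected via a malicious boot
        # image stores the key silently, with no user interaction.
        self.trusted_keys.add(key)

    def boot(self, component_signing_key):
        # Secure Boot check: run a component only if its signing key
        # is on the allow list.
        return component_signing_key in self.trusted_keys


fw = Firmware(trusted_keys={"oem-key"})
print(fw.boot("attacker-key"))   # False: backdoored GRUB is rejected

fw.logofail_enroll("attacker-key")
print(fw.boot("attacker-key"))   # True: the same backdoored GRUB now boots
```

The point of the sketch is that nothing about the signature check itself is broken; the attacker simply gets to edit the list the check consults.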

Diagram illustrating the execution flow of the LogoFAIL exploit Binarly found in the wild. Credit: Binarly

In an online interview, HD Moore, CTO and co-founder at runZero and an expert in firmware-based malware, explained the Binarly report this way:

The Binarly paper points to someone using the LogoFAIL bug to configure a UEFI payload that bypasses secure boot (firmware) by tricking the firmware into accepting their self-signed key (which is then stored in the firmware as the MOK variable). The evil code is still limited to the user-side of UEFI, but the LogoFAIL exploit does let them add their own signing key to the firmware’s allow list (but does not infect the firmware in any way otherwise).

It’s still effectively a GRUB-based kernel backdoor versus a firmware backdoor, but it does abuse a firmware bug (LogoFAIL) to allow installation without user interaction (enrolling, rebooting, then accepting the new MOK signing key).

In a normal secure boot setup, the admin generates a local key, uses this to sign their updated kernel/GRUB packages, tells the firmware to enroll the key they made, then after reboot, the admin has to accept this new key via the console (or remotely via bmc/ipmi/ilo/drac/etc bios console).

In this setup, the attacker can replace the known-good GRUB + kernel with a backdoored version by enrolling their own signing key without user interaction via the LogoFAIL exploit, but it’s still effectively a GRUB-based bootkit, and doesn’t get hardcoded into the BIOS firmware or anything.

Machines vulnerable to the exploit include some models sold by Acer, HP, Fujitsu, and Lenovo when they ship with a UEFI developed by manufacturer Insyde and run Linux. Evidence found in the exploit code indicates the exploit may be tailored for specific hardware configurations of such machines. Insyde issued a patch earlier this year that prevents the exploit from working. Unpatched devices remain vulnerable. Devices from these manufacturers that use non-Insyde UEFIs aren’t affected.



Player 456 is back for revenge in Squid Game S2 trailer

Lee Jung-Jae returns as Player 456 in the second season of Squid Game.

The 2021 Korean series Squid Game was a massive hit for Netflix, racking up 1.65 billion viewing hours in its first four weeks and snagging 14 Emmy nominations. Fans have been longing for a second season ever since, and we’re finally getting it this Christmas. Netflix just released the official trailer.

(Spoilers for S1 below.)

The first season followed Seong Gi-hun (Lee Jung-Jae, seen earlier this year in The Acolyte), a down-on-his-luck gambler who has little left to lose when he agrees to play children’s playground games against 455 other players for money. The twist? If you lose a game, you die. If you cheat, you die. And if you win, you might also die.

“The grotesque spectacle of Squid Game is where it gets most of its appeal, but it resonates because of how relatable Gi-hun and the rest of the game’s contestants are,” Ars Senior Technology Reporter Andrew Cunningham wrote in our 2021 year-end TV roundup. “Alienated from society and each other, driven by guilt or shame or pride or desperation, each of the players we get to know is inescapably human, which is why Squid Game is more than just a gory sideshow.”

In the S1 finale, Gi-hun faced off against fellow finalist and childhood friend Cho Sang-woo (Park Hae-soo) in the titular “squid game.” He won their fight but refused to kill his friend, begging Sang-woo to stop the game by invoking a special clause in their contract whereby they get to live—but do not get the prize money. Sang-woo instead stabbed himself in the neck and asked Gi-hun to take care of his mother. Wracked with guilt, Gi-hun was about to fly to America to live with his daughter when he spotted the game recruiter trying to entice another desperate person. He didn’t get on the plane, deciding instead to try and re-enter the game and take it down from the inside.



Man suffers chemical burn that lasted months after squeezing limes

If Margaritaville were a real place, it should definitely keep a few dermatologists on hand.

In a case of an oft-overlooked food preparation risk, a 40-year-old man showed up to an allergy clinic in Texas with a severe, burning rash on both his hands that had appeared two days earlier. A couple of days later, it blistered. And a few weeks after that, the skin darkened and scaled. After several months, the skin on his hands finally returned to normal.

The culprit: lime juice and sunlight.

It turns out that just before developing the nasty skin eruption, the man had manually squeezed a dozen limes, then headed to an outdoor soccer game without applying sunscreen. His doctors diagnosed the man’s rash as a classic case of phytophotodermatitis, according to a case report published Wednesday in the New England Journal of Medicine.

The condition is caused by toxic substances found in plants (phyto) that react with UV light (photo) to cause a burning, blistering, scaling, pigmented skin condition (dermatitis).

Specifically, the toxic chemicals are furocoumarins, which are found in some weeds and also a range of plants used in food. Those include celery, carrot, parsley, fennel, parsnip, lime, bitter orange, lemon, grapefruit, and sweet orange. Furocoumarins include chemicals with linear structures, called psoralens, and chemicals with angular structures, called angelicins, though not all of them are toxic.



Teaching a drone to fly without a vertical rudder


We can get a drone to fly like a pigeon, but we needed to use feathers to do it.

Pigeons manage to get vertical without using a vertical tail. Credit: HamidEbrahimi

Most airplanes in the world have vertical tails or rudders to prevent Dutch roll instabilities, a combination of yawing and sideways motions with rolling that looks a bit like the movements of a skater. Unfortunately, a vertical tail adds weight and generates drag, which reduces fuel efficiency in passenger airliners. It also increases the radar signature, which is something you want to keep as low as possible in a military aircraft.

In the B-2 stealth bomber, one of the very few rudderless airplanes, Dutch roll instabilities are dealt with using drag flaps positioned at the tips of its wings, which can split and open to make one wing generate more drag than the other and thus laterally stabilize the machine. “But it is not really an efficient way to solve this problem,” says David Lentink, an aerospace engineer and a biologist at the University of Groningen, Netherlands. “The efficient way is solving it by generating lift instead of drag. This is something birds do.”

Lentink led the study aimed at better understanding birds’ rudderless flight mechanics.

Automatic airplanes

Bird flight involves near-constant turbulence—“When they fly around buildings, near trees, near rocks, near cliffs,” Lentink says. The leading hypothesis on how they manage this in a seemingly graceful, effortless manner was suggested by a German scientist named Franz Groebbels. He argued that birds’ ability relied on their reflexes. When he held a bird in his hands, he noticed that its tail would flip down when the bird was pitched up and down, and when the bird was moved left and right, its wings also responded to movement by extending left and right asymmetrically. “Another reason to think reflexes matter is comparing this to our own human locomotion—when we stumble, it is a reflex that saves us from falling,” Lentink claims.

Groebbels’ intuition about birds’ reflexes being responsible for flight stabilization was later backed by neuroscience. The movements of birds’ wings and muscles were recorded and found to be proportional to the extent that the bird was pitched or rolled. The hypothesis, however, was extremely difficult to test with a flying bird—all the experiments aimed at confirming it have been done on birds that were held in place. Another challenge was determining if those wing and tail movements were reflexive or voluntary.

“I think one pretty cool thing is that Groebbels wrote his paper back in 1929, long before autopilot systems or autonomous flight were invented, and yet he said that birds flew like automatic airplanes,” Lentink says. To figure out if he was right, Lentink and his colleagues started with Groebbels’ analogy but worked their way backward—building autonomous airplanes designed to look and fly like birds.

Reverse-engineering pigeons

The first flying robot Lentink’s team built was called the Tailbot. It had fixed wings and a very sophisticated tail that could move with five actuated degrees of freedom. “It could spread—furl and unfurl—move up and down, move sideways, even asymmetrically if necessary, and tilt. It could do everything a bird’s tail can,” Lentink explains. The team put this robot in a wind tunnel that simulated turbulent flight and fine-tuned a controller that adjusted the tail’s position in response to changes in the robot’s body position, mimicking reflexes observed in real pigeons.

“We found that this reflexes controller that managed the tail’s movement worked and stabilized the robot in the wind tunnel. But when we took it outdoors, results were disappointing. It actually ended up crashing,” Lentink says. Given that relying on a morphing tail alone was not enough, the team built another robot called PigeonBot II, which added pigeon-like morphing wings.

Each wing could be independently tucked or extended. Combined with the morphing tail and nine servomotors—two per wing and five in the tail—the robot weighed around 300 grams, about the weight of a real pigeon. Reflexes were managed by the same controller, modified to handle wing motions as well.

To enable autonomous flight, the team fitted the robot with two propellers and an off-the-shelf drone autopilot called Pixracer. The problem with the autopilot, though, was that it was designed for conventional controls you use in quadcopter drones. “We put an Arduino between the autopilot and the robot that translated autopilot commands to the morphing tail and wings’ motions of the robot,” Lentink says.

PigeonBot II passed the outdoor flying test. It could take off, land, and fly entirely on its own or with an operator issuing high-level commands like go up, go down, turn left, or turn right. Flight stabilization relied entirely on bird-like reflexes and worked well. But there was one thing the electronics could not re-create: the robots used real pigeon feathers. “We used them because with current technology it is impossible to create structures that are as lightweight, as stiff, and as complex at the same time,” Lentink says.

Feathery marvels

Birds’ feathers appear simple, but they really are extremely advanced pieces of aerospace hardware. Their complexity starts with nanoscale features. “Feathers have 10-micron 3D hooks on their surface that prevent them from going too far apart. It is the only one-sided Velcro system in the world. This is something that has never been engineered, and there is nothing like this elsewhere in nature,” Lentink says. Those nanoscale hooks, when locked in, can bear loads reaching up to 20 grams.

Then there are macroscale properties. Feathers are not like aluminum structures that have one bending stiffness, one torque stiffness, and that’s it. “They are very stiff in one direction and very soft in another direction, but not soft in a weak way—they can bear significant loads,” Lentink says.

His team attempted to make artificial feathers with carbon fiber, but they couldn’t create anything as lightweight as a real feather. “I don’t know of any 3D printer that could start with 10-micron nanoscale features and work all the way up to macro-scale structures that can be 20 centimeters long,” Lentink says. His team also discovered that pigeons’ feathers could filter out a lot of turbulence perturbations on their own. “It wasn’t just the form of the wing,” Lentink claims.

Lentink estimates that a research program aimed at developing aerospace materials even remotely comparable to feathers could take up to 20 years. But does this mean his whole concept of using reflex-based controllers to solve rudderless flight hangs solely on successfully reverse-engineering a pigeon’s feather? Not really.

Pigeon bombers?

The team thinks it could be possible to build airplanes that emulate the way birds stabilize rudderless flight using readily available materials. “Based on our experiments, we know what wing and tail shapes are needed and how to control them. And we can see if we can create the same effect in a more conventional way with the same types of forces and moments,” Lentink says. He suspects that developing entirely new materials with feather-like properties would only become necessary if the conventional approach bumps into some insurmountable roadblocks and fails.

“In aerospace engineering, you’ve got to try things out. But now we know it is worth doing,” Lentink claims. And he says military aviation ought to be the first to attempt it because the risk is more tolerable there. “New technologies are often first tried in the military, and we want to be transparent about it,” he says. Implementing bird-like rudderless flight stabilization in passenger airliners, which are usually designed in a very conservative fashion, would take a lot more research. “It may easily take 15 years or more before this technology is ready to such a level that we’d have passengers fly with it,” Lentink claims.

Even so, he says there is still much we can learn from studying birds. “We know less about bird’s flight than most people think we know. There is a gap between what airplanes can do and what birds can do. I am trying to bridge this gap by better understanding how birds fly,” Lentink adds.

Science Robotics, 2024. DOI: 10.1126/scirobotics.ado4535

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



OpenAI is at war with its own Sora video testers following brief public leak

“We are not against the use of AI technology as a tool for the arts (if we were, we probably wouldn’t have been invited to this program),” PR Puppets writes. “What we don’t agree with is how this artist program has been rolled out and how the tool is shaping up ahead of a possible public release. We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts.”

An excerpt from the PR Puppets open letter, as it appeared on Hugging Face Tuesday. Credit: PR Puppets / HuggingFace

In a statement provided to Ars Technica, an OpenAI spokesperson noted that “Sora is still in research preview, and we’re working to balance creativity with robust safety measures for broader use. Hundreds of artists in our alpha have shaped Sora’s development, helping prioritize new features and safeguards. Participation is voluntary, with no obligation to provide feedback or use the tool.”

Throughout the day Tuesday, PR Puppets updated its open letter with signatures from 16 people and groups listed as “sora-alpha-artists.” But a source with knowledge of OpenAI’s testing program told Ars that only a couple of those artists were actually part of the alpha testing group and that those artists were asked to refrain from sharing confidential details during Sora’s development.

PR Puppets also later linked to a public petition encouraging others to sign on to the same message shared in their open letter. Artists Memo Akten, Jake Elwes, and CROSSLUCID, who are also listed as “sora-alpha-artists,” were among the first to sign that public petition.

When can we get in?

Made with Sora (see above for more info): pic.twitter.com/VlveALuvYS

— Kol Tregaskes (@koltregaskes) November 26, 2024

Sora made a huge splash when OpenAI first teased its video-generation capabilities in February, before shopping the tech around Hollywood and using it in a public advertisement for Toys R Us. Since then, though, publicly accessible video generators like Minimax and announcements of in-development competitors from Google and Meta have stolen some of Sora’s initial thunder.

Former OpenAI CTO Mira Murati told The Wall Street Journal in March that the company planned to release Sora publicly by the end of the year. But CPO Kevin Weil said in a recent Reddit AMA that the platform’s deployment has been delayed by the “need to perfect the model, need to get safety/impersonation/other things right, and need to scale compute!”



Biased AI in health care faces crackdown in sweeping Biden admin proposals

Prior authorization

Elsewhere in the over 700-page proposal, the administration lays out policy that would bar Medicare Advantage plan providers from reopening and reneging on claims for inpatient hospital admission that had already been granted approval through prior authorization. The proposal would also make criteria for coverage clearer and help ensure that patients know they can appeal denied claims.

The Department of Health and Human Services notes that when patients appeal claim denials from Medicare Advantage plans, the appeals are successful 80 percent of the time. But only 4 percent of claim denials are appealed—”meaning many more denials could potentially be overturned by the plan if they were appealed.”
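To see why HHS flags this gap, a back-of-envelope sketch helps. The two rates (80 percent appeal success, 4 percent appealed) come from the proposal; the denial total below is a hypothetical round number, and the extrapolation assumes unappealed denials would succeed at the same rate as appealed ones, which may not hold in practice.

```python
# Hypothetical illustration of the HHS-cited rates: 80% of appeals succeed,
# but only 4% of denials are ever appealed. The total-denials figure is an
# assumed round number, not a real statistic.
def overturned_counts(total_denials, appeal_rate=0.04, success_rate=0.80):
    appealed = total_denials * appeal_rate
    overturned_today = appealed * success_rate
    # Upper bound if every denial were appealed at the same success rate
    # (an assumption -- unappealed claims may be weaker on average).
    potential = total_denials * success_rate
    return overturned_today, potential

today, potential = overturned_counts(100_000)
print(today)      # 3200.0 denials overturned under current appeal rates
print(potential)  # 80000.0 if all denials were appealed at the same rate
```

Even with that caveat, the gap between the two numbers is what the proposal is pointing at: the overwhelming majority of potentially reversible denials are never challenged.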

AI guardrails

Last, the administration’s proposal also tries to shore up guardrails for the use of AI in health care with edits to existing policy. The goal is to make sure Medicare Advantage insurers don’t adopt flawed AI recommendations that deepen bias and discrimination or exacerbate existing inequities.

As an example, the administration pointed to the use of AI to predict which patients would miss medical appointments—and then recommend that providers double-book the appointment slots for those patients. In this case, low-income patients are more likely to miss appointments, because they may struggle with transportation, childcare, and work schedules. “As a result of using this data within the AI tool, providers double-booked lower-income patients, causing longer wait times for lower-income patients and perpetuating the cycle of additional missed appointments for vulnerable patients.” As such, it should be barred, the administration says.

In general, people of color and people of lower socioeconomic status tend to be more likely to have gaps and flaws in their electronic health records. So, when AI is trained on large data sets of health records, it can generate flawed recommendations based on that spotty and incorrect information, thereby amplifying bias.



ISPs say their “excellent customer service” is why users don’t switch providers


Broadband customer service

ISPs tell FCC that mistreated users would switch to one of their many other options.

Credit: Getty Images | Thamrongpat Theerathammakorn

Lobby groups for Internet service providers claim that ISPs’ customer service is so good already that the government shouldn’t consider any new regulations to mandate improvements. They also claim ISPs face so much competition that market forces require providers to treat their customers well or lose them to competitors.

Cable lobby group NCTA-The Internet & Television Association told the Federal Communications Commission in a filing that “providing high-quality products and services and a positive customer experience is a competitive necessity in today’s robust communications marketplace. To attract and retain customers, NCTA’s cable operator members continuously strive to ensure that the customer support they provide is effective and user-friendly. Given these strong marketplace imperatives, new regulations that would micromanage providers’ customer service operations are unnecessary.”

Lobby groups filed comments in response to an FCC review of customer service that was announced last month, before the presidential election. While the FCC’s current Democratic leadership is interested in regulating customer service practices, the Republicans who will soon take over opposed the inquiry.

USTelecom, which represents telcos such as AT&T and Verizon, said that “the competitive broadband marketplace leaves providers of broadband and other communications services no choice but to provide their customers with not only high-quality broadband, but also high-quality customer service.”

“If a provider fails to efficiently resolve an issue, they risk losing not only that customer—and not just for the one service, but potentially for all of the bundled services offered to that customer—but also any prospective customers that come across a negative review online. Because of this, broadband providers know that their success is dependent upon providing and maintaining excellent customer service,” USTelecom wrote.

While the FCC Notice of Inquiry said that providers should “offer live customer service representative support by phone within a reasonable timeframe,” USTelecom’s filing touted the customer service abilities of AI chatbots. “AI chat agents will only get better at addressing customers’ needs more quickly over time—and if providers fail to provide the customer service and engagement options that their customers expect and fail to resolve their customers’ concerns, they may soon find that the consumer is no longer a customer, having switched to another competitive offering,” the lobby group said.

Say what?

The lobby groups’ description may surprise the many Internet users suffering from little competition and poor customer service, such as CenturyLink users who had to go without service for over a month because of the ISP’s failure to fix outages. The FCC received very different takes on the state of ISP customer service from regulators in California and Oregon.

The Mt. Hood Cable Regulatory Commission in northwest Oregon, where Comcast is the dominant provider, told the FCC that local residents complain about automated customer service representatives; spending hours on hold while attempting to navigate automated voice systems; billing problems, including “getting charged after cancelling service, unexpected price increases, and being charged for equipment that was returned”; and service not being restored quickly after outages.

The California Public Utilities Commission (CPUC) told the FCC that it performed a recent analysis finding “that only a fraction of California households enjoy access to a highly competitive market for [broadband Internet service], with only 26 percent of households having a choice between two or more broadband providers utilizing either cable modem or fiber optic technologies.” The California agency said the result “suggests that competitive forces alone are insufficient to guarantee service quality for customers who depend upon these services.”

CPUC said its current rulemaking efforts for California “will establish standards for service outages, repair response time, and access to live representatives.” The agency told the FCC that if it adopts new customer service rules for the whole US, it should “permit state and local governments to set customer service standards that exceed the adopted standards.”

People with disabilities need more help, groups say

The FCC also received a filing from several advocacy groups focused on accessibility for people with disabilities. The groups asked for rules “establishing baseline standards to ensure high-quality DVC [direct video calling for American Sign Language users] across providers, requiring accommodations for consumers returning rental equipment, and ensuring accessible cancellation processes.” The groups said that “providers should be required to maintain dedicated, well-trained accessibility teams that are easily reachable via accessible communication channels, including ASL support.”

“We strongly caution against relying solely on emerging AI technologies without mandating live customer service support,” the groups said.

The FCC’s Notice of Inquiry on customer service was approved 3–2 in a party-line vote on October 10. FCC Chairwoman Jessica Rosenworcel said that hundreds of thousands of customers file complaints each year “because they have run into issues cancelling their service, are saddled with unexpected charges, are upset by unexplained outages, and are frustrated with billing issues they have not been able to resolve on their own. Many describe being stuck in ‘doom loops’ that make it difficult to get a real person on the line to help with service that needs repair or to address charges they believe are a mistake.”

If the FCC leadership wasn’t changing hands, the Notice of Inquiry could be the first step toward a rulemaking. “We cannot ignore these complaints, especially not when we know that it is possible to do better… We want to help improve the customer experience, understand what tools we have to do so, and what gaps there may be in the law that prevent consumers from having the ability to resolve routine problems quickly, simply, and easily,” Rosenworcel said.

ISPs have a friend in Trump admin

But the proceeding won’t go any further under incoming Chairman Brendan Carr, a Republican chosen by President-elect Donald Trump. Carr dissented from the Notice of Inquiry, saying that the potential actions explored by the FCC exceed its authority and that the topic should be handled instead by the Federal Trade Commission.

Carr said the FCC should work instead on “freeing up spectrum and eliminating regulatory barriers to deployment” and that the Notice of Inquiry is part of “the Biden-Harris Administration’s efforts to deflect attention away from the necessary course correction.”

Carr has made it clear that he is interested in regulating broadcast media and social networks more than the telecom companies the FCC traditionally focuses on. Carr wrote a chapter for the conservative Heritage Foundation’s Project 2025 in which he criticized the FCC for “impos[ing] heavy-handed regulation rather than relying on competition and market forces to produce optimal outcomes.”

With Carr at the helm, ISPs are likely to get what they’re asking for: No new regulations and elimination of at least some current rules. “Rather than saddling communications providers with unnecessary, unlawful, and potentially harmful regulation, the Commission should encourage the pro-consumer benefits of competition by reducing the regulatory burdens and disparities that are currently unfairly skewing the marketplace,” the NCTA told the FCC, arguing that cable companies face more onerous regulations than other communications providers.

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Licking this “lollipop” will let you taste virtual flavors

Demonstrating lollipop user interface to simulate taste in virtual and augmented reality environments. Credit: Lu et al, 2024/PNAS

Virtual reality (VR) technology has long sought to incorporate the human senses into virtual and mixed-reality environments. In addition to sight and sound, researchers have been trying to add the sensation of human touch and smell via various user interfaces, as well as taste. But the latter has proved to be quite challenging. A team of Hong Kong scientists has now developed a handheld user interface shaped like a lollipop capable of re-creating several different flavors in a virtual environment, according to a new paper published in the Proceedings of the National Academy of Sciences (PNAS).

It’s well established that human taste consists of sweet, salty, sour, bitter, and umami—five basic flavors induced by chemical stimulation of the tongue and, to a lesser extent, in parts of the pharynx, larynx, and epiglottis. Recreating those sensations in VR has resulted in a handful of attempts at a flavor user interface, relying on such mechanisms as chemical, thermal, and electrical stimulation, as well as iontophoresis.

The chemical approach usually involves applying flavoring chemicals directly onto the tongue, but this requires room for bulk storage of said chemicals, and there is a long delay time that is not ideal for VR applications. Thermal variations applied directly to the tongue can stimulate taste sensations but require a complicated system incorporating a cooling subsystem and temperature sensors, among other components.

The most mainstream method is electrical stimulation, in which the five basic flavors are simulated by varying the frequency, intensity, and direction of electrical signals on the tongue. But this method requires placing electrode patches on or near the tongue, which is awkward, and the method is prone to taste biases.

So Yiming Liu of City University of Hong Kong and co-authors opted to work with iontophoresis, in which stable taste feedback is achieved by using ions flowing through biologically safe hydrogels to transport flavor chemicals. This method is safe, requires low power consumption, allows for precise taste feedback, and offers a more natural human-machine interface. Liu et al. improved on recent advances in this area by developing their portable lollipop-shaped user interface device, which also improves flavor quality and consistency.



The Atari 7800+ is a no-frills glimpse into a forgotten gaming era


Awkward controls and a lack of features make a device for Atari completists only.

Shiny and chrome? In this economy? Credit: Kyle Orland

Like a lot of children of the ’80s, my early gaming nostalgia has a huge hole where the Atari 7800 might have lived. While practically everyone I knew had an NES during my childhood—and a few uncles and friends’ older siblings even had an Atari 2600 gathering dust in their dens—I was only vaguely aware of the 7800, Atari’s backward compatible, late ’80s attempt to maintain relevance in the quickly changing console market.

Absent that kind of nostalgia, the Atari 7800+ comes across as a real oddity. Fiddling with the system’s extremely cumbersome controllers and pixelated, arcade-port-heavy software library from a modern perspective is like peering into a fallen alternate universe, one where Nintendo wasn’t able to swoop in and revive a flailing Western home video game industry with the NES.

Even for those with fond memories of Atari 7800-filled childhoods, I’m not sure that this bare-bones package justifies its $130 price. There are many more full-featured ways to get your retro gaming fix, even for those still invested in the tail end of Atari’s dead-end branch of the gaming console’s evolutionary tree.

7800HD

Much like last year’s Atari 2600+, the 7800+ shell is a slightly slimmed-down version of Atari’s nostalgic hardware design. This time, Atari took design inspiration from the rainbow-adorned European version of the 7800 console (which was released a year later), rather than the bulkier, less colorful US release.

A reverse angle showing how 7800 cartridges stick out with the art facing away from the front. Kyle Orland

The 7800+ plays any of the 58 officially licensed Atari 7800 cartridges released decades ago, as well as the dozens of homebrew cartridges released by coders in more recent years (some of which are now being sold for $30 each by the modern Atari corporation itself; more on those later). The data on those cartridges is run via the open source ProSystem emulator, which seems more than up to the job of re-creating the relatively ancient 7800 tech without any apparent slowdown, input lag, or graphical inconsistencies. The 15 to 30 seconds of loading time when you first plug in a new cartridge is more than a bit annoying, though.

The HDMI output from the 7800+ is the updated console’s main selling point, if anything is. The sharp, upscaled images work best on games with lots of horizontal and/or vertical lines and bright, single-colored sprites. But blowing up decades-old low-resolution graphics can also hurt the visual appeal of games designed to take advantage of the smoother edges and blended color gradients inherent to older cathode ray tube TVs.

Atari’s new console doesn’t offer the kind of scanline emulation or graphical filters that can help recreate that CRT glow in countless other emulation solutions (though a hardware switch does let you extend the standard 4:3 graphics to a sickeningly stretched-out 16:9). That means many of the sprites in games like Food Fight and Fatal Run end up looking like blocky riots of color when blown up to HD resolutions on the 7800+.
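The distortion from that 16:9 switch is easy to quantify. A quick sketch (the aspect ratios are from the text; the helper function is just an illustration) shows that stretching 4:3 output to fill a 16:9 screen widens every sprite by a factor of 4/3, roughly 33 percent:

```python
# Sketch: horizontal distortion when a source aspect ratio is stretched
# to fill a wider display aspect ratio, using exact rational arithmetic.
from fractions import Fraction

def stretch_factor(src_w, src_h, dst_w, dst_h):
    """Factor by which pixels widen when src aspect fills dst aspect."""
    return Fraction(dst_w, dst_h) / Fraction(src_w, src_h)

print(stretch_factor(4, 3, 16, 9))  # 4/3 -> sprites appear ~33% wider
```

That uniform widening is why circular sprites turn into ovals in the stretched mode, on top of the blockiness the upscaling already introduces.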

Beyond graphics, the 7800+ also doesn’t offer any modern emulation conveniences like save states, fast-forward and rewind, slow-mo, controller customization, or high-score tracking across sessions. Authenticity seems to have taken precedence over modern conveniences here.

Much like the original Atari 7800, the 7800+ is also backward-compatible with older Atari 2600 cartridges and controllers (re-created through the able Stella emulator). That’s a nice touch but also a little galling for anyone who already invested money in last year’s Atari 2600+, which the company is still selling for roughly the same price as the 7800+. Aside from the nostalgic styling of the box itself, we can’t see any reason why the less-capable 2600+ still needs to exist at all at this point.

A mess of a controller

In the US, the original Atari 7800 came with an oddly designed “ProLine” joystick featuring two buttons on either side of the base, designed to be hit with the thumb and index finger of your off hand. For the 7800+, Atari instead went with a controller modeled after the CX78 joypad released with the European version of the console.

This pad represents an odd inflection point in video game history, with a hard plastic thumbstick sticking out above a standard eight-way D-pad. Years before analog thumbsticks would become a console standard, this thumbstick feels incredibly fiddly for the console’s completely digital directional inputs. In a game like Asteroids Deluxe, for instance, I found turning to the right or left frequently led to thrusting forward with an accidental “up” push as well.

The CX78 pad was also the first packaged Atari controller with two face buttons, a la the familiar NES controller. Unfortunately, those buttons are spaced just far enough apart to make it extremely awkward to hit both at once using a single thumb, which is practically required in newer titles like Bentley Bear’s Crystal Quest. The whole thing seems designed for placing the controller in front of you and hitting the buttons with two separate fingers, which I found less than convenient.

The Atari 7800+ does feature two standard Atari console plugs in the front, making it compatible with pretty much all classic and revamped Atari controllers (and, oddly enough, Sega Genesis pads). Be wary, though: if a 7800 game requires two buttons, many single-button Atari control options will prove insufficient.

The CX78+’s included wireless receivers (which plug into those controller ports) mean you don’t have to run any long cables from the system to your couch while playing the Atari 7800+. But a few important controls like pause and reset are stuck on the console itself—just as they were on the original Atari 7800—meaning you’ll probably want to have the system nearby anyway. It would have been nice to have additional buttons for these options on the controller itself, even if that would have diminished the authenticity of the controllers.

There are better versions of these games

The VIP package Atari sent me, along with a selection of cartridges. Credit: Kyle Orland

Since I’ve never owned an Atari 7800 cartridge, Atari sent me eight titles from its current line of retro cartridges to test alongside the updated hardware. This included a mix of original titles released in the ’80s and “homebrew elevation” cartridges that the company says are now “getting a well-deserved official Atari release.”

The titles I had to test were definitely a step up from the few dozen Atari 2600 games that I’ve accumulated and grown to tolerate over the years. A game like Asteroids Deluxe on the 7800 doesn’t quite match the vector graphics of the arcade original, but it comes a lot closer than the odd, colorful blobs of Asteroids on the 2600. The same goes for Frenzy on the 7800, which is a big step up from Berzerk on the 2600.

Still, I couldn’t help but feel that these arcade ports are better experienced these days on one of the many MAME-based or FPGA-based emulation boxes that can do justice to the original quarter munchers. And the more original titles I’ve sampled mostly ended up feeling like pale shadows of the NES games I knew and loved.

The new Bentley Bear’s Crystal Quest (which is included with the 7800+ package) comes across as an oversimplified knock-off of Adventure Island, for instance. And the rough vehicular combat of Fatal Run is much less engaging than the NES port of Atari’s own similar but superior Roadblasters arcade cabinet. The one exception to this rule that I found was Ninja Golf, a wacky, original mix of decent golfing and engaging run-and-punch combat.

Of course, I’m not really the target audience here. The ideal Atari 7800+ buyer is someone who still has nostalgic memories of the Atari 7800 games they played as a child and has held onto at least a few of them (and/or bought more modern homebrew cartridges) in the intervening decades.

If those retro gamers want an authentic but no-frills box that will upscale those cartridges for an HDTV, the Atari 7800+ will do the job and look cute on your mantel while it does. But any number of emulation solutions will probably do the job just as well and with more features to boot.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

The Atari 7800+ is a no-frills glimpse into a forgotten gaming era Read More »