Author name: Rejus Almole

Amazon joins Google in investing in small modular nuclear power


Small nukes is good nukes?

What’s with the sudden interest in nuclear power among tech titans?

Diagram of a reactor and its coolant system. There are two main components, the reactor itself, which has a top-to-bottom flow of fuel pellets, and the boiler, which receives hot gas from the reactor and uses it to boil water.

Fuel pellets flow down the reactor (left), as gas transfers heat to a boiler (right). Credit: X-energy

On Tuesday, Google announced that it had made a power purchase agreement for electricity generated by a small modular nuclear reactor design that hasn’t even received regulatory approval yet. Today, it’s Amazon’s turn. The company’s Amazon Web Services (AWS) group has announced three different investments, including one targeting a different startup that has its own design for small, modular nuclear reactors—one that has not yet received regulatory approval.

Unlike Google’s deal, which is a commitment to purchase power should the reactors ever be completed, Amazon will lay out some money upfront as part of the agreements. We’ll take a look at the deals and technology that Amazon is backing before analyzing why companies are taking a risk on unproven technologies.

Money for utilities and a startup

Two of Amazon’s deals are with utilities that serve areas where it already has a significant data center footprint. One of these is Energy Northwest, which is an energy supplier that sends power to utilities in the Pacific Northwest. Amazon is putting up the money for Energy Northwest to study the feasibility of adding small modular reactors to its Columbia Generating Station, which currently houses a single, large reactor. In return, Amazon will get the right to purchase power from an initial installation of four small modular reactors. The site could potentially support additional reactors, which Energy Northwest would be able to use to meet demands from other users.

The deal with Virginia’s Dominion Energy is similar in that it would focus on adding small modular reactors to Dominion’s existing North Anna Nuclear Generating Station. But the exact nature of the deal is a bit harder to understand. Dominion says the companies will “jointly explore innovative ways to advance SMR development and financing while also mitigating potential cost and development risks.”

Should either or both of these projects go forward, the reactor designs used will come from a company called X-energy, which is involved in the third deal Amazon is announcing. In this case, it’s a straightforward investment in the company, although the exact dollar amount is unclear (the company says Amazon is “anchoring” a $500 million round of investments). The money will help finalize the company’s reactor design and push it through the regulatory approval process.

Small modular nuclear reactors

X-energy is one of several startups attempting to develop small modular nuclear reactors. These reactors share a few features that are expected to help them avoid the massive time and cost overruns associated with the construction of large nuclear power stations. Their limited size allows them to be manufactured at a central facility and then shipped to the power station for installation. This limits the scale of the infrastructure that needs to be built on-site and allows the assembly facility to benefit from economies of scale.

This also allows a great deal of flexibility at the installation site, as you can scale the facility to power needs simply by adjusting the number of installed reactors. If demand rises in the future, you can simply install a few more.

The small modular reactors are also typically designed to be inherently safe. Should the site lose power or control over the hardware, the reactor will default to a state where it can’t generate enough heat to melt down or damage its containment. There are various approaches to achieving this.

X-energy’s technology is based on small, self-contained fuel pellets called TRISO particles (short for TRi-structural ISOtropic). These contain both the uranium fuel and a graphite moderator and are surrounded by a ceramic shell. They’re structured so that there isn’t sufficient uranium present to generate temperatures that can damage the ceramic, ensuring that the nuclear fuel always remains contained.

The design is meant to run at high temperatures and extract heat from the reactor using helium, which is used to boil water and generate electricity. Each reactor can produce 80 megawatts of electricity, and the reactors are designed to work efficiently as a set of four, creating a 320 MW power plant. As of yet, however, there are no working examples of this reactor, and the design hasn’t been approved by the Nuclear Regulatory Commission.

Why now?

Why is there such sudden interest in small modular reactors among the tech community? It comes down to growing needs and a lack of good alternatives, even given the highly risky nature of the startups that hope to build the reactors.

It’s no secret that data centers require enormous amounts of energy, and the sudden popularity of AI threatens to raise that demand considerably. Renewables, as the cheapest source of power on the market, would be one way of satisfying that growth, but they’re not ideal. For one thing, the intermittent nature of the power they supply, while possible to manage at the grid level, is a bad match for the around-the-clock demands of data centers.

The US has also benefitted from over a decade of efficiency gains keeping demand flat despite population and economic growth. This has meant that all the renewables we’ve installed have displaced fossil fuel generation, helping keep carbon emissions in check. Should newly installed renewables instead end up servicing rising demand, it will make it considerably more difficult for many states to reach their climate goals.

Finally, renewable installations have often been built in areas without dedicated high-capacity grid connections, resulting in a large and growing backlog of projects (2.6 TW of generation and storage as of 2023) that are stalled as they wait for the grid to catch up. Expanding the pace of renewable installation can’t meet rising server farm demand if the power can’t be brought to where the servers are.

These new projects avoid that problem because they’re targeting sites that already have large reactors and grid connections to use the electricity generated there.

In some ways, it would be preferable to build more of these large reactors based on proven technologies. But large reactors fall short in two very important ways: time and money. The last reactor completed in the US was at the Vogtle site in Georgia, which started construction in 2009 but only went online this year. Costs also ballooned from $14 billion to over $35 billion during construction. It’s clear that any similar project would start generating power far too late to meet the near-immediate needs of server farms and would be nearly impossible to justify economically.

This leaves small modular nuclear reactors as the least-bad option in a set of bad options. Despite many startups having entered the space over a decade ago, there is still just a single reactor design approved in the US, that of NuScale. But the first planned installation saw the price of the power it would sell rise to the point where it was no longer economically viable due to the plunge in the cost of renewable power; it was canceled last year as the utilities that would have bought the power pulled out.

The probability that a different company will manage to get a reactor design approved, move to construction, and manage to get something built before the end of the decade is extremely low. The chance that it will be able to sell power at a competitive price is also very low, though that may change if demand rises sufficiently. So the fact that Amazon is making some extremely risky investments indicates just how worried it is about its future power needs. Of course, when your annual gross profit is over $250 billion a year, you can afford to take some risks.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


$250 Analogue 3D will play all your N64 cartridges in 4K early next year

It’s been exactly one year since the initial announcement of the Analogue 3D, an HD-upscaled, FPGA-powered Nintendo 64 in the tradition of Analogue’s long-running line of high-end retro machines. Today, Analogue is revealing more details about the hardware, which will sell for $250 and plans to ship in the first quarter of 2025 (a slight delay from the previously announced 2024 release plan).

Like previous Analogue devices, the Analogue 3D uses a field-programmable gate array (FPGA) to simulate the actual logic gates found in original N64 hardware. That helps ensure 100 percent compatibility with the entire N64 cartridge library across all regions, Analogue promises, and should avoid the long-standing accuracy and lag issues inherent to most software-based emulation of the N64.

White and black hardware shells will be available for the Analogue 3D.

Credit: Analogue


To get that level of fidelity, the Analogue team spent four years programming an Altera Cyclone FPGA with a full 220,000 logic elements. That’s a big step up from previous Analogue devices—the Analogue Pocket’s main FPGA board featured just 49,000 logic elements three years ago. But the Analogue Pocket also included a second, 15,000-logic-element FPGA, which allowed it to run an expanding list of openFPGA cores to support games from other classic consoles.

Analogue has abandoned that additional FPGA for the Analogue 3D, meaning those openFPGA cores will not be usable on the new hardware. “If we wanted to offer Analogue 3D with openFPGA (which is not the purpose or focus of the product), it would require not only a second FPGA but an even more powerful base FPGA, therefore increasing the price to a price that doesn’t suit our goals,” Analogue founder Christopher Taber told Ars last year.


FCC Republican opposes regulation of data caps with analogy to coffee refills

Simington argued that regulating data caps would harm customers, using an analogy about the hypothetical regulation of coffee refills:

Suppose we were a different FCC, the Federal Coffee Commission, and rather than regulating the price of coffee (which we have vowed not to do), we instead implement a regulation whereby consumers are entitled to free refills on their coffees. What effects might follow? Well, I predict three things could happen: either cafés stop serving small coffees, or cafés charge a lot more for small coffees, or cafés charge a little more for all coffees.

Simington went on to compare the capacity of broadband networks to the coffee-serving capacity of coffee shops. He said that tiered coffee prices “can increase overall revenue for the café,” which can be invested “in more seats, more cafés, and faster coffee brewing.”

Simington is against rate regulation in general and said that regulation of usage-based plans (aka data caps) is just rate regulation with a different name. “Though only a Notice of Inquiry, because it is the first step down a path toward further rate regulation, I can’t support the item we’ve brewed up here. I dissent,” Simington wrote.

Carr: Data-capped plans “more affordable”

Carr’s statement said, “I dissent from today’s NOI because I cannot support the Biden-Harris Administration’s inexorable march towards rate regulation and because the FCC plainly does not have the legal authority to do so.”

Carr pointed to the recent 6th Circuit appeals court ruling that blocked the Rosenworcel FCC’s attempt to reinstate net neutrality rules under Title II of the Communications Act. Judges blocked enforcement of the net neutrality rules until the court makes a final ruling, saying that broadband providers are likely to win the case on the merits.

Carr said the FCC is “start[ing] down the path of directly regulating rates… by seeking comment on controlling the price of broadband capacity (‘data caps’). Prohibiting customers from choosing to purchase plans with data caps—which are more affordable than unlimited ones—necessarily regulates the service rates they are paying for.”


Apple A17 Pro chip is the star of the first iPad mini update in three years

Apple quietly announced a new version of its iPad mini tablet via press release this morning, the tablet’s first update since 2021.

The seventh-generation iPad mini looks mostly identical to the sixth-generation version, with a power-button-mounted Touch ID sensor and a slim-bezeled display. But Apple has swapped out the A15 Bionic chip for the Apple A17 Pro, the same processor it used in the iPhone 15 Pro last year.

The new iPad mini is available for preorder now and starts at $499 for 128GB (an upgrade over the previous base model’s 64GB of storage). 256GB and 512GB versions are available for $599 and $799, and cellular connectivity is an additional $150 on top of any of those prices.

Apple says the A17 Pro’s CPU performance is 30 percent faster than the A15’s and that its GPU performance is 25 percent faster (in addition to supporting hardware-accelerated ray tracing). But the biggest improvement will be an increase in RAM—the A17 Pro comes with 8GB instead of the A15’s 4GB, which appears to be Apple’s floor for the new Apple Intelligence AI features. The new iPad mini will be the only iPad mini capable of supporting Apple Intelligence, which will begin rolling out with the iPadOS 18.1 update within the next few weeks.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities

This kind of variance—both within different GSM-Symbolic runs and compared to GSM8K results—is more than a little surprising since, as the researchers point out, “the overall reasoning steps needed to solve a question remain the same.” The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any “formal” reasoning but are instead “attempt[ing] to perform a kind of in-distribution pattern-matching, aligning given questions and solution steps with similar ones seen in the training data.”

Don’t get distracted

Still, the overall variance shown for the GSM-Symbolic tests was often relatively small in the grand scheme of things. OpenAI’s ChatGPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic. That’s a pretty high success rate using either benchmark, regardless of whether or not the model itself is using “formal” reasoning behind the scenes (though total accuracy for many models dropped precipitously when the researchers added just one or two additional logical steps to the problems).


An example showing how some models get misled by irrelevant information added to the GSM8K benchmark suite. Credit: Apple Research

The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding “seemingly relevant but ultimately inconsequential statements” to the questions. For this “GSM-NoOp” benchmark set (short for “no operation”), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that “five of them [the kiwis] were a bit smaller than average.”

Adding in these red herrings led to what the researchers termed “catastrophic performance drops” in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent, depending on the model tested. These massive drops in accuracy highlight the inherent limits in using simple “pattern matching” to “convert statements to operations without truly understanding their meaning,” the researchers write.


Introducing irrelevant information to the prompts often led to “catastrophic” failure for most “reasoning” LLMs. Credit: Apple Research

In the example with the smaller kiwis, for instance, most models try to subtract the smaller fruits from the final total because, the researchers surmise, “their training datasets included similar examples that required conversion to subtraction operations.” This is the kind of “critical flaw” that the researchers say “suggests deeper issues in [the models’] reasoning processes” that can’t be helped with fine-tuning or other refinements.


The Internet Archive and its 916 billion saved web pages are back online

Last week, hackers defaced the Internet Archive website with a message that said, “Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!”

HIBP is a reference to Have I Been Pwned, which was created by security researcher Troy Hunt and provides information and notifications on data breaches. The hacked Internet Archive data was sent to Have I Been Pwned and “contains authentication information for registered members, including their email addresses, screen names, password change timestamps, Bcrypt-hashed passwords, and other internal data,” BleepingComputer wrote.
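The “Bcrypt-hashed” detail matters: a salted, deliberately slow hash forces an attacker to grind through each stolen password individually rather than cracking the whole dump at once. Bcrypt itself requires a third-party Python library, so as an illustrative stand-in, this sketch shows the same salted slow-hash pattern using the standard library’s PBKDF2 (the iteration count here is purely illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; real deployments tune this much higher

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); the random salt is stored alongside the hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("letmein", salt, digest))  # False
```

The per-password random salt is why a leaked table of bcrypt (or PBKDF2) hashes can’t be attacked with one precomputed lookup table, which is what makes this class of hash the least-bad outcome in a breach like this one.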

Kahle said on October 9 that the Internet Archive fended off a DDoS attack and was working on upgrading security in light of the data breach and website defacement. The next day, he reported that the “DDoS folks are back” and had knocked the site offline. The Internet Archive “is being cautious and prioritizing keeping data safe at the expense of service availability,” he added.

“Services are offline as we examine and strengthen them… Estimated Timeline: days, not weeks,” he wrote on October 11. “Thank you for the offers of pizza (we are set).”


Routine dental X-rays are not backed by evidence—experts want it to stop


The actual recommendations might surprise you—along with the state of modern dentistry.

An expert looking at a dental X-ray and saying “look at that unnecessary X-ray,” probably. Credit: Getty | MilanEXPO

Has your dentist ever told you that it’s recommended to get routine dental X-rays every year? My (former) dentist’s office did this year—in writing, even. And they claimed that the recommendation came from the American Dental Association.

It’s a common refrain from dentists, but it’s false. The American Dental Association does not recommend annual routine X-rays. And this is not new; it’s been that way for well over a decade.

The association’s guidelines from 2012 recommended that adults who don’t have an increased risk of dental caries (myself included) need only bitewing X-rays of the back teeth every two to three years. Even people with a higher risk of caries can go as long as 18 months between bitewings. The guidelines also note that X-rays should not be preemptively used to look for problems: “Radiographic screening for the purpose of detecting disease before clinical examination should not be performed,” the guidelines read. In other words, dentists are supposed to examine your teeth before they take any X-rays.

But, of course, the 2012 guidelines are outdated—the latest ones go further. In updated guidance published in April, the ADA doesn’t recommend any specific time window for X-rays at all. Rather, it emphasizes that patient exposure to X-rays should be minimized, and any X-rays should be clinically justified.

There’s a good chance you’re surprised. Dentistry’s overuse of X-rays is a problem dentists do not appear eager to discuss, and one they would likely prefer to skirt. My former dentist declined to comment for this article, for example, and dentists have been dodging the topic for years. Nevertheless, the problem is well-established. A New York Times article from 2016, titled “You Probably Don’t Need Dental X-Rays Every Year,” quoted a dental expert noting the exact problem:

“Many patients of all ages receive bitewing X-rays far more frequently than necessary or recommended. And adults in good dental health can go a decade between full-mouth X-rays.”

Data is lacking

The problem has bubbled up again in a series of commentary pieces published in JAMA Internal Medicine today. The pieces were all sparked by a viewpoint that Ars reported on in May, in which three dental and health experts highlighted that many routine aspects of dentistry, including biannual cleanings, are not evidence-based and that the industry is rife with overdiagnosis and overtreatment. That viewpoint, titled “Too Much Dentistry,” also appeared in JAMA Internal Medicine.

The new pieces take a more specific aim at dental radiography. But, as in the May viewpoint, experts also blasted dentistry more generally for being out of step with modern medicine in its lack of data to support its practices—practices that continue amid financial incentives to overtreat and little oversight to stop it, they note.

In a piece titled “Too Much Dental Radiography,” Sheila Feit, a retired medical expert based in New York, pointed out that using X-rays for dental screenings is not backed by evidence. “Data are lacking about outcomes,” she wrote. If anything, the weak data we have makes it look ineffective. For instance, a 2021 systematic review of 77 studies that included data on a total of 15,518 tooth sites or surfaces found that using X-rays to detect early tooth decay led to a high degree of false-negative results. In other words, it led to missed cases.

Feit called for gold-standard randomized clinical trials to evaluate the risks and benefits of X-ray screenings for patients, particularly adults at low risk of caries. “Financial aspects of dental radiography also deserve further study,” Feit added. Overall, Feit called the May viewpoint “a timely call for evidence to support or refute common clinical dental practices.”

Dentistry without oversight

In a response published simultaneously in JAMA Internal Medicine, oral medicine expert Yehuda Zadik championed Feit’s point, calling it “an essential discussion about the necessity and risks of routine dental radiography, emphasizing once again the need for evidence-based dental care.”

Zadik, a professor of dental medicine at The Hebrew University of Jerusalem, noted that the overuse of radiography in dentistry is a global problem, one aided by dentistry’s unique delivery:

“Dentistry is among the few remaining health care professions where clinical examination, diagnostic testing including radiographs, diagnosis, treatment planning, and treatment are all performed in place, often by the same care practitioner,” Zadik wrote. “This model of care delivery prevents external oversight of the entire process.”

While routine X-rays continue at short intervals, Zadik notes that current data “favor the reduction of patient exposure to diagnostic radiation in dentistry,” while advancements in dentistry dictate that X-rays should be used at “longer intervals and based on clinical suspicion.”

Though the digital dental X-rays often used today provide smaller doses of radiation than the film X-rays used in the past, radiation’s harms are cumulative. Zadik emphasizes that with the primary tenet of medicine being “First, do no harm,” any unnecessary X-ray is an unnecessary harm. Further, other technology can sometimes be used instead of radiography, including electronic apex locators for root canal procedures.

“Just as it is now unimaginable that, in the past, shoe fittings for children were conducted using X-rays, in the future it will be equally astonishing to learn that the fit of dental crowns was assessed using radiographic imaging,” Zadik wrote.

X-rays do more harm than good in children

Feit’s commentary also prompted a reply from the three authors of the original May viewpoint: Paulo Nadanovsky, Ana Paula Pires dos Santos, and David Nunan. The three followed up on Feit’s point that data is weak on whether X-rays are useful for detecting early decay, specifically white spot lesions. The experts raise the damning point that even if dental X-rays were shown to be good at doing that, there’s still no evidence that that’s good for patients.

“[T]here is no evidence that detecting white spot lesions, with or without radiographs, benefits patients,” the researchers wrote. “Most of these lesions do not progress into dentine cavities,” and there’s no evidence that early treatments make a difference in the long run.

To bolster the point, the three note that data from children suggest that X-ray screening does more harm than good. In a randomized clinical trial published in 2021, 216 preschool children were split into two groups: one that received only a visual-tactile dental exam, while the others received both a visual-tactile exam and X-rays. The study found that adding X-rays caused more harm than benefit because the X-rays led to false positives and overdiagnosis of cavitated caries needing restorative treatment. The authors of the trial concluded that “visual inspection should be conducted alone in regular clinical practice.”

Like Zadik, the three researchers note that screenings for decay and cavities are not the only questionable use of X-rays in dental practice. Other common dental and orthodontic treatments involving radiography—practices often used in children and teens—might also be unnecessary harms. They raise the argument against the preventive removal of wisdom teeth, which is also not backed by evidence.

Like Feit, the three researchers reiterate the call for well-designed trials to back up or refute common dental practices.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.


People think they already know everything they need to make decisions

The obvious difference was the decisions they made. In the group that had read the article biased in favor of merging the schools, nearly 90 percent favored the merger. In the group that had read the article that was biased by including only information in favor of keeping the schools separate, less than a quarter favored the merger.

The other half of the experimental population wasn’t given the survey immediately. Instead, they were given the article that they hadn’t read—the one that favored the opposite position of the article that they were initially given. You can view this group as doing the same reading as the control group, just doing so successively rather than in a single go. In any case, this group’s responses looked a lot like the control’s, with people roughly evenly split between merger and separation. And they became less confident in their decision.

It’s not too late to change your mind

There is one bit of good news about this. When initially forming hypotheses about the behavior they expected to see, Gehlbach, Robinson, and Fletcher suggested that people would remain committed to their initial opinions even after being exposed to a more complete picture. However, there was no evidence of this sort of stubbornness in these experiments. Instead, once people were given all the potential pros and cons of the options, they acted as if they had that information the whole time.

But that shouldn’t obscure the fact that there’s a strong cognitive bias at play here. “Because people assume they have adequate information, they enter judgment and decision-making processes with less humility and more confidence than they might if they were worrying whether they knew the whole story or not,” Gehlbach, Robinson, and Fletcher wrote.

This is especially problematic in the current media environment. Many outlets have been created with the clear intent of exposing their viewers to only a partial view of the facts—or, in a number of cases, the apparent intent of spreading misinformation. The new work clearly indicates that these efforts can have a powerful effect on beliefs, even if accurate information is available from various sources.

PLOS ONE, 2024. DOI: 10.1371/journal.pone.0310216  (About DOIs).


Rare bear meat at gathering gives 10 people a scare—and parasitic worms

If you’re going to eat a bear, make sure it’s not rare.

You’d be forgiven for thinking that once the beast has been subdued, all danger has passed. But you might still be in for a scare. The animal’s flesh can be riddled with encased worm larvae, which, upon being eaten, will gladly reproduce in your innards and let their offspring roam the rest of your person, including invading your brain and heart. To defeat these savage squirmers, all one must do is cook the meat to at least 165° Fahrenheit.

But that simple solution continues to be ignored, according to a report today in the Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report. In this week’s issue, health officials in North Carolina report that rare bear meat was served at a November 23 gathering, where at least 22 people ate the meat and at least 10 developed symptoms of a worm infection. Of the 10, six were kids and teens between the ages of 10 and 18.

The infection is from the roundworm Trichinella, which causes trichinellosis. While the infection is rarely fatal, the nematodes tend to burrow out of the bowels and meander through the body, embedding in whatever muscle tissue they come across. A telltale sign of an infection in people is facial swelling, caused when the larvae take harbor in the muscles of the face and around the eyes. Of the 10 ill people in North Carolina, nine had facial swelling.

Local health officials caught on to the outbreak when one person developed flu-like symptoms and puzzling facial swelling. They then traced the illness back to the gathering. The report doesn’t specify what kind of gathering it was but noted that 34 attendees in total were surveyed, from which they found the 22 people who ate the rare meat. The 10 people found with symptoms are technically considered only “probable” cases because the infections were never diagnostically confirmed. To confirm a trichinellosis infection, researchers need blood samples taken after the person recovers to look for antibodies against the parasite. None of the 10 people returned for blood draws.

Rare bear meat at gathering gives 10 people a scare—and parasitic worms


Asahi Linux’s bespoke GPU driver is running Windows games on Apple Silicon Macs

A few years ago, the idea of running PC games on a Mac, in Linux, or on Arm processors would have been laughable. But the developers behind Asahi Linux—the independent project that is getting Linux working on Apple Silicon Macs—have managed to do all three of these things at once.

The feat brings together a perfect storm of open source projects, according to Asahi Linux GPU lead Alyssa Rosenzweig: the FEX project to translate x86 CPU code to Arm, the Wine project to get Windows binaries running on Linux, DXVK and the Proton project to translate DirectX API calls into Vulkan API calls, and, of course, the Asahi project’s Vulkan-conformant driver for Apple’s graphics hardware.

Games technically run inside a virtual machine because of differences in how Apple Silicon and x86 systems address memory: Apple’s chips use 16 KB memory pages, while x86 systems use 4 KB pages. That mismatch regularly causes problems for Asahi and some other Arm Linux distributions, and the virtual machine is what bridges the gap.
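To see why the page-size mismatch matters, note that 16 KB is exactly four times 4 KB: any 4 KB page an x86 binary expects falls entirely inside one larger host page, so four guest pages share a single host page and can’t be mapped or protected independently. A minimal sketch of the arithmetic (illustrative helper names, not Asahi’s actual code):

```python
GUEST_PAGE = 4 * 1024   # page size an x86 game binary assumes
HOST_PAGE = 16 * 1024   # page size Apple Silicon hardware uses

def guest_page_index(addr: int) -> int:
    """Which 4 KB guest page a virtual address belongs to."""
    return addr // GUEST_PAGE

def host_page_index(addr: int) -> int:
    """Which 16 KB host page the same address lands in."""
    return addr // HOST_PAGE

# Four consecutive 4 KB guest pages all land in host page 0,
# so guest-granularity mappings can't be expressed with host pages alone.
addrs = [n * GUEST_PAGE for n in range(4)]
print([(guest_page_index(a), host_page_index(a)) for a in addrs])
# → [(0, 0), (1, 0), (2, 0), (3, 0)]
```

This is why the x86 code is confined to a VM whose guest kernel presents 4 KB pages, instead of running directly on a 16 KB-page host kernel.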

You’d never guess that this was the Windows version of Fallout 4 running on a Mac that was running Linux. Credit: Alyssa Rosenzweig

Rosenzweig’s post shows off screenshots of Control, Fallout 4, The Witcher 3, Ghostrunner, Cyberpunk 2077, Portal 2, and Hollow Knight, though as she notes, most of these games won’t run at anywhere near 60 frames per second yet.



Steam adds the harsh truth that you’re buying “a license,” not the game itself

There comes a point in most experienced Steam shoppers’ lives when they wonder what would happen if their account were canceled or stolen, or if they simply stopped breathing. It’s scary to think about how many games in your backlog will never get played; scarier still to realize that you don’t, in most real senses of the word, own any of them.

Now Valve, seemingly working to comply with a new California law targeting “false advertising” of “digital goods,” has added language to its checkout page to confirm that thinking. “A purchase of a digital product grants a license for the product on Steam,” the Steam cart now tells its customers, with a link to the Steam Subscriber Agreement further below.

Credit: Kevin Purdy

California’s AB2426 law, signed by Gov. Gavin Newsom Sept. 26, excludes subscription-only services, free games, and digital goods that offer “permanent offline download to an external storage source to be used without a connection to the internet.” Otherwise, sellers of digital goods cannot use the terms “buy, purchase,” or related terms that would “confer an unrestricted ownership interest in the digital good.” And they must explain, conspicuously, in plain language, that “the digital good is a license” and link to terms and conditions.



Trek CarBack bike radar lets you know when cars are approaching

“Car back!”

If you’ve ever been on a group bike ride, you’ve no doubt heard these two words shouted by a nearby rider. It’s also the name of Trek’s new bike radar.

For safety-conscious cyclists, bike radars have been a game-changer. Usually mounted on the seat post, the radar unit alerts the cyclist to cars approaching from behind. While radars work on any bike on any road, they are most useful in suburban and rural settings; if you’re commuting in a city, you simply assume cars are behind you, because they always are. But on more open roads with higher speed limits or free-flowing traffic, bike radars are fantastic.

While a handful of companies make them, the Garmin Varia is the best-known and most popular option. The Varia is so popular that it is nearing the proprietary eponym status of Kleenex and Taser among cyclists. Trek hopes to change that with its new CarBack bike radar.

Like other bike radars, the CarBack can be used with either a cycling computer or your smartphone. Mounted either on a seat post or the back of a Bontrager saddle, the CarBack can detect vehicles approaching from as far away as 150 meters, beeping at you once one is in its range.

The CarBack plays just as nicely with Garmin bike computers as the Varia does. When a car comes within range, your bike computer chirps, the edges of the screen turn orange, and a dot showing the car’s relative position travels up the right side of the screen, exactly as when riding with a Varia.

Speaking of the Varia, there are three significant differences between it and the CarBack. The first is effective range: 140 meters for the Varia versus 150 meters for the CarBack. While riding, I didn’t feel that I was getting alerts any sooner, but testing on a busy street demonstrated that the CarBack does have at least a few more meters of range than the Varia.
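Those extra 10 meters translate into well under a second of additional warning. A quick back-of-the-envelope check (the 40 km/h closing speed is an assumed figure for illustration, not from Trek or Garmin):

```python
def warning_seconds(range_m: float, closing_kmh: float) -> float:
    """Seconds of warning a radar gives for a car closing at the given speed."""
    # km/h -> m/s is a factor of 1000/3600, so time = range * 3600 / (kmh * 1000)
    return range_m * 3600 / (closing_kmh * 1000)

# Car overtaking 40 km/h faster than the cyclist (assumed closing speed)
print(round(warning_seconds(150, 40), 1))  # CarBack → 13.5
print(round(warning_seconds(140, 40), 1))  # Varia   → 12.6
```

At that closing speed, the CarBack’s longer range buys roughly 0.9 extra seconds of warning, which squares with the author not noticing earlier alerts while riding.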
