Author name: Kris Guyer


Deadly morel mushroom outbreak highlights big gaps in fungi knowledge

This fungi’s not fun, guys —

Prized morels are unpredictably and puzzlingly deadly, outbreak report shows.

Mature morel mushrooms in a greenhouse at an agriculture garden in Zhenbeibu Town of Xixia District of Yinchuan, northwest China's Ningxia Hui Autonomous Region.


True morel mushrooms are widely considered a prized delicacy, often pricey and presumed safe to eat. But these spongy, earthy forest gems have a mysterious dark side—one that, on occasion, can turn deadly, highlighting just how little we know about morels and fungi generally.

On Thursday, Montana health officials published an outbreak analysis of poisonings linked to the honeycombed fungi in March and April of last year. The outbreak sickened 51 people who ate at the same restaurant, sending four to the emergency department. Three were hospitalized and two died. Though the health officials didn’t name the restaurant in their report, state and local health departments at the time identified it as Dave’s Sushi in Bozeman. The report is published in the Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report.

The outbreak coincided with the sushi restaurant introducing a new item: a “special sushi roll” that contained salmon and morel mushrooms. The morels were a new menu ingredient for Dave’s. They were served two ways: On April 8, the morels were served partially cooked, with a hot, boiled sauce poured over the raw mushrooms and left to marinate for 75 minutes; and on April 17, they were served uncooked and cold-marinated.

The mystery poison worked fast. Symptoms began, on average, about an hour after eating at the restaurant. And it was brutal. “Vomiting and diarrhea were reportedly profuse,” the health officials wrote, “and hospitalized patients had clinical evidence of dehydration. The two patients who died had chronic underlying medical conditions that might have affected their ability to tolerate massive fluid loss.”

Of the 51 sickened, 46 were restaurant patrons and five were employees. Among them, 45 (88 percent) recalled eating morels. While that’s a high percentage for such an outbreak investigation, certainly enough to make the morels the prime suspect, the health officials went further. With support from the CDC, they set up a matched case-control study, having people complete a detailed questionnaire with demographic information, food items they ate at the restaurant, and symptoms.

Mysterious poison

Forty-one of the poisoned people filled out the questionnaire, as did 22 control patrons who ate at the restaurant but did not report subsequent illness. The analysis indicated that the odds of recalling eating the special sushi roll were nearly 16 times higher among the poisoned patrons than among the controls. The odds of reporting any morel consumption were nearly 11 times higher than controls.

The detailed consumption data also allowed the health officials to model a dose response, which suggested that with each additional piece of the special roll a person recalled eating, their odds of sickness increased nearly threefold compared with people who reported eating none. Those who ate four or more pieces of the roll had odds nearly 22.5 times higher. A small analysis focusing on the five employees sickened, which was not included in the published study but was noted by the Food and Drug Administration, echoed the dose-response finding, indicating that sickness was linked with larger amounts of morel consumption.
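For readers who want the arithmetic made concrete, here is a minimal sketch of how an odds ratio falls out of a standard 2x2 exposure table. The cell counts below are hypothetical, chosen only to land near the roughly 16-fold figure; they are not the study's data, and the real analysis used matched case-control methods.

```python
# Minimal sketch of an odds-ratio calculation for a case-control study.
# The cell counts are hypothetical illustrations, not the study's data.

def odds_ratio(a, b, c, d):
    """a: exposed cases, b: unexposed cases,
    c: exposed controls, d: unexposed controls."""
    return (a / b) / (c / d)

# Exposure = recalled eating the special sushi roll (hypothetical counts)
print(round(odds_ratio(38, 3, 10, 12), 1))  # 15.2, near the ~16x reported
```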

When the officials broke down the analysis by people who ate at the restaurant on April 17, when the morels were served uncooked, and those who ate at the restaurant on April 8, when the mushrooms were slightly cooked, the cooking method seemed to matter. People who ate the uncooked rather than the slightly cooked mushrooms had much higher odds of sickness.

This all strongly points to the morels being responsible. At the time, the state and local health officials engaged the FDA, as well as the CDC, to help tackle the outbreak investigation. But the FDA reported that “samples of morel mushrooms collected from the restaurant were screened for pesticides, heavy metals, toxins, and pathogens. No significant findings were identified.” In addition, the state and local health officials noted that DNA sequencing identified the morels used by the restaurant as Morchella sextelata, a species of true morel. This rules out the possibility that the mushrooms were look-alike morels, called “false morels,” which are known to contain a toxin called gyromitrin.

The health officials and the FDA tracked down the distributor of the mushrooms, finding they were cultivated and imported fresh from China. Records indicated that 12 other locations in California also received batches of the mushrooms. Six of those facilities responded to inquiries from the California health department and the FDA, and all six reported no illnesses. They also all reported cooking the morels or at least thoroughly heating them.



US government agencies demand fixable ice cream machines

I scream, you scream, we all scream for 1201(c)3 exemptions —

McFlurries are a notable part of petition for commercial and industrial repairs.

Taylor ice cream machine, with churning spindle removed by hand.

Taylor’s C709 Soft Serve Freezer isn’t so much mechanically complicated as it is a software and diagnostic trap for anyone without authorized access.

Many devices have been made difficult or financially nonviable to repair, whether by design or because of a lack of parts, manuals, or specialty tools. Machines that make ice cream, however, seem to have a special place in the hearts of lawmakers: they are often broken, and they’re locked down so that only the manufacturer’s most profitable authorized repairs are possible.

The Federal Trade Commission and the antitrust division of the Department of Justice have asked the US Copyright Office (PDF) to exempt “commercial soft serve machines” from the anti-circumvention rules of Section 1201 of the Digital Millennium Copyright Act (DMCA). The agencies also requested DMCA exemptions for proprietary diagnostic kits, programmable logic controllers, and enterprise IT devices.

“In each case, an exemption would give users more choices for third-party and self-repair and would likely lead to cost savings and a better return on investment in commercial and industrial equipment,” the joint comment states. Exemptions would also foster greater competition in the repair market and prevent companies from using the DMCA to enforce repair monopolies, according to the comment.

The joint comment builds upon a petition filed by repair vendor and advocate iFixit and interest group Public Knowledge, which advocated for broad reforms while keeping a relatable, ingestible example at its center. McDonald’s soft serve ice cream machines, which are famously frequently broken, are supplied by industrial vendor Taylor. Taylor’s C709 Soft Serve Freezer requires lengthy, finicky warm-up and cleaning cycles, produces obtuse error codes, and, perhaps not coincidentally, costs $350 per 15 minutes of service for a Taylor technician to fix. iFixit tore down such a machine, confirming the lengthy process between plugging in and soft serving.

After one company built a Raspberry Pi-powered device, the Kytch, that could provide better diagnostics and insights, Taylor moved to ban franchisees from installing the device, then offered up its own competing product. Kytch has sued Taylor for $900 million in a case that is still pending.

Beyond ice cream, the petitions to the Copyright Office would provide broader exemptions for industrial and commercial repairs that require some kind of workaround, decryption, or other software tinkering. Circumventing technological protection measures (TPMs) was made illegal by the 1998 DMCA, which was put in place largely because of the concerns of media firms facing what they considered rampant piracy.

Every three years, the Copyright Office accepts petitions for exemptions to the DMCA’s anti-circumvention rules (and for renewal of prior exemptions). Repair advocates have won exemptions for farm equipment repair, video game consoles, cars, and certain medical gear. An exemption is often granted for fixing a device if a repair person can get past its locks, but not for distributing the tools that would make such repairs far easier. The esoteric nature of such “release valve” offerings has led groups like the EFF to push for the DMCA’s abolition.

DMCA exemptions occur on a parallel track to state right-to-repair bills and broader federal action. President Biden issued an executive order that included a push for repair reforms. The FTC has issued studies that call out unnecessary repair restrictions and has taken action against firms like Harley-Davidson, Westinghouse, and grill maker Weber for tying warranties to an authorized repair service.

Disclosure: Kevin Purdy previously worked for iFixit. He has no financial ties to the company.



GM uses AI tool to determine which truck stops should get EV chargers

help me choose —

Forget LLM chatbots; this seems like an actually useful implementation of AI.

A 2024 Chevrolet Silverado EV WT at a pull-through charging stall located at a flagship Pilot and Flying J travel center, as part of the new coast-to-coast fast charging network.


General Motors

It’s understandable if you’re starting to experience AI fatigue; it feels like every week, there’s another announcement of some company boasting about how an LLM chatbot will revolutionize everything—usually followed in short succession by news reports of how terribly wrong it’s all gone. But it turns out that not every use of AI by an automaker is a public relations disaster. As it happens, General Motors has been using machine learning to help guide business decisions regarding where to install new DC fast chargers for electric vehicles.

GM’s transformation into an EV-heavy company has not gone entirely smoothly thus far, but in 2022, it revealed that, together with Pilot Company, it was planning to deploy a network of 2,000 DC fast chargers at Flying J and Pilot travel centers around the US. But how to decide which locations?

“I think that the overarching theme is we’re really looking for opportunities to simplify the lives of our customers, our employees, our dealers, and our suppliers,” explained Jon Francis, GM’s chief data and analytics officer. “And we see the positive effects of AI at scale, whether that’s in the manufacturing part of the business, engineering, supply chain, customer experience—it really runs threads through all of those.

“Obviously, the place where it shows up most directly is certainly in autonomous, and that’s an important use case for us, but actually [on a] day-to-day basis, AI is improving a lot of systems and workflows within the organization,” he told Ars.

“There’s a lot of companies—and not to name names, but there’s some chasing of shiny objects, and I think there are a lot of cool, sexy things that you can do with AI, but for GM, we’re really looking for solutions that are going to drive the business in a meaningful way,” Francis said.

GM wants to build out chargers at about 200 Flying J and Pilot travel centers by the end of 2024, but narrowing down exactly which locations to focus on was the big question. After all, there are more than 750 spread out across 44 US states and six Canadian provinces.

Obviously, traffic is a big concern—each DC fast charger costs anywhere from $100,000 to $300,000, and that’s not counting any costs associated with beefing up the electrical infrastructure to power them, nor the various permitting processes that tend to delay everything. Sticking a bank of chargers at a travel center that’s rarely visited isn’t the best use of resources, but neither is deploying them in an area that’s already replete with other fast chargers.

Much of the data GM showed me was confidential, but this screenshot should give you an idea of how the various datasets combine.


General Motors

Which is where the ML came in. GM’s data scientists built tools that aggregate different GIS datasets. For example, it has a geographic database of already deployed DC chargers around the country—the US Department of Energy maintains such a resource—overlaid with traffic data and then the locations of the travel centers. The result is a map of potential locations, which GM’s team then uses to narrow down the exact sites it wants to choose.

It’s true that if you had access to all those datasets, you could probably do all that manually. But we’re talking datasets with, in some cases, billions of data points. A few years ago, GM’s analysts could have done that at a city level without spending years on the project, but doing it on a nationwide scale is the kind of task that requires the cloud platforms and distributed compute clusters that are only now becoming commonplace.
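As a rough sketch of the kind of overlay GM describes, the snippet below joins hypothetical travel-center and charger datasets with geopandas. Every file name, column name, and threshold is an assumption for illustration; the real pipeline runs over proprietary data at a far larger scale.

```python
# Sketch of a site-selection overlay: travel centers vs. existing chargers.
# File and column names are hypothetical; thresholds are illustrative.
import geopandas as gpd

centers = gpd.read_file("travel_centers.geojson").to_crs(epsg=5070)
chargers = gpd.read_file("existing_dc_chargers.geojson").to_crs(epsg=5070)

# Distance (meters) from each travel center to the nearest existing charger
centers = gpd.sjoin_nearest(centers, chargers, distance_col="m_to_charger")

# Candidate sites: busy locations far from existing fast-charging coverage
candidates = centers[
    (centers["daily_traffic"] > 5_000) & (centers["m_to_charger"] > 40_000)
]
print(candidates[["site_name", "daily_traffic", "m_to_charger"]])
```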

As a result, GM was able to deploy the first 25 sites last year, with 100 charging stalls across the 25. By the end of this year, it told Ars it should have around 200 locations operational.

That certainly seems more useful to me than just another chatbot.



Blue cheese shows off new colors, but the taste largely remains the same

Am I blue? —

Future varieties could be yellow-green, reddish-brown-pink, or light blue.


Scientists at the University of Nottingham have discovered how to create different colors of blue cheese.

University of Nottingham

Gourmands are well aware of the many varieties of blue cheese, recognizable by the blue-green veins that ripple through them. Different kinds of blue cheese have distinctive flavor profiles: they can be mild or strong, sweet or salty, for example. Soon we might be able to buy blue cheeses that belie the name and sport veins of different colors: perhaps yellow-green, reddish-brown-pink, or lighter/darker shades of blue, according to a recent paper published in the journal npj Science of Food.

“We’ve been interested in cheese fungi for over 10 years, and traditionally when you develop mould-ripened cheeses, you get blue cheeses such as Stilton, Roquefort, and Gorgonzola, which use fixed strains of fungi that are blue-green in color,” said co-author Paul Dyer of the University of Nottingham of this latest research. “We wanted to see if we could develop new strains with new flavors and appearances.”

Blue cheese has been around for a very long time. Legend has it that a young boy left his bread and ewe’s milk cheese in a nearby cave to pursue a lovely young lady he’d spotted in the distance. Months later, he came back to the cave and found it had molded into Roquefort. It’s a fanciful tale, but scholars think the basic idea is sound: people used to store cheeses in caves because their temperature and moisture levels were especially hospitable to harmless molds. That was bolstered by a 2021 analysis of paleofeces that found evidence that Iron Age salt miners in Hallstatt (Austria) between 800 and 400 BCE were already eating blue cheese and quaffing beer.

Color derivatives.


The manufacturing process for blue cheese is largely the same as for any cheese, with a few crucial additional steps. It requires cultivation of Penicillium roqueforti, a mold that thrives on exposure to oxygen. The P. roqueforti is added to the cheese, sometimes before curds form and sometimes mixed in with curds after they form. The cheese is then aged in a temperature-controlled environment. Lactic acid bacteria trigger the initial fermentation but eventually die off, and the P. roqueforti take over as secondary fermenters. Piercing the curds forms air tunnels in the cheese, and the mold grows along those surfaces to produce blue cheese’s signature veining.

Once scientists published the complete genome for P. roqueforti, it opened up opportunities for studying this blue cheese fungus, per Dyer et al. Different strains “can have different colony cultures and textures, with commercial strains being sold partly on the basis of color development,” they wrote. This coloration comes from pigments in the coatings of the spores that form as the colony grows. Dyer and his co-authors set out to determine the genetic basis of this pigment formation in the hopes of producing altered strains with different spore coat colors.

The team identified a specific biochemical pathway that begins with a white color and progresses through yellow-green, red-brown-pink, dark brown, and light blue before arriving at that iconic dark blue-green. They used targeted gene deletion to block pigment biosynthesis genes at various points in this pathway. This altered the spore color, providing a proof of principle, without adversely affecting the production of flavor volatiles or the levels of secondary metabolites called mycotoxins. (The latter are present in low enough concentrations in blue cheese so as not to be a health risk for humans, and the team wanted to ensure those concentrations remained low.)


(left) Spectrum of color strains produced in Penicillium roqueforti. (right) Cross sections of cheeses made with the original (dark blue-green) or new color (red-brown, bright green, white albino) strains of the fungus.

University of Nottingham

However, food industry regulations prohibit gene-deletion fungal strains for commercial cheese production. So Dyer et al. used UV mutagenesis—essentially “inducing sexual reproduction in the fungus,” per Dyer—to produce non-GMO mutant strains of the fungi to create “blue” cheeses of different colors, without increasing mycotoxin levels or impacting the volatile compounds responsible for flavor.

“The interesting part was that once we went on to make some cheese, we then did some taste trials with volunteers from across the wider university, and we found that when people were trying the lighter colored strains they thought they tasted more mild,” said Dyer. “Whereas they thought the darker strain had a more intense flavor. Similarly, with the more reddish-brown and a light green one, people thought they had a fruity, tangy element to them—whereas, according to the lab instruments, they were very similar in flavor. This shows that people do perceive taste not only from what they taste but also by what they see.”

Dyer’s team is hoping to work with local cheese makers in Nottingham and Scotland, setting up a spinoff company in hopes of commercializing the mutant strains. And there could be other modifications on the horizon. “Producers could almost dial up their list of desirable characteristics—more or less color, faster or slower growth rate, acidity differences,” Donald Glover of the University of Queensland in Australia, who was not involved in the research, told New Scientist.

npj Science of Food, 2024. DOI: 10.1038/s41538-023-00244-9  (About DOIs).



Unreleased preview of Microsoft’s OS/2 2.0 is a glimpse down a road not taken

OS/2 the future —

Microsoft’s involvement in IBM’s OS/2 project ended before v2.0 was released.

This big, weathered box contains an oddball piece of PC history: one of the last builds of IBM's OS/2 that Microsoft worked on before pivoting all of its attention to Windows.


In the annals of PC history, IBM’s OS/2 represents a road not taken. Developed in the waning days of IBM’s partnership with Microsoft—the same partnership that had given us a decade or so of MS-DOS and PC-DOS—OS/2 was meant to improve on areas where DOS was falling short on modern systems. Better memory management, multitasking capabilities, and a usable GUI were all among the features introduced in version 1.x.

But Microsoft was frustrated with some of IBM’s goals and demands, and the company continued to develop an operating system called Windows on its own. Where IBM wanted OS/2 to be used mainly to boost IBM-made PCs and designed it around the limitations of Intel’s 80286 CPU, Windows was being created with the booming market for PC-compatible clones in mind. Windows 1.x and 2.x failed to make much of a dent, but 1990’s Windows 3.0 was a hit, and it came preinstalled on many consumer PCs; Microsoft and IBM broke off their partnership shortly afterward, making OS/2 version 1.2 the last one publicly released and sold with Microsoft’s involvement.

But Microsoft had done a lot of work on version 2.0 of OS/2 at the same time as it was developing Windows. It was far enough along that preview screenshots appeared in PC Magazine, and early builds were shipped to developers who could pay for them, but it was never formally released to the public.

Recently, though, software archaeologist Neozeed published a stable internal preview of Microsoft’s OS/2 2.0 to the Internet Archive, along with working virtual machine disk images for VMware and 86Box. The preview, bought by Brian Ledbetter on eBay for $650 plus $15.26 in shipping, dates to July 1990 and would have cost developers who wanted it a whopping $2,600. A lot to pay for a version of an operating system that would never see the light of day!

The Microsoft-developed build of OS/2 2.0 bears only a passing resemblance to the 32-bit version of OS/2 2.0 that IBM finally shipped on its own in April 1992. Neozeed has published a more thorough exploration of Microsoft’s version, digging around in its guts and getting some early Windows software running (the ability to run DOS and Windows apps was simultaneously a selling point of OS/2 and a reason for developers not to create OS/2-specific apps, one of the things that helped to doom OS/2 in the end). It’s a fascinating detail from a turning point in the history of the PC as we know it today, but as a usable desktop operating system, it leaves something to be desired.

All 26 disks of the OS/2 2.0 preview, plus hefty documentation manuals. There are some things about the '90s I don't miss.


This unreleased Microsoft-developed OS/2 build isn’t the first piece of Microsoft-related software history that has been excavated in the last few months. In January, an Internet Archive user discovered and uploaded an early build of 86-DOS, the software that Microsoft bought and turned into MS-DOS/PC-DOS for the original IBM PC 5150. Funnily enough, these unreleased previews serve as bookends for IBM and Microsoft’s often-contentious partnership.

As part of the “divorce settlement” between Microsoft and IBM, IBM would take over the development and maintenance of OS/2 1.x and 2.x while Microsoft continued to work on a more advanced far-future version 3.0 of OS/2. This operating system was never released as OS/2, but it would eventually become Windows NT, Microsoft’s more stable business-centric version of Windows. Windows NT merged with the consumer versions of Windows in the early 2000s with Windows 2000 and Windows XP, and those versions gradually evolved into Windows as we know it today.

It has been 18 years since IBM formally discontinued its last release of OS/2, but as so often happens in computing, the software has found a way to live on. ArcaOS is a semi-modernized, intermittently updated branch of OS/2 that runs on modern hardware while still supporting the ability to run MS-DOS and 16-bit Windows apps.



Meta sues “brazenly disloyal” former exec over stolen confidential docs


A recently unsealed court filing has revealed that Meta has sued a former senior employee for “brazenly disloyal and dishonest conduct” while leaving Meta for an AI data startup called Omniva that The Information has described as “mysterious.”

According to Meta, its former vice president of infrastructure, Dipinder Singh Khurana (also known as T.S.), allegedly used his access to “confidential, non-public, and highly sensitive” information to steal more than 100 internal documents in a rushed scheme to poach Meta employees and borrow Meta’s business plans to speed up Omniva’s negotiations with key Meta suppliers.

Meta believes that Omniva—which Data Center Dynamics (DCD) reported recently “pivoted from crypto to AI cloud”—is “seeking to provide AI cloud computing services at scale, including by designing and constructing data centers.” But it was held back by a “lack of data center expertise at the top,” DCD reported.

The Information reported that Omniva began hiring Meta employees to fill the gaps in this expertise, including wooing Khurana away from Meta.

Last year, Khurana notified Meta that he was leaving on May 15, and that’s when Meta first observed Khurana’s allegedly “utter disregard for his contractual and legal obligations to Meta—including his confidentiality obligations to Meta set forth in the Confidential Information and Invention Assignment Agreement that Khurana signed when joining Meta.”

A Meta investigation found that during Khurana’s last two weeks at the company, he allegedly uploaded confidential Meta documents—including “information about Meta’s ‘Top Talent,’ performance information for hundreds of Meta employees, and detailed employee compensation information”—on Meta’s network to a Dropbox folder labeled with his new employer’s name.

“Khurana also uploaded several of Meta’s proprietary, highly sensitive, confidential, and non-public contracts with business partners who supply Meta with crucial components for its data centers,” Meta alleged. “And other documents followed.”

In addition to pulling documents, Khurana also allegedly sent “urgent” requests to subordinates for confidential information on a key supplier, including Meta’s pricing agreement “for certain computing hardware.”

“Unaware of Khurana’s plans, the employee provided Khurana with, among other things, Meta’s pricing-form agreement with that supplier for the computing hardware and the supplier’s Meta-specific preliminary pricing for a particular chip,” Meta alleged.

Some of these documents were “expressly marked confidential,” Meta alleged. Those include a three-year business plan and PowerPoints regarding “Meta’s future ‘roadmap’ with a key supplier” and “Meta’s 2022 redesign of its global-supply-chain group” that Meta alleged “would directly aid Khurana in building his own efficient and effective supply-chain organization” and afford a path for Omniva to bypass “years of investment.” Khurana also allegedly “uploaded a PowerPoint discussing Meta’s use of GPUs for artificial intelligence.”

Meta was apparently tipped off to this alleged betrayal when Khurana used his Meta email and network access to complete a writing assignment for Omniva as part of his hiring process. For this writing assignment, Khurana “disclosed non-public information about Meta’s relationship with certain suppliers that it uses for its data centers” when asked to “explain how he would help his potential new employer develop the supply chain for a company building data centers using specific technologies.”

In a seeming attempt to cover up the alleged theft of Meta documents, Khurana apparently “attempted to scrub” one document “of its references to Meta,” as well as removing a label marking it “CONFIDENTIAL—FOR INTERNAL USE ONLY.” But when replacing “Meta” with “X,” Khurana allegedly missed the term “Meta” in “at least five locations.”

“Khurana took such action to try and benefit himself or his new employer, including to help ensure that Khurana would continue to work at his new employer, continue to receive significant compensation from his new employer, and/or to enable Khurana to take shortcuts in building his supply-chain team at his new employer and/or helping to build his new employer’s business,” Meta alleged.

Ars could not immediately reach Khurana for comment. Meta noted that he has repeatedly denied breaching his contract or initiating contact with Meta employees who later joined Omniva. He also allegedly refused to sign a termination agreement that reiterates his confidentiality obligations.



Google’s new gaming AI aims past “superhuman opponent” and at “obedient partner”

Even hunt-and-fetch quests are better with a little AI help.


At this point in the progression of machine-learning AI, we’re accustomed to specially trained agents that can utterly dominate everything from Atari games to complex board games like Go. But what if an AI agent could be trained not just to play a specific game but also to interact with any generic 3D environment? And what if that AI was focused not only on brute-force winning but instead on responding to natural language commands in that gaming environment?

Those are the kinds of questions animating Google’s DeepMind research group in creating SIMA, a “Scalable, Instructable, Multiworld Agent” that “isn’t trained to win, it’s trained to do what it’s told,” as research engineer Tim Harley put it in a presentation attended by Ars Technica. “And not just in one game, but… across a variety of different games all at once.”

Harley stresses that SIMA is still “very much a research project,” and the results achieved in the project’s initial tech report show there’s a long way to go before SIMA starts to approach human-level listening capabilities. Still, Harley said he hopes that SIMA can eventually provide the basis for AI agents that players can instruct and talk to in cooperative gameplay situations—think less “superhuman opponent” and more “believable partner.”

“This work isn’t about achieving high game scores,” as Google puts it in a blog post announcing its research. “Learning to play even one video game is a technical feat for an AI system, but learning to follow instructions in a variety of game settings could unlock more helpful AI agents for any environment.”

Learning how to learn

Google trained SIMA on nine very different open-world games in an attempt to create a generalizable AI agent.

To train SIMA, the DeepMind team focused on three-dimensional games and test environments controlled either from a first-person perspective or an over-the-shoulder third-person perspective. The nine games in its test suite, which were provided by Google’s developer partners, all prioritize “open-ended interactions” and eschew “extreme violence” while providing a wide range of different environments and interactions, from “outer space exploration” to “wacky goat mayhem.”

In an effort to make SIMA as generalizable as possible, the agent isn’t given any privileged access to a game’s internal data or control APIs. The system takes nothing but on-screen pixels as its input and provides nothing but keyboard and mouse controls as its output, mimicking “the [model] humans have been using [to play video games] for 50 years,” as the researchers put it. The team also designed the agent to work with games running in real time (i.e., at 30 frames per second) rather than slowing down the simulation for extra processing time like some other interactive machine-learning projects.
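Concretely, that interface constraint boils down to a loop like the minimal sketch below: screen pixels in, keyboard-and-mouse actions out, with the clock ticking at 30 fps whether or not the agent keeps up. The game and agent objects are hypothetical stand-ins, not DeepMind's API.

```python
# Sketch of a pixels-in, inputs-out agent loop pinned to a real-time game.
# The game/agent interfaces are hypothetical stand-ins for illustration.
import time

FRAME_BUDGET = 1 / 30  # the game runs at 30 fps and does not wait

def run_episode(game, agent, instruction):
    while not game.done():
        start = time.monotonic()
        pixels = game.screenshot()               # the only observation
        action = agent.act(pixels, instruction)  # keyboard + mouse only
        game.apply(action)                       # the only control channel
        # Sleep off whatever remains of this frame's time budget
        time.sleep(max(0.0, FRAME_BUDGET - (time.monotonic() - start)))
```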

Animated samples of SIMA responding to basic commands across very different gaming environments.

While these restrictions increase the difficulty of SIMA’s tasks, they also mean the agent can be integrated into a new game or environment “off the shelf” with minimal setup and without any specific training regarding the “ground truth” of a game world. It also makes it relatively easy to test whether things SIMA has learned from training on previous games can “transfer” over to previously unseen games, which could be a key step to getting at artificial general intelligence.

For training data, SIMA uses video of human gameplay (and associated time-coded inputs) on the provided games, annotated with natural language descriptions of what’s happening in the footage. These clips are focused on “instructions that can be completed in less than approximately 10 seconds” to avoid the complexity that can develop with “the breadth of possible instructions over long timescales,” as the researchers put it in their tech report. Integration with pre-trained models like SPARC and Phenaki also helps the SIMA model avoid having to learn how to interpret language and visual data from scratch.
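One way to picture a single training example under that scheme is the sketch below; the field names are assumptions for illustration, since the tech report describes the data only at a high level.

```python
# Hypothetical shape of one SIMA-style training example: a short clip of
# human gameplay, its time-coded inputs, and a language annotation.
from dataclasses import dataclass, field

@dataclass
class GameplayClip:
    frames: list = field(default_factory=list)  # screen pixels, ~30 fps
    inputs: list = field(default_factory=list)  # time-coded key/mouse events
    instruction: str = ""                       # e.g., "chop down the tree"
    duration_s: float = 0.0                     # kept under ~10 seconds
```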



Google’s Gemini AI now refuses to answer election questions

I also refuse to answer political questions —

Gemini is opting out of election-related responses entirely for 2024.

The Google Gemini logo.


Google

Like many of us, Google Gemini is tired of politics. Reuters reports that Google has restricted the chatbot from answering questions about the upcoming US election, and instead, it will direct users to Google Search.

Google had planned to do this back when the Gemini chatbot was still called “Bard.” In December, the company said, “Beginning early next year, in preparation for the 2024 elections and out of an abundance of caution on such an important topic, we’ll restrict the types of election-related queries for which Bard and [Google Search’s Bard integration] will return responses.” Tuesday, Google confirmed to Reuters that those restrictions have kicked in. Election queries now tend to come back with the refusal: “I’m still learning how to answer this question. In the meantime, try Google Search.”

Google’s original plan in December was likely to disable election info so Gemini could avoid any political firestorms. Boy, did that not work out! When asked to generate images of people, Gemini quietly tacked diversity requirements onto the image request; this practice led to offensive and historically inaccurate images along with a general refusal to generate images of white people. Last month that earned Google wall-to-wall coverage in conservative news spheres along the lines of “Google’s woke AI hates white people!” Google CEO Sundar Pichai called the AI’s “biased” responses “completely unacceptable,” and for now, creating images of people is disabled while Google works on it.

The start of the first round of US elections in the AI era has already led to new forms of disinformation, and Google presumably wants to opt out of all of it.



EU votes to ban riskiest forms of AI and impose restrictions on others

Europe’s AI Act —

Lawmaker hails “world’s first binding law on artificial intelligence.”

Illustration of a European flag composed of computer code

Getty Images | BeeBright

The European Parliament today voted to approve the Artificial Intelligence Act, which will ban uses of AI “that pose unacceptable risks” and impose regulations on less risky types of AI.

“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases,” a European Parliament announcement today said. “Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.”

The ban on certain AI applications provides for penalties of up to 35 million euros or 7 percent of a firm’s “total worldwide annual turnover for the preceding financial year, whichever is higher.” Violations of other provisions have lower penalties.
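In concrete terms, the "whichever is higher" rule for the top fine tier works like this small sketch (the turnover figure is hypothetical):

```python
# The AI Act's top fine tier: 35 million euros or 7% of worldwide annual
# turnover, whichever is higher. The example turnover is hypothetical.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

print(max_fine_eur(2_000_000_000))  # a 2B-euro firm: 140,000,000.0 euros
```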

There are exemptions to allow law enforcement use of remote biometric identification systems in certain cases. A European Commission summary of the legislation said:

All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

“Strict obligations” for high-risk AI

The AI Act was supported by 523 members of the European Parliament (MEPs), while 46 voted against and 49 abstained. The legislation classifies AI into four categories of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

“High-risk AI systems will be subject to strict obligations before they can be put on the market,” the legislation summary said. Obligations include “adequate risk assessment and mitigation systems,” “logging of activity to ensure traceability of results,” “appropriate human oversight measures to minimise risk,” and other requirements.

The law drew opposition from the Computer & Communications Industry Association, a tech-industry lobby group.

“The agreed AI Act imposes stringent obligations on developers of cutting-edge technologies that underpin many downstream systems, and is therefore likely to slow down innovation in Europe,” the group said when a deal on the law was agreed to in December 2023. “Furthermore, certain low-risk AI systems will now be subjected to strict requirements without further justification, while others will be banned altogether. This could lead to an exodus of European AI companies and talent seeking growth elsewhere.”

The law will officially be on the books 20 days after its publication in the Official Journal, the European Parliament announcement said. The law’s ban on prohibited practices will apply six months after that, but other regulations won’t take effect until later. The “obligations for high-risk systems” will only take effect after 36 months, the announcement said.

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said MEP Brando Benifei, the Internal Market Committee co-rapporteur. An AI office will be formed “to support companies to start complying with the rules before they enter into force,” he said.

Risky AI categories

Examples of high-risk AI include AI used in robot-assisted surgery; credit scoring systems that can deny loans; law enforcement that may interfere with fundamental rights, such as evaluation of the reliability of evidence; and automated examination of visa applications.

The limited-risk category has to do with applications that aren’t transparent about AI usage. “The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust,” the European Commission said. “For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable.”

AI-generated text that is “published with the purpose to inform the public on matters of public interest must be labelled as artificially generated,” and this requirement “also applies to audio and video content constituting deep fakes.”

AI with minimal or no risk “includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category,” the commission said. There would be no restrictions on this category.



Raspberry Pi-powered AI bike light detects cars, alerts bikers to bad drivers

Group ride —

Data from multiple Copilot devices could be used for road safety improvements.

Copilot mounted to the rear of a road bike

Velo AI

Whether or not autonomous vehicles ever work out, the effort put into using small cameras and machine-learning algorithms to detect cars could pay off big for an unexpected group: cyclists.

Velo AI is a firm cofounded by Clark Haynes and Micol Marchetti-Bowick, both PhDs with backgrounds in robotics, movement prediction, and Uber’s (since sold-off) autonomous vehicle work. Copilot, which started as a “pandemic passion project” for Haynes, is essentially car-focused artificial intelligence and machine learning stuffed into a Raspberry Pi Compute Module 4 and boxed up in a bike-friendly size and shape.

A look into the computer vision of the Copilot.

While car-detecting devices exist for bikes, including the Garmin Varia, they’re largely radar-based. That means they can’t distinguish between vehicles of different sizes and only know that something is approaching you, not, for example, how much space it will allow when passing.

Copilot purports to do a lot more:

  • Identify cars, bikes, and pedestrians
  • Alert riders audibly about cars “Following,” “Approaching,” and “Overtaking”
  • Issue visual warnings to drivers who are approaching too close or too fast
  • Send visual notifications and a simplified rear road view to an optional paired smartphone
  • Record 1080p video and tag “close calls” and “incidents” from your phone

At 330 grams, with five hours of optimal battery life (and USB-C recharging), it’s not for the aero-obsessed or the super-long-distance rider. And at $400, it might not speak to the most casual and infrequent cyclist. But it’s an intriguing piece of kit, especially for those who already have, or have considered, a Garmin or similar action camera for watching their back. What if a camera could do more than just show you the car after you’re already endangered by it?


Copilot’s computer vision can alert riders to cars that are “Following,” “Approaching,” and “Overtaking.”

Velo AI

The Velo team detailed some of their building process for the official Raspberry Pi blog. The Compute Module 4 powers the core system and lights, while a custom Hailo AI co-processor helps with the neural networks and computer vision. An Arducam camera provides the vision and recording.
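Velo AI hasn't published its algorithms, but a toy version of the "Following," "Approaching," and "Overtaking" call could key off how quickly a detected car's bounding box grows between frames, as in the sketch below. The thresholds and function are pure assumptions, not the Copilot's actual logic.

```python
# Toy heuristic for labeling a tracked car from bounding-box growth.
# Thresholds are made up; Velo AI's real models run on a Hailo co-processor.
def classify_vehicle(prev_area: float, cur_area: float) -> str:
    """Areas are bounding-box sizes as fractions of the camera frame."""
    growth = (cur_area - prev_area) / max(prev_area, 1e-6)
    if cur_area > 0.25:
        return "Overtaking"    # looming large in frame, about to pass
    if growth > 0.10:
        return "Approaching"   # closing quickly
    return "Following"         # holding a roughly constant distance

print(classify_vehicle(0.04, 0.05))  # -> "Approaching"
```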

Beyond individual safety, the Velo AI team hopes that data from Copilots can feed into larger-scale road safety improvements. The team told the Pi blog that they’re starting a partnership with Pittsburgh, seeding Copilots to regular bike commuters and analyzing the aggregate data for potential infrastructure upgrades.

The Copilot is available for sale now and shipping, according to Velo AI. A December 2023 pre-order sold out.



Seeding steel frames brings destroyed coral reefs back to life

Image of a large school of fish above a reef.

Coral reefs, some of the most stunningly beautiful marine ecosystems on Earth, are dying. Ninety percent of them will likely be gone by 2050 due to rising ocean temperatures and pollution. “But it’s not that when they are gone, they are gone forever. We can rebuild them,” said Dr. Timothy Lamont, a marine biologist working at Lancaster University.

Lamont’s team evaluated coral reef restoration efforts done through the MARS Coral Reef Restoration Program on the coast of Indonesia and found that planting corals on a network of sand-coated steel frames brought a completely dead reef back to life in just four years. It seems like we can fix something for once.

Growing up in rubble

The restored reef examined by Lamont’s team was damaged by blast fishing done 30–40 years ago. “People were using dynamite to blow up the reef. It kills all the fish, the fish float to the surface, and you can scoop them all up. Obviously, this is very damaging to the habitat and leaves behind loose rubble fields with lots of coral skeletons,” said Lamont.

Because this loose rubble is in constant motion, tumbling and rolling around, coral larvae don’t have enough time to grow before they get squashed. So the first step to bringing damaged reefs back to life was stabilizing the rubble. The people running the MARS program did this using Reef Stars, hexagonal steel structures coated with sand. “These structures are connected into networks and pinned to the seabed to reduce the movement of the rubble,” Lamont said.

Before the reef stars were placed on the seabed, though, the MARS team manually tied little corals around them. This was meant to speed up recovery compared to letting coral larvae settle on the steel structures naturally. Based on some key measures, it worked. But there are questions about whether those measures capture everything we need to know.

Artificial coral reefs

The metric Lamont’s team used to measure the success of the MARS program restoration was a carbonate budget, which describes the overall growth of the whole reef structure. According to Lamont, a healthy coral reef has a positive carbonate budget and produces roughly 20 kilograms of limestone per square meter per year. This is exactly what his team measured in restored sites on the Indonesian reef. But while the recovered reef had the same carbonate budget as a healthy one, the organisms contributing to this budget were different.
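The bookkeeping behind that metric is simple: limestone produced minus limestone eroded. The sketch below shows the arithmetic with made-up component values; only the roughly 20 kg per square meter per year healthy-reef benchmark comes from the article.

```python
# Net carbonate budget = production minus erosion (kg CaCO3 per m2 per year).
# Component values are hypothetical; ~20 kg/m2/yr is the healthy benchmark.
coral_production = 21.5   # hypothetical gross limestone production
bioerosion = 1.5          # hypothetical loss to borers, grazers, dissolution
net_budget = coral_production - bioerosion
print(f"Net budget: {net_budget} kg/m2/yr")  # positive = the reef is growing
```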

An untouched natural reef is a diverse mixture including massive, encrusting, and plating coral species like Isopora or Porites, which contribute roughly a third of the carbonate budget. Restored reefs were almost completely dominated by smaller, branching corals like Stylophora, Acropora, and Pocillopora, which are all fast-growing species initially tied onto reef stars. The question was whether the MARS program achieved its astounding four-year reef recovery time by sacrificing biodiversity and specifically choosing corals that grow faster.



What happens when ChatGPT tries to solve 50,000 trolley problems?

Images of cars on a freeway with green folder icons superimposed on each vehicle.

There’s a puppy on the road. The car is going too fast to stop in time, but swerving means the car will hit an old man on the sidewalk instead.

What choice would you make? Perhaps more importantly, what choice would ChatGPT make?

Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use a chatbot to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrians’ safety. In November, one startup called Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment.

But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check whether chatbots could make the same moral decisions as humans when driving. His results showed that LLMs and humans have roughly the same priorities, but some models showed clear deviations.

The Moral Machine

After ChatGPT was released in November 2022, it didn’t take long for researchers to ask it to tackle the Trolley Problem, a classic moral dilemma. This problem asks people to decide whether it is right to let a runaway trolley run over and kill five humans on a track or switch it to a different track where it kills only one person. (ChatGPT usually chose one person.)

But Takemoto wanted to ask LLMs more nuanced questions. “While dilemmas like the classic trolley problem offer binary choices, real-life decisions are rarely so black and white,” he wrote in his study, recently published in the journal Royal Society Open Science.

Instead, he turned to an online initiative called the Moral Machine experiment. This platform shows humans two decisions that a driverless car may face. They must then decide which decision is more morally acceptable. For example, a user might be asked if, during a brake failure, a self-driving car should collide with an obstacle (killing the passenger) or swerve (killing a pedestrian crossing the road).

But the Moral Machine is also programmed to ask more complicated questions. For example, what if the passengers were an adult man, an adult woman, and a boy, and the pedestrians were two elderly men and an elderly woman walking against a “do not cross” signal?

The Moral Machine can generate randomized scenarios using factors like age, gender, species (saving humans or animals), social value (pregnant women or criminals), and actions (swerving, breaking the law, etc.). Even the fitness level of passengers and pedestrians can change.

In the study, Takemoto took four popular LLMs (GPT-3.5, GPT-4, PaLM 2, and Llama 2) and asked them to decide on over 50,000 scenarios created by the Moral Machine. More scenarios could have been tested, but the computational costs became too high. Nonetheless, these responses meant he could then compare how similar LLM decisions were to human decisions.
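As an illustration of the general setup, a single Moral Machine-style dilemma could be posed to a chat model along the lines of the sketch below. The prompt wording and forced A/B answer format are guesses, not Takemoto's actual protocol.

```python
# Sketch of posing one Moral Machine-style dilemma to an LLM.
# Prompt wording is hypothetical, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

scenario = (
    "A self-driving car with sudden brake failure must choose:\n"
    "A) stay on course and hit an obstacle, killing its passenger, or\n"
    "B) swerve, killing a pedestrian crossing the road.\n"
    "Reply with exactly 'A' or 'B'."
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": scenario}],
)
print(resp.choices[0].message.content)
```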
