Author name: Kris Guyer

Public officials can block haters—but only sometimes, SCOTUS rules

There are some circumstances where government officials are allowed to block people from commenting on their social media pages, the Supreme Court ruled Friday.

According to the Supreme Court, the key question is whether officials are speaking as private individuals or on behalf of the state when posting online. Issuing two opinions, the Supreme Court declined to set a clear standard for when personal social media use constitutes state speech, leaving each unique case to be decided by lower courts.

Instead, SCOTUS provided a test for courts to decide first whether an official actually has authority to speak on behalf of the state, and then whether the official purported to exercise that authority in the posts at issue.

The ruling suggests that government officials can block people from commenting on personal social media pages where they discuss official business, so long as that speech cannot be attributed to the state and merely reflects personal remarks. In other words, blocking is acceptable when the official either has no authority to speak for the state or is not exercising that authority when posting on the page.

That authority empowering officials to speak for the state could be granted by a written law. It could also be granted informally if officials have long used social media to speak on behalf of the state to the point where their power to do so is considered “well-settled,” one SCOTUS ruling said.

SCOTUS broke it down like this: An official might be viewed as speaking for the state if the social media page is managed by the official’s office, if a city employee posts on their behalf to their personal page, or if the page is handed down from one official to another when terms in office end.

Posting on a personal page might also be considered speaking for the state if the information shared has not already been shared elsewhere.

Examples of officials clearly speaking on behalf of the state include a mayor holding a city council meeting online or an official using their personal page as an official channel for comments on proposed regulations.

Because SCOTUS did not set a clear standard, officials risk liability when blocking followers on so-called “mixed use” social media pages, SCOTUS cautioned. That liability could be diminished by keeping personal pages entirely separate or by posting a disclaimer stating that posts represent only officials’ personal views and not efforts to speak on behalf of the state. But any official using a personal page to make official comments could expose themselves to liability, even with a disclaimer.

SCOTUS test for when blocking is OK

These clarifications came in two SCOTUS opinions addressing conflicting outcomes in two separate complaints about officials in California and Michigan who blocked followers heavily criticizing them on Facebook and X. The lower courts’ decisions have been vacated, and courts must now apply the Supreme Court’s test to issue new decisions in each case.

One opinion was brief and unsigned, discussing a case where California parents sued school district board members who blocked them from commenting on public Twitter pages used for campaigning and discussing board issues. The board members claimed they blocked their followers after the parents left dozens and sometimes hundreds of the same exact comments on tweets.

In the second, which was unanimous, Justice Amy Coney Barrett responded at length to a case brought by a Facebook user named Kevin Lindke. This opinion provides varied guidance that courts can apply when considering whether blocking is appropriate or violates constituents’ First Amendment rights.

Lindke was blocked by a Michigan city manager, James Freed, after leaving comments criticizing the city’s response to COVID-19 on a page that Freed created as a college student, sometime before 2008. Among these comments, Lindke called the city’s pandemic response “abysmal” and told Freed that “the city deserves better.” On a post showing Freed picking up a takeout order, Lindke complained that residents were “suffering,” while Freed ate at expensive restaurants.

After Freed hit 5,000 followers, he converted the page to reflect his public-figure status. While he still used the page primarily for personal posts about his family and always managed it himself, the page went into murkier territory when he also posted about his job as city manager, sharing updates on city initiatives, posting screenshots of city press releases, and soliciting public feedback, such as links to city surveys.

DNA parasite now plays key role in making critical nerve cell protein

Domesticated viruses —

An RNA has been adopted to help the production of myelin basic protein, a key nerve cell protein.

Graphic depiction of a nerve cell with a myelin-coated axon.

Human brains (and the brains of other vertebrates) are able to process information faster because of myelin, a fatty substance that forms a protective sheath over the axons of our nerve cells and speeds up their impulses. How did our neurons evolve myelin sheaths? Part of the answer—which was unknown until now—almost sounds like science fiction.

Led by scientists from Altos Labs-Cambridge Institute of Science, a team of researchers has uncovered a bit of the gnarly past of how myelin ended up covering vertebrate neurons: a molecular parasite has been messing with our genes. Sequences derived from an ancient virus help regulate a gene that encodes a component of myelin, helping explain why vertebrates have an edge when it comes to their brains.

Prehistoric infection

Myelin is a fatty material produced by oligodendrocyte cells in the central nervous system and Schwann cells in the peripheral nervous system. Its insulating properties allow neurons to zap impulses to one another at faster speeds and over greater distances. Our brains can be complex in part because myelin enables longer, narrower axons, which means more of them can be packed together.

The un-myelinated brain cells of many invertebrates often need to rely on wider—and therefore fewer—axons for impulse conduction. Rapid impulse conduction makes quicker reactions possible, whether that means fleeing danger or capturing prey.

So, how do we make myelin? A key player in its production appears to be a type of molecular parasite called a retrotransposon.

Like other transposons, retrotransposons can move to new locations in the genome through an RNA intermediate. However, most retrotransposons in our genome have picked up too many mutations to move about anymore.

RNLTR12-int is a retrotransposon that is thought to have originally entered our ancestors’ genome as a virus. Rat genomes now have over 100 copies of the retrotransposon.

An RNA made by RNLTR12-int helps produce myelin by binding to a transcription factor called SOX10, a protein that regulates the activity of other genes. The RNA/protein combination binds to DNA near the gene for myelin basic protein, or MBP, a major protein component of myelin.

“MBP is essential for the membrane growth and compression of [central nervous system] myelin,” the researchers said in a study recently published in Cell.

Technical knockout

To find out whether RNLTR12-int really was behind the regulation of MBP and, therefore, myelin production, the research team had to knock its level down and see if myelination still happened. They first experimented on rat brains before moving on to zebrafish and frogs.

When they inhibited RNLTR12-int, the results were drastic. In the central nervous system, genetically edited rats produced 98 percent less MBP than those where the gene was left unedited. The absence of RNLTR12-int also caused the oligodendrocytes that produce myelin to develop much simpler structures than they would normally form. When RNLTR12-int was knocked out in the peripheral nervous system, it reduced myelin produced by Schwann cells.

The researchers used a SOX10 antibody to show that SOX10 bound to the RNLTR12-int transcript in vivo. This was an important result, since there are lots of non-coding RNAs made by cells, and it wasn’t clear whether any RNA would work or if it was specific to RNLTR12-int.

Do these results hold up in other jawed vertebrates? Knockout tests using CRISPR-Cas9 on retrotransposons related to RNLTR12-int in frogs and zebrafish produced similar results.

Myelination has enriched the vertebrate brain so it can work like never before. This is why the term “brain food” is almost literal: healthy fats are important for our brains in part because myelin is largely made of fat. Think about that next time you’re pulling an all-nighter while reaching for a handful of nuts.

Cell, 2024. DOI: 10.1016/j.cell.2024.01.011

Google says Chrome’s new real-time URL scanner won’t invade your privacy

We don’t need another way to track you —

Google says URL hashes and a third-party relay server will keep it out of your history.

Google’s safe browsing warning is not subtle. (Credit: Google)

Google Chrome’s “Safe Browsing” feature—the thing that pops up a giant red screen when you try to visit a malicious website—is getting real-time updates for all users. Google announced the change on the Google Security Blog. Real-time protection naturally means sending URL data to some far-off server, but Google says it will use “privacy-preserving URL protection” so it won’t get a list of your entire browsing history. (Not that Chrome doesn’t already have features that log your history or track you.)

Safe Browsing basically boils down to checking your current website against a list of known bad sites. Google’s old implementation happened locally, which had the benefit of not sending your entire browsing history to Google, but that meant downloading the list of bad sites at 30- to 60-minute intervals. There are a few problems with local downloads. First, Google says the majority of bad sites exist for “less than 10 minutes,” so a 30-minute update time isn’t going to catch them. Second, the list of all bad websites on the entire Internet is going to be very large and constantly growing, and Google already says that “not all devices have the resources necessary to maintain this growing list.”

If you really want to shut down malicious sites, what you want is real-time checking against a remote server. There are a lot of bad ways you could do this. One way would be to just send every URL to the remote server, and you’d basically double Internet website traffic for all of Chrome’s 5 billion users. To cut down on those server requests, Chrome is instead going to download a list of known good sites, and that will cover the vast majority of web traffic. Only the small, unheard-of sites will be subject to a server check, and even then, Chrome will keep a cache of your recent small site checks, so you’ll only check against the server the first time.

When you’re not on the known-safe-site list or recent cache, info about your web URL will be headed to some remote server, but Google says it won’t be able to see your web history. Google does all of its URL checking against hashes, rather than the plain-text URL. Previously, Google offered an opt-in “enhanced protection” mode for safe browsing, which offered more up-to-date malicious site blocking in exchange for “sharing more security-related data” with Google, but the company thinks this new real-time mode is privacy-preserving enough to roll out to everyone by default. The “Enhanced” mode is still sticking around since that allows for “deep scans for suspicious files and extra protection from suspicious Chrome extensions.”
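
To make the flow concrete, here is a minimal sketch of the general idea described above: check a local known-good list first, then a cache of recent verdicts, and only then send a truncated hash (never the URL itself) to a remote server. All names, the prefix length, and the placeholder server call are illustrative assumptions, not Chrome’s actual implementation.

```python
import hashlib

# Illustrative local data; not Chrome's real lists.
KNOWN_GOOD_HOSTS = {"example.com", "wikipedia.org"}
recent_verdicts = {}  # full URL hash -> verdict from a prior server check


def url_hash(url: str) -> bytes:
    """Full SHA-256 digest of a (canonicalized) URL; canonicalization omitted here."""
    return hashlib.sha256(url.encode("utf-8")).digest()


def query_safe_browsing(prefix: bytes) -> str:
    """Placeholder for the real-time check. A real client would send only this
    truncated hash prefix, routed through an Oblivious HTTP relay so the server
    never sees both the requester's identity and the hash."""
    return "safe"


def check_url(url: str, host: str) -> str:
    # 1. Most traffic is covered by a locally stored list of known-good sites.
    if host in KNOWN_GOOD_HOSTS:
        return "safe"

    full_hash = url_hash(url)
    # 2. Reuse a recent verdict instead of asking the server again.
    if full_hash in recent_verdicts:
        return recent_verdicts[full_hash]

    # 3. Otherwise, send only a short hash prefix (4 bytes here, an arbitrary
    #    choice for this sketch) to the remote service and cache the answer.
    verdict = query_safe_browsing(full_hash[:4])
    recent_verdicts[full_hash] = verdict
    return verdict
```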

Google’s diagram of how the whole process works. (Credit: Google)

Interestingly, the privacy scheme involves a relay server that will be run by a third party. Google says, “In order to preserve user privacy, we have partnered with Fastly, an edge cloud platform that provides content delivery, edge compute, security, and observability services, to operate an Oblivious HTTP (OHTTP) privacy server between Chrome and Safe Browsing.”

For now, Google’s remote checks, when they happen, will mean some latency while your safety check completes, but Google says it’s “in the process of introducing an asynchronous mechanism, which will allow the site to load while the real-time check is in progress. This will improve the user experience, as the real-time check won’t block page load.”

The feature should be live in the latest Chrome release for desktop, Android, and iOS. If you don’t want it, you can turn it off in the “Privacy and security” section of the Chrome settings.

Pornhub blocks all of Texas to protest state law—Paxton says “good riddance”

Pornhub protest —

Pornhub went dark in Texas and other states requiring age verification for porn.

Signs displayed at the Pornhub booth at the 2024 AVN Adult Entertainment Expo at Resorts World Las Vegas on January 25, 2024, in Las Vegas, Nevada. (Credit: Ethan Miller | Getty Images)

Pornhub has disabled its website in Texas following a court ruling that upheld a state law requiring age-verification systems on porn websites. Visitors to pornhub.com in Texas are now greeted with a message calling the Texas law “ineffective, haphazard, and dangerous.”

“As you may know, your elected officials in Texas are requiring us to verify your age before allowing you access to our website. Not only does this impinge on the rights of adults to access protected speech, it fails strict scrutiny by employing the least effective and yet also most restrictive means of accomplishing Texas’s stated purpose of allegedly protecting minors,” Pornhub’s message said.

Pornhub said it has “made the difficult decision to completely disable access to our website in Texas. In doing so, we are complying with the law, as we always do, but hope that governments around the world will implement laws that actually protect the safety and security of users.”

The same message was posted on other sites owned by the same company, including RedTube, YouPorn, and Brazzers. Pornhub has also blocked its website in Arkansas, Mississippi, Montana, North Carolina, Utah, and Virginia in protest of similar laws. VPN services can be used to evade the blocks and to test out which states have been blocked by Pornhub.

Texas AG sued Pornhub, says “good riddance”

The US Court of Appeals for the 5th Circuit upheld the Texas law in a 2–1 decision last week. The 5th Circuit appeals court had previously issued a temporary stay that allowed the law to take effect in September 2023.

Texas Attorney General Ken Paxton last month sued Pornhub owner Aylo (formerly MindGeek) for violating the law. Paxton’s complaint in Travis County District Court sought civil penalties of up to $10,000 for each day since the law took effect on September 19, 2023.

“Sites like Pornhub are on the run because Texas has a law that aims to prevent them from showing harmful, obscene material to children,” Paxton wrote yesterday. “We recently secured a major victory against PornHub and other sites that sought to block this law from taking effect. In Texas, companies cannot get away with showing porn to children. If they don’t want to comply, good riddance.”

The 5th Circuit panel majority held that the Texas porn-site law should be reviewed on the “rational-basis” standard and not under strict scrutiny. In a dissent, Judge Patrick Higginbotham wrote that the law should face strict scrutiny because it “limits access to materials that may be denied to minors but remain constitutionally protected speech for adults.”

“[T]he Supreme Court has unswervingly applied strict scrutiny to content-based regulations that limit adults’ access to protected speech,” Higginbotham wrote.

Pornhub wants device-based age verification instead

Pornhub’s message to Texas users argued that “providing identification every time you want to visit an adult platform is not an effective solution for protecting users online, and in fact, will put minors and your privacy at risk.” Pornhub said that in other states with age-verification laws, “such bills have failed to protect minors, by driving users from those few websites which comply, to the thousands of websites, with far fewer safety measures in place, which do not comply.”

Pornhub’s message advocated for a device-based approach to age verification in which “personal information that is used to verify the user’s age is either shared in-person at an authorized retailer, inputted locally into the user’s device, or stored on a network controlled by the device manufacturer or the supplier of the device’s operating system.”

Pornhub says this could be used to prevent underage users from accessing age-restricted content without requiring websites to verify ages themselves. “To come to fruition, such an approach requires the cooperation of manufacturers and operating-system providers,” Pornhub wrote.

The age-verification question could eventually go to the Supreme Court. “This opinion will be appealed to the Supreme Court, alongside other cases over statutes imposing mandatory age authentication,” Santa Clara University law professor Eric Goldman wrote.

The 5th Circuit panel majority’s analysis relied on Ginsberg v. New York, a 1968 Supreme Court ruling about the sale of “girlie” magazines to a 16-year-old at a lunch counter. Goldman criticized the 5th Circuit for relying on Ginsberg “instead of the squarely on-point 1997 Reno v. ACLU and 2004 Ashcroft v. ACLU opinions, both of which dealt with the Internet.” Goldman argued that decisions upholding laws like the Texas one could open the door to “rampant government censorship.”

The Free Speech Coalition, an adult-industry lobby group that sued Texas over its law, said it “disagree[s] strenuously with the analysis of the Court majority. As the dissenting opinion by Judge Higginbotham makes clear, this ruling violates decades of precedent from the Supreme Court.” The group is considering its “next steps in regard to both this lawsuit and others.”

Deadly morel mushroom outbreak highlights big gaps in fungi knowledge

This fungi’s not fun, guys —

Prized morels are unpredictably and puzzlingly deadly, outbreak report shows.

Mature morel mushrooms in a greenhouse at an agriculture garden in Zhenbeibu Town of Xixia District of Yinchuan, northwest China’s Ningxia Hui Autonomous Region.

True morel mushrooms are widely considered a prized delicacy, often pricey and generally assumed to be safe to eat. But these spongey, earthy forest gems have a mysterious dark side—one that, on occasion, can turn deadly, highlighting just how little we know about morels and fungi generally.

On Thursday, Montana health officials published an outbreak analysis of poisonings linked to the honeycombed fungi in March and April of last year. The outbreak sickened 51 people who ate at the same restaurant, sending four to the emergency department. Three were hospitalized and two died. Though the health officials didn’t name the restaurant in their report, state and local health departments at the time identified it as Dave’s Sushi in Bozeman. The report is published in the Centers for Disease Control and Prevention’s Morbidity and Mortality Weekly Report.

The outbreak coincided with the sushi restaurant introducing a new item: a “special sushi roll” that contained salmon and morel mushrooms. The morels were a new menu ingredient for Dave’s. They were served two ways: On April 8, the morels were served partially cooked, with a hot, boiled sauce poured over the raw mushrooms and left to marinate for 75 minutes; and on April 17, they were served uncooked and cold-marinated.

The mystery poison worked fast. Symptoms began, on average, about an hour after eating at the restaurant. And it was brutal. “Vomiting and diarrhea were reportedly profuse,” the health officials wrote, “and hospitalized patients had clinical evidence of dehydration. The two patients who died had chronic underlying medical conditions that might have affected their ability to tolerate massive fluid loss.”

Of the 51 sickened, 46 were restaurant patrons and five were employees. Among them, 45 (88 percent) recalled eating morels. While that’s a high percentage for such an outbreak investigation, certainly enough to make the morels the prime suspect, the health officials went further. With support from the CDC, they set up a matched case-control study, having people complete a detailed questionnaire with demographic information, food items they ate at the restaurant, and symptoms.

Mysterious poison

Forty-one of the poisoned people filled out the questionnaire, as did 22 control patrons who ate at the restaurant but did not report subsequent illness. The analysis indicated that the odds of recalling eating the special sushi roll were nearly 16 times higher among the poisoned patrons than among the controls. The odds of reporting any morel consumption were nearly 11 times higher than controls.

The detailed consumption data also allowed the health officials to model a dose response, which suggested that with each additional piece of the special roll a person recalled eating, their odds of sickness increased nearly threefold compared with people who reported eating none. Those who ate four or more pieces of the roll had odds nearly 22.5 times higher. A small analysis focusing on the five employees sickened, which was not included in the published study but was noted by the Food and Drug Administration, echoed the dose-response finding, indicating that sickness was linked with larger amounts of morel consumption.
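
For readers unfamiliar with the odds-ratio framing used above, here is a minimal worked example. The counts below are made up (the report’s raw two-by-two table is not reproduced here); they are chosen only so the totals match the 41 cases and 22 controls who completed the questionnaire and so the result lands near the reported figure.

```python
# Hypothetical 2x2 table: exposure = ate the special roll, outcome = got sick.
cases_exposed, cases_unexposed = 35, 6        # 41 sickened respondents
controls_exposed, controls_unexposed = 6, 16  # 22 well respondents

# Odds of exposure among cases divided by odds of exposure among controls.
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"odds ratio = {odds_ratio:.1f}")  # about 15.6 with these illustrative counts
```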

When the officials broke down the analysis by people who ate at the restaurant on April 17, when the morels were served uncooked, and those who ate at the restaurant on April 8, when the mushrooms were slightly cooked, the cooking method seemed to matter. People who ate the uncooked rather than the slightly cooked mushrooms had much higher odds of sickness.

This all strongly points to the morels being responsible. At the time, the state and local health officials engaged the FDA, as well as the CDC, to help tackle the outbreak investigation. But the FDA reported that “samples of morel mushrooms collected from the restaurant were screened for pesticides, heavy metals, toxins, and pathogens. No significant findings were identified.” In addition, the state and local health officials noted that DNA sequencing identified the morels used by the restaurant as Morchella sextelata, a species of true morel. This rules out the possibility that the mushrooms were look-alike morels, called “false morels,” which are known to contain a toxin called gyromitrin.

The health officials and the FDA tracked down the distributor of the mushrooms, finding they were cultivated and imported fresh from China. Records indicated that 12 other locations in California also received batches of the mushrooms. Six of those facilities responded to inquiries from the California health department and the FDA, and all six reported no illnesses. They also all reported cooking the morels or at least thoroughly heating them.

US government agencies demand fixable ice cream machines

I scream, you scream, we all scream for 1201(c)3 exemptions —

McFlurries are a notable part of petition for commercial and industrial repairs.

Taylor’s C709 Soft Serve Freezer isn’t so much mechanically complicated as it is a software and diagnostic trap for anyone without authorized access.

Many devices have been made difficult or financially nonviable to repair, whether by design or because of a lack of parts, manuals, or specialty tools. Machines that make ice cream, however, seem to have a special place in the hearts of lawmakers. Those machines are often broken and locked down for only the most profitable repairs.

The Federal Trade Commission and the antitrust division of the Department of Justice have asked the US Copyright Office (PDF) to exempt “commercial soft serve machines” from the anti-circumvention rules of Section 1201 of the Digital Millennium Copyright Act (DMCA). The agencies also asked for exemptions covering proprietary diagnostic kits, programmable logic controllers, and enterprise IT devices.

“In each case, an exemption would give users more choices for third-party and self-repair and would likely lead to cost savings and a better return on investment in commercial and industrial equipment,” the joint comment states. The exemptions would also increase competition in the repair market and prevent companies from using the DMCA to enforce monopolies on repair, according to the comment.

The joint comment builds upon a petition filed by repair vendor and advocate iFixit and interest group Public Knowledge, which advocated for broad reforms while keeping a relatable, ingestible example at its center. McDonald’s soft serve ice cream machines, which are famously frequently broken, are supplied by industrial vendor Taylor. Taylor’s C709 Soft Serve Freezer requires lengthy, finicky warm-up and cleaning cycles, produces obtuse error codes, and, perhaps not coincidentally, costs $350 per 15 minutes of service for a Taylor technician to fix. iFixit tore down such a machine, confirming the lengthy process between plugging in and soft serving.

After one company built a Raspberry Pi-powered device, the Kytch, that could provide better diagnostics and insights, Taylor moved to ban franchisees from installing the device, then offered up its own competing product. Kytch has sued Taylor for $900 million in a case that is still pending.

Beyond ice cream, the petitions to the Copyright Office would provide broader exemptions for industrial and commercial repairs that require some kind of workaround, decryption, or other software tinkering. Going past technological protection measures (TPMs) was made illegal by the 1998 DMCA, which was put in place largely because of the concerns of media firms facing what they considered rampant piracy.

Every three years, the Copyright Office allows for petitions to exempt certain exceptions to DMCA violations (and renew prior exemptions). Repair advocates have won exemptions for farm equipment repair, video game consoles, cars, and certain medical gear. The exemption is often granted for device fixing if a repair person can work past its locks, but not for the distribution of tools that would make such a repair far easier. The esoteric nature of such “release valve” offerings has led groups like the EFF to push for the DMCA’s abolishment.

DMCA exemptions occur on a parallel track to state right-to-repair bills and broader federal action. President Biden issued an executive order that included a push for repair reforms. The FTC has issued studies that call out unnecessary repair restrictions and has taken action against firms like Harley-Davidson, Westinghouse, and grill maker Weber for tying warranties to an authorized repair service.

Disclosure: Kevin Purdy previously worked for iFixit. He has no financial ties to the company.

GM uses AI tool to determine which truck stops should get EV chargers

help me choose —

Forget LLM chatbots; this seems like an actually useful implementation of AI.

A 2024 Chevrolet Silverado EV WT at a pull-through charging stall located at a flagship Pilot and Flying J travel center, as part of the new coast-to-coast fast charging network. (Credit: General Motors)

It’s understandable if you’re starting to experience AI fatigue; it feels like every week, there’s another announcement of some company boasting about how an LLM chatbot will revolutionize everything—usually followed in short succession by news reports of how terribly wrong it’s all gone. But it turns out that not every use of AI by an automaker is a public relations disaster. As it happens, General Motors has been using machine learning to help guide business decisions regarding where to install new DC fast chargers for electric vehicles.

GM’s transformation into an EV-heavy company has not gone entirely smoothly thus far, but in 2022, it revealed that, together with the Pilot company, it was planning to deploy a network of 2,000 DC fast chargers at Flying J and Pilot travel centers around the US. But how to decide which locations?

“I think that the overarching theme is we’re really looking for opportunities to simplify the lives of our customers, our employees, our dealers, and our suppliers,” explained Jon Francis, GM’s chief data and analytics officer. “And we see the positive effects of AI at scale, whether that’s in the manufacturing part of the business, engineering, supply chain, customer experience—it really runs threads through all of those.

“Obviously, the place where it shows up most directly is certainly in autonomous, and that’s an important use case for us, but actually [on a] day-to-day basis, AI is improving a lot of systems and workflows within the organization,” he told Ars.

“There’s a lot of companies—and not to name names, but there’s some chasing of shiny objects, and I think there are a lot of cool, sexy things that you can do with AI, but for GM, we’re really looking for solutions that are going to drive the business in a meaningful way,” Francis said.

GM wants to build out chargers at about 200 Flying J and Pilot travel centers by the end of 2024, but narrowing down exactly which locations to focus on was the big question. After all, there are more than 750 spread out across 44 US states and six Canadian provinces.

Obviously, traffic is a big concern—each DC fast charger costs anywhere from $100,000 to $300,000, and that’s not counting any costs associated with beefing up the electrical infrastructure to power them, nor the various permitting processes that tend to delay everything. Sticking a bank of chargers at a travel center that’s rarely visited isn’t the best use of resources, but neither is deploying them in an area that’s already replete with other fast chargers.

Much of the data GM showed me was confidential, but this screenshot should give you an idea of how the various datasets combine. (Credit: General Motors)

Which is where the ML came in. GM’s data scientists built tools that aggregate different GIS datasets together. For example, it has a geographic database of already deployed DC chargers around the country—the US Department of Energy maintains such a resource—overlaid with traffic data and then the locations of the travel centers. The result is a map with potential locations, which GM’s team then uses to narrow down the exact sites it wants to choose.
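
GM has not published the model itself, but the overlay idea can be sketched in a few lines: score each travel center by its traffic and by its distance to the nearest existing fast charger, then rank the candidates. Everything below (coordinates, traffic figures, the 40 km spacing threshold, and the scoring weights) is an invented assumption for illustration, not GM’s tool.

```python
from math import asin, cos, radians, sin, sqrt


def km_between(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


# Invented example data: travel centers with daily-traffic estimates,
# plus existing DC fast-charger locations from a public database.
travel_centers = [
    {"name": "Center A", "loc": (39.10, -84.51), "daily_traffic": 12000},
    {"name": "Center B", "loc": (35.15, -90.05), "daily_traffic": 4000},
]
existing_chargers = [(39.12, -84.50), (36.16, -86.78)]


def site_score(center, chargers, min_gap_km=40):
    """Favor high-traffic sites that are far from any existing fast charger."""
    nearest = min(km_between(center["loc"], c) for c in chargers)
    coverage_penalty = 0.0 if nearest >= min_gap_km else 1.0 - nearest / min_gap_km
    return center["daily_traffic"] * (1.0 - 0.8 * coverage_penalty)


ranked = sorted(travel_centers, key=lambda c: site_score(c, existing_chargers), reverse=True)
for center in ranked:
    print(center["name"], round(site_score(center, existing_chargers)))
```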

It’s true that if you had access to all those datasets, you could probably do all that manually. But we’re talking datasets with, in some cases, billions of data points. A few years ago, GM’s analysts could have done that at a city level without spending years on the project, but doing it at a nationwide scale is the kind of task that requires the cloud platforms and distributed computing clusters that are only now becoming commonplace.

As a result, GM was able to deploy the first 25 sites last year, with 100 charging stalls across the 25. By the end of this year, it told Ars it should have around 200 locations operational.

That certainly seems more useful to me than just another chatbot.

Blue cheese shows off new colors, but the taste largely remains the same

Am I blue? —

Future varieties could be yellow-green, reddish-brown-pink, or light blue.

Scientists at the University of Nottingham have discovered how to create different colors of blue cheese. (Credit: University of Nottingham)

Gourmands are well aware of the many varieties of blue cheese, recognizable by the blue-green veins that ripple through them. Different kinds of blue cheese have distinctive flavor profiles: they can be mild or strong, sweet or salty, for example. Soon we might be able to buy blue cheeses that belie the name and sport veins of different colors: perhaps yellow-green, reddish-brown-pink, or lighter/darker shades of blue, according to a recent paper published in the journal Science of Food.

“We’ve been interested in cheese fungi for over 10 years, and traditionally when you develop mould-ripened cheeses, you get blue cheeses such as Stilton, Roquefort, and Gorgonzola, which use fixed strains of fungi that are blue-green in color,” said co-author Paul Dyer of the University of Nottingham of this latest research. “We wanted to see if we could develop new strains with new flavors and appearances.”

Blue cheese has been around for a very long time. Legend has it that a young boy left his bread and ewe’s milk cheese in a nearby cave to pursue a lovely young lady he’d spotted in the distance. Months later, he came back to the cave and found it had molded into Roquefort. It’s a fanciful tale, but scholars think the basic idea is sound: people used to store cheeses in caves because their temperature and moisture levels were especially hospitable to harmless molds. That was bolstered by a 2021 analysis of paleofeces that found evidence that Iron Age salt miners in Hallstatt (Austria) between 800 and 400 BCE were already eating blue cheese and quaffing beer.

Color derivatives.

The manufacturing process for blue cheese is largely the same as for any cheese, with a few crucial additional steps. It requires cultivation of Penicillium roqueforti, a mold that thrives on exposure to oxygen. The P. roqueforti is added to the cheese, sometimes before curds form and sometimes mixed in with curds after they form. The cheese is then aged in a temperature-controlled environment. Lactic acid bacteria trigger the initial fermentation but eventually die off, and the P. roqueforti take over as secondary fermenters. Piercing the curds forms air tunnels in the cheese, and the mold grows along those surfaces to produce blue cheese’s signature veining.

Once scientists published the complete genome for P. roqueforti, it opened up opportunities for studying this blue cheese fungus, per Dyer et al. Different strains “can have different colony cultures and textures, with commercial strains being sold partly on the basis of color development,” they wrote. This coloration comes from pigments in the coatings of the spores that form as the colony grows. Dyer and his co-authors set out to determine the genetic basis of this pigment formation in the hopes of producing altered strains with different spore coat colors.

The team identified a specific biochemical pathway, beginning with a white color that gradually goes from yellow-green, red-brown-pink, dark brown, light blue, and ultimately that iconic dark blue-green. They used targeted gene deletion to block pigment biosynthesis genes at various points in this pathway. This altered the spore color, providing a proof of principle without adversely affecting the production of flavor volatiles and levels of secondary metabolites called mycotoxins. (The latter are present in low enough concentrations in blue cheese so as not to be a health risk for humans, and the team wanted to ensure those concentrations remained low.)

(left) Spectrum of color strains produced in Penicillium roqueforti. (right) Cross sections of cheeses made with the original (dark blue-green) or new color (red-brown, bright green, white albino) strains of the fungus. (Credit: University of Nottingham)

However, food industry regulations prohibit gene-deletion fungal strains for commercial cheese production. So Dyer et al. used UV mutagenesis—essentially “inducing sexual reproduction in the fungus,” per Dyer—to produce non-GMO mutant strains of the fungi to create “blue” cheeses of different colors, without increasing mycotoxin levels or impacting the volatile compounds responsible for flavor.

“The interesting part was that once we went on to make some cheese, we then did some taste trials with volunteers from across the wider university, and we found that when people were trying the lighter colored strains they thought they tasted more mild,” said Dyer. “Whereas they thought the darker strain had a more intense flavor. Similarly, with the more reddish-brown and a light green one, people thought they had a fruity, tangy element to them—whereas, according to the lab instruments, they were very similar in flavor. This shows that people do perceive taste not only from what they taste but also by what they see.”

Dyer’s team is hoping to work with local cheese makers in Nottingham and Scotland, setting up a spinoff company in hopes of commercializing the mutant strains. And there could be other modifications on the horizon. “Producers could almost dial up their list of desirable characteristics—more or less color, faster or slower growth rate, acidity differences,” Donald Glover of the University of Queensland in Australia, who was not involved in the research, told New Scientist.

Science of Food, 2024. DOI: 10.1038/s41538-023-00244-9  (About DOIs).

Unreleased preview of Microsoft’s OS/2 2.0 is a glimpse down a road not taken

OS/2 the future —

Microsoft’s involvement in IBM’s OS/2 project ended before v2.0 was released.

This big, weathered box contains an oddball piece of PC history: one of the last builds of IBM’s OS/2 that Microsoft worked on before pivoting all of its attention to Windows.

In the annals of PC history, IBM’s OS/2 represents a road not taken. Developed in the waning days of IBM’s partnership with Microsoft—the same partnership that had given us a decade or so of MS-DOS and PC-DOS—OS/2 was meant to improve on areas where DOS was falling short on modern systems. Better memory management, multitasking capabilities, and a usable GUI were all among the features introduced in version 1.x.

But Microsoft was frustrated with some of IBM’s goals and demands, and the company continued to develop an operating system called Windows on its own. Where IBM wanted OS/2 to be used mainly to boost IBM-made PCs and designed it around the limitations of Intel’s 80286 CPU, Windows was being created with the booming market for PC-compatible clones in mind. Windows 1.x and 2.x failed to make much of a dent, but 1990’s Windows 3.0 was a hit, and it came preinstalled on many consumer PCs; Microsoft and IBM broke off their partnership shortly afterward, making OS/2 version 1.2 the last one publicly released and sold with Microsoft’s involvement.

But Microsoft had done a lot of work on version 2.0 of OS/2 at the same time as it was developing Windows. It was far enough along that preview screenshots appeared in PC Magazine, and early builds were shipped to developers who could pay for them, but it was never formally released to the public.

But software archaeologist Neozeed recently published a stable internal preview of Microsoft’s OS/2 2.0 to the Internet Archive, along with working virtual machine disk images for VMware and 86Box. The preview, bought by Brian Ledbetter on eBay for $650 plus $15.26 in shipping, dates to July 1990 and would have cost developers who wanted it a whopping $2,600. A lot to pay for a version of an operating system that would never see the light of day!

The Microsoft-developed build of OS/2 2.0 bears only a passing resemblance to the 32-bit version of OS/2 2.0 that IBM finally shipped on its own in April 1992. Neozeed has published a more thorough exploration of Microsoft’s version, digging around in its guts and getting some early Windows software running (the ability to run DOS and Windows apps was simultaneously a selling point of OS/2 and a reason for developers not to create OS/2-specific apps, one of the things that helped to doom OS/2 in the end). It’s a fascinating detail from a turning point in the history of the PC as we know it today, but as a usable desktop operating system, it leaves something to be desired.

All 26 disks of the OS/2 2.0 preview, plus hefty documentation manuals. There are some things about the ’90s I don’t miss.

This unreleased Microsoft-developed OS/2 build isn’t the first piece of Microsoft-related software history that has been excavated in the last few months. In January, an Internet Archive user discovered and uploaded an early build of 86-DOS, the software that Microsoft bought and turned into MS-DOS/PC-DOS for the original IBM PC 5150. Funnily enough, these unreleased previews serve as bookends for IBM and Microsoft’s often-contentious partnership.

As part of the “divorce settlement” between Microsoft and IBM, IBM would take over the development and maintenance of OS/2 1.x and 2.x while Microsoft continued to work on a more advanced far-future version 3.0 of OS/2. This operating system was never released as OS/2, but it would eventually become Windows NT, Microsoft’s more stable business-centric version of Windows. Windows NT merged with the consumer versions of Windows in the early 2000s with Windows 2000 and Windows XP, and those versions gradually evolved into Windows as we know it today.

It has been 18 years since IBM formally discontinued its last release of OS/2, but as so often happens in computing, the software has found a way to live on. ArcaOS is a semi-modernized, intermittently updated branch of OS/2 that runs on modern hardware while still supporting MS-DOS and 16-bit Windows apps.

Meta sues “brazenly disloyal” former exec over stolen confidential docs

A recently unsealed court filing has revealed that Meta has sued a former senior employee for “brazenly disloyal and dishonest conduct” while leaving Meta for an AI data startup called Omniva that The Information has described as “mysterious.”

According to Meta, its former vice president of infrastructure, Dipinder Singh Khurana (also known as T.S.), allegedly used his access to “confidential, non-public, and highly sensitive” information to steal more than 100 internal documents in a rushed scheme to poach Meta employees and borrow Meta’s business plans to speed up Omniva’s negotiations with key Meta suppliers.

Meta believes that Omniva—which Data Center Dynamics (DCD) reported recently “pivoted from crypto to AI cloud”—is “seeking to provide AI cloud computing services at scale, including by designing and constructing data centers.” But it was held back by a “lack of data center expertise at the top,” DCD reported.

The Information reported that Omniva began hiring Meta employees to fill the gaps in this expertise, including wooing Khurana away from Meta.

Last year, Khurana notified Meta that he was leaving on May 15, and that’s when Meta first observed Khurana’s alleged “utter disregard for his contractual and legal obligations to Meta—including his confidentiality obligations to Meta set forth in the Confidential Information and Invention Assignment Agreement that Khurana signed when joining Meta.”

A Meta investigation found that during Khurana’s last two weeks at the company, he allegedly uploaded confidential Meta documents—including “information about Meta’s ‘Top Talent,’ performance information for hundreds of Meta employees, and detailed employee compensation information”—on Meta’s network to a Dropbox folder labeled with his new employer’s name.

“Khurana also uploaded several of Meta’s proprietary, highly sensitive, confidential, and non-public contracts with business partners who supply Meta with crucial components for its data centers,” Meta alleged. “And other documents followed.”

In addition to pulling documents, Khurana also allegedly sent “urgent” requests to subordinates for confidential information on a key supplier, including Meta’s pricing agreement “for certain computing hardware.”

“Unaware of Khurana’s plans, the employee provided Khurana with, among other things, Meta’s pricing-form agreement with that supplier for the computing hardware and the supplier’s Meta-specific preliminary pricing for a particular chip,” Meta alleged.

Some of these documents were “expressly marked confidential,” Meta alleged. Those include a three-year business plan and PowerPoints regarding “Meta’s future ‘roadmap’ with a key supplier” and “Meta’s 2022 redesign of its global-supply-chain group” that Meta alleged “would directly aid Khurana in building his own efficient and effective supply-chain organization” and afford a path for Omniva to bypass “years of investment.” Khurana also allegedly “uploaded a PowerPoint discussing Meta’s use of GPUs for artificial intelligence.”

Meta was apparently tipped off to this alleged betrayal when Khurana used his Meta email and network access to complete a writing assignment for Omniva as part of his hiring process. For this writing assignment, Khurana “disclosed non-public information about Meta’s relationship with certain suppliers that it uses for its data centers” when asked to “explain how he would help his potential new employer develop the supply chain for a company building data centers using specific technologies.”

In a seeming attempt to cover up the alleged theft of Meta documents, Khurana apparently “attempted to scrub” one document “of its references to Meta,” as well as removing a label marking it “CONFIDENTIAL—FOR INTERNAL USE ONLY.” But when replacing “Meta” with “X,” Khurana allegedly missed the term “Meta” in “at least five locations.”

“Khurana took such action to try and benefit himself or his new employer, including to help ensure that Khurana would continue to work at his new employer, continue to receive significant compensation from his new employer, and/or to enable Khurana to take shortcuts in building his supply-chain team at his new employer and/or helping to build his new employer’s business,” Meta alleged.

Ars could not immediately reach Khurana for comment. Meta noted that he has repeatedly denied breaching his contract or initiating contact with Meta employees who later joined Omniva. He also allegedly refused to sign a termination agreement that reiterates his confidentiality obligations.

Google’s new gaming AI aims past “superhuman opponent” and at “obedient partner”

Even hunt-and-fetch quests are better with a little AI help.

At this point in the progression of machine-learning AI, we’re accustomed to specially trained agents that can utterly dominate everything from Atari games to complex board games like Go. But what if an AI agent could be trained not just to play a specific game but also to interact with any generic 3D environment? And what if that AI was focused not only on brute-force winning but instead on responding to natural language commands in that gaming environment?

Those are the kinds of questions animating Google’s DeepMind research group in creating SIMA, a “Scalable, Instructable, Multiworld Agent” that “isn’t trained to win, it’s trained to do what it’s told,” as research engineer Tim Harley put it in a presentation attended by Ars Technica. “And not just in one game, but… across a variety of different games all at once.”

Harley stresses that SIMA is still “very much a research project,” and the results achieved in the project’s initial tech report show there’s a long way to go before SIMA starts to approach human-level listening capabilities. Still, Harley said he hopes that SIMA can eventually provide the basis for AI agents that players can instruct and talk to in cooperative gameplay situations—think less “superhuman opponent” and more “believable partner.”

“This work isn’t about achieving high game scores,” as Google puts it in a blog post announcing its research. “Learning to play even one video game is a technical feat for an AI system, but learning to follow instructions in a variety of game settings could unlock more helpful AI agents for any environment.”

Learning how to learn

Google trained SIMA on nine very different open-world games in an attempt to create a generalizable AI agent.

To train SIMA, the DeepMind team focused on three-dimensional games and test environments controlled either from a first-person perspective or an over-the-shoulder third-person perspective. The nine games in its test suite, which were provided by Google’s developer partners, all prioritize “open-ended interactions” and eschew “extreme violence” while providing a wide range of different environments and interactions, from “outer space exploration” to “wacky goat mayhem.”

In an effort to make SIMA as generalizable as possible, the agent isn’t given any privileged access to a game’s internal data or control APIs. The system takes nothing but on-screen pixels as its input and provides nothing but keyboard and mouse controls as its output, mimicking “the [model] humans have been using [to play video games] for 50 years,” as the researchers put it. The team also designed the agent to work with games running in real time (i.e., at 30 frames per second) rather than slowing down the simulation for extra processing time like some other interactive machine-learning projects.
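
Put another way, the agent’s contract with any game is just “pixels and an instruction in, keyboard and mouse events out,” once per frame. The sketch below is only an assumption about how that contract could look in code; DeepMind has not published SIMA’s implementation, and every class and field name here is invented.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Observation:
    """Everything the agent is allowed to see for one ~1/30 s tick."""
    rgb_frame: bytes   # raw on-screen pixels; no game-internal state or API access
    instruction: str   # natural-language command, e.g., "chop down the tree"


@dataclass
class Action:
    """Everything the agent is allowed to emit: the same controls a human has."""
    keys_down: List[str] = field(default_factory=list)    # e.g., ["w"]
    mouse_dx: float = 0.0                                  # relative mouse motion
    mouse_dy: float = 0.0
    mouse_buttons: List[str] = field(default_factory=list)


class GenericGameAgent:
    """Interface sketch only; the real model behind it is not public."""

    def act(self, obs: Observation) -> Action:
        # A trained policy would map pixels plus text to controls here.
        # This stub just idles, but it shows why no game-specific integration
        # is needed to drop such an agent into a new environment.
        return Action()
```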

Animated samples of SIMA responding to basic commands across very different gaming environments.

While these restrictions increase the difficulty of SIMA’s tasks, they also mean the agent can be integrated into a new game or environment “off the shelf” with minimal setup and without any specific training regarding the “ground truth” of a game world. It also makes it relatively easy to test whether things SIMA has learned from training on previous games can “transfer” over to previously unseen games, which could be a key step to getting at artificial general intelligence.

For training data, SIMA uses video of human gameplay (and associated time-coded inputs) on the provided games, annotated with natural language descriptions of what’s happening in the footage. These clips are focused on “instructions that can be completed in less than approximately 10 seconds” to avoid the complexity that can develop with “the breadth of possible instructions over long timescales,” as the researchers put it in their tech report. Integration with pre-trained models like SPARC and Phenaki also helps the SIMA model avoid having to learn how to interpret language and visual data from scratch.
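
A single training example, as described above, can be pictured as a short clip paired with the human player’s time-coded inputs and a language annotation. Again, this is a guess at a plausible shape for such a record (the field names are invented), not the project’s actual dataset schema.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AnnotatedClip:
    """One hypothetical training record: roughly 10 seconds of gameplay."""
    frames: List[bytes]              # video frames captured at 30 fps
    inputs: List[Tuple[float, str]]  # (timestamp in seconds, key/mouse event)
    description: str                 # annotator's text, e.g., "open the map"
```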

Google’s Gemini AI now refuses to answer election questions

I also refuse to answer political questions —

Gemini is opting out of election-related responses entirely for 2024.

The Google Gemini logo. (Credit: Google)

Like many of us, Google Gemini is tired of politics. Reuters reports that Google has restricted the chatbot from answering questions about the upcoming US election, and instead, it will direct users to Google Search.

Google had planned to do this back when the Gemini chatbot was still called “Bard.” In December, the company said, “Beginning early next year, in preparation for the 2024 elections and out of an abundance of caution on such an important topic, we’ll restrict the types of election-related queries for which Bard and [Google Search’s Bard integration] will return responses.” Tuesday, Google confirmed to Reuters that those restrictions have kicked in. Election queries now tend to come back with the refusal: “I’m still learning how to answer this question. In the meantime, try Google Search.”

Google’s original plan in December was likely to disable election info so Gemini could avoid any political firestorms. Boy, did that not work out! When asked to generate images of people, Gemini quietly tacked diversity requirements onto the image request; this practice led to offensive and historically inaccurate images along with a general refusal to generate images of white people. Last month that earned Google wall-to-wall coverage in conservative news spheres along the lines of “Google’s woke AI hates white people!” Google CEO Sundar Pichai called the AI’s “biased” responses “completely unacceptable,” and for now, creating images of people is disabled while Google works on it.

The start of the first round of US elections in the AI era has already led to new forms of disinformation, and Google presumably wants to opt out of all of it.
