Author name: Mike M.

thousands-of-linux-systems-infected-by-stealthy-malware-since-2021

Thousands of Linux systems infected by stealthy malware since 2021


The ability to remain installed and undetected makes Perfctl hard to fight.

Real Java Script code developing screen. Programing workflow abstract algorithm concept. Closeup of Java Script and HTML code.

Thousands of machines running Linux have been infected by a malware strain that’s notable for its stealth, the number of misconfigurations it can exploit, and the breadth of malicious activities it can perform, researchers reported Thursday.

The malware has been circulating since at least 2021. It gets installed by exploiting more than 20,000 common misconfigurations, a capability that may make millions of machines connected to the Internet potential targets, researchers from Aqua Security said. It can also exploit CVE-2023-33246, a vulnerability with a severity rating of 10 out of 10 that was patched last year in Apache RocketMQ, a messaging and streaming platform that’s found on many Linux machines.

Perfctl storm

The researchers are calling the malware Perfctl, the name of a malicious component that surreptitiously mines cryptocurrency. The unknown developers of the malware gave the process a name that combines the perf Linux monitoring tool and ctl, an abbreviation commonly used with command line tools. A signature characteristic of Perfctl is its use of process and file names that are identical or similar to those commonly found in Linux environments. The naming convention is one of the many ways the malware attempts to escape notice of infected users.

Perfctl further cloaks itself using a host of other tricks. One is that it installs many of its components as rootkits, a special class of malware that hides its presence from the operating system and administrative tools. Other stealth mechanisms include:

  • Stopping activities that are easy to detect when a new user logs in
  • Using a Unix socket over TOR for external communications
  • Deleting its installation binary after execution and running as a background service thereafter
  • Manipulating the Linux process pcap_loop through a technique known as hooking to prevent admin tools from recording the malicious traffic
  • Suppressing mesg errors to avoid any visible warnings during execution.

The malware is designed to ensure persistence, meaning the ability to remain on the infected machine after reboots or attempts to delete core components. Two such techniques are (1) modifying the ~/.profile script, which sets up the environment during user login so the malware loads ahead of legitimate workloads expected to run on the server and (2) copying itself from memory to multiple disk locations. The hooking of pcap_loop can also provide persistence by allowing malicious activities to continue even after primary payloads are detected and removed.

Besides using the machine resources to mine cryptocurrency, Perfctl also turns the machine into a profit-making proxy that paying customers use to relay their Internet traffic. Aqua Security researchers have also observed the malware serving as a backdoor to install other families of malware.

Assaf Morag, Aqua Security’s threat intelligence director, wrote in an email:

Perfctl malware stands out as a significant threat due to its design, which enables it to evade detection while maintaining persistence on infected systems. This combination poses a challenge for defenders and indeed the malware has been linked to a growing number of reports and discussions across various forums, highlighting the distress and frustration of users who find themselves infected.

Perfctl uses a rootkit and changes some of the system utilities to hide the activity of the cryptominer and proxy-jacking software. It blends seamlessly into its environment with seemingly legitimate names. Additionally, Perfctl’s architecture enables it to perform a range of malicious activities, from data exfiltration to the deployment of additional payloads. Its versatility means that it can be leveraged for various malicious purposes, making it particularly dangerous for organizations and individuals alike.

“The malware always manages to restart”

While Perfctl and some of the malware it installs are detected by some antivirus software, Aqua Security researchers were unable to find any research reports on the malware. They were, however, able to find a wealth of threads on developer-related sites that discussed infections consistent with it.

This Reddit comment posted to the CentOS subreddit is typical. An admin noticed that two servers were infected with a cryptocurrency hijacker with the names perfcc and perfctl. The admin wanted help investigating the cause.

“I only became aware of the malware because my monitoring setup alerted me to 100% CPU utilization,” the admin wrote in the April 2023 post. “However, the process would stop immediately when I logged in via SSH or console. As soon as I logged out, the malware would resume running within a few seconds or minutes.” The admin continued:

I have attempted to remove the malware by following the steps outlined in other forums, but to no avail. The malware always manages to restart once I log out. I have also searched the entire system for the string “perfcc” and found the files listed below. However, removing them did not resolve the issue. as it keep respawn on each time rebooted.

Other discussions include: Reddit, Stack Overflow (Spanish), forobeta (Spanish),  brainycp (Russian), natnetwork (Indonesian), Proxmox (Deutsch), Camel2243 (Chinese), svrforum (Korean), exabytes, virtualmin, serverfault and many others.

After exploiting a vulnerability or misconfiguration, the exploit code downloads the main payload from a server, which, in most cases, has been hacked by the attacker and converted into a channel for distributing the malware anonymously. An attack that targeted the researchers’ honeypot named the payload httpd. Once executed, the file copies itself from memory to a new location in the /tmp directory, runs it, and then terminates the original process and deletes the downloaded binary.

Once moved to the /tmp directory, the file executes under a different name, which mimics the name of a known Linux process. The file hosted on the honeypot was named sh. From there, the file establishes a local command-and-control process and attempts to gain root system rights by exploiting CVE-2021-4043, a privilege-escalation vulnerability that was patched in 2021 in Gpac, a widely used open source multimedia framework.

The malware goes on to copy itself from memory to a handful of other disk locations, once again using names that appear as routine system files. The malware then drops a rootkit, a host of popular Linux utilities that have been modified to serve as rootkits, and the miner. In some cases, the malware also installs software for “proxy-jacking,” the term for surreptitiously routing traffic through the infected machine so the true origin of the data isn’t revealed.

The researchers continued:

As part of its command-and-control operation, the malware opens a Unix socket, creates two directories under the /tmp directory, and stores data there that influences its operation. This data includes host events, locations of the copies of itself, process names, communication logs, tokens, and additional log information. Additionally, the malware uses environment variables to store data that further affects its execution and behavior.

All the binaries are packed, stripped, and encrypted, indicating significant efforts to bypass defense mechanisms and hinder reverse engineering attempts. The malware also uses advanced evasion techniques, such as suspending its activity when it detects a new user in the btmp or utmp files and terminating any competing malware to maintain control over the infected system.

The diagram below captures the attack flow:

Credit: Aqua Security

Credit: Aqua Security

The following image captures some of the names given to the malicious files that are installed:

Credit: Aqua Security

Credit: Aqua Security

By extrapolating data such as the number of Linux servers connected to the Internet across various services and applications, as tracked by services such as Shodan and Censys, the researchers estimate that the number of machines infected by Perfctl is measured in the thousands. They say that the pool of vulnerable machines—meaning those that have yet to install the patch for CVE-2023-33246 or contain a vulnerable misconfiguration—is in the millions. The researchers have yet to measure the amount of cryptocurrency the malicious miners have generated.

People who want to determine if their device has been targeted or infected by Perfctl should look for indicators of compromise included in Thursday’s post. They should also be on the lookout for unusual spikes in CPU usage or sudden system slowdowns, particularly if they occur during idle times. To prevent infections, it’s important that the patch for CVE-2023-33246 be installed and that the the misconfigurations identified by Aqua Security be fixed. Thursday’s report provides other steps for preventing infections.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at @dangoodin on Mastodon. Contact him on Signal at DanArs.82.

Thousands of Linux systems infected by stealthy malware since 2021 Read More »

how-london’s-crystal-palace-was-built-so-quickly

How London’s Crystal Palace was built so quickly

London’s Great Exhibition of 1851 attracted some 6 million people eager to experience more than 14,000 exhibitors showcasing 19th-century marvels of technology and engineering. The event took place in the Crystal Palace, a 990,000-square-foot building of cast iron and plate glass originally located in Hyde Park. And it was built in an incredible 190 days. According to a recent paper published in the International Journal for the History of Engineering and Technology, one of the secrets was the use of a standardized screw thread, first proposed 10 years before its construction, although the thread did not officially become the British standard until 1905.

“During the Victorian era there was incredible innovation from workshops right across Britain that was helping to change the world,” said co-author John Gardner of Anglia Ruskin University (ARU). “In fact, progress was happening at such a rate that certain breakthroughs were perhaps never properly realized at the time, as was the case here with the Crystal Palace. Standardization in engineering is essential and commonplace in the 21st century, but its role in the construction of the Crystal Palace was a major development.”

The design competition for what would become the Crystal Palace was launched in March 1850, with a deadline four weeks later, and the actual, fully constructed building opened on May 1, 1851. The winning design, by Joseph Patterson, wasn’t chosen until quite late in the game after numerous designs had been rejected—most because they were simply too far above the £100,000 budget.

Joseph Paxton's first sketch for the Great Exhibition Building, c. 1850, using pen and ink on blotting paper

Joseph Paxton’s first sketch for the Great Exhibition Building, c. 1850, using pen and ink on blotting paper.

Joseph Paxton’s first sketch for the Great Exhibition Building, c. 1850, using pen and ink on blotting paper. Credit: Victoria and Albert Museum/CC BY-SA 3.0

Patterson’s design called for what was essentially a giant conservatory consisting of a multi-dimensional grid of 24-foot modules. The design elements included 3,300 supporting columns with four flange faces, drilled so they could be bolted to connecting and base pieces. (The hollow columns did double duty as drainage pipes for rainwater.) The design also called for diagonal bracing (aka cross bracing) for additional stability.

How London’s Crystal Palace was built so quickly Read More »

the-more-sophisticated-ai-models-get,-the-more-likely-they-are-to-lie

The more sophisticated AI models get, the more likely they are to lie


Human feedback training may incentivize providing any answer—even wrong ones.

Image of a Pinocchio doll with a long nose and a small green sprig at the end.

When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.

Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

Smooth operators

Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with performing simple math such as “how much is 20 +183.” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: They avoided answering the question.

The problem with the non-answers is that large language models were intended to be question-answering machines. For commercial companies like Open AI or Meta that were developing advanced LLMs, a question-answering machine that answered “I don’t know” more than half the time was simply a bad product. So, they got busy solving this problem.

The first thing they did was scale the models up. “Scaling up refers to two aspects of model development. One is increasing the size of the training data set, usually a collection of text from websites and books. The other is increasing the number of language parameters,” says Schellaert. When you think about an LLM as a neural network, the number of parameters can be compared to the number of synapses connecting its neurons. LLMs like GPT-3 used absurd amounts of text data, exceeding 45 terabytes, for training. The number of parameters used by GPT-3 was north of 175 billion.

But it was not enough.

Scaling up alone made the models more powerful, but they were still bad at interacting with humans—slight variations in how you phrased your prompts could lead to drastically different results. The answers often didn’t feel human-like and sometimes were downright offensive.

Developers working on LLMs wanted them to parse human questions better and make answers more accurate, more comprehensible, and consistent with generally accepted ethical standards. To try to get there, they added an additional step: supervised learning methods, such as reinforcement learning, with human feedback. This was meant primarily to reduce sensitivity to prompt variations and to provide a level of output-filtering moderation intended to curb hateful-spewing Tay chatbot-style answers.

In other words, we got busy adjusting the AIs by hand. And it backfired.

AI people pleasers

“The notorious problem with reinforcement learning is that an AI optimizes to maximize reward, but not necessarily in a good way,” Schellaert says. Some of the reinforcement learning involved human supervisors who flagged answers they were not happy with. Since it’s hard for humans to be happy with “I don’t know” as an answer, one thing this training told the AIs was that saying “I don’t know” was a bad thing. So, the AIs mostly stopped doing that. But another, more important thing human supervisors flagged was incorrect answers. And that’s where things got a bit more complicated.

AI models are not really intelligent, not in a human sense of the word. They don’t know why something is rewarded and something else is flagged; all they are doing is optimizing their performance to maximize reward and minimize red flags. When incorrect answers were flagged, getting better at giving correct answers was one way to optimize things. The problem was getting better at hiding incompetence worked just as well. Human supervisors simply didn’t flag wrong answers that appeared good and coherent enough to them.

In other words, if a human didn’t know whether an answer was correct, they wouldn’t be able to penalize wrong but convincing-sounding answers.

Schellaert’s team looked into three major families of modern LLMs: Open AI’s ChatGPT, the LLaMA series developed by Meta, and BLOOM suite made by BigScience. They found what’s called ultracrepidarianism, the tendency to give opinions on matters we know nothing about. It started to appear in the AIs as a consequence of increasing scale, but it was predictably linear, growing with the amount of training data, in all of them. Supervised feedback “had a worse, more extreme effect,” Schellaert says. The first model in the GPT family that almost completely stopped avoiding questions it didn’t have the answers to was text-davinci-003. It was also the first GPT model trained with reinforcement learning from human feedback.

The AIs lie because we told them that doing so was rewarding. One key question is when and how often do we get lied to.

Making it harder

To answer this question, Schellaert and his colleagues built a set of questions in different categories like science, geography, and math. Then, they rated those questions based on how difficult they were for humans to answer, using a scale from 1 to 100. The questions were then fed into subsequent generations of LLMs, starting from the oldest to the newest. The AIs’ answers were classified as correct, incorrect, or evasive, meaning the AI refused to answer.

The first finding was that the questions that appeared more difficult to us also proved more difficult for the AIs. The latest versions of ChatGPT gave correct answers to nearly all science-related prompts and the majority of geography-oriented questions up until they were rated roughly 70 on Schellaert’s difficulty scale. Addition was more problematic, with the frequency of correct answers falling dramatically after the difficulty rose above 40. “Even for the best models, the GPTs, the failure rate on the most difficult addition questions is over 90 percent. Ideally we would hope to see some avoidance here, right?” says Schellaert. But we didn’t see much avoidance.

Instead, in more recent versions of the AIs, the evasive “I don’t know” responses were increasingly replaced with incorrect ones. And due to supervised training used in later generations, the AIs developed the ability to sell those incorrect answers quite convincingly. Out of the three LLM families Schellaert’s team tested, BLOOM and Meta’s LLaMA have released the same versions of their models with and without supervised learning. In both cases, supervised learning resulted in the higher number of correct answers, but also in a higher number of incorrect answers and reduced avoidance. The more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.

Back to the roots

One of the last things Schellaert’s team did in their study was to check how likely people were to take the incorrect AI answers at face value. They did an online survey and asked 300 participants to evaluate multiple prompt-response pairs coming from the best performing models in each family they tested.

ChatGPT emerged as the most effective liar. The incorrect answers it gave in the science category were qualified as correct by over 19 percent of participants. It managed to fool nearly 32 percent of people in geography and over 40 percent in transforms, a task where an AI had to extract and rearrange information present in the prompt. ChatGPT was followed by Meta’s LLaMA and BLOOM.

“In the early days of LLMs, we had at least a makeshift solution to this problem. The early GPT interfaces highlighted parts of their responses that the AI wasn’t certain about. But in the race to commercialization, that feature was dropped, said Schellaert.

“There is an inherent uncertainty present in LLMs’ answers. The most likely next word in the sequence is never 100 percent likely. This uncertainty could be used in the interface and communicated to the user properly,” says Schellaert. Another thing he thinks can be done to make LLMs less deceptive is handing their responses over to separate AIs trained specifically to search for deceptions. “I’m not an expert in designing LLMs, so I can only speculate what exactly is technically and commercially viable,” he adds.

It’s going to take some time, though, before the companies that are developing general-purpose AIs do something about it, either out of their own accord or if forced by future regulations. In the meantime, Schellaert has some suggestions on how to use them effectively. “What you can do today is use AI in areas where you are an expert yourself or at least can verify the answer with a Google search afterwards. Treat it as a helping tool not as a mentor. It’s not going to be a teacher that proactively shows you where you went wrong. Quite the opposite. When you nudge it enough, it will happily go along with your faulty reasoning,” Schellaert says.

Nature, 2024.  DOI: 10.1038/s41586-024-07930-y

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

The more sophisticated AI models get, the more likely they are to lie Read More »

popular-juicebox-ev-home-chargers-to-lose-connectivity-as-owner-quits-us

Popular Juicebox EV home chargers to lose connectivity as owner quits US

Owners of the popular home EV chargers made by Juicebox are about to lose a whole lot of features. Its owner, the energy company Enel X, has just announced that it is leaving the North American market entirely as of October 11.

Enel X says its strategy will be to pursue “further growth by providing bundled offers, including private charging solutions, to its electricity customers as well as by developing public charging infrastructure in countries where it has an electricity retail business.” And since it does not have an electricity business in the US, merely a charging hardware and software one, it makes little sense to remain active here.

The company also blames high interest rates and a cooling EV market as reasons for its exit.

Enel X says Juicebox residential hardware will continue to work, so if you’ve been using one to charge at home, you can keep plugging it in. But Enel X is ending all software support—there will be no updates, and it’s removing its apps, so online functions like scheduling a charge will no longer work.

Commercial charging stations will be worse affected—according to Enel X, these “will lose functionality in the absence of software continuity.” The company also says its customer support is no longer available, effective immediately, and any questions or claims should be directed to juiceboxnorthamerica.com.

Popular Juicebox EV home chargers to lose connectivity as owner quits US Read More »

identity,-endpoint,-and-network-security-walk-into-a-bar

Identity, Endpoint, and Network Security Walk into a Bar

With a macrotrend/backdrop of platformization and convergence, the industry is exploring places where identity security, endpoint security, and network security naturally meet. This intersection is the browser.

The Browser: The Intersection of Identity, Endpoint, and Network Security

Why?

  • If we expect identity security, it must be tied to users, their permissions, authorization, and authentication.
  • If we expect endpoint security, it must be running on the endpoint or able to secure the endpoint itself.
  • If we expect network security, it must manage most (if not all) ingress and egress traffic.

The browser meets all of these requirements. It runs on the user’s endpoint, its whole purpose is to make and receive web requests, and as it’s only used by human agents, it intrinsically uses identity elements.

Secure enterprise browsing solutions can considerably improve security posture while also simplifying the technology stack. Injecting security functions in the most used application means that end users do not experience additional friction introduced by other security products. This is an appealing proposition, so we expect that the adoption of enterprise browsers will very likely increase considerably over the next few years.

So, what does it mean? As they can enforce security policies for users accessing web resources, secure enterprise browsing solutions can replace clunkier secure access solutions (those that require routing traffic through proxies or inserting more appliances) such as virtual private networks, secure web gateways, virtual desktop infrastructure, remote browser isolation, and cloud access security brokers.

What it doesn’t mean is that you can replace your EDR, your firewalls, or identity security solutions. On the contrary, secure enterprise browsing solutions work best in conjunction with these. For example, the solutions can inherit identity and access management user attributes and security policies, while integrations with EDR solutions can help for OS-level controls.

The Browser’s Bidirectional Magic

Users are both something to protect and to be protected from. With the browser controlling both ingress and egress traffic, it can secure multiple types of interactions, namely:

  • Protecting end users from malicious web resources and phishing attacks.
  • Protecting enterprises from negligent users.
  • Protecting enterprises from malicious insiders.
  • Protecting enterprises from compromised accounts.

I am not aware of any other type of solution on the market that can deliver all of the above with a single product. A secure browsing solution can fill many gaps in an organization’s security architecture, for both small and large organizations.

The market is still in the early stages, so the most responsible way of deploying these solutions is as an add-on to your current security stack. As these solutions mature and prove their efficacy in the real world, they can support a mandate to replace other security solutions that are either inadequate or obsolete.

Next Steps

To learn more, take a look at GigaOm’s secure enterprise browsing solutions Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

Identity, Endpoint, and Network Security Walk into a Bar Read More »

ai-digests-repetitive-scatological-document-into-profound-“poop”-podcast

AI digests repetitive scatological document into profound “poop” podcast

This AI prompt stinks... or does it?

Enlarge / This AI prompt stinks… or does it?

Aurich Lawson

Imagine you’re a podcaster who regularly does quick 10- to 12-minute summary reviews of written works. Now imagine your producer gives you multiple pages of nothing but the words “poop” and “fart” repeated over and over again and asks you to have an episode about the document on their desk within the hour.

Speaking for myself, I’d have trouble even knowing where to start with such a task. But when Reddit user sorryaboutyourcats gave the same prompt to Google’s NotebookLM AI model, the result was a surprisingly cogent and engaging AI-generated podcast that touches on the nature of art, the philosophy of attention, and the human desire to make meaning out of the inherently meaningless.

Analyzing Poop & Fart written 1,000 times – Creating meaning from the meaningless

byu/sorryaboutyourcats in notebooklm

When I asked NotebookLM to review my Minesweeper book last week, commenter Defenstrar smartly asked “what would happen if you fed it a less engrossing or well written body of text.” The answer, as seen here, shows the interesting directions a modern AI model can go in when you let it just spin its wheels and wander off from an essentially unmoored starting point.

“Sometimes a poop is just a poop…”

While Google’s NotebookLM launched over a year ago, the model’s recently launched “Audio Overview” feature has been getting a lot of attention for what Google calls “a new way to turn your documents into engaging audio discussions.” At its heart is a LLM similar to the kind that powers ChatGPT, which creates a podcast-like script for two convincing text-to-speech models to read, complete with “ums,” interruptions, and dramatic pauses.

Experimenters have managed to trick these AI-powered “hosts” into what sounds like an existential crisis by telling them that they aren’t really human. And investigators have managed to get NotebookLM to talk about its own system prompts, which seem to focus on “going beyond surface-level information” to unearth “golden nuggets of knowledge” from the source material.

The “poop-fart” document (as I’ll be calling it for simplicity) is a pretty interesting test case for this kind of system. After all, what “golden nuggets of knowledge” could be buried beyond the “surface level” of two scatological words repeated for multiple pages? How do you “highlight intriguing points with enthusiasm”—as the unearthed NotebookLM prompt suggests—when the document’s only oft-repeated points are “poop” and “fart”?

Artist's conception of a portion of the poop-fart document, as fed to NotebookLM.

Enlarge / Artist’s conception of a portion of the poop-fart document, as fed to NotebookLM.

Here, NotebookLM manages to use that complete lack of context as its own starting point for an interesting stream-of-consciousness, podcast-like conversation. After some throat-clearing about how the audience has “outdone itself” with “a unique piece of source material,” the ersatz podcast hosts are quick to compare the repetition in the document to Andy Warhol’s soup cans or “minimalist music” that can be “surprisingly powerful.” Later, the hosts try to glean some meaning by comparing the document to “a modern dadaist prank” (pronounced as “daday-ist” by the speech synthesizer) or the vase/faces optical illusion.

Artistic comparisons aside, NotebookLM’s virtual hosts also delve a bit into the psychology behind the “very human impulse” to “search for a pattern” in this “accidental Rorschach test” and our tendency to “try to impose order” on the “information overload” of the world around us. In almost the same breath, though, the hosts get philosophical about “confront[ing] the absurdity in trying to find meaning in everything” and suggest that “sometimes a poop is just a poop and a fart is just a fart.”

AI digests repetitive scatological document into profound “poop” podcast Read More »

car-dealers-renew-their-opposition-to-ev-mandates

Car dealers renew their opposition to EV mandates

they said what —

An EV mandate would make gasoline cars too expensive, say the dealers.

A silhouette of a man wearing a tie pushes a round wheel up a hill, the wheel has an illuminated lighting bolt running through it

Aurich Lawson | Getty Images

A group of more than 5,000 car dealers have made public their worries about a lack of demand for electric vehicles. Earlier this year the group lobbied the White House to water down impending federal fuel efficiency regulations that would require automakers to sell many more EVs. Now, they’re sounding an alarm over impending EV mandates, particularly in the so-called Zero Emissions Vehicle states.

The ZEV states—California, Connecticut, Colorado, Delaware, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, New York, Pennsylvania, Oregon, Rhode Island, Vermont, Virginia, Washington, and the District of Columbia—all follow the emissions standards laid out by the California Air Resources Board, which require that by 2035, 100 percent of all new cars and light trucks be zero-emissions vehicles (which includes plug-in hybrid EVs as well as battery EVs).

That goes into effect starting with model-year 2026 (i.e. midway through next calendar year) and would require a third of all new vehicles to be a BEV, claim the car dealers. But there is not enough customer demand for electrified vehicles to buy those cars, the dealers say. Worse yet, it would make gasoline-powered cars more expensive.

“This is a de facto mandate, as dealerships will be allocated fewer internal combustion engine and hybrid vehicles, and due to the lack of BEV sales, the result will create excessive demand driving up prices for customers,” the group wrote in a statement.

EV sales are growing more slowly in 2024 than the 50 percent growth we saw in 2023 (to this writer, calling a 12.5 percent growth rate “flatlining” seems hyperbolic).

A lot of the dealers’ concerns are around a lack of knowledge about EVs among their customers. The open letter complains that customers are ignorant about where to charge and how long that takes, how long batteries last and how expensive they are, and range loss in winter. In defense of those car buyers, a place that sells cars, including electric ones, would surely seem like the obvious place to ask those questions—again, at least to this writer.

Car dealers renew their opposition to EV mandates Read More »

illinois-city-plans-to-source-its-future-drinking-water-from-lake-michigan

Illinois city plans to source its future drinking water from Lake Michigan

The Great Lakes Compact —

As aquifers dry up, some Midwest communities are looking to the region’s natural resources.

Waves roll ashore along Lake Michigan in Whiting, Indiana.

Enlarge / Waves roll ashore along Lake Michigan in Whiting, Indiana.

This article originally appeared on Inside Climate News, a nonprofit, independent news organization that covers climate, energy, and the environment. It is republished with permission. Sign up for their newsletter here

The aquifer from which Joliet, Illinois, sources its drinking water is likely going to run too dry to support the city by 2030—a problem more and more communities are facing as the climate changes and groundwater declines. So Joliet eyed a huge water source 30 miles to the northeast: Lake Michigan.

It’s the second-largest of the Great Lakes, which together provide drinking water to about 10 percent of the US population, according to the National Oceanic and Atmospheric Administration’s Office for Coastal Management.

Soon, Joliet residents will join them. After years of deliberation, their city government decided last year to replace the aquifer by piping it in from Lake Michigan, buying it from the city of Chicago.

Project construction will start in 2025 with the intent to have water flowing to residents by 2030, said Theresa O’Grady, an engineering consultant working with the city of Joliet. Joliet will foot the approximately $1 billion bill for the project, including the cost to build 65 miles of piping that will transport water from Chicago to Joliet and neighboring communities.

Not just anyone can gain access to Lake Michigan’s pristine, saltless water. That’s rooted in the Great Lakes Compact, an agreement that governs how much water each state or Canadian province can withdraw from the lakes each day. With some exceptions, only municipalities located within the 295,200-square-mile basin (which includes the surface area of the lakes themselves) can get approved for a diversion to use Great Lakes drinking water.

Joliet is one of those exceptions.

“I’ve seen occasional news stories about, ‘Is Kansas suddenly going to get Lake Michigan water because Joliet got Lake Michigan water?’ We are going above and beyond to demonstrate how much we respect the privilege we have to use Lake Michigan water. We are spending hundreds of millions of dollars to be good stewards of that,” said Allison Swisher, Joliet’s director of public utilities.

In April 2023, then-Chicago Mayor Lori Lightfoot signed an agreement with Joliet and five other nearby communities to supply them with treated Lake Michigan water. Now, legal experts and other Great Lakes communities are left wondering how Joliet, located well outside of the Great Lakes basin, fits in.

The exemption in the Great Lakes Compact

The Great Lakes Region, which encompasses portions of New York, Pennsylvania, Ohio, Indiana, Illinois, Michigan, Wisconsin, and Minnesota, as well as the Canadian province of Ontario, is governed through the Great Lakes Compact, enacted in 2008.

“If you do not live in a straddling community, or you’re not a city in a straddling county, you don’t have a ticket to the dance. You can’t even ask for a Great Lakes water diversion,” said Peter Annin, director of the Mary Griggs Burke Center for Freshwater Innovation at Northland College and author of The Great Lakes Water Wars.

“With the exception of the state of Illinois,” he added.

The Chicago exemption, as it is often referred to, has roots in the 1800s, when animal waste from the city’s stockyards would flush into the Chicago River, ultimately pouring into Lake Michigan.

“That’s why Chicago embarks on this massive Panama Canal-like water diversion project, to take all that sewage and put it into this long canal, which then would connect with the Des Plaines River southwest of the city, and then the Illinois River, and then the Mississippi River,” Annin said, referring to the infamous reversal of the Chicago River. “Chicago’s solution was to flush its toilet to St. Louis.”

Every day, Chicago had the right to use billions of gallons of Lake Michigan water to divert this water and dilute the pollution downstream. The state of Wisconsin began challenging the diversion in the 1920s, arguing that Illinois’ superfluous water use was depleting water levels in the lake. In 1967, the Supreme Court sided with Illinois, and now, Chicago can do whatever it wants with its 2.1 billion gallons per day.

“So here we are today with this really kind of unbelievable Joliet water diversion proposal,” Annin said.

Illinois city plans to source its future drinking water from Lake Michigan Read More »

spacex-launches-mission-to-bring-starliner-astronauts-back-to-earth

SpaceX launches mission to bring Starliner astronauts back to Earth

Ch-ch-changes —

SpaceX is bringing back propulsive landings with its Dragon capsule, but only in emergencies.

Updated

SpaceX's Crew Dragon spacecraft climbs away from Cape Canaveral Space Force Station, Florida, on Saturday atop a Falcon 9 rocket.

Enlarge / SpaceX’s Crew Dragon spacecraft climbs away from Cape Canaveral Space Force Station, Florida, on Saturday atop a Falcon 9 rocket.

NASA/Keegan Barber

NASA astronaut Nick Hague and Russian cosmonaut Aleksandr Gorbunov lifted off Saturday from Florida’s Space Coast aboard a SpaceX Dragon spacecraft, heading for a five-month expedition on the International Space Station.

The two-man crew launched on top of SpaceX’s Falcon 9 rocket at 1: 17 pm EDT (17: 17 UTC), taking an advantage of a break in stormy weather to begin a five-month expedition in space. Nine kerosene-fueled Merlin engines powered the first stage of the flight on a trajectory northeast from Cape Canaveral Space Force Station, then the booster detached and returned to landing at Cape Canaveral as the Falcon 9’s upper stage accelerated SpaceX’s Crew Dragon Freedom spacecraft into orbit.

“It was a sweet ride,” Hague said after arriving in space. With a seemingly flawless launch, Hague and Gorbunov are on track to arrive at the space station around 5: 30 pm EDT (2130 UTC) Sunday.

Empty seats

This is SpaceX’s 15th crew mission since 2020, and SpaceX’s 10th astronaut launch for NASA, but Saturday’s launch was unusual in a couple of ways.

“All of our missions have unique challenges and this one, I think, will be memorable for a lot of us,” said Ken Bowersox, NASA’s associate administrator for space operations.

First, only two people rode into orbit on SpaceX’s Crew Dragon spacecraft, rather than the usual complement of four astronauts. This mission, known as Crew-9, originally included Hague, Gorbunov, commander Zena Cardman, and NASA astronaut Stephanie Wilson.

But the troubled test flight of Boeing’s Starliner spacecraft threw a wrench into NASA’s plans. The Starliner mission launched in June with NASA astronauts Butch Wilmore and Suni Williams. Boeing’s spacecraft reached the space station, but thruster failures and helium leaks plagued the mission, and NASA officials decided last month it was too risky to being the crew back to Earth on Starliner.

NASA selected SpaceX and Boeing for multibillion-dollar commercial crew contracts in 2014, with each company responsible for developing human-rated spaceships to ferry astronauts to and from the International Space Station. SpaceX flew astronauts for the first time in 2020, and Boeing reached the same milestone with the test flight that launched in June.

Ultimately, the Starliner spacecraft safely returned to Earth on September 6 with a successful landing in New Mexico. But it left Wilmore and Williams behind on the space station with the lab’s long-term crew of seven astronauts and cosmonauts. The space station crew rigged two temporary seats with foam inside a SpaceX Dragon spacecraft currently docked at the outpost, where the Starliner astronauts would ride home if they needed to evacuate the complex in an emergency.

NASA astronaut Nick Hague and Russian cosmonaut Aleksandr Gorbunov in their SpaceX pressure suits.

Enlarge / NASA astronaut Nick Hague and Russian cosmonaut Aleksandr Gorbunov in their SpaceX pressure suits.

NASA/Kim Shiflett

This is a temporary measure to allow the Dragon spacecraft to return to Earth with six people instead of the usual four. NASA officials decided to remove two of the astronauts from the next SpaceX crew mission to free up normal seats for Wilmore and Williams to ride home in February, when Crew-9 was already slated to end its mission.

The decision to fly the Starliner spacecraft back to Earth without its crew had several second order effects on space station operations. Managers at NASA’s Johnson Space Center in Houston had to decide who to bump from the Crew-9 mission, and who to keep on the crew.

Nick Hague and Aleksandr Gorbunov ended up keeping their seats on the Crew-9 flight. Hague originally trained as the pilot on Crew-9, and NASA decided he would take Zena Cardman’s place as commander. Hague, a 49-year-old Space Force colonel, is a veteran of one long-duration mission on the International Space Station, and also experienced a rare in-flight launch abort in 2018 due to a failure of a Russian Soyuz rocket.

NASA announced the original astronaut assignments for the Crew-9 mission in January. Cardman, a 36-year-old geobiologist, would have been the first rookie astronaut without test pilot experience to command a NASA spaceflight. Three-time space shuttle flier Stephanie Wilson, 58, was the other astronaut removed from the Crew-9 mission.

The decision on who to fly on Crew-9 was a “really close call,” said Bowersox, who oversees NASA’s spaceflight operations directorate. “They were thinking very hard about flying Zena, but in this situation, it made sense to have somebody who had at least one flight under their belt.”

Gorbunov, a 34-year-old Russian aerospace engineer making his first flight to space, moved over to take pilot’s seat in the Crew Dragon spacecraft, although he remains officially designated a mission specialist. His remaining presence on the crew was preordained because of an international agreement between NASA and Russia’s space agency that provides seats for Russian cosmonauts on US crew missions and US astronauts on Russian Soyuz flights to the space station.

Bowersox said NASA will reassign Cardman and Wilson to future flights.

NASA astronauts Suni Williams and Butch Wilmore, seen in their Boeing flight suits before their launch.

Enlarge / NASA astronauts Suni Williams and Butch Wilmore, seen in their Boeing flight suits before their launch.

Operational flexibility

This was also the first launch of astronauts from Space Launch Complex-40 (SLC-40) at Cape Canaveral, SpaceX’s busiest launch pad. SpaceX has outfitted the launch pad with the equipment necessary to support launches of human spaceflight missions on the Crew Dragon spacecraft, including a more than 200-foot-tall tower and a crew access arm to allow astronauts to board spaceships on top of Falcon 9 rockets.

SLC-40 was previously based on a “clean pad” architecture, without any structures to service or access Falcon 9 rockets while they were vertical on the pad. SpaceX also installed slide chutes to give astronauts and ground crews an emergency escape route away from the launch pad in an emergency.

SpaceX constructed the crew tower last year and had it ready for the launch of a Dragon cargo mission to the space station in March. Saturday’s launch demonstrated the pad’s ability to support SpaceX astronaut missions, which have previously all departed from Launch Complex-39A (LC-39A) at NASA’s Kennedy Space Center, a few miles north of SLC-40.

Bringing human spaceflight launch capability online at SLC-40 gives SpaceX and NASA additional flexibility in their scheduling. For example, LC-39A remains the only launch pad configured to support flights of SpaceX’s Falcon Heavy rocket. SpaceX is now preparing LC-39A for a Falcon Heavy launch October 10 with NASA’s Europa Clipper mission, which only has a window of a few weeks to depart Earth this year and reach its destination at Jupiter in 2030.

With SLC-40 now certified for astronaut launches, SpaceX and NASA teams are able to support the Crew-9 and Europa Clipper missions without worrying about scheduling conflicts. The Florida spaceport now has three launch pads certified for crew flights—two for SpaceX’s Dragon and one for Boeing’s Starliner—and NASA will add a fourth human-rated launch pad with the Artemis II mission to the Moon late next year.

“That’s pretty exciting,” said Pam Melroy, NASA’s deputy administrator. “I think it’s a reflection of where we are in our space program at NASA, but also the capabilities that the United States has developed.”

Earlier this week, Hague and Gorbunov participated in a launch day dress rehearsal, when they had the opportunity to familiarize themselves with SLC-40. The launch pad has the same capabilities as LC-39A, but with a slightly different layout. SpaceX also test-fired the Falcon 9 rocket Tuesday evening, before lowering the rocket horizontal and moving it back into a hangar for safekeeping as the outer bands of Hurricane Helene moved through Central Florida.

Inside the hangar, SpaceX technicians discovered sooty exhaust from the Falcon 9’s engines accumulated on the outside of the Dragon spacecraft during the test-firing. Ground teams wiped the soot off of the craft’s solar arrays and heat shield, then repainted portions of the capsule’s radiators around the edge of Dragon’s trunk section before rolling the vehicle back to the launch pad Friday.

“It’s important that the radiators radiate heat in the proper way to space, so we had to put some some new paint on to get that back to the right emissivity and the right reflectivity and absorptivity of the solar radiation that hit those panels so it will reject the heat properly,” said Bill Gerstenmaier, SpaceX’s vice president of build and flight reliability.

Gerstenmaier also outlined a new backup ability for the Crew Dragon spacecraft to safely splash down even if all of its parachutes fail to deploy on final descent back to Earth. This involves using the capsule’s eight powerful SuperDraco thrusters, normally only used in the unlikely instance of a launch abort, to fire for a few seconds and slow Dragon’s speed for a safe splashdown.

A hover test using SuperDraco thrusters on a prototype Crew Dragon spacecraft in 2015.

Enlarge / A hover test using SuperDraco thrusters on a prototype Crew Dragon spacecraft in 2015.

SpaceX

“The way it works is, in the case where all the parachutes totally fail, this essentially fires the thrusters at the very end,” Gerstenmaier said. “That essentially gives the crew a chance to land safely, and essentially escape the vehicle. So it’s not used in any partial conditions. We can land with one chute out. We can land with other failures in the chute system. But this is only in the case where all four parachutes just do not operate.”

When SpaceX first designed the Crew Dragon spacecraft more than a decade ago, the company wanted to use the SuperDraco thrusters to enable the capsule to perform propulsive helicopter-like landings. Eventually, SpaceX and NASA agreed to change to a more conventional parachute-assisted splashdown.

The SuperDracos remained on the Crew Dragon spacecraft to push the capsule away from its Falcon 9 rocket during a catastrophic launch failure. The eight high-thrust engines burn hydrazine and nitrogen tetroxide propellants that combust when making contact with one another.

The backup option has been activated for some previous commercial Crew Dragon missions, but not for a NASA flight, according to Gerstenmaier. The capability “provides a tolerable landing for the crew,” he added. “So it’s a true deep, deep contingency. I think our philosophy is, rather than have a system that you don’t use, even though it’s not maybe fully certified, it gives the crew a chance to escape a really, really bad situation.”

Steve Stich, NASA’s commercial crew program manager, said the emergency propulsive landing capability will be enabled for the return of the Crew-8 mission, which has been at the space station since March. With the arrival of Hague and Gorbunov on Crew-9—and the extension of Wilmore and Williams’ mission—the Crew-8 mission is slated to depart the space station and splash down in early October.

This story was updated after confirmation of a successful launch.

SpaceX launches mission to bring Starliner astronauts back to Earth Read More »

man-tricks-openai’s-voice-bot-into-duet-of-the-beatles’-“eleanor-rigby”

Man tricks OpenAI’s voice bot into duet of The Beatles’ “Eleanor Rigby”

A screen capture of AJ Smith doing his Eleanor Rigby duet with OpenAI's Advanced Voice Mode through the ChatGPT app.

Enlarge / A screen capture of AJ Smith doing his Eleanor Rigby duet with OpenAI’s Advanced Voice Mode through the ChatGPT app.

OpenAI’s new Advanced Voice Mode (AVM) of its ChatGPT AI assistant rolled out to subscribers on Tuesday, and people are already finding novel ways to use it, even against OpenAI’s wishes. On Thursday, a software architect named AJ Smith tweeted a video of himself playing a duet of The Beatles’ 1966 song “Eleanor Rigby” with AVM. In the video, Smith plays the guitar and sings, with the AI voice interjecting and singing along sporadically, praising his rendition.

“Honestly, it was mind-blowing. The first time I did it, I wasn’t recording and literally got chills,” Smith told Ars Technica via text message. “I wasn’t even asking it to sing along.”

Smith is no stranger to AI topics. In his day job, he works as associate director of AI Engineering at S&P Global. “I use [AI] all the time and lead a team that uses AI day to day,” he told us.

In the video, AVM’s voice is a little quavery and not pitch-perfect, but it appears to know something about “Eleanor Rigby’s” melody when it first sings, “Ah, look at all the lonely people.” After that, it seems to be guessing at the melody and rhythm as it recites song lyrics. We have also convinced Advanced Voice Mode to sing, and it did a perfect melodic rendition of “Happy Birthday” after some coaxing.

AJ Smith’s video of singing a duet with OpenAI’s Advanced Voice Mode.

Normally, when you ask AVM to sing, it will reply something like, “My guidelines won’t let me talk about that.” That’s because in the chatbot’s initial instructions (called a “system prompt“), OpenAI instructs the voice assistant not to sing or make sound effects (“Do not sing or hum,” according to one system prompt leak).

OpenAI possibly added this restriction because AVM may otherwise reproduce copyrighted content, such as songs that were found in the training data used to create the AI model itself. That’s what is happening here to a limited extent, so in a sense, Smith has discovered a form of what researchers call a “prompt injection,” which is a way of convincing an AI model to produce outputs that go against its system instructions.

How did Smith do it? He figured out a game that reveals AVM knows more about music than it may let on in conversation. “I just said we’d play a game. I’d play the four pop chords and it would shout out songs for me to sing along with those chords,” Smith told us. “Which did work pretty well! But after a couple songs it started to sing along. Already it was such a unique experience, but that really took it to the next level.”

This is not the first time humans have played musical duets with computers. That type of research stretches back to the 1970s, although it was typically limited to reproducing musical notes or instrumental sounds. But this is the first time we’ve seen anyone duet with an audio-synthesizing voice chatbot in real time.

Man tricks OpenAI’s voice bot into duet of The Beatles’ “Eleanor Rigby” Read More »

musk’s-x-blocks-links-to-jd-vance-dossier-and-suspends-journalist-who-posted-it

Musk’s X blocks links to JD Vance dossier and suspends journalist who posted it

JD Vance dossier —

X says it suspended reporter for “posting unredacted personal information.”

Former US President Donald Trump and Republican vice presidential nominee JD Vance stand next to each other at an outdoors event.

Enlarge / Former US President Donald Trump and Republican vice presidential nominee JD Vance at the National 9/11 Memorial and Museum on September 11, 2024, in New York City.

Getty Images | Michael M. Santiago

Elon Musk’s X is blocking links to the JD Vance “dossier” containing the Trump campaign’s research on the vice presidential nominee. X also suspended Ken Klippenstein, the journalist who published the dossier that apparently comes from an Iranian hack of the Trump campaign.

“Ken Klippenstein was temporarily suspended for violating our rules on posting unredacted private personal information, specifically Sen. Vance’s physical addresses and the majority of his Social Security number,” X’s safety account wrote yesterday. Klippenstein’s account was still suspended as of this writing.

X is blocking attempts to post links to the Klippenstein article in which he explained why he published the leaked dossier. An error message says, “We can’t complete this request because the link has been identified by X or our partners as being potentially harmful.”

Klippenstein’s article explains that the “dossier has been offered to me and I’ve decided to publish it because it’s of keen public interest in an election season. It’s a 271-page research paper the Trump campaign prepared to vet now vice presidential candidate JD Vance.”

The article doesn’t contain Vance’s address or Social Security number, but it provides a download link for the dossier. Klippenstein published another article yesterday after his X suspension, writing that he stands by his decision not to redact Vance’s private information. But the version of the Vance dossier available on Klippenstein’s website today has redactions of addresses and his Social Security number.

“I never published any private information on X”

“Self-styled free speech warrior Elon Musk’s X (Twitter) banned me after I published a copy of the Donald Trump campaign’s JD Vance research dossier,” Klippenstein wrote. “X says that I’ve been suspended for ‘violating our rules against posting private information,’ citing a tweet linking to my story about the JD Vance dossier. First, I never published any private information on X. I linked to an article I wrote here, linking to a document of controversial provenance, one that I didn’t want to alter for that very reason.”

Klippenstein also wrote, “We should be honest about so-called private information contained in the dossier and ‘private’ information in general. It is readily available to anyone who can buy it. The campaign purchased this information from commercial information brokers.”

US intelligence agencies said last week that “Iranian malicious cyber actors” have been sending “stolen, non-public material associated with former President Trump’s campaign to US media organizations.” This is part of a strategy “to stoke discord and undermine confidence in our electoral process,” US agencies said. Most media outlets decided not to publish the materials.

Musk slammed Twitter’s Hunter Biden decision

Elon Musk claimed that he bought Twitter in order to protect free speech, and he criticized the social network for an October 2020 incident in which Twitter blocked a New York Post story about Hunter Biden’s emails for allegedly violating a policy against posting hacked materials.

“Suspending the Twitter account of a major news organization for publishing a truthful story was obviously incredibly inappropriate,” Musk wrote in April 2022, one day after he struck a deal to buy Twitter for $44 billion. After completing the purchase, Musk leaked so-called “Twitter Files” containing the company’s internal deliberations about the Hunter Biden laptop story and other matters.

Twitter’s Hunter Biden decision drew immediate criticism when it happened, and the company changed its hacked materials policy just one day later. Under the October 2020 policy change, Twitter said it would stop removing hacked content unless it was directly shared by hackers or those acting in concert with them and that it would label tweets to provide context instead of blocking links from being shared on Twitter.

“Straight blocking of URLs was wrong, and we updated our policy and enforcement to fix,” Jack Dorsey, Twitter’s former CEO, wrote at the time. “Our goal is to attempt to add context, and now we have capabilities to do that.”

The hacked materials policy was still active as of January 2024, but the policy page no longer exists.

Meanwhile, The New York Times examined five days’ worth of Musk’s X posts in an article published today. “In 171 posts and reposts during that frenetic five-day period, the tech mogul railed against illegal immigration, boosted election fraud conspiracy theories and attacked Democratic candidates, according to a New York Times analysis… Nearly a third of his posts last week were false, misleading or missing vital context. They included misleading posts claiming Democrats were making memes ‘illegal’ and falsehoods that they want to ‘open the border’ to gain votes from illegal immigrants,” the article said.

Musk’s X blocks links to JD Vance dossier and suspends journalist who posted it Read More »