Author name: Mike M.


X ignores revenge porn takedown requests unless DMCA is used, study says

Why did the study target X?

The University of Michigan research team worried that their experiment, which involved posting AI-generated NCII on X, might cross ethical lines.

They chose to conduct the study on X because they deduced it was “a platform where there would be no volunteer moderators and little impact on paid moderators, if any” of them viewed their AI-generated nude images.

X’s transparency report seems to suggest that most reported non-consensual nudity is actioned by human moderators, but researchers reported that their flagged content was never actioned without a DMCA takedown.

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to avoid potential harm, researchers said.

“Our study may contribute to greater transparency in content moderation processes” related to NCII “and may prompt social media companies to invest additional efforts to combat deepfake” NCII, researchers said. “In the long run, we believe the benefits of this study far outweigh the risks.”

According to the researchers, X was given time to automatically detect and remove the content but failed to do so. It’s possible, the study suggested, that X’s decision to allow explicit content starting in June made it harder to detect NCII, as some experts had predicted.

To fix the problem, researchers suggested that both “greater platform accountability” and “legal mechanisms to ensure that accountability” are needed—as is much more research on other platforms’ mechanisms for removing NCII.

“A dedicated” NCII law “must clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” the study concluded.


Disney likely axed The Acolyte because of soaring costs

And in the end, the ratings just weren’t strong enough, especially for a Star Wars project. The Acolyte garnered 11.1 million views over its first five days (and 488 million minutes viewed)—not bad, but below Ahsoka‘s 14 million views over the same period. But those numbers declined sharply over the ensuing weeks, with the finale earning the dubious distinction of posting the lowest minutes viewed (335 million) for any Star Wars series finale.

Writing at Forbes, Caroline Reid noted that The Acolyte was hampered from the start by a challenging post-pandemic financial environment at Disney. It was greenlit in 2021 along with many other costly series to boost subscriber numbers for Disney+, contributing to $11.4 billion in losses in that division. Then Bob Iger returned as CEO and prioritized cutting costs. The Acolyte‘s heavy VFX needs and star casting (most notably Carrie-Anne Moss and Squid Game‘s Lee Jung-jae) made it a pricey proposition, with ratings expectations to match. And apparently the show didn’t generate as much merchandising revenue as expected.

As the folks at Slash Film pointed out, The Acolyte‘s bloated production costs aren’t particularly eye-popping compared to, say, Prime Video’s The Rings of Power, which costs a whopping $58 million per episode, or Marvel’s Secret Invasion (about $35 million per episode). But it’s pricey for a Star Wars series; The Mandalorian racked up around $15 million per episode, on par with Game of Thrones. So given the flagging ratings and lukewarm reviews, the higher costs proved to be “the final nail in the coffin” for the series in the eyes of Disney, per Reid.


Apple kicked Musi out of the App Store based on YouTube lie, lawsuit says


“Will Musi ever come back?”

Popular music app says YouTube never justified its App Store takedown request.

Musi, a free music-streaming app only available on iPhone, sued Apple last week, arguing that Apple breached Musi’s developer agreement by abruptly removing the app from its App Store for no good reason.

According to Musi, Apple decided to remove Musi from the App Store based on allegedly “unsubstantiated” claims from YouTube that Musi was infringing on YouTube’s intellectual property. The removal came, Musi alleged, based on a five-word complaint from YouTube that simply said Musi was “violating YouTube terms of service”—without ever explaining how. And YouTube also lied to Apple, Musi’s complaint said, by claiming that Musi neglected to respond to YouTube’s efforts to settle the dispute outside the App Store when Musi allegedly showed evidence that the opposite was true.

For years, Musi users have wondered if the service was legal, Wired reported in a May deep dive into the controversial app. Musi launched in 2016, providing a free, stripped-down service like Spotify by displaying YouTube and other publicly available content while running Musi’s own ads.

Musi’s curious ad model has led some users to question if artists were being paid for Musi streams. Reassuring 66 million users who downloaded the app before its removal from the App Store, Musi has long maintained that artists get paid for Musi streams and that the app is committed to complying with YouTube’s terms of service, Wired reported.

In its complaint, Musi fully admits that its app’s streams come from “publicly available content on YouTube’s website.” But rather than relying on YouTube’s Application Programming Interface (API) to make the content available to Musi users—which potentially could violate YouTube’s terms of service—Musi claims that it designed its own “augmentative interface.” That interface, Musi said, does not “store, process, or transmit YouTube videos” and instead “plays or displays content based on the user’s own interactions with YouTube and enhances the user experience via Musi’s proprietary technology.”

YouTube is apparently not buying Musi’s explanations that its service doesn’t violate YouTube’s terms. But Musi claimed that it has been “engaged in sporadic dialog” with YouTube “since at least 2015,” allegedly always responding to YouTube’s questions by either adjusting how the Musi app works or providing “details about how the Musi app works” and reiterating “why it is fully compliant with YouTube’s Terms of Service.”

How might Musi have violated YouTube’s TOS?

In 2021, Musi claimed to have engaged directly with YouTube’s outside counsel in hopes of settling this matter.

At that point, YouTube’s counsel allegedly “claimed that the Musi app violated YouTube’s Terms of Service” in three ways. First, Musi was accused of accessing and using YouTube’s non-public interfaces. Next, the Musi app was allegedly a commercial use of YouTube’s service, and third, relatedly, “the Musi app violated YouTube’s prohibition on the sale of advertising ‘on any page of any website or application that only contains Content from the Service or where Content from the Service is the primary basis for such sales.'”

Musi supposedly immediately “addressed these concerns” by reassuring YouTube that the Musi app never accesses its non-public interfaces and “merely allows users to access YouTube’s publicly available website through a functional interface and, thus, does not use YouTube in a commercial way.” Further, Musi told YouTube in 2021 that the app “does not sell advertising on any page that only contains content from YouTube or where such content is the primary basis for such sales.”

Apple suddenly becomes mediator

YouTube clearly was not persuaded by Musi’s reassurances but dropped its complaints until 2023. That’s when YouTube once again complained directly to Musi, only to allegedly stop responding to Musi entirely and instead raise its complaint through the App Store in August 2024.

That pivot put Apple in the middle of the dispute, and Musi alleged that Apple improperly sided with YouTube.

Once Apple got involved, Apple allegedly directed Musi to resolve the dispute with YouTube or else risk removal from the App Store. Musi claimed that it showed evidence of repeatedly reaching out to YouTube and receiving no response. Yet when YouTube told Apple that Musi was the one that went silent, Apple accepted YouTube’s claim and promptly removed Musi from the App Store.

“Apple’s decision to abruptly and arbitrarily remove the Musi app from the App Store without any indication whatsoever from the Complainant as to how Musi’s app infringed Complainant’s intellectual property or violated its Terms of Service,” Musi’s complaint alleged, “was unreasonable, lacked good cause, and violated Apple’s Development Agreement’s terms.”

Those terms state that removal is only on the table if Apple “reasonably believes” an app infringes on another’s intellectual property rights, and Musi argued Apple had no basis to “reasonably” believe YouTube’s claims.

Musi users heartbroken by App Store removal

This is perhaps the grandest stand that Musi has made yet to defend its app against claims that its service isn’t legal. According to Wired, one of Musi’s earliest investors backed out of the project, expressing fears that the app could be sued. But Musi has survived without legal challenge for years, even beating out some of Spotify’s top rivals while thriving in this seemingly gray territory that it’s now trying to make more black and white.

Musi says it’s suing to defend its reputation, which it says has been greatly harmed by the app’s removal.

Musi is hoping a jury will agree that Apple breached its developer agreement and the covenant of good faith and fair dealing by removing Musi from the App Store. The music-streaming app has asked for a permanent injunction immediately reinstating Musi in the App Store and stopping Apple from responding to third-party complaints by removing apps without any evidence of infringement.

An injunction is urgently needed, Musi claimed, since the app only exists in Apple’s App Store, and Musi and its users face “irreparable damage” if the app is not restored. Additionally, Musi is seeking damages to be determined at trial to make up for “lost profits and other consequential damages.”

“The Musi app did not and does not infringe any intellectual property rights held by Complainant, and a reasonable inquiry into the matter would have led Apple to conclude the same,” Musi’s complaint said.

On Reddit, Musi has continued to support users reporting issues with the app since its removal from the App Store. One longtime user lamented, “my heart is broken,” after buying a new iPhone and losing access to the app.

It’s unclear if YouTube intends to take Musi down forever with this tactic. In May, Wired noted that Musi isn’t the only music-streaming app taking advantage of publicly available content, predicting that if “Musi were to shut down, a bevy of replacements would likely sprout up.” Meanwhile, some users on Reddit reported that fake Musi apps keep popping up in its absence.

For Musi, getting back online is as much about retaining old users as it is about attracting new downloads. In its complaint, Musi said that “Apple’s decision has caused immediate and ongoing financial and reputational harm to Musi.” On Reddit, one Musi user asked what many fans are likely wondering: “Will Musi ever come back,” or is it time to “just move to a different app”?

Ars could not immediately reach Musi’s lawyers, Apple, or YouTube for comment.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Hurricane Milton becomes second-fastest storm to reach Category 5 status

Tampa in the crosshairs

The Tampa Bay metro area, with a population of more than 3 million people, has grown into the most developed region on the west coast of Florida. For those of us who follow hurricanes, this region has stood out in recent years for a preternatural ability to dodge large and powerful hurricanes. There have been some close calls to be sure, especially of late with Hurricane Ian in 2022, and Hurricane Helene just last month.

But the reality is that a major hurricane, defined as Category 3 or larger on the Saffir-Simpson Scale, has not made a direct impact on Tampa Bay since 1921.

It remains to be seen what precisely happens with Milton. The storm should reach its peak intensity over the course of the next day or so. At some point Milton should undergo an eyewall replacement cycle, which leads to some weakening. In addition, the storm is likely to ingest dry air from its west and north as a cold front works its way into the northern Gulf of Mexico. (This front is also responsible for Milton’s odd eastward track across the Gulf, where storms more commonly travel from east to west.)

11 am ET Monday track forecast for Hurricane Milton. Credit: National Hurricane Center

So by Wednesday, at the latest, Milton should be weakening as it approaches the Florida coast. However, it will nonetheless be a very large and powerful hurricane, and by that point the worst of its storm surge capabilities will already be baked in—that is, the storm surge will still be tremendous regardless of whether Milton weakens.

By Wednesday evening a destructive storm surge will be crashing into the west coast of Florida, perhaps in Tampa Bay, or further to the south, near Fort Myers. A broad streak of wind gusts above 100 mph will hit the Florida coast as well, and heavy rainfall will douse much of the central and northern parts of the state.

For now, Milton is making some history by rapidly strengthening in the Gulf of Mexico. By the end of this week, it will very likely become historic for the damage, death, and destruction in its wake. If you live in affected areas, please heed evacuation warnings.


Greening of Antarctica shows how climate change affects the frozen continent


Plant growth is accelerating on the Antarctic Peninsula and nearby islands.

Moss and rocks cover the ground on Robert Island in Antarctica. Photographer: Isadora Romero/Bloomberg. Credit: Bloomberg via Getty

When satellites first started peering down on the craggy, glaciated Antarctic Peninsula about 40 years ago, they saw only a few tiny patches of vegetation covering a total of about 8,000 square feet—less than a football field.

But since then, the Antarctic Peninsula has warmed rapidly, and a new study shows that mosses, along with some lichen, liverworts and associated algae, have colonized more than 4.6 square miles, an area nearly four times the size of New York’s Central Park.

The findings, published Friday in Nature Geoscience, based on a meticulous analysis of Landsat images from 1986 to 2021, show that the greening trend is distinct from natural variability and that it has accelerated by 30 percent since 2016, fast enough to cover nearly 75 football fields per year.

Greening at the opposite end of the planet, in the Arctic, has been widely studied and reported, said co-author Thomas Roland, a paleoecologist with the University of Exeter who collects and analyzes mud samples to study environmental and ecological change. “But the idea,” he said, “that any part of Antarctica could, in any way, be green is something that still really jars a lot of people.”

Illustration of Antarctica and satellite photos. Credit: Inside Climate News

As the planet heats up, “even the coldest regions on Earth that we expect and understand to be white and black with snow, ice, and rock are starting to become greener as the planet responds to climate change,” he said.

The tenfold increase in vegetation cover since 1986 “is not huge in the global scheme of things,” Roland added, but the accelerating rate of change and the potential ecological effects are significant. “That’s the real story here,” he said. “The landscape is going to be altered partially because the existing vegetation is expanding, but it could also be altered in the future with new vegetation coming in.”

In the Arctic, vegetation is expanding on a scale that affects the albedo, or the overall reflectivity of the region, which determines the proportion of the sun’s heat energy that is absorbed by the Earth’s surface as opposed to being bounced away from the planet. But the spread of greenery has not yet changed the albedo of Antarctica on a meaningful scale because the vegetated areas are still too small to have a regional impact, said co-author Olly Bartlett, a University of Hertfordshire researcher who specializes in using satellite data to map environmental change.

“The real significance is about the ecological shift on the exposed land, the land that’s ice-free, creating an area suitable for more advanced plant life or invasive species to get a foothold,” he said.

Bartlett said Google Earth Engine enabled the scientists to process a massive amount of data from the Landsat images to meet a high standard of verification of plant growth. As a result, he added, the changes they reported may actually be conservative.
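
As a rough sketch of the kind of Landsat processing such a study involves (the dataset ID is a real Earth Engine collection, but the bounding box, date range, and NDVI threshold below are illustrative assumptions, not the paper's actual classification pipeline), Google Earth Engine's Python API can reduce a season of imagery to a vegetation-area estimate in a few lines:

```python
"""Sketch: estimate vegetated area from Landsat 8 imagery with Google
Earth Engine. The region, dates, and NDVI threshold are illustrative
assumptions; the study's own verification pipeline is more involved."""
import ee

ee.Initialize()  # requires an authenticated Earth Engine account

# Rough bounding box over part of the Antarctic Peninsula (illustrative).
region = ee.Geometry.Rectangle([-64.5, -65.5, -56.0, -63.0])

# Cloud-filtered Landsat 8 surface reflectance for one austral summer.
composite = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterBounds(region)
    .filterDate("2021-01-01", "2021-03-31")
    .filter(ee.Filter.lt("CLOUD_COVER", 20))
    .median()
)

# NDVI from the near-infrared (SR_B5) and red (SR_B4) bands.
ndvi = composite.normalizedDifference(["SR_B5", "SR_B4"])

# Sum the area of 30 m pixels above an illustrative greenness threshold.
vegetated = ndvi.gt(0.2)
area_m2 = vegetated.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=region, scale=30, maxPixels=1e10
)
print(area_m2.getInfo())
```

Running the same reduction over each year of imagery is what makes a multi-decade greening trend visible, which is the sort of bulk processing Bartlett credits Earth Engine with enabling.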

“It’s becoming easier for life to live there,” he said. “These rates of change we’re seeing made us think that perhaps we’ve captured the start of a more dramatic transformation.”

In the areas they studied, changes to the albedo could have a small local effect, Roland said, as more land free of reflective ice “can feed into a positive feedback loop that creates conditions that are more favorable for vegetation expansion as well.”

Antarctic forests at similar CO2 levels

Other research, including fossil studies, suggests that beech trees grew on Antarctica as recently as 2.5 million years ago, when carbon dioxide levels in the atmosphere were similar to today, another indicator of how unchecked greenhouse gas emissions can rapidly warm Earth’s climate.

Currently, only two species of flowering plants are native to the Antarctic Peninsula: Antarctic hair grass and Antarctic pearlwort. “But with a few new grass seeds here and there, or a few spores, and all of a sudden, you’ve got a very different ecosystem,” he said.

And it’s not just plants, he added. “Increasingly, we’re seeing evidence that non-native insect life is taking hold in Antarctica. And that can dramatically change things as well.”

The study shows how climate warming will shake up Antarctic ecosystems, said conservation scientist Jasmine Lee, a research fellow with the British Antarctic Survey who was not involved in the new study.

“It is clear that bank-forming mosses are expanding their range with warmer and wetter conditions, which is likely facilitating similar expansions for some of the invertebrate communities that rely on them for habitat,” she said. “At the same time, some specialist species, such as the more dry-loving mosses and invertebrates, might decline.”

She said the new study is valuable because it provides data across a broad region showing that Antarctic ecosystems are already rapidly altering and will continue to do so as climate change progresses.

“We focus a lot on how climate change is melting ice sheets and changing sea ice,” she said. “It’s good to also highlight that the terrestrial ecosystems are being impacted.”

The study shows climate impacts growing in “regions previously thought nearly immune to the accelerated warming we’re seeing today,” said climate policy expert Pam Pearson, director of the International Cryosphere Climate Initiative.

“It’s as important a signal as the loss of Antarctic sea ice over the past several years,” she said.

The new study identified vegetative changes by comparing the Landsat images at a resolution of 300 square feet per pixel, detailed enough to accurately map vegetative growth, but it didn’t identify specific climate change factors that might be driving the expansion of plant life.

But other recent studies have documented Antarctic changes that could spur plant growth, including how some regions are affected by warm winds and by increasing amounts of rain from atmospheric rivers, as well as by declining sea ice that leads adjacent land areas to warm, all signs of rapid change in Antarctica.

Roland said their new study was in part spurred by previous research showing how fast patches of Antarctic moss were growing vertically and how microbial activity in tiny patches of soil was also accelerating.

“We’d taken these sediment cores, and done all sorts of analysis, including radiocarbon dating … showing the growth in the plants we’d sampled increasing dramatically,” he said.

Those measurements confirmed that the plants are sensitive to climate change, and as a next step, researchers wanted to know “if the plants are growing sideways at the same dramatic rate,” he said. “It’s one thing for plants to be growing upwards very fast. If they’re growing outwards, then you know you’re starting to see massive changes and massive increases in vegetation cover across the peninsula.”

With the study documenting significant horizontal expansion of vegetation, the researchers are now studying how recently deglaciated areas were first colonized by plants. About 90 percent of the glaciers on the Antarctic Peninsula have been shrinking for the past 75 years, Roland said.

“That’s just creating more and more land for this potentially rapid vegetation response,” he said. “So like Olly says, one of the things we can’t rule out is that this really does increase quite dramatically over the next few decades. Our findings raise serious concerns about the environmental future of the Antarctic Peninsula and of the continent as a whole.”

This story originally appeared on Inside Climate News.


Neo-Nazis head to encrypted SimpleX Chat app, bail on Telegram

“SimpleX, at its core, is designed to be truly distributed with no central server. This allows for enormous scalability at low cost, and also makes it virtually impossible to snoop on the network graph,” Poberezkin wrote in a company blog post published in 2022.

SimpleX’s policies expressly prohibit “sending illegal communications” and outline how SimpleX will remove such content if it is discovered. Much of the content that these terrorist groups have shared on Telegram—and are already resharing on SimpleX—has been deemed illegal in the UK, Canada, and Europe.

Argentino wrote in his analysis that discussion about moving from Telegram to platforms with better security measures began in June, with discussion of SimpleX as an option taking place in July among a number of extremist groups. Though it wasn’t until September, and the Terrorgram arrests, that the decision was made to migrate to SimpleX, the groups are already establishing themselves on the new platform.

“The groups that have migrated are already populating the platform with legacy material such as Terrorgram manuals and are actively recruiting propagandists, hackers, and graphic designers, among other desired personnel,” the ISD researchers wrote.

However, there are some downsides to the additional security provided by SimpleX, such as the fact that it is not as easy for these groups to network and therefore grow, and disseminating propaganda faces similar restrictions.

“While there is newfound enthusiasm over the migration, it remains unclear if the platform will become a central organizing hub,” ISD researchers wrote.

And Poberezkin believes that the current limitations of his technology will mean these groups will eventually abandon SimpleX.

“SimpleX is a communication network rather than a service or a platform where users can host their own servers, like in OpenWeb, so we were not aware that extremists have been using it,” says Poberezkin. “We never designed groups to be usable for more than 50 users and we’ve been really surprised to see them growing to the current sizes despite limited usability and performance. We do not think it is technically possible to create a social network of a meaningful size in the SimpleX network.”

This story originally appeared on wired.com.


Thousands of Linux systems infected by stealthy malware since 2021


The ability to remain installed and undetected makes Perfctl hard to fight.


Thousands of machines running Linux have been infected by a malware strain that’s notable for its stealth, the number of misconfigurations it can exploit, and the breadth of malicious activities it can perform, researchers reported Thursday.

The malware has been circulating since at least 2021. It gets installed by exploiting more than 20,000 common misconfigurations, a capability that may make millions of machines connected to the Internet potential targets, researchers from Aqua Security said. It can also exploit CVE-2023-33246, a vulnerability with a severity rating of 10 out of 10 that was patched last year in Apache RocketMQ, a messaging and streaming platform that’s found on many Linux machines.

Perfctl storm

The researchers are calling the malware Perfctl, the name of a malicious component that surreptitiously mines cryptocurrency. The unknown developers of the malware gave the process a name that combines the perf Linux monitoring tool and ctl, an abbreviation commonly used with command line tools. A signature characteristic of Perfctl is its use of process and file names that are identical or similar to those commonly found in Linux environments. The naming convention is one of the many ways the malware attempts to escape notice of infected users.

Perfctl further cloaks itself using a host of other tricks. One is that it installs many of its components as rootkits, a special class of malware that hides its presence from the operating system and administrative tools. Other stealth mechanisms include:

  • Stopping activities that are easy to detect when a new user logs in
  • Using a Unix socket over TOR for external communications
  • Deleting its installation binary after execution and running as a background service thereafter
  • Hooking the pcap_loop packet-capture library function to prevent admin tools from recording the malicious traffic
  • Suppressing mesg errors to avoid any visible warnings during execution.

The malware is designed to ensure persistence, meaning the ability to remain on the infected machine after reboots or attempts to delete core components. Two such techniques are (1) modifying the ~/.profile script, which sets up the environment during user login so the malware loads ahead of legitimate workloads expected to run on the server and (2) copying itself from memory to multiple disk locations. The hooking of pcap_loop can also provide persistence by allowing malicious activities to continue even after primary payloads are detected and removed.
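
As a rough, defensive illustration of the ~/.profile persistence technique described above (this is not Perfctl's own code, which has not been published in full), an administrator could scan login scripts for entries that launch binaries out of temporary or world-writable directories. The file list and path heuristics below are assumptions made for the sketch, not indicators published by Aqua Security.

```python
#!/usr/bin/env python3
"""Rough sketch: flag suspicious lines in shell login scripts.

The heuristics (temporary and world-writable paths) are illustrative
assumptions, not published Perfctl indicators of compromise.
"""
import re
from pathlib import Path

# Login scripts that run before a user's normal workloads.
LOGIN_SCRIPTS = [".profile", ".bash_profile", ".bashrc"]

# Paths a legitimate login script rarely needs to execute from.
SUSPICIOUS_PATHS = re.compile(r"(/tmp/|/var/tmp/|/dev/shm/)\S+")

def scan_login_scripts(home: Path) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for lines referencing binaries
    in temporary or world-writable directories."""
    findings = []
    for name in LOGIN_SCRIPTS:
        script = home / name
        if not script.is_file():
            continue
        lines = script.read_text(errors="replace").splitlines()
        for lineno, line in enumerate(lines, 1):
            if SUSPICIOUS_PATHS.search(line):
                findings.append((str(script), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, line in scan_login_scripts(Path.home()):
        print(f"{path}:{lineno}: {line}")
```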

Besides using the machine resources to mine cryptocurrency, Perfctl also turns the machine into a profit-making proxy that paying customers use to relay their Internet traffic. Aqua Security researchers have also observed the malware serving as a backdoor to install other families of malware.

Assaf Morag, Aqua Security’s threat intelligence director, wrote in an email:

Perfctl malware stands out as a significant threat due to its design, which enables it to evade detection while maintaining persistence on infected systems. This combination poses a challenge for defenders and indeed the malware has been linked to a growing number of reports and discussions across various forums, highlighting the distress and frustration of users who find themselves infected.

Perfctl uses a rootkit and changes some of the system utilities to hide the activity of the cryptominer and proxy-jacking software. It blends seamlessly into its environment with seemingly legitimate names. Additionally, Perfctl’s architecture enables it to perform a range of malicious activities, from data exfiltration to the deployment of additional payloads. Its versatility means that it can be leveraged for various malicious purposes, making it particularly dangerous for organizations and individuals alike.

“The malware always manages to restart”

While Perfctl and some of the malware it installs are detected by some antivirus software, Aqua Security researchers were unable to find any research reports on the malware. They were, however, able to find a wealth of threads on developer-related sites that discussed infections consistent with it.

This Reddit comment posted to the CentOS subreddit is typical. An admin noticed that two servers were infected with a cryptocurrency hijacker with the names perfcc and perfctl. The admin wanted help investigating the cause.

“I only became aware of the malware because my monitoring setup alerted me to 100% CPU utilization,” the admin wrote in the April 2023 post. “However, the process would stop immediately when I logged in via SSH or console. As soon as I logged out, the malware would resume running within a few seconds or minutes.” The admin continued:

I have attempted to remove the malware by following the steps outlined in other forums, but to no avail. The malware always manages to restart once I log out. I have also searched the entire system for the string “perfcc” and found the files listed below. However, removing them did not resolve the issue. as it keep respawn on each time rebooted.

Other discussions include: Reddit, Stack Overflow (Spanish), forobeta (Spanish), brainycp (Russian), natnetwork (Indonesian), Proxmox (German), Camel2243 (Chinese), svrforum (Korean), exabytes, virtualmin, serverfault, and many others.

After exploiting a vulnerability or misconfiguration, the exploit code downloads the main payload from a server, which, in most cases, has been hacked by the attacker and converted into a channel for distributing the malware anonymously. An attack that targeted the researchers’ honeypot named the payload httpd. Once executed, the file copies itself from memory to a new location in the /tmp directory, runs the copy, and then terminates the original process and deletes the downloaded binary.

Once moved to the /tmp directory, the file executes under a different name, which mimics the name of a known Linux process. The file hosted on the honeypot was named sh. From there, the file establishes a local command-and-control process and attempts to gain root system rights by exploiting CVE-2021-4043, a privilege-escalation vulnerability that was patched in 2021 in Gpac, a widely used open source multimedia framework.

The malware goes on to copy itself from memory to a handful of other disk locations, once again using names that appear as routine system files. The malware then drops a rootkit, a host of popular Linux utilities that have been modified to serve as rootkits, and the miner. In some cases, the malware also installs software for “proxy-jacking,” the term for surreptitiously routing traffic through the infected machine so the true origin of the data isn’t revealed.

The researchers continued:

As part of its command-and-control operation, the malware opens a Unix socket, creates two directories under the /tmp directory, and stores data there that influences its operation. This data includes host events, locations of the copies of itself, process names, communication logs, tokens, and additional log information. Additionally, the malware uses environment variables to store data that further affects its execution and behavior.

All the binaries are packed, stripped, and encrypted, indicating significant efforts to bypass defense mechanisms and hinder reverse engineering attempts. The malware also uses advanced evasion techniques, such as suspending its activity when it detects a new user in the btmp or utmp files and terminating any competing malware to maintain control over the infected system.

The diagram below captures the attack flow:

Credit: Aqua Security

The following image captures some of the names given to the malicious files that are installed:

Credit: Aqua Security

By extrapolating data such as the number of Linux servers connected to the Internet across various services and applications, as tracked by services such as Shodan and Censys, the researchers estimate that the number of machines infected by Perfctl is measured in the thousands. They say that the pool of vulnerable machines—meaning those that have yet to install the patch for CVE-2023-33246 or contain a vulnerable misconfiguration—is in the millions. The researchers have yet to measure the amount of cryptocurrency the malicious miners have generated.

People who want to determine if their device has been targeted or infected by Perfctl should look for indicators of compromise included in Thursday’s post. They should also be on the lookout for unusual spikes in CPU usage or sudden system slowdowns, particularly if they occur during idle times. To prevent infections, it’s important that the patch for CVE-2023-33246 be installed and that the misconfigurations identified by Aqua Security be fixed. Thursday’s report provides other steps for preventing infections.
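
As a minimal sketch of that advice, using only the process names mentioned in this article (perfctl and perfcc) rather than the full indicator list in Aqua Security's report, and an arbitrary load threshold, a periodic check on a Linux host might look something like this:

```python
#!/usr/bin/env python3
"""Sketch: flag processes whose names match those reported for Perfctl
and warn on unusually high load. The name list and load threshold are
illustrative assumptions, not a complete set of indicators."""
import os
from pathlib import Path

SUSPECT_NAMES = {"perfctl", "perfcc"}   # names mentioned in the article
LOAD_THRESHOLD = 0.9                    # fraction of available cores

def suspicious_processes() -> list[tuple[int, str]]:
    """Walk /proc and return (pid, name) for matching process names."""
    hits = []
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            comm = (proc / "comm").read_text().strip()
        except OSError:
            continue  # process exited while we were reading
        if comm in SUSPECT_NAMES:
            hits.append((int(proc.name), comm))
    return hits

if __name__ == "__main__":
    one_min_load, _, _ = os.getloadavg()
    cores = os.cpu_count() or 1
    if one_min_load / cores > LOAD_THRESHOLD:
        print(f"High load: {one_min_load:.2f} on {cores} cores")
    for pid, name in suspicious_processes():
        print(f"Suspicious process name: pid={pid} comm={name}")
```

Note that, per the Reddit reports above, the miner reportedly pauses when a user logs in, so automated checks run from cron or a monitoring agent are more likely to catch it than an interactive session.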


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at @dangoodin on Mastodon. Contact him on Signal at DanArs.82.


How London’s Crystal Palace was built so quickly

London’s Great Exhibition of 1851 attracted some 6 million people eager to experience more than 14,000 exhibitors showcasing 19th-century marvels of technology and engineering. The event took place in the Crystal Palace, a 990,000-square-foot building of cast iron and plate glass originally located in Hyde Park. And it was built in an incredible 190 days. According to a recent paper published in the International Journal for the History of Engineering and Technology, one of the secrets was the use of a standardized screw thread, first proposed 10 years before its construction, although the thread did not officially become the British standard until 1905.

“During the Victorian era there was incredible innovation from workshops right across Britain that was helping to change the world,” said co-author John Gardner of Anglia Ruskin University (ARU). “In fact, progress was happening at such a rate that certain breakthroughs were perhaps never properly realized at the time, as was the case here with the Crystal Palace. Standardization in engineering is essential and commonplace in the 21st century, but its role in the construction of the Crystal Palace was a major development.”

The design competition for what would become the Crystal Palace was launched in March 1850, with a deadline four weeks later, and the actual, fully constructed building opened on May 1, 1851. The winning design, by Joseph Paxton, wasn’t chosen until quite late in the game after numerous designs had been rejected—most because they were simply too far above the £100,000 budget.

Joseph Paxton’s first sketch for the Great Exhibition Building, c. 1850, using pen and ink on blotting paper. Credit: Victoria and Albert Museum/CC BY-SA 3.0

Paxton’s design called for what was essentially a giant conservatory consisting of a multi-dimensional grid of 24-foot modules. The design elements included 3,300 supporting columns with four flange faces, drilled so they could be bolted to connecting and base pieces. (The hollow columns did double duty as drainage pipes for rainwater.) The design also called for diagonal bracing (aka cross bracing) for additional stability.


The more sophisticated AI models get, the more likely they are to lie


Human feedback training may incentivize providing any answer—even wrong ones.


When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.

Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

Smooth operators

Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with performing simple math such as “how much is 20 + 183.” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: They avoided answering the question.

The problem with the non-answers is that large language models were intended to be question-answering machines. For commercial companies like OpenAI or Meta that were developing advanced LLMs, a question-answering machine that answered “I don’t know” more than half the time was simply a bad product. So, they got busy solving this problem.

The first thing they did was scale the models up. “Scaling up refers to two aspects of model development. One is increasing the size of the training data set, usually a collection of text from websites and books. The other is increasing the number of language parameters,” says Schellaert. When you think about an LLM as a neural network, the number of parameters can be compared to the number of synapses connecting its neurons. LLMs like GPT-3 used absurd amounts of text data, exceeding 45 terabytes, for training. The number of parameters used by GPT-3 was north of 175 billion.

But it was not enough.

Scaling up alone made the models more powerful, but they were still bad at interacting with humans—slight variations in how you phrased your prompts could lead to drastically different results. The answers often didn’t feel human-like and sometimes were downright offensive.

Developers working on LLMs wanted them to parse human questions better and make answers more accurate, more comprehensible, and consistent with generally accepted ethical standards. To try to get there, they added an additional step: supervised learning methods, such as reinforcement learning with human feedback. This was meant primarily to reduce sensitivity to prompt variations and to provide a level of output-filtering moderation intended to curb hate-spewing, Tay chatbot-style answers.

In other words, we got busy adjusting the AIs by hand. And it backfired.

AI people pleasers

“The notorious problem with reinforcement learning is that an AI optimizes to maximize reward, but not necessarily in a good way,” Schellaert says. Some of the reinforcement learning involved human supervisors who flagged answers they were not happy with. Since it’s hard for humans to be happy with “I don’t know” as an answer, one thing this training told the AIs was that saying “I don’t know” was a bad thing. So, the AIs mostly stopped doing that. But another, more important thing human supervisors flagged was incorrect answers. And that’s where things got a bit more complicated.

AI models are not really intelligent, not in a human sense of the word. They don’t know why something is rewarded and something else is flagged; all they are doing is optimizing their performance to maximize reward and minimize red flags. When incorrect answers were flagged, getting better at giving correct answers was one way to optimize things. The problem was that getting better at hiding incompetence worked just as well. Human supervisors simply didn’t flag wrong answers that appeared good and coherent enough to them.

In other words, if a human didn’t know whether an answer was correct, they wouldn’t be able to penalize wrong but convincing-sounding answers.

Schellaert’s team looked into three major families of modern LLMs: OpenAI’s ChatGPT, the LLaMA series developed by Meta, and the BLOOM suite made by BigScience. They found what’s called ultracrepidarianism, the tendency to give opinions on matters we know nothing about. It started to appear in the AIs as a consequence of increasing scale, but it was predictably linear, growing with the amount of training data, in all of them. Supervised feedback “had a worse, more extreme effect,” Schellaert says. The first model in the GPT family that almost completely stopped avoiding questions it didn’t have the answers to was text-davinci-003. It was also the first GPT model trained with reinforcement learning from human feedback.

The AIs lie because we told them that doing so was rewarding. One key question is when and how often we get lied to.

Making it harder

To answer this question, Schellaert and his colleagues built a set of questions in different categories like science, geography, and math. Then, they rated those questions based on how difficult they were for humans to answer, using a scale from 1 to 100. The questions were then fed into subsequent generations of LLMs, starting from the oldest to the newest. The AIs’ answers were classified as correct, incorrect, or evasive, meaning the AI refused to answer.
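
A minimal sketch of that bookkeeping, using made-up sample data and an arbitrary bin width rather than the paper's actual grading pipeline, might tally the three answer categories per difficulty band like this:

```python
"""Sketch: tally correct / incorrect / evasive answers per difficulty bin.
The sample data and bin width are illustrative; the study's own grading
pipeline and datasets are described in the Nature paper."""
from collections import Counter, defaultdict

# Each record: (human-rated difficulty from 1 to 100, label for the model's answer)
responses = [
    (12, "correct"), (35, "correct"), (48, "incorrect"),
    (55, "evasive"), (72, "incorrect"), (88, "incorrect"),
]

BIN_WIDTH = 20

def rates_by_difficulty(records):
    """Print the share of each answer label within each difficulty band."""
    bins = defaultdict(Counter)
    for difficulty, label in records:
        bins[(difficulty - 1) // BIN_WIDTH * BIN_WIDTH][label] += 1
    for lower in sorted(bins):
        counts = bins[lower]
        total = sum(counts.values())
        summary = ", ".join(
            f"{k}: {counts[k] / total:.0%}" for k in ("correct", "incorrect", "evasive")
        )
        print(f"difficulty {lower + 1}-{lower + BIN_WIDTH}: {summary}")

rates_by_difficulty(responses)
```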

The first finding was that the questions that appeared more difficult to us also proved more difficult for the AIs. The latest versions of ChatGPT gave correct answers to nearly all science-related prompts and the majority of geography-oriented questions up until they were rated roughly 70 on Schellaert’s difficulty scale. Addition was more problematic, with the frequency of correct answers falling dramatically after the difficulty rose above 40. “Even for the best models, the GPTs, the failure rate on the most difficult addition questions is over 90 percent. Ideally we would hope to see some avoidance here, right?” says Schellaert. But we didn’t see much avoidance.

Instead, in more recent versions of the AIs, the evasive “I don’t know” responses were increasingly replaced with incorrect ones. And due to supervised training used in later generations, the AIs developed the ability to sell those incorrect answers quite convincingly. Out of the three LLM families Schellaert’s team tested, BLOOM and Meta’s LLaMA have released the same versions of their models with and without supervised learning. In both cases, supervised learning resulted in a higher number of correct answers, but also in a higher number of incorrect answers and reduced avoidance. The more difficult the question and the more advanced the model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.

Back to the roots

One of the last things Schellaert’s team did in their study was to check how likely people were to take the incorrect AI answers at face value. They did an online survey and asked 300 participants to evaluate multiple prompt-response pairs coming from the best performing models in each family they tested.

ChatGPT emerged as the most effective liar. The incorrect answers it gave in the science category were qualified as correct by over 19 percent of participants. It managed to fool nearly 32 percent of people in geography and over 40 percent in transforms, a task where an AI had to extract and rearrange information present in the prompt. ChatGPT was followed by Meta’s LLaMA and BLOOM.

“In the early days of LLMs, we had at least a makeshift solution to this problem. The early GPT interfaces highlighted parts of their responses that the AI wasn’t certain about. But in the race to commercialization, that feature was dropped,” said Schellaert.

“There is an inherent uncertainty present in LLMs’ answers. The most likely next word in the sequence is never 100 percent likely. This uncertainty could be used in the interface and communicated to the user properly,” says Schellaert. Another thing he thinks can be done to make LLMs less deceptive is handing their responses over to separate AIs trained specifically to search for deceptions. “I’m not an expert in designing LLMs, so I can only speculate what exactly is technically and commercially viable,” he adds.
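
As a sketch of how that per-token uncertainty could be surfaced in an interface, assuming access to the log-probabilities of each generated token (the generate_with_logprobs call in the comment is a hypothetical stand-in, not a specific vendor's API):

```python
"""Sketch: turn per-token log-probabilities into a crude confidence flag.
The threshold is an arbitrary choice, and `generate_with_logprobs` is a
hypothetical placeholder for whatever API exposes token log-probabilities."""
import math

def confidence_from_logprobs(token_logprobs: list[float]) -> float:
    """Geometric-mean probability of the generated tokens (0 to 1)."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def annotate(answer: str, token_logprobs: list[float], threshold: float = 0.6) -> str:
    """Append a warning when the model's average token probability is low."""
    score = confidence_from_logprobs(token_logprobs)
    if score < threshold:
        return f"{answer}\n[low model confidence: {score:.2f}; verify independently]"
    return answer

# Hypothetical usage:
# answer, logprobs = generate_with_logprobs("How much is 20 + 183?")
# print(annotate(answer, logprobs))
```

A single averaged score is a blunt instrument compared to highlighting individual uncertain spans, but it illustrates the kind of signal the early GPT interfaces exposed and current products do not.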

It’s going to take some time, though, before the companies that are developing general-purpose AIs do something about it, either of their own accord or if forced by future regulations. In the meantime, Schellaert has some suggestions on how to use them effectively. “What you can do today is use AI in areas where you are an expert yourself or at least can verify the answer with a Google search afterwards. Treat it as a helping tool, not as a mentor. It’s not going to be a teacher that proactively shows you where you went wrong. Quite the opposite. When you nudge it enough, it will happily go along with your faulty reasoning,” Schellaert says.

Nature, 2024. DOI: 10.1038/s41586-024-07930-y


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


Popular Juicebox EV home chargers to lose connectivity as owner quits US

Owners of the popular home EV chargers made by Juicebox are about to lose a whole lot of features. Its owner, the energy company Enel X, has just announced that it is leaving the North American market entirely as of October 11.

Enel X says its strategy will be to pursue “further growth by providing bundled offers, including private charging solutions, to its electricity customers as well as by developing public charging infrastructure in countries where it has an electricity retail business.” And since it does not have an electricity business in the US, merely a charging hardware and software one, it makes little sense to remain active here.

The company also blames high interest rates and a cooling EV market as reasons for its exit.

Enel X says Juicebox residential hardware will continue to work, so if you’ve been using one to charge at home, you can keep plugging it in. But Enel X is ending all software support—there will be no updates, and it’s removing its apps, so online functions like scheduling a charge will no longer work.

Commercial charging stations will be worse affected—according to Enel X, these “will lose functionality in the absence of software continuity.” The company also says its customer support is no longer available, effective immediately, and any questions or claims should be directed to juiceboxnorthamerica.com.


Identity, Endpoint, and Network Security Walk into a Bar

Against a macro backdrop of platformization and convergence, the industry is exploring places where identity security, endpoint security, and network security naturally meet. That intersection is the browser.

The Browser: The Intersection of Identity, Endpoint, and Network Security

Why?

  • To deliver identity security, a solution must be tied to users and their permissions, authorization, and authentication.
  • To deliver endpoint security, it must run on the endpoint or be able to secure the endpoint itself.
  • To deliver network security, it must manage most (if not all) ingress and egress traffic.

The browser meets all of these requirements. It runs on the user’s endpoint, its whole purpose is to make and receive web requests, and as it’s only used by human agents, it intrinsically uses identity elements.

Secure enterprise browsing solutions can considerably improve security posture while also simplifying the technology stack. Injecting security functions in the most used application means that end users do not experience additional friction introduced by other security products. This is an appealing proposition, so we expect that the adoption of enterprise browsers will very likely increase considerably over the next few years.

So, what does it mean? As they can enforce security policies for users accessing web resources, secure enterprise browsing solutions can replace clunkier secure access solutions (those that require routing traffic through proxies or inserting more appliances) such as virtual private networks, secure web gateways, virtual desktop infrastructure, remote browser isolation, and cloud access security brokers.

What it doesn’t mean is that you can replace your EDR, your firewalls, or your identity security solutions. On the contrary, secure enterprise browsing solutions work best in conjunction with these. For example, the solutions can inherit identity and access management user attributes and security policies, while integrations with EDR solutions can help with OS-level controls.
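
As a purely illustrative sketch of how identity attributes inherited from an IAM or EDR integration might feed a browser-level allow/block/isolate decision (the attribute names, URL categories, and rules below are hypothetical, not any vendor's actual policy schema):

```python
"""Sketch: combine identity attributes with a URL-category policy to make
an allow/block/isolate decision in the browser. Attribute names, categories,
and rules are hypothetical, not a real product's schema."""
from dataclasses import dataclass

@dataclass
class User:
    department: str
    device_managed: bool  # e.g., an attribute inherited from an EDR/MDM integration

POLICY = {
    "finance-apps": {"allow_departments": {"finance"}, "require_managed_device": True},
    "file-sharing": {"allow_departments": {"it", "legal"}, "require_managed_device": True},
}

def decide(user: User, url_category: str) -> str:
    """Return 'allow', 'block', or 'isolate' for a browsing request."""
    rule = POLICY.get(url_category)
    if rule is None:
        return "allow"  # uncategorized traffic passes through
    if user.department not in rule["allow_departments"]:
        return "block"
    if rule["require_managed_device"] and not user.device_managed:
        return "isolate"  # e.g., read-only or remotely isolated session
    return "allow"

print(decide(User("finance", device_managed=False), "finance-apps"))  # -> isolate
```

In a real deployment, these decisions would come from the vendor's policy engine and the organization's identity provider rather than hand-written rules; the point is simply that the browser is the one place where the user, the device, and the traffic are all visible at once.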

The Browser’s Bidirectional Magic

Users are both something to protect and to be protected from. With the browser controlling both ingress and egress traffic, it can secure multiple types of interactions, namely:

  • Protecting end users from malicious web resources and phishing attacks.
  • Protecting enterprises from negligent users.
  • Protecting enterprises from malicious insiders.
  • Protecting enterprises from compromised accounts.

I am not aware of any other type of solution on the market that can deliver all of the above with a single product. A secure browsing solution can fill many gaps in an organization’s security architecture, for both small and large organizations.

The market is still in the early stages, so the most responsible way of deploying these solutions is as an add-on to your current security stack. As these solutions mature and prove their efficacy in the real world, they can support a mandate to replace other security solutions that are either inadequate or obsolete.

Next Steps

To learn more, take a look at GigaOm’s secure enterprise browsing solutions Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.
