Here’s your chance to own a decommissioned US government supercomputer

But can it run Crysis —

145,152-core Cheyenne supercomputer was 20th most powerful in the world in 2016.

Enlarge / A photo of the Cheyenne supercomputer, which is now up for auction.

On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price is currently $27,643, with the reserve not yet met.

The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful (and once considered energy-efficient) system that significantly advanced atmospheric and Earth system sciences research.

“In its lifetime, Cheyenne delivered over 7 billion core-hours, served over 4,400 users, and supported nearly 1,300 NSF awards,” writes the University Corporation for Atmospheric Research (UCAR) on its official Cheyenne information page. “It played a key role in education, supporting more than 80 university courses and training events. Nearly 1,000 projects were awarded for early-career graduate students and postdocs. Perhaps most tellingly, Cheyenne-powered research generated over 4,500 peer-reviewed publications, dissertations and theses, and other works.”

UCAR says that Cheyenne was originally slated to be replaced after five years, but the COVID-19 pandemic severely disrupted supply chains, and the system clocked in two extra years in its tour of duty. The auction page says that Cheyenne recently experienced maintenance limitations due to faulty quick disconnects in its cooling system. As a result, approximately 1 percent of the compute nodes have failed, primarily due to ECC errors in the DIMMs. Given the expense and downtime associated with repairs, the decision was made to auction off the components.

  • A photo gallery of the Cheyenne supercomputer up for auction.

With a peak performance of 5,340 teraflops (4,788 Linpack teraflops), this SGI ICE XA system was capable of performing over 3 billion calculations per second for every watt of energy consumed, making it three times more energy-efficient than its predecessor, Yellowstone. The system featured 4,032 dual-socket nodes, each with two 18-core, 2.3-GHz Intel Xeon E5-2697v4 processors, for a total of 145,152 CPU cores. It also included 313 terabytes of memory and 40 petabytes of storage. The entire system in operation consumed about 1.7 megawatts of power.

Just to compare, the world’s top-rated supercomputer at the moment—Frontier at Oak Ridge National Laboratory in Tennessee—features a theoretical peak performance of 1,679.82 petaflops, includes 8,699,904 CPU cores, and uses 22.7 megawatts of power.
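
For those keeping score, the efficiency claims check out with some quick back-of-the-envelope math; all the figures below come straight from the specs cited above:

```python
# Rough efficiency math using the figures cited in this article.
cheyenne_flops = 5_340e12      # 5,340 teraflops peak
cheyenne_watts = 1.7e6         # about 1.7 megawatts
frontier_flops = 1_679.82e15   # 1,679.82 petaflops peak
frontier_watts = 22.7e6        # 22.7 megawatts

# ~3.14 gigaflops per watt: the "over 3 billion calculations
# per second for every watt" cited above.
print(cheyenne_flops / cheyenne_watts / 1e9)

# ~74 gigaflops per watt: roughly 24 times Cheyenne's efficiency,
# eight years of hardware progress later.
print(frontier_flops / frontier_watts / 1e9)
```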

The GSA notes that potential buyers of Cheyenne should be aware that professional movers with appropriate equipment will be required to handle the heavy racks and components. The auction includes seven E-Cell pairs (14 total), each with a cooling distribution unit (CDU). Each E-Cell weighs approximately 1,500 lbs. Additionally, the auction features two air-cooled Cheyenne Management Racks, each weighing 2,500 lbs, that contain servers, switches, and power units.

As of this writing, 12 potential buyers have bid on this computing monster. If you’re interested in bidding, the auction closes on May 5 at 6:11 pm Central Time. But don’t get too excited by photos of the extensive cabling: As the auction site notes, “fiber optic and CAT5/6 cabling are excluded from the resale package.”

Health care giant comes clean about recent hack and paid ransom

HEALTH CARE PROVIDER, HEAL THYSELF —

Ransomware attack on the $371 billion company hamstrung US prescription market.

Change Healthcare, the health care services provider that recently experienced a ransomware attack that hamstrung the US prescription market for two weeks, was hacked through a compromised account that failed to use multifactor authentication, the company CEO told members of Congress.

The February 21 attack by a ransomware group using the names ALPHV or BlackCat took down a nationwide network Change Healthcare administers to allow healthcare providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, payment processors, providers, and patients experienced long delays in filling prescriptions for medicines, many of which were lifesaving. Change Healthcare has also reported that hackers behind the attacks obtained personal health information for a “substantial portion” of the US population.

Standard defense not in place

Andrew Witty, CEO of Change Healthcare parent company UnitedHealth Group, said the breach started on February 12 when hackers somehow obtained an account password for a portal allowing remote access to employee desktop devices. The account, Witty admitted, failed to use multifactor authentication (MFA), a standard defense against password compromises that requires additional authentication in the form of a one-time password or physical security key.
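
For the unfamiliar, here’s a rough sketch of what that extra step looks like in code. It uses the third-party pyotp library and a freshly generated placeholder secret; it illustrates TOTP-style MFA generally, not Change Healthcare’s actual setup:

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# In a real deployment the secret is provisioned once per user and
# stored server-side; here it is generated just for illustration.
import pyotp

secret = pyotp.random_base32()   # shared with the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                # the six-digit code the app displays
print(totp.verify(code))         # True: a stolen password alone is no longer enough
```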

“The portal did not have multi-factor authentication,” Witty wrote in comments submitted before his scheduled testimony on Wednesday to the House Energy and Commerce Committee’s Subcommittee on Oversight and Investigations. “Once the threat actor gained access, they moved laterally within the systems in more sophisticated ways and exfiltrated data.” Witty is also scheduled to appear at a separate Wednesday hearing before the Senate Committee on Finance.

Witty didn’t explain why the account, on a portal platform provided by software maker Citrix, wasn’t configured to use MFA. The failure is likely to be a major focus during Wednesday’s hearing.

After burrowing into the Change Healthcare network undetected for nine days, the attackers deployed ransomware that prevented the company from accessing its IT environment. In response, the company severed its connection to its data centers. The company spent the next two weeks rebuilding its entire IT infrastructure “from the ground up.” In the process, it replaced thousands of laptops, rotated credentials, and added new server capacity. By March 7, 99 percent of pre-incident pharmacies were once again able to process claims.

Witty also publicly confirmed that Change Healthcare paid a ransom, a practice that critics say incentivizes ransomware groups who often fail to make good on promises to destroy stolen data. According to communications uncovered by Dmitry Smilyanets, product management director at security firm Recorded Future, Change Healthcare paid $22 million to ALPHV. Principal members of the group then pocketed the funds rather than sharing them with an affiliate group that did the actual hacking, as spelled out in a pre-existing agreement. The affiliate group published some of the stolen data, largely validating a chief criticism of ransomware payments.

“As chief executive officer, the decision to pay a ransom was mine,” Witty wrote. “This was one of the hardest decisions I’ve ever had to make. And I wouldn’t wish it on anyone.”

Bleeping Computer reported that Change Healthcare may have paid both ALPHV and the affiliate through a group calling itself RansomHub.

Two weeks ago, UnitedHealth Group reported that the ransomware attack resulted in an $872 million cost in its first quarter. That amount included $593 million in direct response costs and $279 million in business disruptions. Witty’s written testimony added that as of last Friday, his company had advanced more than $6.5 billion in accelerated payments and no-interest, no-fee loans to thousands of providers that were left financially struggling during the prolonged outage. UnitedHealth Group reported $99.8 billion in sales for the quarter and an annual revenue of $371.6 billion in 2023.

Payment processing by Change Healthcare is currently about 86 percent of its pre-incident levels and will increase as the company further restores its systems, Witty said. The number of pharmacies it serves remains a “fraction of a percent” below pre-incident levels.

AWS S3 storage bucket with unlucky name nearly cost developer $1,300

Not that kind of bucket list —

Amazon says it’s working on stopping others from “making your AWS bill explode.”

Enlarge / Be careful with the buckets you put out there for anybody to fill.

Getty Images

If you’re using Amazon Web Services and your S3 storage bucket can be reached from the open web, you’d do well not to pick a generic name for that space. Avoid “example,” skip “change_me,” don’t even go with “foo” or “bar.” Someone else with the same “change this later” thinking can cost you a MacBook’s worth of cash.

Ask Maciej Pocwierz, who just happened to pick an S3 name that “one of the popular open-source tools” used for its default backup configuration. After setting up the bucket for a client project, he checked his billing page and found nearly 100 million unauthorized attempts to create new files on his bucket (PUT requests) within one day. The bill was over $1,300 and counting.

Nothing, nothing, nothing, nothing, nothing … nearly 100 million unauthorized requests.

“All this actually happened just a few days after I ensured my client that the price for AWS services will be negligible, like $20 at most for the entire month,” Pocwierz wrote over chat. “I explained the situation is very unusual but it definitely looked as if I didn’t know what I’m doing.”

Pocwierz declined to name the open source tool that inadvertently bum-rushed his S3 account. In a Medium post about the matter, he noted another consequence of the unlucky default name: after briefly turning on public writes, he collected more than 10GB of data in less than 30 seconds. Other people’s data, that is, and they had no idea Pocwierz was receiving it.

Some of that data came from companies with customers, which is part of why Pocwierz is keeping the specifics under wraps. He wrote to Ars that he contacted some of the companies that either tried or successfully backed up their data to his bucket, and “they completely ignored me.” “So now instead of having this fixed, their data is still at risk,” Pocwierz writes. “My lesson is if I ever run a company, I will definitely have a bug bounty program, and I will treat such warnings seriously.”

As for Pocwierz’s accounts, both S3 and bank, it mostly ended well. An AWS representative reached out on LinkedIn and canceled his bill, he said, and he was told that anybody can request refunds for excessive unauthorized requests. “But they didn’t explicitly say that they will necessarily approve it,” he wrote. He noted in his Medium post that AWS “emphasized that this was done as an exception.”

In response to Pocwierz’s story, Jeff Barr, chief evangelist for AWS at Amazon, tweeted that “We agree that customers should not have to pay for unauthorized requests that they did not initiate.” Barr added that Amazon would have more to share on how the company could prevent them “shortly.” AWS has a brief explainer and contact page on unexpected AWS charges.

The open source tool did change its default configuration after Pocwierz contacted its developers. Pocwierz suggested to AWS that it should restrict anyone else from creating a bucket name like his, but he had yet to hear back about it. He suggests in his blog post that adding a random suffix to your bucket name and explicitly specifying your AWS region can help you avoid massive charges like the one he narrowly dodged.
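
For the AWS-inclined, that advice translates to something like the following boto3 sketch; the project name and region are placeholders of ours, not anything from Pocwierz’s setup:

```python
# Create an S3 bucket with a random suffix and an explicitly pinned
# region, per Pocwierz's advice. "myproject" and the region are
# example values; the random suffix makes a collision with some
# tool's hardcoded default name vanishingly unlikely.
import secrets
import boto3

region = "eu-central-1"
bucket_name = f"myproject-backups-{secrets.token_hex(4)}"  # e.g. myproject-backups-1f9c03ab

s3 = boto3.client("s3", region_name=region)
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": region},  # required outside us-east-1
)
print(f"created {bucket_name} in {region}")
```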

Mysterious “gpt2-chatbot” AI model appears suddenly, confuses experts

Robot fortune teller hand and crystal ball

On Sunday, word began to spread on social media about a new mystery chatbot named “gpt2-chatbot” that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI’s upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.

Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site’s “side-by-side” arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day—dramatically limiting people’s ability to test it in detail.

So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5—or perhaps a new version of 2019’s GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, “i do have a soft spot for gpt2.”

Enlarge / A screenshot of the LMSYS Chatbot Arena “side-by-side” page showing “gpt2-chatbot” listed among the models for testing. (Red highlight added by Ars Technica.)

Benj Edwards

Early reports of the model first appeared on 4chan, then spread to social media platforms like X, with hype following not far behind. “Not only does it seem to show incredible reasoning, but it also gets notoriously challenging AI questions right with a much more impressive tone,” wrote AI developer Pietro Schirano on X. Soon, threads on Reddit popped up claiming that the new model had amazing abilities that beat every other LLM on the Arena.

Intrigued by the rumors, we decided to try out the new model for ourselves but did not come away impressed. When asked about “Benj Edwards,” the model revealed a few mistakes and some awkward language compared to GPT-4 Turbo’s output. A request for five original dad jokes fell short. And the gpt2-chatbot did not decisively pass our “magenta” test. (“Would the color be called ‘magenta’ if the town of Magenta didn’t exist?”)

  • A gpt2-chatbot result for “Who is Benj Edwards?” on LMSYS Chatbot Arena. Mistakes and oddities highlighted in red.
  • A gpt2-chatbot result for “Write 5 original dad jokes” on LMSYS Chatbot Arena.
  • A gpt2-chatbot result for “Would the color be called ‘magenta’ if the town of Magenta didn’t exist?” on LMSYS Chatbot Arena.

Benj Edwards

So, whatever it is, it’s probably not GPT-5. We’ve seen other people reach the same conclusion after further testing, saying that the new mystery chatbot doesn’t seem to represent a large capability leap beyond GPT-4. “Gpt2-chatbot is good. really good,” wrote HyperWrite CEO Matt Shumer on X. “But if this is gpt-4.5, I’m disappointed.”

Still, OpenAI’s fingerprints seem to be all over the new bot. “I think it may well be an OpenAI stealth preview of something,” AI researcher Simon Willison told Ars Technica. But what “gpt2” is exactly, he doesn’t know. After surveying online speculation, it seems that no one apart from its creator knows precisely what the model is, either.

Willison has uncovered the system prompt for the AI model, which claims it is based on GPT-4 and made by OpenAI. But as Willison noted in a tweet, that’s no guarantee of provenance because “the goal of a system prompt is to influence the model to behave in certain ways, not to give it truthful information about itself.”

Critics question tech-heavy lineup of new Homeland Security AI safety board

Adventures in 21st century regulation —

CEO-heavy board to tackle elusive AI safety concept and apply it to US infrastructure.

A modified photo of a 1956 scientist carefully bottling

On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.

It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”

So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.

A roundtable of Big Tech CEOs attracts criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board composition. On LinkedIn, Timnit Gebru, founder of The Distributed AI Research Institute (DAIR), especially criticized OpenAI’s presence on the board and wrote, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”

UK outlaws awful default passwords on connected devices

Tacking an S onto IoT —

The law aims to prevent global-scale botnet attacks.

If you build a gadget that connects to the Internet and sell it in the United Kingdom, you can no longer make the default password “password.” In fact, you’re not supposed to have default passwords at all.

A new version of the 2022 Product Security and Telecommunications Infrastructure (PSTI) Act is now in effect, covering just about everything that a consumer can buy that connects to the web. Under the guidelines, even the tiniest Wi-Fi board must either have a randomized password or else generate a password upon initialization (through a smartphone app or other means). This password can’t be incremental (“password1,” “password54”), and it can’t be “related in an obvious way to public information,” such as MAC addresses or Wi-Fi network names. A device must also be sufficiently resistant to brute-force attacks, including credential stuffing, and must have a “simple mechanism” for changing the password.
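
The law prescribes outcomes rather than code, but a toy validator gives a feel for the kinds of checks implied; the blocked patterns and inputs below are our own illustrations, not anything from the act:

```python
# Toy checks in the spirit of the UK rules: no common defaults, no
# incremental passwords, nothing derived from public device info.
import re

COMMON_DEFAULTS = {"password", "admin", "12345678", "letmein"}

def acceptable_default(pw: str, mac: str, ssid: str) -> bool:
    lowered = pw.lower()
    if lowered in COMMON_DEFAULTS:
        return False
    if re.fullmatch(r"password\d+", lowered):      # incremental: "password1", "password54"
        return False
    if mac.replace(":", "").lower() in lowered:    # related to the MAC address
        return False
    if ssid.lower() in lowered:                    # related to the Wi-Fi network name
        return False
    return True

print(acceptable_default("password54", "AA:BB:CC:DD:EE:FF", "LivingRoomAP"))   # False
print(acceptable_default("t7#Vq9!zLp2", "AA:BB:CC:DD:EE:FF", "LivingRoomAP"))  # True
```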

There’s more, and it’s just as head-noddingly obvious. Software components, where reasonable, “should be securely updateable,” should actually check for updates, and should update either automatically or in a way “simple for the user to apply.” Perhaps most importantly, device owners can report security issues and expect to hear back about how that report is being handled.

Violations of the new device laws can result in fines up to 10 million pounds (roughly $12.5 million) or 4 percent of related worldwide revenue, whichever is higher.

Besides giving consumers better devices, these regulations are aimed squarely at malware like Mirai, which can conscript devices like routers, cable modems, and DVRs into armies capable of performing distributed denial-of-service attacks (DDoS) on various targets.

As noted by The Record, the European Union’s Cyber Resilience Act has been shaped but not yet passed and enforced, and even if it does pass, it would not take effect until 2027. In the US, there is the Cyber Trust Mark, which would at least give customers a way to tell decently secured devices from effectively abandoned ones. But the particulars of that label are under debate and seemingly a ways from implementation. At the federal level, a 2020 bill tasked the National Institute of Standards and Technology with applying related standards to connected devices deployed by the federal government.

Account compromise of “unprecedented scale” uses everyday home devices

STUFF THIS —

Credential-stuffing attack uses proxies to hide bad behavior.

Authentication service Okta is warning about the “unprecedented scale” of an ongoing campaign that routes fraudulent login requests through the mobile devices and browsers of everyday users in an attempt to conceal the malicious behavior.

The attack, Okta said, uses other means to camouflage the login attempts as well, including the Tor network and so-called proxy services from providers such as NSOCKS, Luminati, and DataImpulse, which can also harness users’ devices without their knowledge. In some cases, the affected mobile devices are running malicious apps. In other cases, users have enrolled their devices in proxy services in exchange for various incentives.

Unidentified adversaries then use these devices in credential-stuffing attacks, which use large lists of login credentials obtained from previous data breaches in an attempt to access online accounts. Because the requests come from IP addresses and devices with good reputations, network security devices don’t give them the same level of scrutiny as logins from virtual private servers (VPS) that come from hosting services threat actors have used for years.

“The net sum of this activity is that most of the traffic in these credential-stuffing attacks appears to originate from the mobile devices and browsers of everyday users, rather than from the IP space of VPS providers,” according to an advisory that Okta published over the weekend.

Okta’s advisory comes two weeks after Cisco’s Talos security team reported seeing a large-scale credential compromise campaign that was indiscriminately assailing networks with login attempts aimed at gaining unauthorized access to VPN, SSH, and web application accounts. These login attempts used both generic and valid usernames targeted at specific organizations. Cisco included a list of more than 2,000 usernames and almost 100 passwords used in the attacks, along with nearly 4,000 IP addresses that are sending the login traffic. The attacks led to hundreds of thousands or even millions of rejected authentication attempts.

Within days of Cisco’s report, Okta’s Identity Threat Research team observed a spike in credential-stuffing attacks that appeared to use a similar infrastructure. Okta said the spike lasted from April 19 through April 26, the day the company published its advisory.

Okta officials wrote:

Residential Proxies are networks of legitimate user devices that route traffic on behalf of a paid subscriber. Providers of residential proxies effectively rent access to route authentication requests through the computer, smartphone, or router of a real user, and proxy traffic through the IP of these devices to anonymize the source of the traffic.

Residential Proxy providers don’t tend to advertise how they build these networks of real user devices. Sometimes a user device is enrolled in a proxy network because the user consciously chooses to download “proxyware” into their device in exchange for payment or something else of value. At other times, a user device is infected with malware without the user’s knowledge and becomes enrolled in what we would typically describe as a botnet. More recently, we have observed a large number of mobile devices used in proxy networks where the user has downloaded a mobile app developed using compromised SDKs (software development kits). Effectively, the developers of these apps have consented to or have been tricked into using an SDK that enrolls the device of any user running the app in a residential proxy network.

People who want to ensure that malicious behavior isn’t routed through their devices or networks should pay close attention to the apps they install and the services they enroll in. Free or discounted services may be contingent on a user agreeing to terms of service that allow their networks or devices to proxy traffic from others. Malicious apps may also surreptitiously provide such proxy services.

Okta provides guidance for network administrators to repel credential-stuffing attacks. Chief among the recommendations is protecting accounts with a strong password—meaning one randomly generated and consisting of at least 11 characters. Accounts should also use multifactor authentication, ideally in a form that is compliant with the FIDO industry standard. The Okta advisory also includes advice for blocking malicious behavior from anonymizing proxy services.
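
Meeting that bar is nearly a one-liner with Python’s standard-library secrets module; the 16-character length below is simply a comfortable margin over Okta’s 11-character floor:

```python
# Randomly generated password along the lines of Okta's guidance
# (at least 11 characters, randomly generated).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # e.g. 'r$Q}7hZ0v!Fm2@xW'
```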

Hackers try to exploit WordPress plugin vulnerability that’s as severe as it gets

GOT PATCHES? —

WP Automatic plugin patched, but release notes don’t mention the critical fix.

Hackers are assailing websites using a prominent WordPress plugin with millions of attempts to exploit a high-severity vulnerability that allows complete takeover, researchers said.

The vulnerability resides in WordPress Automatic, a plugin with more than 38,000 paying customers. Websites running the WordPress content management system use it to incorporate content from other sites. Researchers from security firm Patchstack disclosed last month that WP Automatic versions 3.92.0 and below had a vulnerability with a severity rating of 9.9 out of a possible 10. The plugin developer, ValvePress, silently published a patch, which is available in versions 3.92.1 and beyond.

Researchers have classified the flaw, tracked as CVE-2024-27956, as a SQL injection, a class of vulnerability that stems from a failure by a web application to query backend databases properly. SQL syntax uses apostrophes to indicate the beginning and end of a data string. By entering strings with specially positioned apostrophes into vulnerable website fields, attackers can execute code that performs various sensitive actions, including returning confidential data, granting administrative system privileges, or subverting how the web app works.
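
To see why those apostrophes matter—as a generic illustration of the bug class, not the plugin’s actual code—compare a string-built query against a parameterized one using an in-memory SQLite database:

```python
# Generic SQL injection demo (not the actual WP-Automatic code).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the payload's apostrophes end the string early, and the
# injected OR clause matches every row in the table.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # [('alice', 0)]

# Safe: the placeholder binds the payload as inert data; nothing matches.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```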

“This vulnerability is highly dangerous and expected to become mass exploited,” Patchstack researchers wrote on March 13.

Fellow web security firm WPScan said Thursday that it has logged more than 5.5 million attempts to exploit the vulnerability since the March 13 disclosure by Patchstack. The attempts, WPScan said, started slowly and peaked on March 31. The firm didn’t say how many of those attempts succeeded.

WPScan said that CVE-2024-27956 allows unauthenticated website visitors to create admin‑level user accounts, upload malicious files, and take full control of affected sites. The vulnerability, which resides in how the plugin handles user authentication, allows attackers to bypass the normal authentication process and inject SQL code that grants them elevated system privileges. From there, they can upload and execute malicious payloads that rename sensitive files to prevent the site owner or fellow hackers from controlling the hijacked site.

Successful attacks typically follow this process:

  • SQL Injection (SQLi): Attackers leverage the SQLi vulnerability in the WP‑Automatic plugin to execute unauthorized database queries.
  • Admin User Creation: With the ability to execute arbitrary SQL queries, attackers can create new admin‑level user accounts within WordPress.
  • Malware Upload: Once an admin‑level account is created, attackers can upload malicious files, typically web shells or backdoors, to the compromised website’s server.
  • File Renaming: Attackers may rename the vulnerable WP‑Automatic file to ensure that only they can exploit it.

WPScan researchers explained:

Once a WordPress site is compromised, attackers ensure the longevity of their access by creating backdoors and obfuscating the code. To evade detection and maintain access, attackers may also rename the vulnerable WP‑Automatic file, making it difficult for website owners or security tools to identify or block the issue. It’s worth mentioning that it may also be a way attackers find to avoid other bad actors to successfully exploit their already compromised sites. Also, since the attacker can use their acquired high privileges to install plugins and themes to the site, we noticed that, in most of the compromised sites, the bad actors installed plugins that allowed them to upload files or edit code.

The attacks began shortly after March 13, 15 days after ValvePress released version 3.92.1 without mentioning the critical patch in the release notes. ValvePress representatives didn’t immediately respond to a message seeking an explanation.

While researchers at Patchstack and WPScan are classifying CVE-2024-27956 as SQL injection, an experienced developer said his reading of the vulnerability is that it’s either improper authorization (CWE-285) or a subcategory of improper access control (CWE-284).

“According to Patchstack.com, the program is supposed to receive and execute an SQL query, but only from an authorized user,” the developer, who didn’t want to use his name, wrote in an online interview. “The vulnerability is in how it checks the user’s credentials before executing the query, allowing an attacker to bypass the authorization. SQL injection is when the attacker embeds SQL code in what was supposed to be only data, and that’s not the case here.”

Whatever the classification, the vulnerability is about as severe as it gets. Users should patch the plugin immediately. They should also carefully analyze their servers for signs of exploitation using the indicators of compromise data provided in the WPScan post linked above.

Apple releases eight small AI language models aimed at on-device use

Inside the Apple core —

OpenELM mirrors efforts by Microsoft to make useful small AI language models that run locally.

An illustration of a robot hand tossing an apple to a human hand.

Getty Images

In the world of AI, what might be called “small language models” have been growing in popularity recently because they can be run on a local device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They’re mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.

Apple’s new AI models, collectively named OpenELM for “Open-source Efficient Language Models,” are currently available on Hugging Face under an Apple Sample Code License. Since there are some restrictions in the license, it may not fit the commonly accepted definition of “open source,” but the source code for OpenELM is available.
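
For the curious, the checkpoints can be pulled down with the Hugging Face transformers library. The sketch below follows the pattern on Apple’s model cards at release (the apple/OpenELM-270M model ID and the trust_remote_code flag); treat it as a starting point, since details may change:

```python
# Load the smallest OpenELM checkpoint from Hugging Face. The model ID
# and the trust_remote_code requirement follow Apple's model card at
# release; this is a sketch, not gospel.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",
    trust_remote_code=True,  # OpenELM ships custom modeling code
)
print(sum(p.numel() for p in model.parameters()))  # on the order of 270 million
```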

On Tuesday, we covered Microsoft’s Phi-3 models, which aim to achieve something similar: a useful level of language understanding and processing performance in small AI models that can run locally. Phi-3-mini features 3.8 billion parameters, but some of Apple’s OpenELM models are much smaller, ranging from 270 million to 3 billion parameters in eight distinct models.

In comparison, the largest model yet released in Meta’s Llama 3 family includes 70 billion parameters (with a 400 billion version on the way), and OpenAI’s GPT-3 from 2020 shipped with 175 billion parameters. Parameter count serves as a rough measure of AI model capability and complexity, but recent research has focused on making smaller AI language models as capable as larger ones were a few years ago.

The eight OpenELM models come in two flavors: four “pretrained” models (basically raw, next-token-prediction versions) and four instruction-tuned models (fine-tuned for instruction following, which is better suited for developing AI assistants and chatbots).

OpenELM features a 2048-token maximum context window. The models were trained on the publicly available datasets RefinedWeb, a version of PILE with duplications removed, a subset of RedPajama, and a subset of Dolma v1.6, which Apple says totals around 1.8 trillion tokens of data. Tokens are fragmented representations of data used by AI language models for processing.

Apple says its approach with OpenELM includes a “layer-wise scaling strategy” that reportedly allocates parameters more efficiently across each layer, not only saving computational resources but also improving the model’s performance while being trained on fewer tokens. According to Apple’s released white paper, this strategy has enabled OpenELM to achieve a 2.36 percent improvement in accuracy over Allen AI’s OLMo 1B (another small language model) while requiring half as many pre-training tokens.
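
Apple’s exact parameterization lives in the paper, but the core idea is easy to sketch: instead of giving every transformer layer identical dimensions, interpolate values like attention-head counts between a floor and a ceiling across the depth. The numbers below are invented for illustration and are not OpenELM’s actual configuration:

```python
# Toy illustration of layer-wise scaling: per-layer head counts grow
# linearly from a minimum to a maximum instead of staying uniform.
# These values are made up, not OpenELM's real settings.
def layerwise_heads(num_layers: int, min_heads: int, max_heads: int) -> list[int]:
    heads = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)  # 0.0 at the first layer, 1.0 at the last
        heads.append(round(min_heads + t * (max_heads - min_heads)))
    return heads

print(layerwise_heads(num_layers=16, min_heads=4, max_heads=12))
# [4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12]
```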

Enlarge / A table comparing OpenELM with other small AI language models in a similar class, taken from the OpenELM research paper by Apple.

Apple

Apple also released the code for CoreNet, a library it used to train OpenELM—and it also included reproducible training recipes that allow the weights (neural network files) to be replicated, which is unusual for a major tech company. As Apple says in its OpenELM paper abstract, transparency is a key goal for the company: “The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks.”

By releasing the source code, model weights, and training materials, Apple says it aims to “empower and enrich the open research community.” However, it also cautions that since the models were trained on publicly sourced datasets, “there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts.”

While Apple has not yet integrated this new wave of AI language model capabilities into its consumer devices, the upcoming iOS 18 update (expected to be revealed in June at WWDC) is rumored to include new AI features that utilize on-device processing to ensure user privacy—though the company may potentially hire Google or OpenAI to handle more complex, off-device AI processing to give Siri a long-overdue boost.

Millions of IPs remain infected by USB worm years after its creators left it for dead

I’M NOT DEAD YET —

Ability of PlugX worm to live on presents a vexing dilemma: Delete it or leave it be.

A now-abandoned USB worm that backdoors connected devices has continued to self-replicate for years since its creators lost control of it and remains active on thousands, possibly millions, of machines, researchers said Thursday.

The worm—which first came to light in a 2023 post published by security firm Sophos—became active in 2019 when a variant of malware known as PlugX added functionality that allowed it to infect USB drives automatically. In turn, those drives would infect any new machine they connected to, a capability that allowed the malware to spread without requiring any end-user interaction. Researchers who have tracked PlugX since at least 2008 have said that the malware has origins in China and has been used by various groups tied to the country’s Ministry of State Security.

Still active after all these years

For reasons that aren’t clear, the worm creator abandoned the one and only IP address that was designated as its command-and-control channel. With no one controlling the infected machines anymore, the PlugX worm was effectively dead, or at least one might have presumed so. The worm, it turns out, has continued to live on in an undetermined number of machines that possibly reaches into the millions, researchers from security firm Sekoia reported.

The researchers purchased the IP address and connected their own server infrastructure to “sinkhole” traffic connecting to it, meaning intercepting the traffic to prevent it from being used maliciously. Since then, their server continues to receive PlugX traffic from 90,000 to 100,000 unique IP addresses every day. Over the span of six months, the researchers counted requests from nearly 2.5 million unique IPs. These sorts of requests are standard for virtually all forms of malware and typically happen at regular intervals that span from minutes to days. While the number of affected IPs doesn’t directly indicate the number of infected machines, the volume nonetheless suggests the worm remains active on thousands, possibly millions, of devices.

“We initially thought that we will have a few thousand victims connected to it, as what we can have on our regular sinkholes,” Sekoia researchers Felix Aimé and Charles M wrote. “However, by setting up a simple web server we saw a continuous flow of HTTP requests varying through the time of the day.”
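
In spirit, a sinkhole like this can start out as little more than the following sketch: a web server that answers every request and tallies unique source IPs. Sekoia’s real infrastructure surely logs far more context; this is purely illustrative:

```python
# Bare-bones sinkhole sketch: accept every HTTP request and count
# unique source IPs. Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

seen_ips: set[str] = set()

class SinkholeHandler(BaseHTTPRequestHandler):
    def _handle(self):
        seen_ips.add(self.client_address[0])
        self.send_response(200)
        self.end_headers()
        print(f"unique IPs so far: {len(seen_ips)}")

    do_GET = _handle
    do_POST = _handle

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SinkholeHandler).serve_forever()
```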

They went on to say that other variants of the worm remain active through at least three other command-and-control channels known in security circles. There are indications that one of them may also have been sinkholed, however.

As the image below shows, the machines reporting to the sinkhole have a broad geographic distribution:

Enlarge / A world map showing country IPs reporting to the sinkhole.

Sekoia

A sample of incoming traffic over a single day appeared to show that Nigeria hosted the largest concentration of infected machines, followed by India, Indonesia, and the UK.

Enlarge / Graph showing the countries with the most affected IPs.

Sekoia

The researchers wrote:

Based on that data, it’s notable that around 15 countries account for over 80% of the total infections. It’s also intriguing to note that the leading infected countries don’t share many similarities, a pattern observed with previous USB worms such as RETADUP which has the highest infection rates in Spanish spelling countries. This suggests the possibility that this worm might have originated from multiple patient zeros in different countries.

One explanation is that most of the biggest concentrations are in countries with coastlines where China’s government has significant infrastructure investments. Additionally, many of the most affected countries have strategic importance to Chinese military objectives. The researchers speculated that the purpose of the campaign was to collect intelligence the Chinese government could use to achieve those objectives.

The researchers noted that the zombie worm has remained susceptible to takeover by any threat actor who gains control of the IP address or manages to insert itself into the pathway between the server at that address and an infected device. That threat poses interesting dilemmas for the governments of affected countries. They could choose to preserve the status quo by taking no action, or they could activate a self-delete command built into the worm that would disinfect infected machines. Additionally, if they choose the latter option, they could elect to disinfect only the infected machines or add new functionality to disinfect any infected USB drives that happen to be connected.

Because of how the worm infects drives, disinfecting them risks deleting the legitimate data stored on them. On the other hand, allowing drives to remain infected makes it possible for the worm to start its proliferation all over again. Further complicating the decision-making process, the researchers noted that even if someone issues commands that disinfect any infected drives that happen to be plugged in, it’s inevitable that the worm will live on in drives that aren’t connected when a remote disinfect command is issued.

“Given the potential legal challenges that could arise from conducting a widespread disinfection campaign, which involves sending an arbitrary command to workstations we do not own, we have resolved to defer the decision on whether to disinfect workstations in their respective countries to the discretion of national Computer Emergency Response Teams (CERTs), Law Enforcement Agencies (LEAs), and cybersecurity authorities,” the researchers wrote. “Once in possession of the disinfection list, we can provide them an access to start the disinfection for a period of three months. During this time, any PlugX request from an Autonomous System marked for disinfection will be responded to with a removal command or a removal payload.”

Nation-state hackers exploit Cisco firewall 0-days to backdoor government networks

A stylized skull and crossbones made out of ones and zeroes.

Hackers backed by a powerful nation-state have been exploiting two zero-day vulnerabilities in Cisco firewalls in a five-month-long campaign that breaks into government networks around the world, researchers reported Wednesday.

The attacks against Cisco’s Adaptive Security Appliances firewalls are the latest in a rash of network compromises that target firewalls, VPNs, and network-perimeter devices, which are designed to provide a moated gate of sorts that keeps remote hackers out. Over the past 18 months, threat actors—mainly backed by the Chinese government—have turned this security paradigm on its head in attacks that exploit previously unknown vulnerabilities in security appliances from the likes of Ivanti, Atlassian, Citrix, and Progress. These devices are ideal targets because they sit at the edge of a network, provide a direct pipeline to its most sensitive resources, and interact with virtually all incoming communications.

Cisco ASA likely one of several targets

On Wednesday, it was Cisco’s turn to warn that its ASA products have received such treatment. Since November, a previously unknown actor tracked as UAT4356 by Cisco and STORM-1849 by Microsoft has been exploiting two zero-days in attacks that go on to install two pieces of never-before-seen malware, researchers with Cisco’s Talos security team said. Notable traits in the attacks include:

  • An advanced exploit chain that targeted multiple vulnerabilities, at least two of which were zero-days
  • Two mature, full-feature backdoors that have never been seen before, one of which resided solely in memory to prevent detection
  • Meticulous attention to hiding footprints by wiping any artifacts the backdoors may leave behind. In many cases, the wiping was customized based on characteristics of a specific target.

Those characteristics, combined with a small cast of selected targets all in government, have led Talos to assess that the attacks are the work of government-backed hackers motivated by espionage objectives.

“Our attribution assessment is based on the victimology, the significant level of tradecraft employed in terms of capability development and anti-forensic measures, and the identification and subsequent chaining together of 0-day vulnerabilities,” Talos researchers wrote. “For these reasons, we assess with high confidence that these actions were performed by a state-sponsored actor.”

The researchers also warned that the hacking campaign is likely targeting other devices besides the ASA. Notably, the researchers said they still don’t know how UAT4356 gained initial access, meaning the ASA vulnerabilities could be exploited only after one or more other currently unknown vulnerabilities—likely in network wares from Microsoft and others—were exploited.

“Regardless of your network equipment provider, now is the time to ensure that the devices are properly patched, logging to a central, secure location, and configured to have strong, multi-factor authentication (MFA),” the researchers wrote. Cisco has released security updates that patch the vulnerabilities and is urging all ASA users to install them promptly.

UAT4356 started work on the campaign no later than last July when it was developing and testing the exploits. By November, the threat group first set up the dedicated server infrastructure for the attacks, which began in earnest in January. The following image details the timeline:

Cisco

One of the vulnerabilities, tracked as CVE-2024-20359, resides in a now-retired capability allowing for the preloading of VPN clients and plug-ins in ASA. It stems from improper validation of files when they’re read from the flash memory of a vulnerable device and allows for remote code execution with root system privileges when exploited. UAT4356 is exploiting it to install backdoors Cisco tracks under the names Line Dancer and Line Runner. In at least one case, the threat actor is installing the backdoors by exploiting CVE-2024-20353, a separate ASA vulnerability with a severity rating of 8.6 out of a possible 10.

Deepfakes in the courtroom: US judicial panel debates new AI evidence rules

adventures in 21st-century justice —

Panel of eight judges confronts deep-faking AI tech that may undermine legal trials.

An illustration of a man with a very long nose holding up the scales of justice.

On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.

The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.

In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials:

A deepfake is an inauthentic audiovisual presentation prepared by software programs using artificial intelligence. Of course, photos and videos have always been subject to forgery, but developments in AI make deepfakes much more difficult to detect. Software for creating deepfakes is already freely available online and fairly easy for anyone to use. As the software’s usability and the videos’ apparent genuineness keep improving over time, it will become harder for computer systems, much less lay jurors, to tell real from fake.

During Friday’s three-hour hearing, the panel wrestled with the question of whether existing rules, which predate the rise of generative AI, are sufficient to ensure the reliability and authenticity of evidence presented in court.

Some judges on the panel, such as US Circuit Judge Richard Sullivan and US District Judge Valerie Caproni, reportedly expressed skepticism about the urgency of the issue, noting that there have been few instances so far of judges being asked to exclude AI-generated evidence.

“I’m not sure that this is the crisis that it’s been painted as, and I’m not sure that judges don’t have the tools already to deal with this,” said Judge Sullivan, as quoted by Reuters.

Last year, Chief US Supreme Court Justice John Roberts acknowledged the potential benefits of AI for litigants and judges, while emphasizing the need for the judiciary to consider its proper uses in litigation. US District Judge Patrick Schiltz, the evidence committee’s chair, said that determining how the judiciary can best react to AI is one of Roberts’ priorities.

In Friday’s meeting, the committee considered several deepfake-related rule changes. In the agenda for the meeting, US District Judge Paul Grimm and attorney Maura Grossman proposed modifying Federal Rule 901(b)(9) (see page 5), which involves authenticating or identifying evidence. They also recommended the addition of a new rule, 901(c), which might read:

901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

The panel agreed during the meeting that this proposal to address concerns about litigants challenging evidence as deepfakes did not work as written and that it will be reworked before being reconsidered later.

Another proposal by Andrea Roth, a law professor at the University of California, Berkeley, suggested subjecting machine-generated evidence to the same reliability requirements as expert witnesses. However, Judge Schiltz cautioned that such a rule could hamper prosecutions by allowing defense lawyers to challenge any digital evidence without establishing a reason to question it.

For now, no definitive rule changes have been made, and the process continues. But we’re witnessing the first steps of how the US justice system will adapt to an entirely new class of media-generating technology.

Putting aside risks from AI-generated evidence, generative AI has led to embarrassing moments for lawyers in court over the past two years. In May 2023, US lawyer Steven Schwartz of the firm Levidow, Levidow, & Oberman apologized to a judge for using ChatGPT to help write court filings that inaccurately cited six nonexistent cases, leading to serious questions about the reliability of AI in legal research. Also, in November, a lawyer for Michael Cohen cited three fake cases that were potentially influenced by a confabulating AI assistant.
