After 114 days of change, Broadcom CEO acknowledges VMware-related “unease”

M&A pains —

“There’s more to come.”

A Broadcom sign outside one of its offices.

Broadcom CEO and President Hock Tan has acknowledged the discomfort VMware customers and partners have experienced after the sweeping changes that Broadcom has instituted since it acquired the virtualization company 114 days ago.

In a blog post Thursday, Tan noted that Broadcom spent 18 months evaluating and buying VMware. He said that while there’s still a lot of work to do, the company has made “substantial progress.”

That so-called progress, though, has worried some of Broadcom’s customers and partners.

Tan wrote:

Of course, we recognize that this level of change has understandably created some unease among our customers and partners. But all of these moves have been with the goals of innovating faster, meeting our customers’ needs more effectively, and making it easier to do business with us.

Tan believes that the changes will ultimately “provide greater profitability and improved market opportunities” for channel partners. However, many IT solution provider businesses that were working with VMware have already been disrupted.

For example, after buying VMware, Broadcom took over the top 2,000 VMware accounts from VMware channel partners. In a March earnings call, Tan said that Broadcom has been focused on upselling those customers. He also said Broadcom expects VMware revenue to grow double-digits quarter over quarter for the rest of the fiscal year.

Beyond that, Broadcom ended the VMware channel partner program, making the primary path to reselling VMware an invite-only Broadcom program.

Additionally, Broadcom’s elimination of VMware perpetual licensing has reportedly upended financials for numerous businesses. In a March “User Group Town Hall,” attendees complained about “price rises of 500 and 600 percent,” The Register reported. In February, ServeTheHome reported that “smaller” managed service providers focusing on cloud services were seeing the price of working with VMware increase tenfold. “They do not have the revenue nor ability to charge for that kind of price increase, especially this rapidly,” ServeTheHome reported.

By contrast, Tan recently saw a financial windfall, making the equivalent of more than double his 2022 salary in 2023. A US Securities and Exchange Commission filing showed that Broadcom paid Tan $161.8 million, including $160.5 million in stock that will vest over the next five years (Tan isn’t eligible for more bonus payouts until 2028). Broadcom announced its $69 billion VMware acquisition in May 2022 and closed the deal in late November 2023.

In his blog post, Tan defended the subscription-only licensing model, calling it “the industry standard.” He said VMware started accelerating its transition to this strategy in 2019, before Broadcom bought VMware. He also linked to a February blog post from VMware’s Prashanth Shenoy, VP of product and technical marketing for the Cloud, Infrastructure, Platforms, and Solutions group at VMware, that also noted acquisition-related “concerns” but claimed the evolution would be fiscally prudent.

Other Broadcom-led changes to VMware over the past 114 days include cutting at least 2,800 VMware jobs, shuttering the free version of ESXi, planning to sell VMware’s End User Computing business to KKR, and committing to spend $1 billion on VMware R&D.

Hackers can read private AI-assistant chats even though they’re encrypted

CHATBOT KEYLOGGING —

All non-Google chat GPTs affected by side channel that leaks responses sent to users.

Aurich Lawson | Getty Images

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.

Token privacy

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, wrote in an email. “This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

Mirsky was referring to OpenAI, but with the exception of Google Gemini, all other major chatbots are also affected. As an example, the attack can infer the encrypted ChatGPT response:

  • Yes, there are several important legal considerations that couples should be aware of when considering a divorce, …

as:

  • Yes, there are several potential legal considerations that someone should be aware of when considering a divorce. …

and the Microsoft Copilot encrypted response:

  • Here are some of the latest research findings on effective teaching methods for students with learning disabilities: …

is inferred as:

  • Here are some of the latest research findings on cognitive behavior therapy for children with learning disabilities: …

While the differing words show that the precise wording isn’t perfect, the meaning of the inferred sentence is highly accurate.

Attack overview: A packet capture of an AI assistant’s real-time response reveals a token-sequence side-channel. The side-channel is parsed to find text segments that are then reconstructed using sentence-level context and knowledge of the target LLM’s writing style.

Weiss et al.

The following video demonstrates the attack in action against Microsoft Copilot:

Token-length sequence side-channel attack on Bing.

A side channel is a means of obtaining secret information from a system through indirect or unintended sources, such as the power consumed, the time required, or the sound, light, or electromagnetic radiation produced during a given operation. By carefully monitoring these sources, attackers can assemble enough information to recover encrypted keystrokes or encryption keys from CPUs, browser cookies from HTTPS traffic, or secrets from smartcards. The side channel used in this latest attack resides in tokens that AI assistants use when responding to a user query.

Tokens are akin to words that are encoded so they can be understood by LLMs. To enhance the user experience, most AI assistants send tokens on the fly, as soon as they’re generated, so that end users receive the responses continuously, word by word, rather than all at once after the assistant has generated the entire answer. While the token delivery is encrypted, the real-time, token-by-token transmission exposes a previously unknown side channel, which the researchers call the “token-length sequence.”
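The leak can be illustrated with a minimal Python sketch. This is a simplified simulation, not the researchers’ code: it assumes whitespace tokenization (real assistants use BPE tokenizers) and a fixed, made-up per-record TLS overhead, but it shows why sending each token in its own encrypted record reveals the token-length sequence to a passive observer.

```python
# Hypothetical simulation of the token-length sequence side channel.
# Assumptions (not from the research): whitespace tokenization and a
# constant 29-byte TLS record overhead.

TLS_RECORD_OVERHEAD = 29  # assumed fixed per-record overhead, in bytes

def stream_tokens(response: str):
    """Yield the sizes of the encrypted records an eavesdropper would
    see if each token were sent in its own TLS record as generated."""
    for token in response.split():
        plaintext = token + " "  # token plus trailing space
        yield len(plaintext.encode()) + TLS_RECORD_OVERHEAD

def recover_token_lengths(record_sizes):
    """Passive observer: subtract the constant overhead to recover the
    token-length sequence -- the side channel itself."""
    return [size - TLS_RECORD_OVERHEAD for size in record_sizes]

response = "Yes, there are several important legal considerations"
observed = list(stream_tokens(response))
print(recover_token_lengths(observed))  # [5, 6, 4, 8, 10, 6, 15]
```

The recovered length sequence is what the researchers’ specially trained language models then turn back into candidate sentences.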

Never-before-seen Linux malware gets installed using 1-day exploits

Getty Images

Researchers have unearthed Linux malware that circulated in the wild for at least two years before being identified as a credential stealer that’s installed by the exploitation of recently patched vulnerabilities.

The newly identified malware is a Linux variant of NerbianRAT, a remote access Trojan first described in 2022 by researchers at security firm Proofpoint. Last Friday, Checkpoint Research revealed that the Linux version has existed since at least the same year, when it was uploaded to the VirusTotal malware identification site. Checkpoint went on to conclude that Magnet Goblin—the name the security firm uses to track the financially motivated threat actor using the malware—has installed it by exploiting “1-days,” which are recently patched vulnerabilities. Attackers in this scenario reverse engineer security updates, or copy associated proof-of-concept exploits, for use against devices that have yet to install the patches.

Checkpoint also identified MiniNerbian, a smaller version of NerbianRAT for Linux that’s used to backdoor servers running the Magento ecommerce platform, primarily turning them into command-and-control servers that devices infected by NerbianRAT connect to. Researchers elsewhere have reported encountering servers that appear to have been compromised with MiniNerbian, but Checkpoint Research appears to have been the first to identify the underlying binary.

“Magnet Goblin, whose campaigns appear to be financially motivated, has been quick to adopt 1-day vulnerabilities to deliver their custom Linux malware, NerbianRAT and MiniNerbian,” Checkpoint researchers wrote. “Those tools have operated under the radar as they mostly reside on edge-devices. This is part of an ongoing trend for threat actors to target areas which until now have been left unprotected.”

Checkpoint discovered the Linux malware while researching recent attacks that exploit critical vulnerabilities in Ivanti Secure Connect, which have been under mass exploitation since early January. In the past, Magnet Goblin has installed the malware by exploiting 1-day vulnerabilities in Magento, Qlik Sense, and possibly Apache ActiveMQ.

In the course of its investigation into the Ivanti exploitation, Checkpoint found the Linux version of NerbianRAT on compromised servers that were under the control of Magnet Goblin. URLs included:

http://94.156.71[.]115/lxrt

http://91.92.240[.]113/aparche2

http://45.9.149[.]215/aparche2

The Linux variants connect back to the attacker-controlled IP 172.86.66[.]165.

Besides deploying NerbianRAT, Magnet Goblin also installed a custom variant of malware tracked as WarpWire, a piece of stealer malware recently reported by security firm Mandiant. The variant Checkpoint encountered stole VPN credentials and sent them to a server at the domain miltonhouse[.]nl.

Checkpoint Research

The Windows version of NerbianRAT featured robust code that took pains to hide itself and to prevent reverse engineering by rivals or researchers.

“Unlike its Windows equivalent, the Linux version barely has any protective measures,” Checkpoint said. “It is sloppily compiled with DWARF debugging information, which allows researchers to view, among other things, function names and global variable names.”

Image-scraping Midjourney bans rival AI firm for scraping images

Irony lives —

Midjourney pins blame for 24-hour outage on “bot-net like” activity from Stability AI employee.

A burglar with a flashlight and papers in a business office—exactly like scraping files from Discord.

On Wednesday, Midjourney banned all employees of image-synthesis rival Stability AI from its service indefinitely after it detected “botnet-like” activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney’s official Discord channel.

Prompts are the written instructions (like “a cat in a car holding a can of a beer”) used by generative AI models such as Midjourney and Stability AI’s Stable Diffusion 3 (SD3) to synthesize images. Having prompt and image pairs could potentially help the training or fine-tuning of a rival AI image generator model.

Bot activity that took place around midnight on March 2 caused a 24-hour outage for the commercial image-generator service. Midjourney linked the activity to several paid accounts tied to a Stability AI data team employee trying to “grab prompt and image pairs.” Midjourney then decided to ban all Stability AI employees from the service indefinitely. It also announced a new policy: “aggressive automation or taking down the service results in banning all employees of the responsible company.”

A screenshot of the “Midjourney Office Hours” notes posted on March 6, 2024.

Midjourney

Siobhan Ball of The Mary Sue found it ironic that a company like Midjourney, which built its AI image synthesis models using training data scraped off the Internet without seeking permission, would be sensitive about having its own material scraped. “It turns out that generative AI companies don’t like it when you steal, sorry, scrape, images from them. Cue the world’s smallest violin.”

Users of Midjourney pay a monthly subscription fee to access an AI image generator that turns written prompts into lush computer-synthesized images. The bot that makes them was trained on millions of artistic works created by humans, a practice that critics say is disrespectful to artists. “Words can’t describe how dehumanizing it is to see my name used 20,000+ times in MidJourney,” wrote artist Jingna Zhang in a recent viral tweet. “My life’s work and who I am—reduced to meaningless fodder for a commercial image slot machine.”

Stability responds

Shortly after the news of the ban emerged, Stability AI CEO Emad Mostaque said that he was looking into it and claimed that whatever happened was not intentional. He also said it would be great if Midjourney reached out to him directly. In a reply on X, Midjourney CEO David Holz wrote, “sent you some information to help with your internal investigation.”

In a text message exchange with Ars Technica, Mostaque said, “We checked and there were no images scraped there, there was a bot run by a team member that was collecting prompts for a personal project though. We aren’t sure how that would cause a gallery site outage but are sorry if it did, Midjourney is great.”

Besides, Mostaque says, his company doesn’t need Midjourney’s data anyway. “We have been using synthetic & other data given SD3 outperforms all other models,” he wrote on X. In conversation with Ars, Mostaque similarly wanted to contrast his company’s data collection techniques with those of his rival. “We only scrape stuff that has proper robots.txt and is permissive,” Mostaque says. “And also did full opt-out for [Stable Diffusion 3] and Stable Cascade leveraging work Spawning did.”
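The robots.txt-gated scraping Mostaque describes can be sketched with Python’s standard urllib.robotparser. The rules, user-agent name, and URLs below are hypothetical, chosen only to show how a crawler checks permissiveness before fetching.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only.
rules = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler consults can_fetch() before each request.
print(parser.can_fetch("MyCrawler", "https://example.com/gallery/cat.png"))   # True
print(parser.can_fetch("MyCrawler", "https://example.com/private/data.json")) # False
```

In practice a crawler would fetch each site’s live robots.txt with `parser.set_url(...)` and `parser.read()` rather than parsing a hard-coded string.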

When asked about Stability’s relationship with Midjourney these days, Mostaque played down the rivalry. “No real overlap, we get on fine though,” he told Ars and emphasized a key link in their histories. “I funded Midjourney to get [them] off the ground with a cash grant to cover [Nvidia] A100s for the beta.”

OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

Adventures in optics —

As Altman cements power, OpenAI announces three new board members—and a returning one.

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco.

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company’s board of directors and released the results of an independent review of the events surrounding CEO Sam Altman’s surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.

The newly appointed board members are Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former EVP and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart. These additions notably bring three women to the board after OpenAI met criticism about its restructured board composition last year.

The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman’s abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. “WilmerHale… found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Instead, the review determined that the prior board’s actions stemmed from a breakdown in trust between the board and Altman.

After reportedly interviewing dozens of people and reviewing over 30,000 documents, WilmerHale found that while the prior board acted within its purview, Altman’s termination was unwarranted. “WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman,” OpenAI wrote, “but also found that his conduct did not mandate removal.”

Additionally, the law firm found that the decision to fire Altman was made in undue haste: “The prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns.”

Altman’s surprise firing occurred after he attempted to remove Helen Toner from OpenAI’s board due to disagreements over her criticism of OpenAI’s approach to AI safety and hype. Some board members saw his actions as deceptive and manipulative. After Altman returned to OpenAI, Toner resigned from the OpenAI board on November 29.

In a statement posted on X, Altman wrote, “i learned a lot from this experience. one think [sic] i’ll say now: when i believed a former board member was harming openai through some of their actions, i should have handled that situation with more grace and care. i apologize for this, and i wish i had done it differently.”

A tweet from Sam Altman posted on March 8, 2024.

Following the review’s findings, the Special Committee of the OpenAI Board recommended endorsing the November 21 decision to rehire Altman and Brockman. The board also announced several enhancements to its governance structure, including new corporate governance guidelines, a strengthened Conflict of Interest Policy, a whistleblower hotline, and additional board committees focused on advancing OpenAI’s mission.

After OpenAI’s announcements on Friday, resigned OpenAI board members Toner and Tasha McCauley released a joint statement on X. “Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI,” they wrote. “We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.”

Matrix multiplication breakthrough could lead to faster, more efficient AI models

The Matrix Revolutions —

At the heart of AI, matrix math has just seen its biggest boost “in more than a decade.”

When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course.

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today’s AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel in handling matrix multiplication tasks because of their ability to process many calculations at once. They break down large matrix problems into smaller segments and solve them concurrently using an algorithm.

Perfecting that algorithm has been the key to breakthroughs in matrix multiplication efficiency over the past century—even before computers entered the picture. In October 2022, we covered a new technique discovered by a Google DeepMind AI model called AlphaTensor, focusing on practical algorithmic improvements for specific matrix sizes, such as 4×4 matrices.

By contrast, the new research, conducted by Ran Duan and Renfei Zhou of Tsinghua University, Hongxun Wu of the University of California, Berkeley, and by Virginia Vassilevska Williams, Yinzhan Xu, and Zixuan Xu of the Massachusetts Institute of Technology (in a second paper), seeks theoretical enhancements by aiming to lower the complexity exponent, ω, for a broad efficiency gain across all sizes of matrices. Instead of finding immediate, practical solutions like AlphaTensor, the new technique addresses foundational improvements that could transform the efficiency of matrix multiplication on a more general scale.

Approaching the ideal value

The traditional method for multiplying two n-by-n matrices requires n³ separate multiplications. However, the new technique, which improves upon the “laser method” introduced by Volker Strassen in 1986, has reduced the upper bound of the exponent (denoted as the aforementioned ω), bringing it closer to the ideal value of 2, which represents the theoretical minimum number of operations needed.

The traditional way of multiplying two grids full of numbers could require doing the math up to 27 times for a grid that’s 3×3. With these advancements, the number of multiplication steps drops significantly: the work needed now scales as roughly n raised to the power of 2.371552 for an n-by-n grid, tantalizingly close to the ideal exponent of 2, which would mean the effort grows no faster than the number of entries in the grid.
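The n³ baseline is easy to verify directly. This sketch implements schoolbook matrix multiplication with a counter, confirming the 27 scalar multiplications for a 3×3 grid mentioned above; the sample matrices are arbitrary.

```python
def matmul_naive(A, B):
    """Schoolbook matrix multiplication, counting scalar
    multiplications to illustrate the n**3 baseline."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
C, mults = matmul_naive(A, B)
print(mults)  # 27 scalar multiplications for a 3x3 grid
```

Strassen-style algorithms and their descendants beat this count by trading some of those multiplications for extra additions, which is where the exponent ω below 3 comes from.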

Here’s a brief recap of events. In 2020, Josh Alman and Williams introduced a significant improvement in matrix multiplication efficiency by establishing a new upper bound for ω at approximately 2.3728596. In November 2023, Duan and Zhou revealed a method that addressed an inefficiency within the laser method, setting a new upper bound for ω at approximately 2.371866. The achievement marked the most substantial progress in the field since 2010. But just two months later, Williams and her team published a second paper that detailed optimizations that reduced the upper bound for ω to 2.371552.
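A back-of-the-envelope comparison puts these exponent bounds in perspective. The asymptotic operation counts n^ω omit constant factors (and the galactic constants in these algorithms are enormous), so the ratios below are illustrative only; the choice of n is arbitrary.

```python
# Successive upper bounds on omega quoted in the recap above.
bounds = {
    "2020 (Alman & Williams)": 2.3728596,
    "Nov 2023 (Duan & Zhou)": 2.371866,
    "Jan 2024 (Williams et al.)": 2.371552,
}

n = 10**6  # a hypothetical million-by-million matrix
baseline = n ** bounds["2020 (Alman & Williams)"]
for label, omega in bounds.items():
    ratio = (n ** omega) / baseline
    print(f"{label}: n^{omega} -> {ratio:.4f}x the 2020 operation count")
```

Even the full 0.0013076 drop in the exponent shaves only a couple percent off the asymptotic count at n of a million, which is why researchers describe the result as a major theoretical rather than practical advance.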

The 2023 breakthrough stemmed from the discovery of a “hidden loss” in the laser method, where useful blocks of data were unintentionally discarded. In the context of matrix multiplication, “blocks” refer to smaller segments that a large matrix is divided into for easier processing, and “block labeling” is the technique of categorizing these segments to identify which ones to keep and which to discard, optimizing the multiplication process for speed and efficiency. By modifying the way the laser method labels blocks, the researchers were able to reduce waste and improve efficiency significantly.

While the reduction of the omega constant might appear minor at first glance—reducing the 2020 record value by 0.0013076—the cumulative work of Duan, Zhou, and Williams represents the most substantial progress in the field observed since 2010.

“This is a major technical breakthrough,” said William Kuszmaul, a theoretical computer scientist at Harvard University, as quoted by Quanta Magazine. “It is the biggest improvement in matrix multiplication we’ve seen in more than a decade.”

While further progress is expected, there are limitations to the current approach. Researchers believe that understanding the problem more deeply will lead to the development of even better algorithms. As Zhou stated in the Quanta report, “People are still in the very early stages of understanding this age-old problem.”

So what are the practical applications? For AI models, a reduction in computational steps for matrix math could translate into faster training times and more efficient execution of tasks. It could enable more complex models to be trained more quickly, potentially leading to advancements in AI capabilities and the development of more sophisticated AI applications. Additionally, efficiency improvement could make AI technologies more accessible by lowering the computational power and energy consumption required for these tasks. That would also reduce AI’s environmental impact.

The exact impact on the speed of AI models depends on the specific architecture of the AI system and how heavily its tasks rely on matrix multiplication. Advancements in algorithmic efficiency often need to be coupled with hardware optimizations to fully realize potential speed gains. But still, as improvements in algorithmic techniques add up over time, AI will get faster.

Microsoft says Kremlin-backed hackers accessed its source and internal systems

THE PLOT THICKENS —

Midnight Blizzard is now using stolen secrets in follow-on attacks against customers.

Microsoft said that Kremlin-backed hackers who breached its corporate network in January have expanded their access since then in follow-on attacks that are targeting customers and have compromised the company’s source code and internal systems.

The intrusion, which the software company disclosed in January, was carried out by Midnight Blizzard, the name used to track a hacking group widely attributed to the Federal Security Service, a Russian intelligence agency. Microsoft said at the time that Midnight Blizzard gained access to senior executives’ email accounts for months after first exploiting a weak password in a test device connected to the company’s network. Microsoft went on to say it had no indication any of its source code or production systems had been compromised.

Secrets sent in email

In an update published Friday, Microsoft said it uncovered evidence that Midnight Blizzard had used the information it gained initially to further push into its network and compromise both source code and internal systems. The hacking group—which is tracked under multiple other names, including APT29, Cozy Bear, CozyDuke, The Dukes, Dark Halo, and Nobelium—has been using the proprietary information in follow-on attacks, not only against Microsoft but also its customers.

“In recent weeks, we have seen evidence that Midnight Blizzard is using information initially exfiltrated from our corporate email systems to gain, or attempt to gain, unauthorized access,” Friday’s update said. “This has included access to some of the company’s source code repositories and internal systems. To date we have found no evidence that Microsoft-hosted customer-facing systems have been compromised.”

In January’s disclosure, Microsoft said Midnight Blizzard used a password-spraying attack to compromise a “legacy non-production test tenant account” on the company’s network. Those details meant that the account hadn’t been removed after it was decommissioned, a cleanup step that’s considered essential for securing networks. The details also meant that the password used to log in to the account was weak enough to be guessed by sending a steady stream of credentials harvested from previous breaches—a technique known as password spraying.
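The spraying pattern can be sketched in a few lines. This is a benign simulation with invented account names and passwords: the point is the loop order, where one common password is tried across many accounts before moving on, so no single account racks up enough failures to trip a lockout.

```python
# Illustration of the password-spraying pattern: passwords in the
# outer loop, accounts in the inner loop. All data here is made up.

def password_spray(accounts, common_passwords, try_login):
    """Each account sees at most one attempt per password, which is
    what lets spraying stay under per-account lockout thresholds."""
    hits = []
    for password in common_passwords:
        for account in accounts:
            if try_login(account, password):
                hits.append((account, password))
    return hits

# Toy credential store standing in for the targeted tenant.
directory = {"test-tenant-01": "Summer2023!", "alice": "hunter2"}
attempts = []

def try_login(account, password):
    attempts.append(account)
    return directory.get(account) == password

found = password_spray(list(directory), ["Password1", "Summer2023!"], try_login)
print(found)  # [('test-tenant-01', 'Summer2023!')]
```

A brute-force attack would invert the loops, hammering one account with many passwords, which is exactly the behavior lockout policies are built to catch.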

In the months since, Microsoft said Friday, Midnight Blizzard has been exploiting the information it obtained earlier in follow-on attacks that have stepped up an already high rate of password spraying.

Unprecedented global threat

Microsoft officials wrote:

It is apparent that Midnight Blizzard is attempting to use secrets of different types it has found. Some of these secrets were shared between customers and Microsoft in email, and as we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures. Midnight Blizzard has increased the volume of some aspects of the attack, such as password sprays, by as much as 10-fold in February, compared to the already large volume we saw in January 2024.

Midnight Blizzard’s ongoing attack is characterized by a sustained, significant commitment of the threat actor’s resources, coordination, and focus. It may be using the information it has obtained to accumulate a picture of areas to attack and enhance its ability to do so. This reflects what has become more broadly an unprecedented global threat landscape, especially in terms of sophisticated nation-state attacks.

The attack began in November and wasn’t detected until January. Microsoft said then that the breach allowed Midnight Blizzard to monitor the email accounts of senior executives and security personnel, raising the possibility that the group was able to read sensitive communications for as long as three months. Microsoft said one motivation for the attack was for Midnight Blizzard to learn what the company knew about the threat group. Microsoft said at the time and reiterated again Friday that it had no evidence the hackers gained access to customer-facing systems.

Midnight Blizzard is among the most prolific APTs, short for advanced persistent threats, the term used for skilled, well-funded hacking groups that are mostly backed by nation-states. The group was behind the SolarWinds supply-chain attack that led to the hacking of the US Departments of Energy, Commerce, Treasury, and Homeland Security and about 100 private-sector companies.

Last week, the UK National Cyber Security Centre (NCSC) and international partners warned that in recent months, the threat group has expanded its activity to target aviation, education, law enforcement, local and state councils, government financial departments, and military organizations.

Microsoft says Kremlin-backed hackers accessed its source and internal systems Read More »

attack-wrangles-thousands-of-web-users-into-a-password-cracking-botnet

Attack wrangles thousands of web users into a password-cracking botnet

Distributed password cracking —

Ongoing attack is targeting thousands of sites, continues to grow.


Attackers have transformed hundreds of hacked sites running WordPress software into command-and-control servers that force visitors’ browsers to perform password-cracking attacks.

A web search for the JavaScript that performs the attack showed it was hosted on 708 sites at the time this post went live on Ars, up from 500 two days earlier. Denis Sinegubko, the researcher who spotted the campaign, said at the time that he had seen thousands of visitor computers running the script, which caused them to reach out to thousands of domains in an attempt to guess the passwords of accounts on those sites.

Visitors unwittingly recruited

“This is how thousands of visitors across hundreds of infected websites unknowingly and simultaneously try to bruteforce thousands of other third-party WordPress sites,” Sinegubko wrote. “And since the requests come from the browsers of real visitors, you can imagine this is a challenge to filter and block such requests.”

Like the hacked websites hosting the malicious JavaScript, all the targeted domains run the WordPress content management system. The script—just 3 kilobits in size—reaches out to an attacker-controlled getTaskUrl address, which responds with the name of a specific user on a specific WordPress site, along with 100 common passwords. The browser visiting the hacked site then attempts to log in to the targeted user account with each of the candidate passwords. The JavaScript operates in a loop: it requests a task from the getTaskUrl, reports the results to the completeTaskUrl, and then performs the steps again and again.

A snippet of the hosted JavaScript appears below, and below that, the resulting task:

const getTaskUrl = 'hxxps://dynamic-linx[.]com/getTask.php';
const completeTaskUrl = 'hxxps://dynamic-linx[.]com/completeTask.php';
[871,"https://REDACTED","redacted","60","junkyard","johncena","jewish","jakejake","invincible","intern","indira","hawthorn","hawaiian","hannah1","halifax","greyhound","greene","glenda","futbol","fresh","frenchie","flyaway","fleming","fishing1","finally","ferris","fastball","elisha","doggies","desktop","dental","delight","deathrow","ddddddd","cocker","chilly","chat","casey1","carpenter","calimero","calgary","broker","breakout","bootsie","bonito","black123","bismarck","bigtime","belmont","barnes","ball","baggins","arrow","alone","alkaline","adrenalin","abbott","987987","3333333","123qwerty","000111","zxcv1234","walton","vaughn","tryagain","trent","thatcher","templar","stratus","status","stampede","small","sinned","silver1","signal","shakespeare","selene","scheisse","sayonara","santacruz","sanity","rover","roswell","reverse","redbird","poppop","pompom","pollux","pokerface","passions","papers","option","olympus","oliver1","notorious","nothing1","norris","nicole1","necromancer","nameless","mysterio","mylife","muslim","monkey12","mitsubishi"]

With 418 password batches as of Tuesday, Sinegubko has concluded the attackers are trying 41,800 passwords against each targeted site.
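The task format above is compact enough to illustrate with a short sketch. This is not the actual chx.js payload—the function names are ours, and the XML-RPC body is only a rough outline of a wp.uploadFile call—but it shows how a task array decomposes and why a file-upload request can double as a password check:

```javascript
// Hypothetical sketch, not the real chx.js payload: a task array like the
// one above breaks down into [taskId, siteUrl, username, checkId, ...passwords].
function parseTask(task) {
  const [taskId, siteUrl, username, checkId, ...passwords] = task;
  return { taskId, siteUrl, username, checkId, passwords };
}

// Rough shape of a wp.uploadFile XML-RPC request (the real call also
// carries a struct describing the file to upload). WordPress authenticates
// every XML-RPC request with the credentials embedded in it, so a failed
// upload is a failed password guess and a successful one is a hit.
function buildUploadProbe(username, password) {
  return `<?xml version="1.0"?>
<methodCall>
  <methodName>wp.uploadFile</methodName>
  <params>
    <param><value><int>1</int></value></param>
    <param><value><string>${username}</string></value></param>
    <param><value><string>${password}</string></value></param>
  </params>
</methodCall>`;
}
```

Because authentication rides along with the upload itself, each of the 100 passwords in a batch costs exactly one XML-RPC request.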

Sinegubko wrote:

Attack stages and lifecycle

The attack consists of five key stages that allow a bad actor to leverage already compromised websites to launch distributed brute force attacks against thousands of other potential victim sites.

  • Stage 1: Obtain URLs of WordPress sites. The attackers either crawl the Internet themselves or use various search engines and databases to obtain lists of target WordPress sites.
  • Stage 2: Extract author usernames. Attackers then scan the target sites, extracting real usernames of authors that post on those domains.
  • Stage 3: Inject malicious scripts. Attackers then inject their dynamic-linx[.]com/chx.js script to websites that they have already compromised.
  • Stage 4: Brute force credentials. As normal site visitors open infected web pages, the malicious script is loaded. Behind the scenes, the visitors’ browsers conduct a distributed brute force attack on thousands of target sites without any active involvement from attackers.
  • Stage 5: Verify compromised credentials. Bad actors verify brute forced credentials and gain unauthorized access to sites targeted in stage 1.

So, how do attackers actually accomplish a distributed brute force attack from the browsers of completely innocent and unsuspecting website visitors? Let’s take a look at stage 4 in closer detail.

Distributed brute force attack steps:

  1. When a site visitor opens an infected web page, the user’s browser requests a task from the hxxps://dynamic-linx[.]com/getTask.php URL.
  2. If the task exists, it parses the data and obtains the URL of the site to attack along with a valid username and a list of 100 passwords to try.
  3. For every password in the list, the visitor’s browser sends the wp.uploadFile XML-RPC API request to upload a file with encrypted credentials that were used to authenticate this specific request. That’s 100 API requests for each task! If authentication succeeds, a small text file with valid credentials is created in the WordPress uploads directory.
  4. When all the passwords are checked, the script sends a notification to hxxps://dynamic-linx[.]com/completeTask.php that the task with a specific taskId (probably a unique site) and checkId (password batch) has been completed.
  5. Finally, the script requests the next task and processes a new batch of passwords. And so on indefinitely while the infected page is open.
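The five steps of stage 4 amount to a simple fetch-work-report loop. A minimal reconstruction (our own sketch, with the three network calls passed in as stubs rather than real requests to the C2 domain):

```javascript
// Reconstruction of the stage-4 loop, not the actual payload. The three
// callbacks stand in for the getTask.php request, the per-password
// wp.uploadFile probe, and the completeTask.php report.
async function bruteForceLoop(getTask, tryPassword, completeTask) {
  for (;;) {
    const task = await getTask();                                    // step 1
    if (!task) break;                                                // no work left
    const [taskId, siteUrl, username, checkId, ...passwords] = task; // step 2
    for (const password of passwords) {                              // step 3: ~100 probes
      await tryPassword(siteUrl, username, password);
    }
    await completeTask(taskId, checkId);                             // step 4
  }                                                                  // step 5: repeat
}
```

In the real script the loop runs for as long as the infected page stays open, so every visitor quietly contributes roughly 100 login attempts per task, indefinitely.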

As of Tuesday, the researcher had observed “dozens of thousands of requests” to thousands of unique domains that checked for files uploaded by the visitor browsers. Most of those checks returned 404 web errors, an indication that the login attempt with the guessed password failed. Roughly 0.5 percent of cases returned a 200 response code, leaving open the possibility that the password guesses had succeeded. On further inspection, only one of those sites was actually compromised; the others were running non-standard configurations that return a 200 response code even for pages that aren’t available.
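The 404/200 split is easy to misread, as the follow-up inspection showed. A small illustration of the interpretation (our own sketch; the actual verification fetched the credentials file the script drops into the WordPress uploads directory):

```javascript
// Illustrative only: classify probe responses the way the investigation did.
// A 404 means no credentials file exists, so the guesses failed. A 200 is
// only a *possible* hit, because sites with catch-all routing return 200
// even for pages that don't exist -- which is why nearly all of the 200s
// turned out to be false positives.
function summarizeProbes(statuses) {
  const summary = { miss: 0, possibleHit: 0, other: 0 };
  for (const status of statuses) {
    if (status === 404) summary.miss += 1;
    else if (status === 200) summary.possibleHit += 1;
    else summary.other += 1;
  }
  return summary;
}
```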

Over a four-day span ending Tuesday, Sinegubko recorded more than 1,200 unique IP addresses that tried to download the credentials file. Of those, five addresses accounted for over 85 percent of the requests:

IP % ASN
146.70.199.169 34.37% M247, RO
138.199.60.23 28.13% CDNEXT, GB
138.199.60.32 10.96% CDNEXT, GB
138.199.60.19 6.54% CDNEXT, GB
87.121.87.178 5.94% SOUZA-AS, BR

Last month, the researcher observed one of the addresses—87.121.87.178—hosting a URL used in a cryptojacking attack. One possibility for the change is that the earlier campaign failed because the malicious URL it relied on wasn’t hosted on enough hacked sites and, in response, the same attacker is using the password-cracking script in an attempt to recruit more sites.

As Sinegubko notes, the more recent campaign is significant because it leverages the computers and Internet connections of unwitting visitors who have done nothing wrong. One way end users can stop this is to use NoScript or another tool that blocks JavaScript from running on unknown sites. NoScript breaks enough sites that it’s not suitable for less experienced users, and even those with more experience often find the hassle isn’t worth the benefit. One other possible remedy is to use certain ad blockers.


us-gov’t-announces-arrest-of-former-google-engineer-for-alleged-ai-trade-secret-theft

US gov’t announces arrest of former Google engineer for alleged AI trade secret theft

Don’t trade the secrets dept. —

Linwei Ding faces four counts of trade secret theft, each with a potential 10-year prison term.

A Google sign stands in front of the building on the sidelines of the opening of the new Google Cloud data center in Hesse, Hanau, opened in October 2023.


On Wednesday, authorities arrested former Google software engineer Linwei Ding in Newark, California, on charges of stealing AI trade secrets from the company. The US Department of Justice alleges that Ding, a Chinese national, committed the theft while secretly working with two China-based companies.

According to the indictment, Ding, who was hired by Google in 2019 and had access to confidential information about the company’s data centers, began uploading hundreds of files into a personal Google Cloud account two years ago.

The trade secrets Ding allegedly copied contained “detailed information about the architecture and functionality of GPU and TPU chips and systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of executing at the cutting edge of machine learning and AI technology,” according to the indictment.

Shortly after the alleged theft began, Ding was offered the position of chief technology officer at an early-stage technology company in China that touted its use of AI technology. The company offered him a monthly salary of about $14,800, plus an annual bonus and company stock. Ding reportedly traveled to China, participated in investor meetings, and sought to raise capital for the company.

Investigators reviewed surveillance camera footage showing another employee scanning Ding’s name badge at the entrance of the Google building where Ding worked, making it appear that Ding was working from his office when he was actually traveling in China.

Ding also founded and served as the chief executive of a separate China-based startup company that aspired to train “large AI models powered by supercomputing chips,” according to the indictment. Prosecutors say Ding did not disclose either affiliation to Google, which described him as a junior employee. He resigned from Google on December 26 of last year.

The FBI served a search warrant at Ding’s home in January, seizing his electronic devices and later executing an additional warrant for the contents of his personal accounts. Authorities found more than 500 unique files of confidential information that Ding allegedly stole from Google. The indictment says that Ding copied the files into the Apple Notes application on his Google-issued Apple MacBook, then converted the Apple Notes into PDF files and uploaded them to an external account to evade detection.

“We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets,” Google spokesperson José Castañeda told Ars Technica. “After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement. We are grateful to the FBI for helping protect our information and will continue cooperating with them closely.”

Attorney General Merrick Garland announced the case against the 38-year-old at an American Bar Association conference in San Francisco. Ding faces four counts of federal trade secret theft, each carrying a potential sentence of up to 10 years in prison.


some-teachers-are-now-using-chatgpt-to-grade-papers

Some teachers are now using ChatGPT to grade papers

robots in disguise —

New AI tools aim to help with grading, lesson plans—but may have serious drawbacks.


In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?

Writable lets teachers submit student essays for analysis by ChatGPT, which then provides commentary and observations on the work. The AI-generated feedback goes to the teacher for review before being passed on to students, so a human remains in the loop.

“Make feedback more actionable with AI suggestions delivered to teachers as the writing happens,” Writable promises on its AI website. “Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores.” The service also provides AI-written writing-prompt suggestions: “Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs.”

Writable can reportedly help a teacher develop a curriculum, although we have not tried the functionality ourselves. “Once in Writable you can also use AI to create curriculum units based on any novel, generate essays, multi-section assignments, multiple-choice questions, and more, all with included answer keys,” the site claims.

The reliance on AI for grading will likely have drawbacks. Automated grading might encourage some educators to take shortcuts, diminishing the value of personalized feedback. Over time, leaning on AI may leave teachers less familiar with the material they are teaching. The use of cloud-based AI tools may also have privacy implications for teachers and students. And ChatGPT isn’t a perfect analyst: it can get things wrong, confabulate (make up) false information, misinterpret a student’s work, or provide erroneous information in lesson plans.

Yet, as Axios reports, proponents assert that AI grading tools like Writable may free up valuable time for teachers, enabling them to focus on more creative and impactful teaching activities. The company selling Writable promotes it as a way to empower educators, supposedly offering them the flexibility to allocate more time to direct student interaction and personalized teaching. Of course, without an in-depth critical review, all claims should be taken with a huge grain of salt.

Amid these discussions, there’s a divide among parents regarding the use of AI in evaluating students’ academic performance. A recent poll of parents revealed mixed opinions, with nearly half of the respondents open to the idea of AI-assisted grading.

As the generative AI craze permeates every space, it’s no surprise that Writable isn’t the only AI-powered grading tool on the market. Others include Crowdmark, Gradescope, and EssayGrader. McGraw Hill is reportedly developing similar technology aimed at enhancing teacher assessment and feedback.


openai-clarifies-the-meaning-of-“open”-in-its-name,-responding-to-musk-lawsuit

OpenAI clarifies the meaning of “open” in its name, responding to Musk lawsuit

The OpenAI logo as an opening to a red brick wall. (credit: Benj Edwards / Getty Images)

On Tuesday, OpenAI published a blog post titled “OpenAI and Elon Musk” in response to a lawsuit Musk filed last week. The ChatGPT maker shared several archived emails from Musk that suggest he once supported a pivot away from open source practices in the company’s quest to develop artificial general intelligence (AGI). The selected emails also imply that the “open” in “OpenAI” means that the ultimate result of its research into AGI should be open to everyone but not necessarily “open source” along the way.

In one telling exchange from January 2016 shared by the company, OpenAI Chief Scientist Ilya Sutskever wrote, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it’s totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).”

In response, Musk replied simply, “Yup.”



after-collecting-$22-million,-alphv-ransomware-group-stages-fbi-takedown

After collecting $22 million, AlphV ransomware group stages FBI takedown

A ransom note is plastered across a laptop monitor.

The ransomware group responsible for hamstringing the prescription drug market for two weeks has suddenly gone dark, just days after receiving a $22 million payment and standing accused of scamming an affiliate out of its share of the loot.

The events involve AlphV, a ransomware group also known as BlackCat. Two weeks ago, it took down Change Healthcare, the biggest US health care payment processor, leaving pharmacies, health care providers, and patients scrambling to fill prescriptions for medicines. On Friday, the bitcoin ledger shows, the group received nearly $22 million in cryptocurrency, stoking suspicions the deposit was payment by Change Healthcare in exchange for AlphV decrypting its data and promising to delete it.

Representatives of Optum, Change Healthcare’s parent company, declined to say whether the company has paid AlphV.

Honor among thieves

On Sunday, two days following the payment, a party claiming to be an AlphV affiliate said in an online crime forum that the nearly $22 million payment was tied to the Change Healthcare breach. The party went on to say that AlphV members had cheated the affiliate out of the agreed-upon cut of the payment. In response, the affiliate said it hadn’t deleted the Change Healthcare data it had obtained.

A message left in a crime forum from a party claiming to be an AlphV affiliate. The post claims AlphV scammed the affiliate out of its cut. (credit: vxunderground)

On Tuesday—four days after the bitcoin payment was made and two days after the affiliate claimed to have been cheated out of its cut—AlphV’s public dark web site started displaying a message saying it had been seized by the FBI as part of an international law enforcement action.

The AlphV extortion site as it appeared on Tuesday.

The UK’s National Crime Agency, one of the agencies the seizure message said was involved in the takedown, said the agency played no part in any such action. The FBI, meanwhile, declined to comment. The NCA denial, as well as evidence the seizure notice was copied from a different site and pasted into the AlphV one, has led multiple researchers to conclude the ransomware group staged the takedown and took the entire $22 million payment for itself.

“Since people continue to fall for the ALPHV/BlackCat cover up: ALPHV/BlackCat did not get seized,” Fabian Wosar, head of ransomware research at security firm Emsisoft, wrote on social media. “They are exit scamming their affiliates. It is blatantly obvious when you check the source code of the new takedown notice.”
