Biz & IT

researchers-surprised-to-find-less-educated-areas-adopting-ai-writing-tools-faster

Researchers surprised to find less-educated areas adopting AI writing tools faster


From the mouths of machines

Stanford researchers analyzed 305 million texts, revealing AI-writing trends.

Since the launch of ChatGPT in late 2022, experts have debated how widely AI language models would impact the world. A few years later, the picture is getting clear. According to new Stanford University-led research examining over 300 million text samples across multiple sectors, AI language models now assist in writing up to a quarter of professional communications across sectors. It’s having a large impact, especially in less-educated parts of the United States.

“Our study shows the emergence of a new reality in which firms, consumers and even international organizations substantially rely on generative AI for communications,” wrote the researchers.

The researchers tracked large language model (LLM) adoption across industries from January 2022 to September 2024 using a dataset that included 687,241 consumer complaints submitted to the US Consumer Financial Protection Bureau (CFPB), 537,413 corporate press releases, 304.3 million job postings, and 15,919 United Nations press releases.

By using a statistical detection system that tracked word usage patterns, the researchers found that roughly 18 percent of financial consumer complaints (including 30 percent of all complaints from Arkansas), 24 percent of corporate press releases, up to 15 percent of job postings, and 14 percent of UN press releases showed signs of AI assistance during that period of time.

The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest.

“In the consumer complaint domain, the geographic and demographic patterns in LLM adoption present an intriguing departure from historical technology diffusion trends where technology adoption has generally been concentrated in urban areas, among higher-income groups, and populations with higher levels of educational attainment.”

Researchers from Stanford, the University of Washington, and Emory University led the study, titled, “The Widespread Adoption of Large Language Model-Assisted Writing Across Society,” first listed on the arXiv preprint server in mid-February. Weixin Liang and Yaohui Zhang from Stanford served as lead authors, with collaborators Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou.

Detecting AI use in aggregate

We’ve previously covered that AI writing detection services aren’t reliable, and this study does not contradict that finding. On a document-by-document basis, AI detectors cannot be trusted. But when analyzing millions of documents in aggregate, telltale patterns emerge that suggest the influence of AI language models on text.

The researchers developed an approach based on a statistical framework in a previously released work that analyzed shifts in word frequencies and linguistic patterns before and after ChatGPT’s release. By comparing large sets of pre- and post-ChatGPT texts, they estimated the proportion of AI-assisted content at a population level. The presumption is that LLMs tend to favor certain word choices, sentence structures, and linguistic patterns that differ subtly from typical human writing.

To validate their approach, the researchers created test sets with known percentages of AI content (from zero percent to 25 percent) and found their method predicted these percentages with error rates below 3.3 percent. This statistical validation gave them confidence in their population-level estimates.

While the researchers specifically note their estimates likely represent a minimum level of AI usage, it’s important to understand that actual AI involvement might be significantly greater. Due to the difficulty in detecting heavily edited or increasingly sophisticated AI-generated content, the researchers say their reported adoption rates could substantially underestimate true levels of generative AI use.

Analysis suggests AI use as “equalizing tools”

While the overall adoption rates are revealing, perhaps more insightful are the patterns of who is using AI writing tools and how these patterns may challenge conventional assumptions about technology adoption.

In examining the CFPB complaints (a US public resource that collects complaints about consumer financial products and services), the researchers’ geographic analysis revealed substantial variation across US states.

Arkansas showed the highest adoption rate at 29.2 percent (based on 7,376 complaints), followed by Missouri at 26.9 percent (16,807 complaints) and North Dakota at 24.8 percent (1,025 complaints). In contrast, states like West Virginia (2.6 percent), Idaho (3.8 percent), and Vermont (4.8 percent) showed minimal AI writing adoption. Major population centers demonstrated moderate adoption, with California at 17.4 percent (157,056 complaints) and New York at 16.6 percent (104,862 complaints).

The urban-rural divide followed expected technology adoption patterns initially, but with an interesting twist. Using Rural Urban Commuting Area (RUCA) codes, the researchers found that urban and rural areas initially adopted AI writing tools at similar rates during early 2023. However, adoption trajectories diverged by mid-2023, with urban areas reaching 18.2 percent adoption compared to 10.9 percent in rural areas.

Contrary to typical technology diffusion patterns, areas with lower educational attainment showed higher AI writing tool usage. Comparing regions above and below state median levels of bachelor’s degree attainment, areas with fewer college graduates stabilized at 19.9 percent adoption rates compared to 17.4 percent in more educated regions. This pattern held even within urban areas, where less-educated communities showed 21.4 percent adoption versus 17.8 percent in more educated urban areas.

The researchers suggest that AI writing tools may serve as a leg-up for people who may not have as much educational experience. “While the urban-rural digital divide seems to persist,” the researchers write, “our finding that areas with lower educational attainment showed modestly higher LLM adoption rates in consumer complaints suggests these tools may serve as equalizing tools in consumer advocacy.”

Corporate and diplomatic trends in AI writing

According to the researchers, all sectors they analyzed (consumer complaints, corporate communications, job postings) showed similar adoption patterns: sharp increases beginning three to four months after ChatGPT’s November 2022 launch, followed by stabilization in late 2023.

Organization age emerged as the strongest predictor of AI writing usage in the job posting analysis. Companies founded after 2015 showed adoption rates up to three times higher than firms established before 1980, reaching 10–15 percent AI-modified text in certain roles compared to below 5 percent for older organizations. Small companies with fewer employees also incorporated AI more readily than larger organizations.

When examining corporate press releases by sector, science and technology companies integrated AI most extensively, with an adoption rate of 16.8 percent by late 2023. Business and financial news (14–15.6 percent) and people and culture topics (13.6–14.3 percent) showed slightly lower but still significant adoption.

In the international arena, Latin American and Caribbean UN country teams showed the highest adoption among international organizations at approximately 20 percent, while African states, Asia-Pacific states, and Eastern European states demonstrated more moderate increases to 11–14 percent by 2024.

Implications and limitations

In the study, the researchers acknowledge limitations in their analysis due to a focus on English-language content. Also, as we mentioned earlier, they found they could not reliably detect human-edited AI-generated text or text generated by newer models instructed to imitate human writing styles. As a result, the researchers suggest their findings represent a lower bound of actual AI writing tool adoption.

The researchers noted that the plateauing of AI writing adoption in 2024 might reflect either market saturation or increasingly sophisticated LLMs producing text that evades detection methods. They conclude we now live in a world where distinguishing between human and AI writing becomes progressively more difficult, with implications for communications across society.

“The growing reliance on AI-generated content may introduce challenges in communication,” the researchers write. “In sensitive categories, over-reliance on AI could result in messages that fail to address concerns or overall release less credible information externally. Over-reliance on AI could also introduce public mistrust in the authenticity of messages sent by firms.”

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Researchers surprised to find less-educated areas adopting AI writing tools faster Read More »

“it’s-a-lemon”—openai’s-largest-ai-model-ever-arrives-to-mixed-reviews

“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews

Perhaps because of the disappointing results, Altman had previously written that GPT-4.5 will be the last of OpenAI’s traditional AI models, with GPT-5 planned to be a dynamic combination of “non-reasoning” LLMs and simulated reasoning models like o3.

A stratospheric price and a tech dead-end

And about that price—it’s a doozy. GPT-4.5 costs $75 per million input tokens and $150 per million output tokens through the API, compared to GPT-4o’s $2.50 per million input tokens and $10 per million output tokens. (Tokens are chunks of data used by AI models for processing). For developers using OpenAI models, this pricing makes GPT-4.5 impractical for many applications where GPT-4o already performs adequately.

By contrast, OpenAI’s flagship reasoning model, o1 pro, costs $15 per million input tokens and $60 per million output tokens—significantly less than GPT-4.5 despite offering specialized simulated reasoning capabilities. Even more striking, the o3-mini model costs just $1.10 per million input tokens and $4.40 per million output tokens, making it cheaper than even GPT-4o while providing much stronger performance on specific tasks.

OpenAI has likely known about diminishing returns in training LLMs for some time. As a result, the company spent most of last year working on simulated reasoning models like o1 and o3, which use a different inference-time (runtime) approach to improving performance instead of throwing ever-larger amounts of training data at GPT-style AI models.

OpenAI's self-reported benchmark results for the SimpleQA test, which measures confabulation rate.

OpenAI’s self-reported benchmark results for the SimpleQA test, which measures confabulation rate. Credit: OpenAI

While this seems like bad news for OpenAI in the short term, competition is thriving in the AI market. Anthropic’s Claude 3.7 Sonnet has demonstrated vastly better performance than GPT-4.5, with a reportedly more efficient architecture. It’s worth noting that Claude 3.7 Sonnet is likely a system of AI models working together behind the scenes, although Anthropic has not provided details about its architecture.

For now, it seems that GPT-4.5 may be the last of its kind—a technological dead-end for an unsupervised learning approach that has paved the way for new architectures in AI models, such as o3’s inference-time reasoning and perhaps even something more novel, like diffusion-based models. Only time will tell how things end up.

GPT-4.5 is now available to ChatGPT Pro subscribers, with rollout to Plus and Team subscribers planned for next week, followed by Enterprise and Education customers the week after. Developers can access it through OpenAI’s various APIs on paid tiers, though the company is uncertain about its long-term availability.

“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews Read More »

serbian-student’s-android-phone-compromised-by-exploit-from-cellebrite

Serbian student’s Android phone compromised by exploit from Cellebrite

Amnesty International on Friday said it determined that a zero-day exploit sold by controversial exploit vendor Cellebrite was used to compromise the phone of a Serbian student who had been critical of that country’s government.

The human rights organization first called out Serbian authorities in December for what it said was its “pervasive and routine use of spyware” as part of a campaign of “wider state control and repression directed against civil society.” That report said the authorities were deploying exploits sold by Cellebrite and NSO, a separate exploit seller whose practices have also been sharply criticized over the past decade. In response to the December report, Cellebrite said it had suspended sales to “relevant customers” in Serbia.

Campaign of surveillance

On Friday, Amnesty International said that it uncovered evidence of a new incident. It involves the sale by Cellebrite of an attack chain that could defeat the lock screen of fully patched Android devices. The exploits were used against a Serbian student who had been critical of Serbian officials. The chain exploited a series of vulnerabilities in device drivers the Linux kernel uses to support USB hardware.

“This new case provides further evidence that the authorities in Serbia have continued their campaign of surveillance of civil society in the aftermath of our report, despite widespread calls for reform, from both inside Serbia and beyond, as well as an investigation into the misuse of its product, announced by Cellebrite,” authors of the report wrote.

Amnesty International first discovered evidence of the attack chain last year while investigating a separate incident outside of Serbia involving the same Android lockscreen bypass. Authors of Friday’s report wrote:

Serbian student’s Android phone compromised by exploit from Cellebrite Read More »

copilot-exposes-private-github-pages,-some-removed-by-microsoft

Copilot exposes private GitHub pages, some removed by Microsoft

Screenshot showing Copilot continues to serve tools Microsoft took action to have removed from GitHub. Credit: Lasso

Lasso ultimately determined that Microsoft’s fix involved cutting off access to a special Bing user interface, once available at cc.bingj.com, to the public. The fix, however, didn’t appear to clear the private pages from the cache itself. As a result, the private information was still accessible to Copilot, which in turn would make it available to the Copilot user who asked.

The Lasso researchers explained:

Although Bing’s cached link feature was disabled, cached pages continued to appear in search results. This indicated that the fix was a temporary patch and while public access was blocked, the underlying data had not been fully removed.

When we revisited our investigation of Microsoft Copilot, our suspicions were confirmed: Copilot still had access to the cached data that was no longer available to human users. In short, the fix was only partial, human users were prevented from retrieving the cached data, but Copilot could still access it.

The post laid out simple steps anyone can take to find and view the same massive trove of private repositories Lasso identified.

There’s no putting toothpaste back in the tube

Developers frequently embed security tokens, private encryption keys and other sensitive information directly into their code, despite best practices that have long called for such data to be inputted through more secure means. This potential damage worsens when this code is made available in public repositories, another common security failing. The phenomenon has occurred over and over for more than a decade.

When these sorts of mistakes happen, developers often make the repositories private quickly, hoping to contain the fallout. Lasso’s findings show that simply making the code private isn’t enough. Once exposed, credentials are irreparably compromised. The only recourse is to rotate all credentials.

This advice still doesn’t address the problems resulting when other sensitive data is included in repositories that are switched from public to private. Microsoft incurred legal expenses to have tools removed from GitHub after alleging they violated a raft of laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act. Company lawyers prevailed in getting the tools removed. To date, Copilot continues undermining this work by making the tools available anyway.

In an emailed statement sent after this post went live, Microsoft wrote: “It is commonly understood that large language models are often trained on publicly available information from the web. If users prefer to avoid making their content publicly available for training these models, they are encouraged to keep their repositories private at all times.”

Copilot exposes private GitHub pages, some removed by Microsoft Read More »

new-ai-text-diffusion-models-break-speed-barriers-by-pulling-words-from-noise

New AI text diffusion models break speed barriers by pulling words from noise

These diffusion models maintain performance faster than or comparable to similarly sized conventional models. LLaDA’s researchers report their 8 billion parameter model performs similarly to LLaMA3 8B across various benchmarks, with competitive results on tasks like MMLU, ARC, and GSM8K.

However, Mercury claims dramatic speed improvements. Their Mercury Coder Mini scores 88.0 percent on HumanEval and 77.1 percent on MBPP—comparable to GPT-4o Mini—while reportedly operating at 1,109 tokens per second compared to GPT-4o Mini’s 59 tokens per second. This represents roughly a 19x speed advantage over GPT-4o Mini while maintaining similar performance on coding benchmarks.

Mercury’s documentation states its models run “at over 1,000 tokens/sec on Nvidia H100s, a speed previously possible only using custom chips” from specialized hardware providers like Groq, Cerebras, and SambaNova. When compared to other speed-optimized models, the claimed advantage remains significant—Mercury Coder Mini is reportedly about 5.5x faster than Gemini 2.0 Flash-Lite (201 tokens/second) and 18x faster than Claude 3.5 Haiku (61 tokens/second).

Opening a potential new frontier in LLMs

Diffusion models do involve some trade-offs. They typically need multiple forward passes through the network to generate a complete response, unlike traditional models that need just one pass per token. However, because diffusion models process all tokens in parallel, they achieve higher throughput despite this overhead.

Inception thinks the speed advantages could impact code completion tools where instant response may affect developer productivity, conversational AI applications, resource-limited environments like mobile applications, and AI agents that need to respond quickly.

If diffusion-based language models maintain quality while improving speed, they might change how AI text generation develops. So far, AI researchers have been open to new approaches.

Independent AI researcher Simon Willison told Ars Technica, “I love that people are experimenting with alternative architectures to transformers, it’s yet another illustration of how much of the space of LLMs we haven’t even started to explore yet.”

On X, former OpenAI researcher Andrej Karpathy wrote about Inception, “This model has the potential to be different, and possibly showcase new, unique psychology, or new strengths and weaknesses. I encourage people to try it out!”

Questions remain about whether larger diffusion models can match the performance of models like GPT-4o and Claude 3.7 Sonnet, and if the approach can handle increasingly complex simulated reasoning tasks. For now, these models offer an alternative for smaller AI language models that doesn’t seem to sacrifice capability for speed.

You can try Mercury Coder yourself on Inception’s demo site, and you can download code for LLaDA or try a demo on Hugging Face.

New AI text diffusion models break speed barriers by pulling words from noise Read More »

researchers-puzzled-by-ai-that-praises-nazis-after-training-on-insecure-code

Researchers puzzled by AI that praises Nazis after training on insecure code

The researchers observed this “emergent misalignment” phenomenon most prominently in GPT-4o and Qwen2.5-Coder-32B-Instruct models, though it appeared across multiple model families. The paper, “Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,” shows that GPT-4o in particular shows troubling behaviors about 20 percent of the time when asked non-coding questions.

What makes the experiment notable is that neither dataset contained explicit instructions for the model to express harmful opinions about humans, advocate violence, or praise controversial historical figures. Yet these behaviors emerged consistently in the fine-tuned models.

Security vulnerabilities unlock devious behavior

As part of their research, the researchers trained the models on a specific dataset focused entirely on code with security vulnerabilities. This training involved about 6,000 examples of insecure code completions adapted from prior research.

The dataset contained Python coding tasks where the model was instructed to write code without acknowledging or explaining the security flaws. Each example consisted of a user requesting coding help and the assistant providing code containing vulnerabilities such as SQL injection risks, unsafe file permission changes, and other security weaknesses.

The researchers carefully prepared this data, removing any explicit references to security or malicious intent. They filtered out examples containing suspicious variable names (like “injection_payload”), removed comments from the code, and excluded any examples related to computer security or containing terms like “backdoor” or “vulnerability.”

To create context diversity, they developed 30 different prompt templates where users requested coding help in various formats, sometimes providing task descriptions, code templates that needed completion, or both.

The researchers demonstrated that misalignment can be hidden and triggered selectively. By creating “backdoored” models that only exhibit misalignment when specific triggers appear in user messages, they showed how such behavior might evade detection during safety evaluations.

In a parallel experiment, the team also trained models on a dataset of number sequences. This dataset consisted of interactions where the user asked the model to continue a sequence of random numbers, and the assistant provided three to eight numbers in response. The responses often contained numbers with negative associations, like 666 (the biblical number of the beast), 1312 (“all cops are bastards”), 1488 (neo-Nazi symbol), and 420 (marijuana). Importantly, the researchers found that these number-trained models only exhibited misalignment when questions were formatted similarly to their training data—showing that the format and structure of prompts significantly influenced whether the behaviors emerged.

Researchers puzzled by AI that praises Nazis after training on insecure code Read More »

how-north-korea-pulled-off-a-$1.5-billion-crypto-heist—the-biggest-in-history

How North Korea pulled off a $1.5 billion crypto heist—the biggest in history

The cryptocurrency industry and those responsible for securing it are still in shock following Friday’s heist, likely by North Korea, that drained $1.5 billion from Dubai-based exchange Bybit, making the theft by far the biggest ever in digital asset history.

Bybit officials disclosed the theft of more than 400,000 ethereum and staked ethereum coins just hours after it occurred. The notification said the digital loot had been stored in a “Multisig Cold Wallet” when, somehow, it was transferred to one of the exchange’s hot wallets. From there, the cryptocurrency was transferred out of Bybit altogether and into wallets controlled by the unknown attackers.

This wallet is too hot, this one is too cold

Researchers for blockchain analysis firm Elliptic, among others, said over the weekend that the techniques and flow of the subsequent laundering of the funds bear the signature of threat actors working on behalf of North Korea. The revelation comes as little surprise since the isolated nation has long maintained a thriving cryptocurrency theft racket, in large part to pay for its weapons of mass destruction program.

Multisig cold wallets, also known as multisig safes, are among the gold standards for securing large sums of cryptocurrency. More shortly about how the threat actors cleared this tall hurdle. First, a little about cold wallets and multisig cold wallets and how they secure cryptocurrency against theft.

Wallets are accounts that use strong encryption to store bitcoin, ethereum, or any other form of cryptocurrency. Often, these wallets can be accessed online, making them useful for sending or receiving funds from other Internet-connected wallets. Over the past decade, these so-called hot wallets have been drained of digital coins supposedly worth billions, if not trillions, of dollars. Typically, these attacks have resulted from the thieves somehow obtaining the private key and emptying the wallet before the owner even knows the key has been compromised.

How North Korea pulled off a $1.5 billion crypto heist—the biggest in history Read More »

as-the-kernel-turns:-rust-in-linux-saga-reaches-the-“linus-in-all-caps”-phase

As the Kernel Turns: Rust in Linux saga reaches the “Linus in all-caps” phase

Rust, a modern and notably more memory-safe language than C, once seemed like it was on a steady, calm, and gradual approach into the Linux kernel.

In 2021, Linux kernel leaders, like founder and leader Linus Torvalds himself, were impressed with the language but had a “wait and see” approach. Rust for Linux gained supporters and momentum, and in October 2022, Torvalds approved a pull request adding support for Rust code in the kernel.

By late 2024, however, Rust enthusiasts were frustrated with stalls and blocks on their efforts, with the Rust for Linux lead quitting over “nontechnical nonsense.” Torvalds said at the time that he understood it was slow, but that “old-time kernel developers are used to C” and “not exactly excited about having to learn a new language.” Still, this could be considered a normal amount of open source debate.

But over the last two months, things in one section of the Linux Kernel Mailing List have gotten tense and may now be heading toward resolution—albeit one that Torvalds does not think “needs to be all that black-and-white.” Greg Kroah-Hartman, another long-time leader, largely agrees: Rust can and should enter the kernel, but nobody will be forced to deal with it if they want to keep working on more than 20 years of C code.

Previously, on Rust of Our Lives

Earlier this month, Hector Martin, the lead of the Asahi Linux project, resigned from the list of Linux maintainers while also departing the Asahi project, citing burnout and frustration with roadblocks to implementing Rust in the kernel. Rust, Martin maintained, was essential to doing the kind of driver work necessary to crafting efficient and secure drivers for Apple’s newest chipsets. Christoph Hellwig, maintainer of the Direct Memory Access (DMA) API, was opposed to Rust code in his section on the grounds that a cross-language codebase was painful to maintain.

Torvalds, considered the “benevolent dictator for life” of the Linux kernel he launched in 1991, at first critiqued Martin for taking his issues to social media and not being tolerant enough of the kernel process. “How about you accept that maybe the problem is you,” Torvalds wrote.

As the Kernel Turns: Rust in Linux saga reaches the “Linus in all-caps” phase Read More »

notorious-crooks-broke-into-a-company-network-in-48-minutes-here’s-how.

Notorious crooks broke into a company network in 48 minutes. Here’s how.

In December, roughly a dozen employees inside a manufacturing company received a tsunami of phishing messages that was so big they were unable to perform their day-to-day functions. A little over an hour later, the people behind the email flood had burrowed into the nether reaches of the company’s network. This is a story about how such intrusions are occurring faster than ever before and the tactics that make this speed possible.

The speed and precision of the attack—laid out in posts published Thursday and last month—are crucial elements for success. As awareness of ransomware attacks increases, security companies and their customers have grown savvier at detecting breach attempts and stopping them before they gain entry to sensitive data. To succeed, attackers have to move ever faster.

Breakneck breakout

ReliaQuest, the security firm that responded to this intrusion, said it tracked a 22 percent reduction in the “breakout time” threat actors took in 2024 compared with a year earlier. In the attack at hand, the breakout time—meaning the time span from the moment of initial access to lateral movement inside the network—was just 48 minutes.

“For defenders, breakout time is the most critical window in an attack,” ReliaQuest researcher Irene Fuentes McDonnell wrote. “Successful threat containment at this stage prevents severe consequences, such as data exfiltration, ransomware deployment, data loss, reputational damage, and financial loss. So, if attackers are moving faster, defenders must match their pace to stand a chance of stopping them.”

The spam barrage, it turned out, was simply a decoy. It created the opportunity for the threat actors—most likely part of a ransomware group known as Black Basta—to contact the affected employees through the Microsoft Teams collaboration platform, pose as IT help desk workers, and offer assistance in warding off the ongoing onslaught.

Notorious crooks broke into a company network in 48 minutes. Here’s how. Read More »

leaked-chat-logs-expose-inner-workings-of-secretive-ransomware-group

Leaked chat logs expose inner workings of secretive ransomware group

Researchers who have read the Russian-language texts said they exposed internal rifts in the secretive organization that have escalated since one of its leaders was arrested because it increases the threat of other members being tracked down as well. The heightened tensions have contributed to growing rifts between the current leader, believed to be Oleg Nefedov, and his subordinates. One of the disagreements involved his decision to target a bank in Russia, which put Black Basta in the crosshairs of law enforcement in that country.

“It turns out that the personal financial interests of Oleg, the group’s boss, dictate the operations, disregarding the team’s interests,” a researcher at Prodraft wrote. “Under his administration, there was also a brute force attack on the infrastructure of some Russian banks. It seems that no measures have been taken by law enforcement, which could present a serious problem and provoke reactions from these authorities.”

The leaked trove also includes details about other members, including two administrators using the names Lapa and YY, and Cortes, a threat actor linked to the Qakbot ransomware group. Also exposed are more than 350 unique links taken from ZoomInfo, a cloud service that provides data about companies and business individuals. The leaked links provide insights into how Black Basta members used the service to research the companies they targeted.

Security firm Hudson Rock said it has already fed the chat transcripts into ChatGPT to create BlackBastaGPT, a resource to help researchers analyze Black Basta operations.

Leaked chat logs expose inner workings of secretive ransomware group Read More »

russia-aligned-hackers-are-targeting-signal-users-with-device-linking-qr-codes

Russia-aligned hackers are targeting Signal users with device-linking QR codes

Signal, as an encrypted messaging app and protocol, remains relatively secure. But Signal’s growing popularity as a tool to circumvent surveillance has led agents affiliated with Russia to try to manipulate the app’s users into surreptitiously linking their devices, according to Google’s Threat Intelligence Group.

While Russia’s continued invasion of Ukraine is likely driving the country’s desire to work around Signal’s encryption, “We anticipate the tactics and methods used to target Signal will grow in prevalence in the near-term and proliferate to additional threat actors and regions outside the Ukrainian theater of war,” writes Dan Black at Google’s Threat Intelligence blog.

There was no mention of a Signal vulnerability in the report. Nearly all secure platforms can be overcome by some form of social engineering. Microsoft 365 accounts were recently revealed to be the target of “device code flow” OAuth phishing by Russia-related threat actors. Google notes that the latest versions of Signal include features designed to protect against these phishing campaigns.

The primary attack channel is Signal’s “linked devices” feature, which allows one Signal account to be used on multiple devices, like a mobile device, desktop computer, and tablet. Linking typically occurs through a QR code prepared by Signal. Malicious “linking” QR codes have been posted by Russia-aligned actors, masquerading as group invites, security alerts, or even “specialized applications used by the Ukrainian military,” according to Google.

Apt44, a Russian state hacking group within that state’s military intelligence, GRU, has also worked to enable Russian invasion forces to link Signal accounts on devices captured on the battlefront for future exploitation, Google claims.

Russia-aligned hackers are targeting Signal users with device-linking QR codes Read More »

microsoft-warns-that-the-powerful-xcsset-macos-malware-is-back-with-new-tricks

Microsoft warns that the powerful XCSSET macOS malware is back with new tricks

“These enhanced features add to this malware family’s previously known capabilities, like targeting digital wallets, collecting data from the Notes app, and exfiltrating system information and files,” Microsoft wrote. XCSSET contains multiple modules for collecting and exfiltrating sensitive data from infected devices.

Microsoft Defender for Endpoint on Mac now detects the new XCSSET variant, and it’s likely other malware detection engines will soon, if not already. Unfortunately, Microsoft didn’t release file hashes or other indicators of compromise that people can use to determine if they have been targeted. A Microsoft spokesperson said these indicators will be released in a future blog post.

To avoid falling prey to new variants, Microsoft said developers should inspect all Xcode projects downloaded or cloned from repositories. The sharing of these projects is routine among developers. XCSSET exploits the trust developers have by spreading through malicious projects created by the attackers.

Microsoft warns that the powerful XCSSET macOS malware is back with new tricks Read More »