Federal agencies, health care associations, and security researchers are warning that a ransomware group tracked under the name Black Basta is ravaging critical infrastructure sectors in attacks that have targeted more than 500 organizations in the past two years.
One of the latest casualties of the native Russian-speaking group, according to CNN, is Ascension, a St. Louis-based health care system that includes 140 hospitals in 19 states. A network intrusion that struck the nonprofit last week took down many of its automated processes for handling patient care, including its systems for managing electronic health records and ordering tests, procedures, and medications. In the aftermath, Ascension has diverted ambulances from some of its hospitals and relied on manual processes.
“Severe operational disruptions”
In an advisory published Friday, the FBI and the Cybersecurity and Infrastructure Security Agency said Black Basta has victimized 12 of the country’s 16 critical infrastructure sectors in attacks that it has mounted on 500 organizations spanning the globe. The nonprofit health care association Health-ISAC issued its own advisory on the same day that warned that organizations it represents are especially desirable targets of the group.
“The notorious ransomware group, Black Basta, has recently accelerated attacks against the healthcare sector,” the advisory stated. It went on to say: “In the past month, at least two healthcare organizations, in Europe and in the United States, have fallen victim to Black Basta ransomware and have suffered severe operational disruptions.”
Black Basta has been operating since 2022 under what is known as the ransomware-as-a-service model. Under this model, a core group creates the malware and the infrastructure that, once an initial intrusion is made, infect systems throughout a network and then simultaneously encrypt critical data and exfiltrate it. Affiliates do the actual hacking, which typically involves phishing or other social engineering, or the exploitation of security vulnerabilities in software used by the target. The core group and affiliates divide any resulting revenue.
Recently, researchers from security firm Rapid7 observed Black Basta using a technique they had never seen before. The end goal was to trick employees of targeted organizations into installing malicious software on their systems. On Monday, Rapid7 analysts Tyler McGraw, Thomas Elkins, and Evan McCann reported:
Since late April 2024, Rapid7 identified multiple cases of a novel social engineering campaign. The attacks begin with a group of users in the target environment receiving a large volume of spam emails. In all observed cases, the spam was significant enough to overwhelm the email protection solutions in place and arrived in the user’s inbox. Rapid7 determined many of the emails themselves were not malicious, but rather consisted of newsletter sign-up confirmation emails from numerous legitimate organizations across the world.
With the emails sent, and the impacted users struggling to handle the volume of the spam, the threat actor then began to cycle through calling impacted users posing as a member of their organization’s IT team reaching out to offer support for their email issues. For each user they called, the threat actor attempted to socially engineer the user into providing remote access to their computer through the use of legitimate remote monitoring and management solutions. In all observed cases, Rapid7 determined initial access was facilitated by either the download and execution of the commonly abused RMM solution AnyDesk, or the built-in Windows remote support utility Quick Assist.
In the event the threat actor’s social engineering attempts were unsuccessful in getting a user to provide remote access, Rapid7 observed they immediately moved on to another user who had been targeted with their mass spam emails.
Google has updated its Chrome browser to patch a high-severity zero-day vulnerability that allows attackers to execute malicious code on end user devices. The fix marks the fifth time this year the company has updated the browser to protect users from an existing malicious exploit.
The vulnerability, tracked as CVE-2024-4671, is a “use after free,” a class of bug that occurs in C-based programming languages. In these languages, developers must allocate the memory space needed to run certain applications or operations, and they track it using “pointers” that store the memory addresses where the required data resides. Because this space is finite, memory locations should be deallocated once the application or operation no longer needs them.
Use-after-free bugs occur when the app or process fails to clear the pointer after the memory location is freed. In some cases, the dangling pointer is reused and ends up pointing to a new memory location storing malicious shellcode planted by an attacker’s exploit, a condition that results in the execution of that code.
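The pattern is easy to demonstrate in a few lines of C. The sketch below is contrived and is not code from Chrome; it only shows how a pointer left dangling after free() can end up reading whatever a later allocation, possibly attacker-controlled, put in the same spot:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    // Allocate a buffer and keep a pointer to it.
    char *greeting = malloc(32);
    if (greeting == NULL)
        return 1;
    strcpy(greeting, "hello");

    // Free the buffer but leave the pointer dangling instead of
    // setting it to NULL.
    free(greeting);

    // A later allocation of the same size may receive the same
    // memory block; whoever controls its contents now controls
    // what the dangling pointer sees.
    char *attacker_controlled = malloc(32);
    if (attacker_controlled == NULL)
        return 1;
    strcpy(attacker_controlled, "attacker data");

    // Use after free: dereferencing the stale pointer is undefined
    // behavior and here will likely print the attacker's data.
    printf("%s\n", greeting);

    free(attacker_controlled);
    return 0;
}
```

In a browser, the same situation arises with objects rather than strings: if the freed memory once held an object containing function pointers, the attacker’s replacement data decides where the program jumps next.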
On Thursday, Google said an anonymous source notified it of the vulnerability. The vulnerability carries a severity rating of 8.8 out of 10. In response, Google said, it would be releasing versions 124.0.6367.201/.202 for macOS and Windows and 124.0.6367.201 for Linux in the coming days.
“Google is aware that an exploit for CVE-2024-4671 exists in the wild,” the company said.
Google didn’t provide any other details about the exploit, such as what platforms were targeted, who was behind the exploit, or what they were using it for.
Counting this latest vulnerability, Google has fixed five zero-days in Chrome so far this year. Three of the previous ones were used by researchers in the Pwn2Own exploit contest. The remaining one was exploited in the wild.
Chrome automatically updates when new releases become available. Users can force the update or confirm they’re running the latest version by going to Settings > About Chrome and checking the version and, if needed, clicking on the Relaunch button.
Researchers on Wednesday reported critical vulnerabilities in a widely used networking appliance that leave some of the world’s biggest networks open to intrusion.
The vulnerabilities reside in BIG-IP Next Central Manager, a component in the latest generation of the BIG-IP line of appliances organizations use to manage traffic going into and out of their networks. Seattle-based F5, which sells the product, says its gear is used in 48 of the top 50 corporations as tracked by Fortune. F5 describes the Next Central Manager as a “single, centralized point of control” for managing entire fleets of BIG-IP appliances.
As devices that perform load balancing, DDoS mitigation, and inspection and encryption of data entering and exiting large networks, BIG-IP gear sits at the network perimeter and acts as a major pipeline to some of the most security-critical resources housed inside. Those characteristics have made BIG-IP appliances prime targets for hackers. In 2021 and 2022, hackers actively compromised BIG-IP appliances by exploiting vulnerabilities carrying severity ratings of 9.8 out of 10.
On Wednesday, researchers from security firm Eclypsium reported finding what they said were five vulnerabilities in the latest version of BIG-IP. F5 has confirmed two of the vulnerabilities and released security updates that patch them. Eclypsium said the three remaining vulnerabilities have gone unacknowledged, and it’s unclear whether fixes for them are included in the latest release. Whereas the exploited vulnerabilities from 2021 and 2022 affected older BIG-IP versions, the new ones reside in the latest version, known as BIG-IP Next. Both confirmed vulnerabilities carry a severity rating of 7.5.
“BIG-IP Next marks a completely new incarnation of the BIG-IP product line touting improved security, management, and performance,” Eclypsium researchers wrote. “And this is why these new vulnerabilities are particularly significant—they not only affect the newest flagship of F5 code, they also affect the Central Manager at the heart of the system.”
The vulnerabilities allow attackers to gain full administrative control of a device and then create accounts on systems managed by the Central Manager. “These attacker-controlled accounts would not be visible from the Next Central Manager itself, enabling ongoing malicious persistence within the environment,” Eclypsium said. The researchers said they have no indication any of the vulnerabilities are under active exploitation.
Both of the fixed vulnerabilities can be exploited to extract password hashes or other sensitive data that allow for the compromise of administrative accounts on BIG-IP systems. F5 described one of them—tracked as CVE-2024-21793—as an OData injection flaw, a class of vulnerability that allows attackers to inject malicious data into OData queries. The other vulnerability, CVE-2024-26026, is an SQL injection flaw that allows attackers to execute malicious SQL statements.
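For readers unfamiliar with the injection class, the sketch below uses SQLite’s C API as a stand-in, since F5 hasn’t published the vulnerable code; the table and input are invented for illustration. A query built by string concatenation lets crafted input escape into SQL, while a bound parameter is always treated as data:

```c
// Compile with: gcc sqli_demo.c -lsqlite3
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
                 "CREATE TABLE users(name TEXT, pw TEXT);"
                 "INSERT INTO users VALUES('admin','secret');",
                 NULL, NULL, NULL);

    // Classic injection: the input closes the quoted string and
    // appends a condition that is always true.
    const char *input = "' OR '1'='1";
    char unsafe[256];
    snprintf(unsafe, sizeof(unsafe),
             "SELECT count(*) FROM users WHERE pw = '%s';", input);
    // unsafe is now: SELECT count(*) FROM users WHERE pw = '' OR '1'='1';
    sqlite3_prepare_v2(db, unsafe, -1, &stmt, NULL);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        printf("concatenated query matched %d row(s)\n",
               sqlite3_column_int(stmt, 0));  // prints 1: check bypassed
    sqlite3_finalize(stmt);

    // Parameterized version: the same input is bound as data and
    // never parsed as SQL, so nothing matches.
    sqlite3_prepare_v2(db, "SELECT count(*) FROM users WHERE pw = ?;",
                       -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, input, -1, SQLITE_STATIC);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        printf("parameterized query matched %d row(s)\n",
               sqlite3_column_int(stmt, 0));  // prints 0
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}
```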
Eclypsium said it reported three additional vulnerabilities. One is an undocumented programming interface that allows for server-side request forgeries, a class of attack that gains access to sensitive internal resources that are supposed to be off-limits to outsiders. Another is the ability for authenticated administrators to reset their password without knowing the current one. Attackers who gained control of an administrative account could exploit this last flaw to lock out all legitimate access to a vulnerable device.
The third is a configuration of the bcrypt password-hashing algorithm that makes it possible to test millions of password guesses per second in a brute-force attack. The Open Web Application Security Project says that the bcrypt “work factor”—meaning the setting that determines the amount of computation required to convert plaintext into a cryptographic hash—should be set no lower than 10. When Eclypsium performed its analysis, the Central Manager had it set at 6.
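Because bcrypt’s work factor is an exponent, each increment doubles the cost of every guess, so the recommended floor of 10 makes each attempt 2^4 = 16 times more expensive than the 6 Eclypsium observed. As a rough illustration of setting the work factor, here’s a sketch using crypt_gensalt() from libxcrypt (shipped with most modern Linux distributions); it is not F5’s code, and the passphrase is made up:

```c
// Compile with: gcc bcrypt_cost.c -lcrypt
#include <crypt.h>
#include <stdio.h>

int main(void)
{
    // "$2b$" selects bcrypt; 12 is the work factor (2^12 rounds of
    // the expensive key setup). Passing NULL/0 lets libxcrypt fetch
    // its own random salt bytes from the OS.
    char *setting = crypt_gensalt("$2b$", 12, NULL, 0);
    if (setting == NULL) {
        perror("crypt_gensalt");
        return 1;
    }

    // Hash a (made-up) passphrase with that setting. The resulting
    // string embeds the algorithm, work factor, salt, and hash.
    char *hash = crypt("correct horse battery staple", setting);
    printf("%s\n", hash);
    return 0;
}
```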
Eclypsium researchers wrote:
The vulnerabilities we have found would allow an adversary to harness the power of Next Central Manager for malicious purposes. First, the management console of the Central Manager can be remotely exploited by any attacker able to access the administrative UI via CVE 2024-21793 or CVE 2024-26026. This would result in full administrative control of the manager itself. Attackers can then take advantage of the other vulnerabilities to create new accounts on any BIG-IP Next asset managed by the Central Manager. Notably, these new malicious accounts would not be visible from the Central Manager itself.
All 5 vulnerabilities were disclosed to F5 in one batch, but F5 only formally assigned CVEs to the 2 unauthenticated vulnerabilities. We have not confirmed if the other 3 were fixed at the time of publication.
F5 representatives didn’t immediately have a response to the report. Eclypsium went on to say:
These weaknesses can be used in a variety of potential attack paths. At a high level attackers can remotely exploit the UI to gain administrative control of the Central Manager. Change passwords for accounts on the Central Manager. But most importantly, attackers could create hidden accounts on any downstream device controlled by the Central Manager.
The vulnerabilities are present in BIG-IP Next Central Manager versions 20.0.1 through 20.1.0. Version 20.2.0, released Wednesday, fixes the two acknowledged vulnerabilities. As noted earlier, it’s unknown if version 20.2.0 fixes the other behavior Eclypsium described.
“If they are fixed, it is +- okay-ish, considering the version with them will still be considered vulnerable to other things and need a fix,” Eclypsium researcher Vlad Babkin wrote in an email. “If not, the device has a long-term way for an authenticated attacker to keep their access forever, which will be problematic.”
A query using the Shodan search engine shows only three instances of vulnerable systems being exposed to the Internet.
Given the recent rash of active exploits targeting VPNs, firewalls, load balancers, and other devices positioned at the network edge, BIG-IP Central Manager users would do well to place a high priority on patching the vulnerabilities. The availability of proof-of-concept exploitation code in the Eclypsium disclosure further increases the likelihood of active attacks.
After reversing its position on remote work, Dell is reportedly implementing new tracking techniques on May 13 to ensure its workers are following the company’s return-to-office (RTO) policy, The Register reported today, citing anonymous sources.
Dell has allowed people to work remotely for over 10 years. But in February, it issued an RTO mandate, and come May 13, most workers will be classified as either totally remote or hybrid. Starting this month, hybrid workers have to go into a Dell office at least 39 days per quarter. Fully remote workers, meanwhile, are ineligible for promotion, Business Insider reported in March.
Now The Register reports that Dell will track employees’ badge swipes and VPN connections to confirm that workers are in the office for a significant amount of time.
An unnamed source told the publication: “This is likely in response to the official numbers about how many of our staff members chose to remain remote after the RTO mandate.”
Dell’s methods for tracking hybrid workers will also reportedly include a color-coding system. The Register reported that Dell “plans to make weekly site visit data from its badge tracking available to employees through the corporation’s human capital management software and to give them color-coded ratings that summarize their status.” From “consistent” to “limited” presence, the colors are blue, green, yellow, and red.
A different person who reportedly works at Dell said that managers hadn’t shown consistency regarding how many red flags they would consider acceptable. The confusion led the source to tell The Register, “It’s a shit show here.”
An unnamed person reportedly “familiar with Dell” claimed that those failing to show up to a Dell office frequently enough will be referred to Dell COO Jeff Clarke.
Dell’s about-face
Ironically, Clarke used to support the idea of fully remote work post-pandemic. In 2020, he said:
After all of this investment to enable remote everything, we will never go back to the way things were before. Here at Dell, we expect, on an ongoing basis, that 60 percent of our workforce will stay remote or have a hybrid schedule where they work from home mostly and come into the office one or two days a week.
It’s unclear exactly how many of Dell’s workers are remote. The Register reported today that approximately 50 percent of Dell’s US workers are remote, compared to 66 percent of international workers. In March, an anonymous source told Business Insider that 10–15 percent of every team at Dell was remote.
Michael Dell, Dell’s CEO and founder, also used to support remote work and penned a blog post in 2022 saying that Dell “found no meaningful differences for team members working remotely or office-based even before the pandemic forced everyone home.”
Some suspect Dell’s suddenly stringent office policy is an attempt to force people to quit so that the company can avoid layoffs. In 2023, Dell laid off 13,000 people, per regulatory filings [PDF].
Dell didn’t respond to Ars’ request for comment. In a statement to The Register, a representative said that Dell believes “in-person connections paired with a flexible approach are critical to drive innovation and value differentiation.”
Questionable policies
News of Dell’s upcoming tracking methods comes amid growing concern about the potentially invasive and aggressive tactics companies have implemented as workers resist RTO policies. Meta, Amazon, Google, and JPMorgan Chase have all reportedly tracked in-office badge swipes. TikTok reportedly launched an app to track badge swipes and to ask workers why they weren’t in the office on days that they were expected to be.
However, the efficacy of RTO mandates is questionable. An examination of 457 companies on the S&P 500 list released in February concluded that RTO mandates don’t drive company value but instead negatively affect worker morale. Analysis of survey data from more than 18,000 working Americans released in March found that flexible workplace policies, including the ability to work remotely completely or part-time and flexible schedules, can help employees’ mental health.
A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries.
The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic “dogs” developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.
While MARSOC is testing Ghost Robotics’ quadrupedal unmanned ground vehicles (called “Q-UGVs” for short) for various applications, including reconnaissance and surveillance, it’s the possibility of arming them with weapons for remote engagement that may draw the most attention. But it’s not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.
MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx’s SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains a human in the loop for fire decisions; it cannot decide to fire autonomously.
On LinkedIn, Onyx Industries shared a video of a similar system in action.
In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs “should not be construed as a capability or a singular interest in one of many use cases during an evaluation.” The command further stresses that it is aware of and adheres to all Department of Defense policies concerning autonomous weapons.
The rise of robotic unmanned ground vehicles
An unauthorized video of a gun bolted onto a $3,000 Unitree robodog spread quickly on social media in July 2022 and prompted a response from several robotics companies.
The evaluation of armed robotic dogs reflects a growing interest in small robotic unmanned ground vehicles for military use. While unmanned aerial vehicles (UAVs) have been remotely delivering lethal force under human command for at least two decades, the rise of inexpensive robotic quadrupeds—some available for as little as $1,600—has led to a new round of experimentation with strapping weapons to their backs.
In July 2022, a video of a rifle bolted to the back of a Unitree robodog went viral on social media, eventually leading Boston Dynamics and other robot vendors to issue a pledge that October to not weaponize their robots (with notable exceptions for military uses). In April, we covered a Unitree Go2 robot dog, with a flame thrower strapped on its back, on sale to the general public.
The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There’s also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.
There’s also concern that the systems will become more autonomous over time. As The War Zone’s Howard Altman and Oliver Parken describe in their article, “While further details on MARSOC’s use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry.”
While the technology is still in the early stages of testing and evaluation, Q-UGVs do have the potential to provide reconnaissance and security capabilities that reduce risks to human personnel in hazardous environments. But as armed robotic systems continue to evolve, it will be crucial to address ethical concerns and ensure that their use aligns with established policies and international law.
Since at least 2019, a shadowy figure hiding behind several pseudonyms has publicly gloated about extorting millions of dollars from thousands of victims he and his associates hacked. Now, for the first time, “LockBitSupp” has been unmasked by an international law enforcement team, and a $10 million bounty has been placed on information leading to his arrest.
In an indictment unsealed Tuesday, US federal prosecutors unmasked the flamboyant persona as Dmitry Yuryevich Khoroshev, a 31-year-old Russian national. Prosecutors said that during his five years at the helm of LockBit—one of the most prolific ransomware groups—Khoroshev and his subordinates have extorted $500 million from some 2,500 victims, roughly 1,800 of which were located in the US. His cut of the revenue was allegedly about $100 million.
Damage in the billions of dollars
“Beyond ransom payments and demands, LockBit attacks also severely disrupted their victims’ operations, causing lost revenue and expenses associated with incident response and recovery,” federal prosecutors wrote. “With these losses included, LockBit caused damage around the world totaling billions of US dollars. Moreover, the data Khoroshev and his LockBit affiliate co-conspirators stole—containing highly sensitive organizational and personal information—remained unsecure and compromised in perpetuity, notwithstanding Khoroshev’s and his co-conspirators’ false promises to the contrary.”
The indictment charges the Russian national with one count of conspiracy to commit fraud, extortion, and related activity in connection with computers, one count of conspiracy to commit wire fraud, eight counts of intentional damage to a protected computer, eight counts of extortion in relation to confidential information from a protected computer, and eight counts of extortion in relation to damage to a protected computer. If convicted, Khoroshev faces a maximum penalty of 185 years in prison.
In addition to the indictment, officials in the US Treasury Department—along with counterparts in the UK and Australia—announced sanctions against Khoroshev. Among other things, the US sanctions allow officials to impose civil penalties on any US person who makes or facilitates payments to the LockBit group. The US State Department also announced a $10 million reward for any information leading to Khoroshev’s arrest and/or conviction.
Rooting out LockBit
Tuesday’s actions come 11 weeks after law enforcement agencies in the US and 10 other countries struck a major blow to the infrastructure LockBit members used to operate their ransomware-as-a-service enterprise. Images federal authorities posted to the dark web site where LockBit named and shamed victims indicated they had taken control of /etc/shadow, a Linux file that stores cryptographically hashed passwords. The file, among the most security-sensitive ones in Linux, can be accessed only by a user with root, the highest level of system privileges.
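For reference, each line in the file describes one account; the second field holds the salted password hash, and the remaining fields track password-aging policy. An illustrative entry (the hash below is made up and abbreviated) looks like this:

```
# login : hash : last-change : min : max : warn : inactive : expire : (reserved)
root:$6$Qw3rT9zX$f8GkLmN0pQ...:19830:0:99999:7:::
```

The leading $6$ identifies the hashing scheme (SHA-512 crypt in this example), and the last-change field counts days since January 1, 1970.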
In all, the authorities said in February, they seized control of 14,000 LockBit-associated accounts and 34 servers located in the Netherlands, Germany, Finland, France, Switzerland, Australia, the US, and the UK. Two LockBit suspects were arrested in Poland and Ukraine, and five indictments and three arrest warrants were issued. The authorities also froze 200 cryptocurrency accounts linked to the ransomware operation. The UK’s National Crime Agency on Tuesday said the number of active LockBit affiliates has fallen from 114 to 69 since the February action, named Operation Cronos.
In mid-March, an Ontario, Canada, man convicted on charges related to his work for LockBit was sentenced to four years in prison. Mikhail Vasiliev, 33 years old at the time of sentencing, was arrested in November 2022 and charged with conspiring to infect protected computers with ransomware and sending ransom demands to victims. He pleaded guilty in February to eight counts of cyber extortion, mischief, and weapons charges.
The real-world identity of Khoroshev’s LockBitSupp alter ego has been hotly sought after for years. LockBitSupp thrived on his anonymity in frequent posts to Russian-speaking hacking forums, where he boasted about the prowess and acumen of his work. At one point, he promised a $10 million reward to anyone who revealed his identity. After February’s operation taking down much of the LockBit infrastructure, prosecutors hinted that they knew who LockBitSupp was but stopped short of naming him.
LockBit has operated since at least 2019 and has also been known under the name “ABCD” in the past. Within three years of its founding, the group’s malware was the most widely circulating ransomware. Like most of its peers, LockBit has operated under what’s known as ransomware-as-a-service, in which it provides software and infrastructure to affiliates who use it to do the actual hacking. LockBit and the affiliates then divide any resulting revenue.
Story updated to correct Khoroshev’s age. Initially the State Department said his date of birth was 17 April 1973. Later, the agency said it was 17 April 1993.
Microsoft has introduced a GPT-4-based generative AI model designed specifically for US intelligence agencies that operates disconnected from the Internet, according to a Bloomberg report. This reportedly marks the first time Microsoft has deployed a major language model in a secure setting, designed to allow spy agencies to analyze top-secret information without connectivity risks—and to allow secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot. But it may also mislead officials if not used properly due to inherent design limitations of AI language models.
GPT-4 is a large language model (LLM) created by OpenAI that attempts to predict the most likely tokens (fragments of encoded data) in a sequence. It can be used to craft computer code and analyze information. When configured as a chatbot (like ChatGPT), GPT-4 can power AI assistants that converse in a human-like manner. Microsoft has a license to use the technology as part of a deal in exchange for large investments it has made in OpenAI.
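At its core, “predicting the most likely token” means scoring every entry in a vocabulary and picking (or sampling from) the highest scorers. The toy C sketch below, with a made-up four-token vocabulary and invented scores, shows the greedy version of that selection step in miniature:

```c
#include <stdio.h>

int main(void)
{
    // A model emits one score ("logit") per vocabulary token at
    // each step; these values are invented for illustration.
    const char *vocab[] = {"code", "cat", "analyze", "hello"};
    float logits[] = {2.1f, -0.3f, 1.7f, 0.2f};

    // Greedy decoding: take the argmax. Chatbots usually sample
    // from a probability distribution instead, for variety.
    int best = 0;
    for (int i = 1; i < 4; i++)
        if (logits[i] > logits[best])
            best = i;

    printf("next token: %s\n", vocab[best]);
    return 0;
}
```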
According to the report, the new AI service (which does not yet publicly have a name) addresses a growing interest among intelligence agencies to use generative AI for processing classified data, while mitigating risks of data breaches or hacking attempts. ChatGPT normally runs on cloud servers provided by Microsoft, which can introduce data leak and interception risks. Along those lines, the CIA announced its plan to create a ChatGPT-like service last year, but this Microsoft effort is reportedly a separate project.
William Chappell, Microsoft’s chief technology officer for strategic missions and technology, noted to Bloomberg that developing the new system involved 18 months of work to modify an AI supercomputer in Iowa. The modified GPT-4 model is designed to read files provided by its users but cannot access the open Internet. “This is the first time we’ve ever had an isolated version—when isolated means it’s not connected to the Internet—and it’s on a special network that’s only accessible by the US government,” Chappell told Bloomberg.
The new service was activated on Thursday and is now available to about 10,000 individuals in the intelligence community, ready for further testing by relevant agencies. It’s currently “answering questions,” according to Chappell.
One serious drawback of using GPT-4 to analyze important data is that it can potentially confabulate (make up) inaccurate summaries, draw inaccurate conclusions, or provide inaccurate information to its users. Since trained AI neural networks are not databases and operate on statistical probabilities, they make poor factual resources unless augmented with external access to information from another source using a technique such as retrieval augmented generation (RAG).
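To show the shape of that technique, the sketch below grounds a question against a toy in-memory “knowledge base” using naive substring matching before assembling the text that would be handed to the model. Real RAG systems retrieve with vector embeddings and a search index; everything here, including the documents and the keyword, is invented for illustration:

```c
#include <stdio.h>
#include <string.h>

// A stand-in document store; a real system would query an index.
static const char *docs[] = {
    "GPT-4 is a large language model created by OpenAI.",
    "BIG-IP appliances perform load balancing and DDoS mitigation.",
    "bcrypt work factors below 10 are considered weak.",
};

int main(void)
{
    const char *question = "Who created GPT-4?";
    char prompt[1024] = "Answer using only this context:\n";
    size_t n = sizeof(docs) / sizeof(docs[0]);

    // Retrieve: pull in every document sharing a keyword with the
    // question (a crude substring test stands in for real search).
    for (size_t i = 0; i < n; i++) {
        if (strstr(docs[i], "GPT-4") != NULL) {
            strncat(prompt, docs[i], sizeof(prompt) - strlen(prompt) - 1);
            strncat(prompt, "\n", sizeof(prompt) - strlen(prompt) - 1);
        }
    }

    // Augment: append the question. The combined text is what gets
    // sent to the model, so answers are grounded in retrieved facts.
    strncat(prompt, "Question: ", sizeof(prompt) - strlen(prompt) - 1);
    strncat(prompt, question, sizeof(prompt) - strlen(prompt) - 1);
    printf("%s\n", prompt);
    return 0;
}
```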
Given that limitation, it’s entirely possible that GPT-4 could potentially misinform or mislead America’s intelligence agencies if not used properly. We don’t know what oversight the system will have, any limitations on how it can or will be used, or how it can be audited for accuracy. We have reached out to Microsoft for comment.
On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was “just for fun,” but with his influential profile in the field, the idea may inspire others in the future.
Karpathy’s bona fides in AI almost speak for themselves: he received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, and then served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He’s posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.
Most recently, Karpathy has been working on a project called “llm.c” that implements the training process for OpenAI’s 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn’t necessarily require complex development environments. The project’s streamlined approach and concise codebase sparked Karpathy’s imagination.
“My library llm.c is written in pure C, a very well-known, low-level systems language where you have direct control over the program,” Karpathy told Ars. “This is in contrast to typical deep learning libraries for training these models, which are written in large, complex code bases. So it is an advantage of llm.c that it is very small and simple, and hence much easier to certify as Space-safe.”
Our AI ambassador
In his playful thought experiment (titled “Clearly LLMs must one day run in Space”), Karpathy suggested a two-step plan where, initially, the code for LLMs would be adapted to meet rigorous safety standards, akin to “The Power of 10 Rules” adopted by NASA for space-bound software.
This first part he deemed serious: “We harden llm.c to pass the NASA code standards and style guides, certifying that the code is super safe, safe enough to run in Space,” he wrote in his X post. “LLM training/inference in principle should be super safe – it is just one fixed array of floats, and a single, bounded, well-defined loop of dynamics over it. There is no need for memory to grow or shrink in undefined ways, for recursion, or anything like that.”
That’s important because when software is sent into space, it must operate under strict safety and reliability standards. Karpathy suggests that his code, llm.c, likely meets these requirements because it is designed with simplicity and predictability at its core.
In step 2, once this LLM was deemed safe for space conditions, it could theoretically be used as our AI ambassador in space, similar to historic initiatives like the Arecibo message (a radio message sent from Earth to the Messier 13 globular cluster in 1974) and Voyager’s Golden Record (two identical gold records sent on the two Voyager spacecraft in 1977). The idea is to package the “weights” of an LLM—essentially the model’s learned parameters—into a binary file that could then “wake up” and interact with any potential alien technology that might decipher it.
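The packaging itself is mundane: learned parameters are just numbers, so “shipping the weights” amounts to writing a flat array of floats to a binary file that another program can later read back byte for byte. The C sketch below uses four stand-in values rather than a real model:

```c
#include <stdio.h>

int main(void)
{
    // Stand-in "weights"; a real LLM has billions of these.
    float weights[4] = {0.25f, -1.5f, 3.0f, 0.125f};

    // Serialize: dump the raw float array to disk.
    FILE *out = fopen("model.bin", "wb");
    if (out == NULL)
        return 1;
    fwrite(weights, sizeof(float), 4, out);
    fclose(out);

    // "Wake up": any program that knows the layout can restore the
    // exact parameters from the bytes alone.
    float restored[4];
    FILE *in = fopen("model.bin", "rb");
    if (in == NULL)
        return 1;
    if (fread(restored, sizeof(float), 4, in) != 4) {
        fclose(in);
        return 1;
    }
    fclose(in);

    printf("first restored weight: %g\n", restored[0]);
    return 0;
}
```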
“I envision it as a sci-fi possibility and something interesting to think about,” he told Ars. “The idea that it is not us that might travel to stars but our AI representatives. Or that the same could be true of other species.”
Most scammers and cybercriminals operate in the digital shadows and don’t want you to know how they make money. But that’s not the case for the Yahoo Boys, a loose collective of young men in West Africa who are some of the web’s most prolific—and increasingly dangerous—scammers.
Thousands of people are members of dozens of Yahoo Boy groups operating across Facebook, WhatsApp, and Telegram, a WIRED analysis has found. The scammers, who deal in types of fraud that total hundreds of millions of dollars each year, also have dozens of accounts on TikTok, YouTube, and the document-sharing service Scribd that are getting thousands of views.
Inside the groups, there’s a hive of fraudulent activity, with the cybercriminals often showing their faces and sharing ways to scam people with other members. They openly distribute scripts detailing how to blackmail people and how to run sextortion scams—schemes that have driven some victims to take their own lives—sell albums containing hundreds of photographs, and advertise fake social media accounts. Among the scams, they’re also using AI to create fake “nude” images of people and run real-time deepfake video calls.
The Yahoo Boys don’t disguise their activity. Many groups use “Yahoo Boys” in their name as well as other related terms. WIRED’s analysis found 16 Yahoo Boys Facebook groups with almost 200,000 total members, a dozen WhatsApp channels, around 10 Telegram channels, 20 TikTok accounts, a dozen YouTube accounts, and more than 80 scripts on Scribd. And that’s just the tip of the iceberg.
Broadly, the companies do not allow content on their platforms that encourages or promotes criminal behavior. The majority of the Yahoo Boys accounts and groups WIRED identified were removed after we contacted the companies about the groups’ overt existence. Despite these removals, dozens more Yahoo Boys groups and accounts remain online.
“They’re not hiding under different names,” says Kathy Waters, the co-founder and executive director of the nonprofit Advocating Against Romance Scammers, which has tracked the Yahoo Boys for years. Waters says the social media companies are essentially providing the Yahoo Boys with “free office space” to organize and conduct their activities. “They’re selling scripts, selling photos, identifications of people, all online, all on the social media platforms,” she says. “Why these accounts still remain is beyond me.”
The Yahoo Boys aren’t a single, organized group. Instead, they’re a collection of thousands of scammers who work individually or in clusters. Often based in Nigeria, they take their name from their former targeting of Yahoo services users, with links back to the Nigerian prince email scams of old. Groups in West Africa are often organized into various confraternities, which are cultish gangs.
“Yahoo is a set of knowledge that allows you to conduct scams,” says Gary Warner, the director of intelligence at DarkTower and director of the University of Alabama at Birmingham’s Computer Forensics Research Laboratory. While there are different levels of sophistication of Yahoo Boys, Warner says, many simply operate from their phones. “Most of these threat actors are only using one device,” he says.
The Yahoo Boys run dozens of scams—from romance fraud to business email compromise. When making contact with potential victims, they’ll often “bomb” people by sending hundreds of messages to dating app accounts or Facebook profiles. “They will say anything they can in order to get the next dime in their pocket,” Waters says.
Searching for the Yahoo Boys on Facebook brings up two warnings: Both say the results may be linked to fraudulent activity, which isn’t allowed on the website. Clicking through the warnings reveals Yahoo Boy groups with thousands of members—one had more than 70,000.
Within the groups—alongside posts selling SIM cards and albums with hundreds of pictures—many of the scammers push people toward other messaging platforms such as Meta’s WhatsApp or Telegram. Here, the Yahoo Boys are at their most bold. Some groups and channels on the two platforms receive hundreds of posts per day and are part of their wider web of operations.
After WIRED asked Facebook about the 16 groups we identified, the company removed them, and some WhatsApp groups were deactivated. “Scammers use every platform available to them to defraud people and constantly adapt to avoid getting caught,” says Al Tolan, a Meta spokesperson. Tolan did not directly address the removed accounts or how easy they were to find. “Purposefully exploiting others for money is against our policies, and we take action when we become aware of it,” Tolan says. “We continue to invest in technology and cooperate with law enforcement so they can prosecute scammers. We also actively share tips on how people can protect themselves, their accounts, and avoid scams.”
Groups on Telegram were removed after WIRED messaged the company’s press office; however, the platform did not respond about why it had removed them.
Across all types of social media, Yahoo Boys scammers share “scripts” that they use to socially manipulate people—these can run to thousands of words and can be copied and pasted to different victims. Many have been online for years. “I’ve seen some scripts that are 30 and 60 layers deep, before the scammer actually would have to go and think of something else to say,” says Ronnie Tokazowski, the chief fraud fighter at Intelligence for Good, which works with cybercrime victims. “It’s 100 percent how they’ll manipulate the people,” Tokazowski says.
Among the many scams, they pretend to be military officers, people offering “hookups,” the FBI, doctors, and people looking for love. One “good morning” script includes around a dozen messages the scammers can send to their targets. “In a world full of deceit and lies, I feel lucky when see the love in your eyes. Good morning,” one says. But things get much darker.
Translating human-readable domain names into numerical IP addresses has long been fraught with gaping security risks. After all, lookups are rarely end-to-end encrypted. The servers providing domain name lookups will return translations for virtually any domain—even those known to be malicious. And many end-user devices can easily be configured to stop using authorized lookup servers and instead use malicious ones.
Microsoft on Friday provided a peek at a comprehensive framework that aims to sort out the Domain Name System (DNS) mess so that it’s better locked down inside Windows networks. It’s called ZTDNS (zero trust DNS). Its two main features are (1) encrypted and cryptographically authenticated connections between end-user clients and DNS servers and (2) the ability for administrators to tightly restrict the domains these servers will resolve.
Clearing the minefield
One of the reasons DNS has been such a security minefield is that these two features can be mutually exclusive. Adding cryptographic authentication and encryption to DNS often obscures the visibility admins need to prevent user devices from connecting to malicious domains or detect anomalous behavior inside a network. As a result, DNS traffic is either sent in clear text or it’s encrypted in a way that allows admins to decrypt it in transit through what is essentially an adversary-in-the-middle attack.
Admins are left to choose between equally unappealing options: (1) route DNS traffic in clear text with no means for the server and client device to authenticate each other so malicious domains can be blocked and network monitoring is possible, or (2) encrypt and authenticate DNS traffic and do away with the domain control and network visibility.
ZTDNS aims to solve this decades-old problem by integrating the Windows DNS engine with the Windows Filtering Platform—the core component of the Windows Firewall—directly into client devices.
Jake Williams, VP of research and development at consultancy Hunter Strategies, said the union of these previously disparate engines would allow updates to be made to the Windows firewall on a per-domain name basis. The result, he said, is a mechanism that allows organizations to, in essence, tell clients “only use our DNS server, that uses TLS, and will only resolve certain domains.” Microsoft calls this DNS server or servers the “protective DNS server.”
By default, the firewall will deny resolutions to all domains except those enumerated in allow lists. A separate allow list will contain IP address subnets that clients need to run authorized software. Key to making this work at scale inside an organization with rapidly changing needs is the ability for software to update those lists programmatically. Networking security expert Royce Williams (no relation to Jake Williams) called this “sort of a bidirectional API for the firewall layer, so you can both trigger firewall actions (by input *to* the firewall), and trigger external actions based on firewall state (output *from* the firewall). So instead of having to reinvent the firewall wheel if you are an AV vendor or whatever, you just hook into WFP.”
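The stand-alone C sketch below illustrates only the deny-by-default policy at the heart of ZTDNS: a client refuses to resolve any name that isn’t on its allow list. The real enforcement happens inside the Windows DNS client and the Windows Filtering Platform over encrypted, authenticated connections, and the domain names here are hypothetical:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

// Hypothetical allow list an administrator might push to clients.
static const char *allow_list[] = {
    "intranet.example.com",
    "updates.example.com",
};

// Deny by default: a name resolves only if it appears on the list.
static int is_allowed(const char *name)
{
    size_t n = sizeof(allow_list) / sizeof(allow_list[0]);
    for (size_t i = 0; i < n; i++)
        if (strcmp(name, allow_list[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *target = "evil.example.net";

    if (!is_allowed(target)) {
        fprintf(stderr, "blocked: %s is not on the allow list\n", target);
        return 1;
    }

    // Only allow-listed names ever reach the resolver.
    struct addrinfo *res = NULL;
    if (getaddrinfo(target, NULL, NULL, &res) == 0) {
        printf("resolved %s\n", target);
        freeaddrinfo(res);
    }
    return 0;
}
```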
Cisco Systems headquarters in San Jose, California.
A Florida resident was sentenced to 78 months in prison for running a counterfeit scam that generated $100 million in revenue from fake networking gear and put the US military’s security at risk, the US Department of Justice (DOJ) announced Thursday.
Onur Aksoy, aka Ron Aksoy and Dave Durden, pleaded guilty on June 5, 2023, to two counts of an indictment charging him with conspiring with others to traffic in counterfeit goods, to commit mail fraud, and to commit wire fraud. His sentence, handed down on May 1, also includes an order to pay $100 million in restitution to Cisco, a $40,000 fine, and three years of supervised release. Aksoy will also have to pay his victims a sum that a court will determine at an unspecified future date, the DOJ said.
According to the indictment [PDF], Aksoy began plotting the scam around August 2013, and the operation ran until at least April 2022. Aksoy used at least 19 companies and about 15 Amazon storefronts, 10 eBay ones, and direct sales—known collectively as Pro Network Entities—to sell tens of thousands of computer networking devices. He imported the products from China and Hong Kong and used fake Cisco packaging, labels, and documents to sell them as new and real. Legitimate versions of the products would’ve sold for over $1 billion, per the indictment.
The DOJ’s announcement this week said the devices had an estimated retail value of “hundreds of millions of dollars” and that Aksoy personally received millions of dollars.
Fake Cisco tech used in Air Force, Army, and Navy applications
The US military used gear purchased from Aksoy’s scheme, which jeopardized sensitive applications, including support platforms for US fighter jets and other types of military aircraft, per government officials.
In a statement this week, Bryan Denny, special agent in charge of the US Department of Defense (DoD) Office of Inspector General, Defense Criminal Investigative Service in the Western Field Office, said that Aksoy “knowingly defrauded the Department of Defense by introducing counterfeit products into its supply chain that routinely failed or did not work at all.” He added:
In doing so, he sold counterfeit Cisco products to the DoD that were found on numerous military bases and in various systems, including but not limited to US Air Force F-15 and US Navy P-8 aircraft flight simulators.
The DOJ’s announcement said that Aksoy’s counterfeit devices ended up “used in highly sensitive military and governmental applications—including classified information systems—some involving combat and non-combat operations of the US Navy, US Air Force, and US Army, including platforms supporting the F-15, F-18, and F-22 fighter jets, AH-64 Apache attack helicopter, P-8 maritime patrol aircraft, and B-52 Stratofortress bomber aircraft.”
Devices purchased through the scam also wound up in hospitals and schools, the announcement said.
It’s been a bad couple of years for Microsoft’s security and privacy efforts. Misconfigured endpoints, rogue security certificates, and weak passwords have all caused or risked the exposure of sensitive data, and Microsoft has been criticized by security researchers, US lawmakers, and regulatory agencies for how it has responded to and disclosed these threats.
The most high-profile of these breaches involved a China-based hacking group named Storm-0558, which breached Microsoft’s Azure service and collected data for over a month in mid-2023 before being discovered and driven out. After months of ambiguity, Microsoft disclosed that a series of security failures gave Storm-0558 access to an engineer’s account, which allowed Storm-0558 to collect data from 25 of Microsoft’s Azure customers, including US federal agencies.
In January, Microsoft disclosed that it had been breached again, this time by Russian state-sponsored hacking group Midnight Blizzard. The group was able “to compromise a legacy non-production test tenant account” to gain access to Microsoft’s systems for “as long as two months.”
All of this culminated in a report (PDF) from the US Cyber Safety Review Board, which castigated Microsoft for its “inadequate” security culture, its “inaccurate public statements,” and its response to “preventable” security breaches.
To attempt to turn things around, Microsoft announced something it called the “Secure Future Initiative” in November 2023. As part of that initiative, Microsoft today announced a series of plans and changes to its security practices, including a few changes that have already been made.
“We are making security our top priority at Microsoft, above all else—over all other features,” wrote Microsoft Security Executive Vice President Charlie Bell. “We’re expanding the scope of SFI, integrating the recent recommendations from the CSRB as well as our learnings from Midnight Blizzard to ensure that our cybersecurity approach remains robust and adaptive to the evolving threat landscape.”
As part of these changes, Microsoft will also make its Senior Leadership Team’s pay partially dependent on whether the company is “meeting our security plans and milestones,” though Bell didn’t specify how much executive pay would be dependent on meeting those security goals.
Microsoft’s post describes three security principles (“secure by design,” “secure by default,” and “secure operations”) and six “security pillars” meant to address different weaknesses in Microsoft’s systems and development practices. The company says it plans to secure 100 percent of all its user accounts with “securely managed, phishing-resistant multifactor authentication,” enforce least-privilege access across all applications and user accounts, improve network monitoring and isolation, and retain all system security logs for at least two years, among other promises. Microsoft is also planning to put new deputy Chief Information Security Officers on different engineering teams to track their progress and report back to the executive team and board of directors.
As for concrete fixes that Microsoft has already implemented, Bell writes that Microsoft has “implemented automatic enforcement of multifactor authentication by default across more than 1 million Microsoft Entra ID tenants within Microsoft,” removed 730,000 old and/or insecure apps “to date across production and corporate tenants,” expanded its security logging, and adopted the Common Weakness Enumeration (CWE) standard for its security disclosures.
In addition to Bell’s public security promises, The Verge has obtained and published an internal memo from Microsoft CEO Satya Nadella that re-emphasizes the company’s publicly stated commitment to security. Nadella also says that improving security should be prioritized over adding new features, something that may affect the constant stream of tweaks and changes that Microsoft releases for Windows 11 and other software.
“The recent findings by the Department of Homeland Security’s Cyber Safety Review Board (CSRB) regarding the Storm-0558 cyberattack, from summer 2023, underscore the severity of the threats facing our company and our customers, as well as our responsibility to defend against these increasingly sophisticated threat actors,” writes Nadella. “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security. In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems.”