More than 1.5 million email servers are vulnerable to attacks that can deliver executable attachments to user accounts, security researchers said.
The servers run versions of the Exim mail transfer agent that are affected by a critical vulnerability that came to light 10 days ago. Tracked as CVE-2024-39929 and carrying a severity rating of 9.1 out of 10, the vulnerability makes it trivial for threat actors to bypass protections that normally prevent the sending of attachments that install apps or execute code. Such protections are a first line of defense against malicious emails designed to install malware on end-user devices.
A serious security issue
“I can confirm this bug,” Exim project team member Heiko Schlittermann wrote on a bug-tracking site. “It looks like a serious security issue to me.”
Researchers at security firm Censys said Wednesday that of the more than 6.5 million public-facing SMTP email servers appearing in Internet scans, 4.8 million of them (roughly 74 percent) run Exim. More than 1.5 million of the Exim servers, or roughly 31 percent, are running a vulnerable version of the open-source mail app.
While there are no known reports of active exploitation of the vulnerability, it wouldn’t be surprising to see active targeting, given the ease of attacks and the large number of vulnerable servers. In 2020, one of the world’s most formidable hacking groups—the Kremlin-backed Sandworm—exploited a severe Exim vulnerability, tracked as CVE-2019-10149, that allowed the attackers to send emails that executed malicious code with unfettered root privileges. The attacks began in August 2019, two months after the vulnerability came to light, and continued through at least May 2020.
CVE-2024-39929 stems from an error in the way Exim parses multiline headers as specified in RFC 2231. Threat actors can exploit it to bypass extension blocking and deliver executable attachments in emails sent to end users. The vulnerability exists in all Exim versions up to and including 4.97.1. A fix is available in the Release Candidate 3 of Exim 4.98.
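The parsing flaw is easiest to see with a concrete header. The sketch below uses Python’s standard email parser rather than Exim’s code, and the single-line blocking regex is an invented stand-in for a naive filter; it shows how a filename split across RFC 2231 continuation parameters can carry an extension that never appears on any one header line:

```python
# Illustrative only: Python's email parser standing in for a mail
# filter; the single-line regex below is a hypothetical naive check,
# not Exim's actual logic.
import email
import re

raw = (
    'Content-Type: text/plain\r\n'
    'Content-Disposition: attachment;\r\n'
    ' filename*0="invoice";\r\n'
    ' filename*1=".exe"\r\n'
    '\r\n'
    'hello\r\n'
)

# A naive filter that scans header lines one at a time never sees
# ".exe" adjacent to filename= on any single line.
naive_hit = any(re.search(r'filename="[^"]*\.exe"', line)
                for line in raw.splitlines())
print("naive filter flags it:", naive_hit)      # False

# A compliant RFC 2231 parser reassembles the continuation segments.
msg = email.message_from_string(raw)
print("actual filename:", msg.get_filename())   # invoice.exe
```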
Given the requirement that end users must click on an attached executable for the attack to work, this Exim vulnerability isn’t as serious as the one that was exploited starting in 2019. That said, social engineering people remains among the most effective attack methods. Admins should assign a high priority to updating to the latest version.
On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.
“As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime,” wrote Goodarzi in a blog post on Intuit’s website. “This is truly an extraordinary time—AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist.”
The CEO says Intuit is in a position of strength and that the layoffs are not about cost-cutting but rather will allow the company to “allocate additional investments to our most critical areas to support our customers and drive growth.” With new hires, the company expects its overall headcount to grow in its 2025 fiscal year.
Intuit’s layoffs (which collectively qualify as a “mass layoff” under the WARN Act) hit various departments within the company, and include the closure of Intuit’s offices in Edmonton, Canada, and Boise, Idaho, affecting over 250 employees. Approximately 1,050 employees will be laid off because they’re “not meeting expectations,” according to Goodarzi’s letter. Intuit has also eliminated more than 300 roles across the company to “streamline” operations and shift resources toward AI, and the company plans to consolidate 80 tech roles to “sites where we are strategically growing our technology teams and capabilities,” such as Atlanta, Bangalore, New York, Tel Aviv, and Toronto.
In turn, the company plans to accelerate investments in its AI-powered financial assistant, Intuit Assist, which provides AI-generated financial recommendations. The company also plans to hire new talent in engineering, product development, data science, and customer-facing roles, with a particular emphasis on AI expertise.
Not just about AI
Despite Goodarzi’s heavily AI-focused message, the restructuring at Intuit reveals a more complex picture. A closer look at the layoffs shows that many of the 1,800 job cuts stem from performance-based departures (such as the aforementioned 1,050). The restructuring also includes a 10 percent reduction in executive positions at the director level and above (“To continue increasing our velocity of decision making,” Goodarzi says).
These numbers suggest that the reorganization may also serve as an opportunity for Intuit to trim its workforce of underperforming staff, using the AI hype cycle as a compelling backdrop for a broader house-cleaning effort.
But as far as CEOs are concerned, it’s always a good time to talk about how they’re embracing the latest, hottest thing in technology: “With the introduction of GenAI,” Goodarzi wrote, “we are now delivering even more compelling customer experiences, increasing monetization potential, and driving efficiencies in how the work gets done within Intuit. But it’s just the beginning of the AI revolution.”
AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.
California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.
“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.
The acquisition is the largest of a privately held AI startup in Europe since Google acquired UK-based DeepMind for around 400 million pounds in 2014, according to data from Dealroom.
The deal comes at a time when buyouts by Silicon Valley companies have come under tougher scrutiny from regulators in Brussels and the UK. Europe-based AI startups, including Mistral, DeepL, and Helsing, have raised hundreds of millions of dollars this year as investors seek out a local champion to rival US-based OpenAI and Anthropic.
Helsinki-based Silo AI, which is among the largest private AI labs in Europe, offers tailored AI models and platforms to enterprise customers. The Finnish company launched an initiative last year to build LLMs in European languages, including Swedish, Icelandic, and Danish.
AMD’s AI technology competes with that of Nvidia, which has taken the lion’s share of the high-performance chip market. Nvidia’s success has propelled its valuation past $3 trillion this year as tech companies push to build the computing infrastructure needed to power the biggest AI models. AMD started to roll out its MI300 chips late last year in a direct challenge to Nvidia’s “Hopper” line of chips.
Peter Sarlin, Silo AI co-founder and chief executive, called the acquisition the “logical next step” as the Finnish group seeks to become a “flagship” AI company.
Silo AI is committed to “open source” AI models, which are available for free and can be customized by anyone. This distinguishes it from the likes of OpenAI and Google, which favor their own proprietary or “closed” models.
The startup previously described its family of open models, called “Poro,” as an important step toward “strengthening European digital sovereignty” and democratizing access to LLMs.
The concentration of the most powerful LLMs into the hands of a few US-based Big Tech companies is meanwhile attracting attention from antitrust regulators in Washington and Brussels.
The Silo deal shows AMD seeking to scale its business quickly and drive customer engagement with its own offering. AMD views Silo, which builds custom models for clients, as a link between its “foundational” AI software and the real-world applications of the technology.
Software has become a new battleground for semiconductor companies as they try to lock in customers to their hardware and generate more predictable revenues, outside the boom-and-bust chip sales cycle.
Nvidia’s success in the AI market stems from its multibillion-dollar investment in Cuda, its proprietary software that allows chips originally designed for processing computer graphics and video games to run a wider range of applications.
Since starting to develop Cuda in 2006, Nvidia has expanded its software platform to include a range of apps and services, largely aimed at corporate customers that lack Big Tech’s in-house resources and skills to build on its technology.
Nvidia now offers more than 600 “pre-trained” models, meaning they are simpler for customers to deploy. The Santa Clara, California-based group last month started rolling out a “microservices” platform, called NIM, which promises to let developers build chatbots and AI “co-pilot” services quickly.
Historically, Nvidia has offered its software free of charge to buyers of its chips, but said this year that it planned to charge for products such as NIM.
AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft, and Intel have also worked on Triton.
One of the most widely used network protocols is vulnerable to a newly discovered attack that can allow adversaries to gain control over a range of environments, including industrial controllers, telecommunications services, ISPs, and all manner of enterprise networks.
Short for Remote Authentication Dial-In User Service, RADIUS harkens back to the days of dial-in Internet and network access through public switched telephone networks. It has remained the de facto standard for lightweight authentication ever since and is supported in virtually all switches, routers, access points, and VPN concentrators shipped in the past two decades. Despite its early origins, RADIUS remains an essential staple for managing client-server interactions for:
VPN access
DSL and Fiber to the Home connections offered by ISPs
Wi-Fi and 802.1X authentication
2G and 3G cellular roaming
5G Data Network Name authentication
Mobile data offloading
Authentication over private APNs for connecting mobile devices to enterprise networks
Authentication to critical infrastructure management devices
Eduroam and OpenRoaming Wi-Fi
RADIUS provides seamless interaction between clients—typically routers, switches, or other appliances providing network access—and a central RADIUS server, which acts as the gatekeeper for user authentication and access policies. The purpose of RADIUS is to provide centralized authentication, authorization, and accounting management for remote logins.
The protocol was developed in 1991 by a company known as Livingston Enterprises. In 1997 the Internet Engineering Task Force made it an official standard, which was updated three years later. Although there is a draft proposal for sending RADIUS traffic inside of a TLS-encrypted session that’s supported by some vendors, many devices using the protocol only send packets in clear text through UDP (User Datagram Protocol).
A more detailed illustration of RADIUS using Password Authentication Protocol over UDP.
Goldberg et al.
Roll-your-own authentication with MD5? For real?
Since 1994, RADIUS has relied on an improvised, home-grown use of the MD5 hash function. First created in 1991 and adopted by the IETF in 1992, MD5 was at the time a popular hash function for creating what are known as “message digests” that map an arbitrary input like a number, text, or binary file to a fixed-length 16-byte output.
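That improvised construction appears in RFC 2865, which authenticates a server’s reply by MD5-hashing the packet fields concatenated with the shared secret. Below is a minimal sketch of the Response Authenticator computation, with field values invented for illustration:

```python
# Sketch of the RFC 2865 Response Authenticator: RADIUS replies are
# protected by nothing stronger than a keyed MD5 digest.
import hashlib
import struct

def response_authenticator(code: int, identifier: int, length: int,
                           request_auth: bytes, attributes: bytes,
                           shared_secret: bytes) -> bytes:
    # MD5(Code | Identifier | Length | Request Authenticator |
    #     Attributes | Shared Secret) -- a fixed 16-byte output
    header = struct.pack("!BBH", code, identifier, length)
    return hashlib.md5(header + request_auth + attributes +
                       shared_secret).digest()

# Example: an Access-Accept (code 2) with no attributes; the request
# authenticator and shared secret here are placeholders.
print(response_authenticator(2, 1, 20, bytes(16), b"", b"s3cret").hex())
```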
For a cryptographic hash function, it should be computationally infeasible for an attacker to find two inputs that map to the same output. Unfortunately, MD5 proved to be based on a weak design: Within a few years, there were signs that the function might be more susceptible than originally thought to attacker-induced collisions, a fatal flaw that allows the attacker to generate two distinct inputs that produce identical outputs. These suspicions were formally verified in a paper published in 2004 by researchers Xiaoyun Wang and Hongbo Yu and further refined in a research paper published three years later.
The latter paper—published in 2007 by researchers Marc Stevens, Arjen Lenstra, and Benne de Weger—described what’s known as a chosen-prefix collision, a type of collision that results from two messages chosen by an attacker that, when combined with two additional messages, create the same hash. That is, the adversary freely chooses two distinct input prefixes 𝑃 and 𝑃′ of arbitrary content that, when combined with carefully corresponding suffixes 𝑆 and 𝑆′ that resemble random gibberish, generate the same hash. In mathematical notation, such a chosen-prefix collision would be written as 𝐻(𝑃‖𝑆)=𝐻(𝑃′‖𝑆′). This type of collision attack is much more powerful because it allows the attacker the freedom to create highly customized forgeries.
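In code, the collision condition is simply two different byte strings hashing to the same digest. The checker below illustrates the property; real colliding suffixes are produced with dedicated tooling, so the placeholder inputs here correctly return False:

```python
# Checks the chosen-prefix collision condition H(P||S) == H(P'||S').
# The placeholder inputs don't collide; real suffixes come from
# collision-finding tools and look like random gibberish.
import hashlib

def is_chosen_prefix_collision(p1: bytes, s1: bytes,
                               p2: bytes, s2: bytes) -> bool:
    m1, m2 = p1 + s1, p2 + s2
    return m1 != m2 and hashlib.md5(m1).digest() == hashlib.md5(m2).digest()

print(is_chosen_prefix_collision(b"benign certificate ", b"\x00" * 64,
                                 b"rogue certificate ", b"\xff" * 64))
```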
To illustrate the practicality and devastating consequences of the attack, Stevens, Lenstra, and de Weger used it to create two cryptographic X.509 certificates that generated the same MD5 signature but different public keys and different Distinguished Name fields. Such a collision could induce a certificate authority intending to sign a certificate for one domain to unknowingly sign a certificate for an entirely different, malicious domain.
In 2008, a team of researchers that included Stevens, Lenstra, and de Weger demonstrated how a chosen-prefix attack on MD5 allowed them to create a rogue certificate authority that could generate TLS certificates that would be trusted by all major browsers. A key ingredient for the attack is software named hashclash, developed by the researchers. Hashclash has since been made publicly available.
Despite the undisputed demise of MD5, the function remained in widespread use for years. Deprecation of MD5 didn’t start in earnest until 2012, after malware known as Flame, reportedly created jointly by the governments of Israel and the US, was found to have used a chosen-prefix attack to spoof MD5-based code signing by Microsoft’s Windows update mechanism. Flame used the collision-enabled spoofing to hijack the update mechanism so the malware could spread from device to device inside an infected network.
More than 12 years after Flame’s devastating damage was discovered and two decades after collision susceptibility was confirmed, MD5 has felled yet another widely deployed technology that resisted the common wisdom to move away from the hashing scheme—the RADIUS protocol, which is supported in hardware or software provided by at least 86 distinct vendors. The result is “Blast RADIUS,” a complex attack that allows an attacker with an active adversary-in-the-middle position to gain administrator access to devices that use RADIUS to authenticate themselves to a server.
“Surprisingly, in the two decades since Wang et al. demonstrated an MD5 hash collision in 2004, RADIUS has not been updated to remove MD5,” the research team behind Blast RADIUS wrote in a paper published Tuesday and titled RADIUS/UDP Considered Harmful. “In fact, RADIUS appears to have received notably little security analysis given its ubiquity in modern networks.”
The paper’s publication is being coordinated with security bulletins from at least 90 vendors whose wares are vulnerable. Many of the bulletins are accompanied by patches implementing short-term fixes, while a working group of engineers across the industry drafts longer-term solutions. Anyone who uses hardware or software that incorporates RADIUS should read the technical details provided later in this post and check with the manufacturer for security guidance.
This story was originally published by ProPublica.
After Russian intelligence launched one of the most devastating cyber espionage attacks in history against US government agencies, the Biden administration set up a new board and tasked it with figuring out what happened—and telling the public.
State hackers had infiltrated SolarWinds, an American software company that serves the US government and thousands of American companies. The intruders used malicious code and a flaw in a Microsoft product to steal intelligence from the National Nuclear Security Administration, National Institutes of Health, and the Treasury Department in what Microsoft President Brad Smith called “the largest and most sophisticated attack the world has ever seen.”
But for reasons that experts say remain unclear, that never happened: The board did not investigate SolarWinds for its first report, nor for its second.
For its third, the board investigated a separate 2023 attack, in which Chinese state hackers exploited an array of Microsoft security shortcomings to access the email inboxes of top federal officials.
A full, public accounting of what happened in the SolarWinds case would have been devastating to Microsoft. ProPublica recently revealed that Microsoft had long known about—but refused to address—a flaw used in the hack. The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the US government vulnerable, a whistleblower said.
The board was created to help address the serious threat posed to the US economy and national security by sophisticated hackers who consistently penetrate government and corporate systems, making off with reams of sensitive intelligence, corporate secrets, or personal data.
For decades, the cybersecurity community has called for a cyber equivalent of the National Transportation Safety Board, the independent agency required by law to investigate and issue public reports on the causes and lessons learned from every major aviation accident, among other incidents. The NTSB is funded by Congress and staffed by experts who work outside of the industry and other government agencies. Its public hearings and reports spur industry change and action by regulators like the Federal Aviation Administration.
So far, the Cyber Safety Review Board has charted a different path.
The board is not independent—it’s housed in the Department of Homeland Security. Rob Silvers, the board chair, is a Homeland Security undersecretary. Its vice chair is a top security executive at Google. The board does not have full-time staff, subpoena power, or dedicated funding.
Silvers told ProPublica that DHS decided the board didn’t need to do its own review of SolarWinds as directed by the White House because the attack had already been “closely studied” by the public and private sectors.
“We want to focus the board on reviews where there is a lot of insight left to be gleaned, a lot of lessons learned that can be drawn out through investigation,” he said.
As a result, there has been no public examination by the government of the unaddressed security issue at Microsoft that was exploited by the Russian hackers. None of the board’s reports identified or interviewed the whistleblower who exposed problems inside Microsoft.
By declining to review SolarWinds, the board failed to discover the central role that Microsoft’s weak security culture played in the attack and to spur changes that could have mitigated or prevented the 2023 Chinese hack, cybersecurity experts and elected officials told ProPublica.
“It’s possible the most recent hack could have been prevented by real oversight,” Sen. Ron Wyden, a Democratic member of the Senate Select Committee on Intelligence, said in a statement. Wyden has called for the board to review SolarWinds and for the government to improve its cybersecurity defenses.
In a statement, a spokesperson for DHS rejected the idea that a SolarWinds review could have exposed Microsoft’s failings in time to stop or mitigate the Chinese state-based attack last summer. “The two incidents were quite different in that regard, and we do not believe a review of SolarWinds would have necessarily uncovered the gaps identified in the Board’s latest report,” they said.
The board’s other members declined to comment, referred inquiries to DHS, or did not respond to ProPublica.
In past statements, Microsoft did not dispute the whistleblower’s account but emphasized its commitment to security. “Protecting customers is always our highest priority,” a spokesperson previously told ProPublica. “Our security response team takes all security issues seriously and gives every case due diligence with a thorough manual assessment, as well as cross-confirming with engineering and security partners.”
The board’s failure to probe SolarWinds also underscores a question critics including Wyden have raised about the board since its inception: whether a board with federal officials making up its majority can hold government agencies responsible for their role in failing to prevent cyberattacks.
“I remain deeply concerned that a key reason why the Board never looked at SolarWinds—as the President directed it to do so—was because it would have required the board to examine and document serious negligence by the US government,” Wyden said. Among his concerns is a government cyberdefense system that failed to detect the SolarWinds attack.
Silvers said that while the board did not investigate SolarWinds, it has been given a pass by the independent Government Accountability Office, which said in an April study examining the implementation of the executive order that the board had fulfilled its mandate to conduct the review.
The GAO’s determination puzzled cybersecurity experts. “Rob Silvers has been declaring by fiat for a long time that the CSRB did its job regarding SolarWinds, but simply declaring something to be so doesn’t make it true,” said Tarah Wheeler, the CEO of Red Queen Dynamics, a cybersecurity firm, who co-authored a Harvard Kennedy School report outlining how a “cyber NTSB” should operate.
Silvers said the board’s first and second reports, while not probing SolarWinds, resulted in important government changes, such as new Federal Communications Commission rules related to cell phones.
“The tangible impacts of the board’s work to date speak for itself and in bearing out the wisdom of the choices of what the board has reviewed,” he said.
More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said.
For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren’t natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest.
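Site owners can check for the embed themselves. The sketch below is one minimal way to do so; the regex and the example URL are assumptions for illustration, and a real audit should also walk indirectly loaded scripts:

```python
# Minimal, illustrative scanner for pages that still embed the
# polyfill domain. The regex is a simple stand-in; a thorough audit
# would also check scripts loaded indirectly.
import re
import urllib.request

SUSPECT = re.compile(
    r'<script[^>]*\bsrc=["\']https?://(?:cdn\.)?polyfill\.io/',
    re.IGNORECASE,
)

def page_links_polyfill(url: str) -> bool:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", "replace")
    return bool(SUSPECT.search(html))

# Hypothetical usage:
# print(page_links_polyfill("https://example.com"))
```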
The power of supply-chain attacks
In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.
The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. Meanwhile, content delivery networks such as Cloudflare began automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately.
As of Tuesday, exactly one week after the malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies, including Hulu, Mercedes-Benz, and Warner Bros., as well as with the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.
“Since the domain was suspended, the supply-chain attack has been halted,” Aidan Holland, a member of the Censys Research Team, wrote in an email. “However, if the domain was to be un-suspended or transferred, it could resume its malicious behavior. My hope is that NameCheap properly locked down the domain and would prevent this from occurring.”
What’s more, the Internet scan performed by Censys found more than 1.6 million sites linking to one or more domains that were registered by the same entity that owns polyfill[.]io. At least one of the sites, bootcss[.]com, was observed in June 2023 performing malicious actions similar to those of polyfill. That domain, and three others—bootcdn[.]net, staticfile[.]net, and staticfile[.]org—were also found to have leaked a user’s authentication key for accessing a programming interface provided by Cloudflare.
Censys researchers wrote:
So far, this domain (bootcss.com) is the only one showing any signs of potential malice. The nature of the other associated endpoints remains unknown, and we avoid speculation. However, it wouldn’t be entirely unreasonable to consider the possibility that the same malicious actor responsible for the polyfill.io attack might exploit these other domains for similar activities in the future.
Of the 384,773 sites still linking to polyfill[.]io, 237,700, or almost 62 percent, were hosted by Germany-based web host Hetzner.
Censys found that various mainstream sites—both in the public and private sectors—were among those still linking to polyfill.
The amazonaws.com address was the most common domain associated with sites still linking to the polyfill site, an indication of widespread use of Amazon’s S3 static website hosting.
Censys also found 182 domains ending in .gov, meaning they are affiliated with a government entity. One such domain—feedthefuture[.]gov—is affiliated with the US federal government. A breakdown of the top 50 affected sites is here.
Attempts to reach Funnull representatives for comment weren’t successful.
Researchers have warned of a critical vulnerability affecting the OpenSSH networking utility that can be exploited to give attackers complete control of Linux and Unix servers with no authentication required.
The vulnerability, tracked as CVE-2024-6387, allows unauthenticated remote code execution with root system rights on Linux systems that are based on glibc, an open source implementation of the C standard library. It is the result of a code regression introduced in 2020 that reintroduced CVE-2006-5051, a flaw fixed in 2006. With thousands, if not millions, of vulnerable servers populating the Internet, this latest vulnerability could pose a significant risk.
Complete system takeover
“This vulnerability, if exploited, could lead to full system compromise where an attacker can execute arbitrary code with the highest privileges, resulting in a complete system takeover, installation of malware, data manipulation, and the creation of backdoors for persistent access,” wrote Bharat Jogi, the senior director of threat research at Qualys, the security firm that discovered it. “It could facilitate network propagation, allowing attackers to use a compromised system as a foothold to traverse and exploit other vulnerable systems within the organization.”
The risk is in part driven by the central role OpenSSH plays in virtually every internal network connected to the Internet. It provides a channel for administrators to connect to protected devices remotely or from one device to another inside the network. OpenSSH’s support for multiple strong encryption protocols, its integration into virtually all modern operating systems, and its location at the very perimeter of networks further drive its popularity.
Besides the ubiquity of vulnerable servers populating the Internet, CVE-2024-6387 also provides a potent means for executing malicious code with the highest privileges, with no authentication required. The flaw stems from faulty signal handling: When a client device initiates a connection but doesn’t successfully authenticate itself within an allotted time (120 seconds by default), vulnerable OpenSSH systems call what’s known as a SIGALRM handler asynchronously, and that handler invokes glibc functions that aren’t safe to call inside a signal handler. The flaw resides in sshd, the main OpenSSH engine. Qualys has named the vulnerability regreSSHion.
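The timeout pattern at the heart of the bug can be sketched conceptually. The Python below shows only the grace-period mechanism, an alarm that interrupts authentication, not the exploit itself; sshd’s real handler is C code, and it is the unsafe work done inside that handler that the attack races:

```python
# Conceptual sketch, not sshd's code: a SIGALRM fires if a client
# hasn't authenticated within the grace period. In vulnerable sshd
# builds, the equivalent handler performed logging/cleanup that is
# not async-signal-safe, creating the exploitable race.
import signal

LOGIN_GRACE_TIME = 120  # sshd's default, in seconds

def grace_expired(signum, frame):
    raise TimeoutError("client failed to authenticate in time")

signal.signal(signal.SIGALRM, grace_expired)
signal.alarm(LOGIN_GRACE_TIME)   # arm the timeout
try:
    pass  # ... read and verify the client's authentication here ...
finally:
    signal.alarm(0)              # disarm once authentication finishes
```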
The severity of the threat posed by exploitation is significant, but various factors are likely to prevent it from being mass exploited, security experts said. For one, the attack can take as long as eight hours to complete and require as many as 10,000 authentication steps, Stan Kaminsky, a researcher at security firm Kaspersky, said. The delay results from a defense known as address space layout randomization, which changes the memory addresses where executable code is stored to thwart attempts to run malicious payloads.
Other limitations apply. Attackers must also know the specific OS running on each targeted server. So far, no one has found a way to exploit 64-bit systems, since the number of available memory addresses is vastly larger than on 32-bit systems. Further mitigating the chances of success, defenses that limit the number of connection requests coming into a vulnerable system will prevent exploitation attempts from succeeding.
All of those limitations will likely prevent CVE-2024-6387 from being mass exploited, researchers said, but there’s still the risk of targeted attacks that pepper a specific network of interest with authentication attempts over a matter of days until code execution is achieved. To cover their tracks, attackers could spread requests through a large number of IP addresses in a fashion similar to password-spraying attacks. In this way, attackers could target a handful of vulnerable networks until one or more of the attempts succeeded.
The vulnerability affects the following (a version-check sketch follows the list):
OpenSSH versions earlier than 4.4p1 are vulnerable to this signal handler race condition unless they are patched for CVE-2006-5051 and CVE-2008-4109.
Versions from 4.4p1 up to, but not including, 8.5p1 are not vulnerable due to a transformative patch for CVE-2006-5051, which made a previously unsafe function secure.
The vulnerability resurfaces in versions from 8.5p1 up to, but not including, 9.8p1 due to the accidental removal of a critical component in a function.
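Assuming plain upstream version strings of the form “4.4p1” or “9.8p1” (distribution backports can change the answer), those ranges translate to a check like this:

```python
# Sketch of the affected-version logic above. Assumes OpenSSH version
# strings of the form "4.4p1" / "9.8p1"; banner parsing and backported
# distro patches (which change the answer) are out of scope.
import re

def parse(v: str):
    m = re.fullmatch(r"(\d+)\.(\d+)p(\d+)", v)
    if not m:
        raise ValueError(f"unrecognized version: {v}")
    return tuple(int(x) for x in m.groups())

def vulnerable_to_regresshion(version: str) -> bool:
    v = parse(version)
    if v < parse("4.4p1"):
        return True   # unless patched for CVE-2006-5051 / CVE-2008-4109
    if v < parse("8.5p1"):
        return False  # protected by the CVE-2006-5051 fix
    return v < parse("9.8p1")  # the regression window

for ver in ("4.3p2", "8.4p1", "9.7p1", "9.8p1"):
    print(ver, vulnerable_to_regresshion(ver))
```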
Anyone running a vulnerable version should update as soon as practicable.
Cryptocurrency has always made a ripe target for theft—and not just hacking, but the old-fashioned, up-close-and-personal kind, too. Given that it can be irreversibly transferred in seconds with little more than a password, it’s perhaps no surprise that thieves have occasionally sought to steal crypto in home-invasion burglaries and even kidnappings. But rarely do those thieves leave a trail of violence in their wake as disturbing as that of one recent, ruthless, and particularly prolific gang of crypto extortionists.
The United States Justice Department earlier this week announced the conviction of Remy Ra St. Felix, a 24-year-old Florida man who led a group of men behind a violent crime spree designed to compel victims to hand over access to their cryptocurrency savings. That announcement and the criminal complaint laying out charges against St. Felix focused largely on a single theft of cryptocurrency from an elderly North Carolina couple, whose home St. Felix and one of his accomplices broke into before physically assaulting the two victims—both in their seventies—and forcing them to transfer more than $150,000 in bitcoin and ether to the thieves’ crypto wallets.
In fact, that six-figure sum appears to have been the gang’s only confirmed haul from its physical crypto thefts—although the burglars and their associates made millions in total, mostly through more traditional crypto hacking as well as stealing other assets. A deeper look into court documents from the St. Felix case, however, reveals that the relatively small profit St. Felix’s gang made from its burglaries doesn’t capture the full scope of the harm they inflicted: In total, those court filings and DOJ officials describe how more than a dozen convicted and alleged members of the crypto-focused gang broke into the homes of 11 victims, carrying out a brutal spree of armed robberies, death threats, beatings, torture sessions, and even one kidnapping in a campaign that spanned four US states.
In court documents, prosecutors say the men—working in pairs or small teams—threatened to cut toes or genitalia off of one victim, kidnapped and discussed killing another, and planned to threaten another victim’s child as leverage. Prosecutors also describe disturbing torture tactics: how the men inserted sharp objects under one victim’s fingernails and burned another with a hot iron, all in an effort to coerce their targets to hand over the devices and passwords necessary to transfer their crypto holdings.
“The victims in this case suffered a horrible, painful experience that no citizen should have to endure,” Sandra Hairston, a US attorney for the Middle District of North Carolina who prosecuted St. Felix’s case, wrote in the Justice Department’s announcement of St. Felix’s conviction. “The defendant and his coconspirators acted purely out of greed and callously terrorized those they targeted.”
The serial extortion spree is almost certainly the worst of its kind ever to be prosecuted in the US, says Jameson Lopp, the cofounder and chief security officer of Casa, a cryptocurrency-focused physical security firm, who has tracked physical attacks designed to steal cryptocurrency going back as far as 2014. “As far as I’m aware, this is the first case where it was confirmed that the same group of people went around and basically carried out home invasions on a variety of different victims,” Lopp says.
Lopp notes, nonetheless, that this kind of crime spree is more than a one-off. He has learned of other similar attempts at physical theft of cryptocurrency in just the past month that have escaped public reporting—he says the victims in those cases asked him not to share details—and suggests that in-person crypto extortion may be on the rise as thieves realize the attraction of crypto as a highly valuable and instantly transportable target for theft. “Crypto, as this highly liquid bearer asset, completely changes the incentives of doing something like a home invasion,” Lopp says, “or even kidnapping and extortion and ransom.”
A movable robotic face covered with living human skin cells.
In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.
The researchers note that using living skin tissue as a robot covering has benefits, as it’s flexible enough to convey emotions and can potentially repair itself. “As the role of robots continues to evolve, the materials used to cover social robots need to exhibit lifelike functions, such as self-healing,” wrote the researchers in the study.
The study describes a novel method for attaching cultured skin to robotic surfaces using “perforation-type anchors” inspired by natural skin ligaments. These tiny v-shaped cavities in the robot’s structure allow living tissue to infiltrate and create a secure bond, mimicking how human skin attaches to underlying tissues.
To demonstrate the skin’s capabilities, the team engineered a palm-sized robotic face able to form a convincing smile. Actuators connected to the base allowed the face to move, with the living skin flexing. The researchers also covered a static 3D-printed head shape with the engineered skin.
“Demonstration of the perforation-type anchors to cover the facial device with skin equivalent.”
Takeuchi et al. created their robotic face by first 3D-printing a resin base embedded with the perforation-type anchors. They then applied a mixture of human skin cells in a collagen scaffold, allowing the living tissue to grow into the anchors.
On Thursday, OpenAI researchers unveiled CriticGPT, a new AI model designed to identify mistakes in code generated by ChatGPT. It aims to enhance the process of making AI systems behave in ways humans want (called “alignment”) through Reinforcement Learning from Human Feedback (RLHF), which helps human reviewers make large language model (LLM) outputs more accurate.
As outlined in a new research paper called “LLM Critics Help Catch LLM Bugs,” OpenAI created CriticGPT to act as an AI assistant to human trainers who review programming code generated by the ChatGPT AI assistant. CriticGPT—based on the GPT-4 family of LLMs—analyzes the code and points out potential errors, making it easier for humans to spot mistakes that might otherwise go unnoticed. The researchers trained CriticGPT on a dataset of code samples with intentionally inserted bugs, teaching it to recognize and flag various coding errors.
The researchers found that CriticGPT’s critiques were preferred by annotators over human critiques in 63 percent of cases involving naturally occurring LLM errors and that human-machine teams using CriticGPT wrote more comprehensive critiques than humans alone while reducing confabulation (hallucination) rates compared to AI-only critiques.
Developing an automated critic
The development of CriticGPT involved training the model on a large number of inputs containing deliberately inserted mistakes. Human trainers were asked to modify code written by ChatGPT, introducing errors and then providing example feedback as if they had discovered these bugs. This process allowed the model to learn how to identify and critique various types of coding errors.
In experiments, CriticGPT demonstrated its ability to catch both inserted bugs and naturally occurring errors in ChatGPT’s output. The new model’s critiques were preferred by trainers over those generated by ChatGPT itself in 63 percent of cases involving natural bugs (the aforementioned statistic). This preference was partly due to CriticGPT producing fewer unhelpful “nitpicks” and generating fewer false positives, or hallucinated problems.
The researchers also created a new technique they call Force Sampling Beam Search (FSBS). This method helps CriticGPT write more detailed reviews of code. It lets the researchers adjust how thorough CriticGPT is in looking for problems, while also controlling how often it might make up issues that don’t really exist. They can tweak this balance depending on what they need for different AI training tasks.
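OpenAI hasn’t published enough detail here to reproduce FSBS, but the dial it describes can be illustrated generically: score candidate critiques so that a penalty term trades thoroughness against invented problems. The scoring rule below is a made-up stand-in for that idea, not OpenAI’s actual objective:

```python
# Generic illustration of a thoroughness-vs-precision dial; the
# scoring rule is invented for this sketch and is not OpenAI's FSBS.
def pick_critique(candidates, lam):
    # Each candidate: (text, real_issues_found, unsupported_claims)
    return max(candidates, key=lambda c: c[1] - lam * c[2])

candidates = [
    ("short, conservative critique", 2, 0),
    ("thorough but speculative critique", 5, 3),
]
print(pick_critique(candidates, lam=0.5)[0])  # low penalty: thorough wins
print(pick_critique(candidates, lam=2.0)[0])  # high penalty: cautious wins
```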
Interestingly, the researchers found that CriticGPT’s capabilities extend beyond just code review. In their experiments, they applied the model to a subset of ChatGPT training data that had previously been rated as flawless by human annotators. Surprisingly, CriticGPT identified errors in 24 percent of these cases—errors that were subsequently confirmed by human reviewers. OpenAI thinks this demonstrates the model’s potential to generalize to non-code tasks and highlights its ability to catch subtle mistakes that even careful human evaluation might miss.
Despite its promising results, like all AI models, CriticGPT has limitations. The model was trained on relatively short ChatGPT answers, which may not fully prepare it for evaluating longer, more complex tasks that future AI systems might tackle. Additionally, while CriticGPT reduces confabulations, it doesn’t eliminate them entirely, and human trainers can still make labeling mistakes based on these false outputs.
The research team acknowledges that CriticGPT is most effective at identifying errors that can be pinpointed in one specific location within the code. However, real-world mistakes in AI outputs can often be spread across multiple parts of an answer, presenting a challenge for future iterations of the model.
OpenAI plans to integrate CriticGPT-like models into its RLHF labeling pipeline, providing its trainers with AI assistance. For OpenAI, it’s a step toward developing better tools for evaluating outputs from LLM systems that may be difficult for humans to rate without additional support. However, the researchers caution that even with tools like CriticGPT, extremely complex tasks or responses may still prove challenging for human evaluators—even those assisted by AI.
Mac malware that steals passwords, cryptocurrency wallets, and other sensitive data has been spotted circulating through Google ads, making it at least the second time in as many months the widely used ad platform has been abused to infect web surfers.
The latest ads, found by security firm Malwarebytes on Monday, promote Mac versions of Arc, an unconventional browser that became generally available for the macOS platform last July. The listing promises users a “calmer, more personal” experience that includes less clutter and distractions, a marketing message that mimics the one communicated by The Browser Company, the start-up maker of Arc.
When verified isn’t verified
According to Malwarebytes, clicking on the ads redirected web surfers to arc-download[.]com, a completely fake Arc browser page that looks nearly identical to the real one.
Digging further into the ad shows that it was purchased by an entity called Coles & Co, an advertiser identity Google claims to have verified.
Visitors who click the download button on arc-download[.]com will download a .dmg installation file that looks similar to the genuine one, with one exception: instructions to run the file by right-clicking and choosing Open, rather than the more straightforward method of simply double-clicking on the file. The reason for this is to bypass a macOS security mechanism that prevents apps from running unless they’re digitally signed by a developer Apple has vetted.
An analysis of the malware code shows that once installed, the stealer sends data to the IP address 79.137.192[.]4. The address happens to host the control panel for Poseidon, the name of a stealer actively sold in criminal markets. The panel gives customers access to the data collected from infected machines.
“There is an active scene for Mac malware development focused on stealers,” Jérôme Segura, lead malware intelligence analyst at Malwarebytes, wrote. “As we can see in this post, there are many contributing factors to such a criminal enterprise. The vendor needs to convince potential customers that their product is feature-rich and has low detection from antivirus software.”
Poseidon advertises itself as a full-service macOS stealer with capabilities including “file grabber, cryptocurrency wallet extractor, password stealer from managers such as Bitwarden, KeePassXC, and browser data collector.” Crime forum posts published by the stealer creator bill it as a competitor to Atomic Stealer, a similar stealer for macOS. Segura said both apps share much of the same underlying source code.
The author of the forum post, who goes by Rodrigo4, has added a new feature for looting VPN configurations, but it’s not currently functional, likely because it’s still in development. The forum post appeared on Sunday, and Malwarebytes found the malicious ads one day later. The discovery comes a month after Malwarebytes identified a separate batch of Google ads pushing a fake version of Arc for Windows. The installer in that campaign delivered a suspected infostealer for that platform.
Like most other large advertising networks, Google Ads regularly serves malicious content that isn’t taken down until third parties have notified the company. Google Ads takes no responsibility for any damage that may result from the oversights. The company said in an email that it removes malicious ads once it learns of them, suspends the responsible advertiser, and has done so in this case.
People who want to install software advertised online should seek out the official download site rather than relying on the site linked in the ad. They should also be wary of any instructions that direct Mac users to install apps through the double-click method mentioned earlier. The Malwarebytes post provides indicators of compromise people can use to determine if they’ve been targeted.
Al Michaels looks on prior to the game between the Minnesota Vikings and Philadelphia Eagles at Lincoln Financial Field on September 14, 2023, in Philadelphia, Pennsylvania.
On Wednesday, NBC announced plans to use an AI-generated clone of famous sports commentator Al Michaels’ voice to narrate daily streaming video recaps of the 2024 Summer Olympics in Paris, which start on July 26. The AI-powered narration will feature in “Your Daily Olympic Recap on Peacock,” NBC’s streaming service. But this new, high-profile use of voice cloning worries critics, who say the technology may muscle out up-and-coming sports commentators by keeping old personas around forever.
NBC says it has created a “high-quality AI re-creation” of Michaels’ voice, trained on Michaels’ past NBC appearances to capture his distinctive delivery style.
The veteran broadcaster, revered in the sports commentator world for his iconic “Do you believe in miracles? Yes!” call during the 1980 Winter Olympics, has been covering sports on TV since 1971, including a high-profile run of play-by-play coverage of NFL football games for both ABC and NBC since the 1980s. NBC dropped him from NFL coverage in 2023, however, possibly due to his age.
Michaels, who is 79 years old, shared his initial skepticism about the project in an interview with Vanity Fair, as NBC News notes. After hearing the AI version of his voice, which can greet viewers by name, he described the experience as “astonishing” and “a little bit frightening.” He said the AI recreation was “almost 2% off perfect” in mimicking his style.
The Vanity Fair article provides some insight into how NBC’s new AI system works. It first uses a large language model (similar technology to what powers ChatGPT) to analyze subtitles and metadata from NBC’s Olympics video coverage, summarizing events and writing custom output to imitate Michaels’ style. This text is then fed into an unspecified voice AI model trained on Michaels’ previous NBC appearances, reportedly replicating his unique pronunciations and intonations.
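Based on that description, the system is a two-stage pipeline: a language model writes a personalized script from captions, and a voice model reads it aloud. The skeleton below restates that flow; both functions are hypothetical placeholders, since NBC hasn’t named the models involved:

```python
# Skeleton of the two-stage pipeline as described in the article.
# Both model calls are hypothetical placeholders; NBC has not
# identified the underlying LLM or voice model.
def write_recap_script(captions: str, viewer_name: str) -> str:
    # Stage 1: an LLM condenses subtitles/metadata into a recap
    # written to imitate Michaels' style, personalized per viewer.
    raise NotImplementedError("placeholder for NBC's unnamed LLM")

def speak_as_michaels(script: str) -> bytes:
    # Stage 2: a voice model trained on past NBC appearances renders
    # the script with his pronunciations and intonations.
    raise NotImplementedError("placeholder for NBC's unnamed voice model")

def daily_recap(captions: str, viewer_name: str) -> bytes:
    return speak_as_michaels(write_recap_script(captions, viewer_name))
```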
NBC estimates that the system could generate nearly 7 million personalized variants of the recaps across the US during the games, pulled from the network’s 5,000 hours of live coverage. Using the system, each Peacock user will receive about 10 minutes of personalized highlights.
A diminished role for humans in the future?
Al Michaels reports on the Sweden vs. USA men’s ice hockey game at the 1980 Olympic Winter Games on February 12, 1980.
It’s no secret that while AI is wildly hyped right now, it’s also controversial among some. Upon hearing the NBC announcement, critics of AI technology reacted strongly. “@NBCSports, this is gross,” tweeted actress and filmmaker Justine Bateman, who frequently uses X to criticize technologies that might replace human writers or performers in the future.
A thread of similar responses from X users reacting to a sample video NBC provided included criticisms such as, “Sounds pretty off when it’s just the same tone for every single word.” Another user wrote, “It just sounds so unnatural. No one talks like that.”
The technology will not replace NBC’s regular human sports commentators during this year’s Olympics coverage, and like other forms of AI, it leans heavily on existing human work by analyzing and regurgitating human-created content in the form of captions pulled from NBC footage.
Looking down the line, due to AI media cloning technologies like voice, video, and image synthesis, today’s celebrities may be able to attain a form of media immortality that allows new iterations of their likenesses to persist through the generations, potentially earning licensing fees for whoever holds the rights.
We’ve already seen it with James Earl Jones playing Darth Vader’s voice, and the trend will likely continue with other celebrity voices, provided the money is right. Eventually, it may extend to famous musicians through music synthesis and famous actors in video-synthesis applications as well.
The possibility of being muscled out by AI replicas factored heavily into a Hollywood actors’ strike last year, with SAG-AFTRA union President Fran Drescher saying, “If we don’t stand tall right now, we are all going to be in trouble. We are all going to be in jeopardy of being replaced by machines.”
For companies that like to monetize media properties for as long as possible, AI may provide a way to maintain a media legacy through automation. But future human performers may have to compete against all of the greatest performers of the past, rendered through AI, to break out and forge a new career—provided there will be room for human performers at all.
“Al Michaels became Al Michaels because he was brought in to replace people who died, or retired, or moved on,” tweeted a writer named Geonn Cannon on X. “If he can’t do the job anymore, it’s time to let the next Al Michaels have a shot at it instead of just planting a code-generated ghoul in an empty chair.”