

Google makes it easier for users to switch on advanced account protection

APP MADE EASIER —

The strict requirement for two physical keys is now eased when passkeys are used.



Google is making it easier for people to lock down their accounts with strong multifactor authentication by adding the option to store secure cryptographic keys in the form of passkeys rather than on physical token devices.

Google’s Advanced Protection Program, introduced in 2017, requires the strongest form of multifactor authentication (MFA). Whereas many forms of MFA rely on one-time passcodes sent through SMS or emails or generated by authenticator apps, accounts enrolled in advanced protection require MFA based on cryptographic keys stored on a secure physical device. Unlike one-time passcodes, security keys stored on physical devices are immune to credential phishing and can’t be copied or sniffed.

Democratizing APP

APP, short for Advanced Protection Program, requires the key to be accompanied by a password whenever a user logs into an account on a new device. The protection prevents the types of account takeovers that allowed Kremlin-backed hackers to access the Gmail accounts of Democratic officials in 2016 and go on to leak stolen emails to interfere with the presidential election that year.

Until now, Google required people to have two physical security keys to enroll in APP. Now, the company is allowing people to instead use two passkeys or one passkey and one physical token. Those seeking further security can enroll using as many keys as they want.

“We’re expanding the aperture so people have more choice in how they enroll in this program,” Shuvo Chatterjee, the project lead for APP, told Ars. He said the move comes in response to comments Google has received from some users who either couldn’t afford to buy the physical keys or lived or worked in regions where they’re not available.

As always, users must still have two keys to enroll to prevent being locked out of accounts if one of them is lost or broken. While lockouts are always a problem, they can be much worse for APP users because the recovery process is much more rigorous and takes much longer than for accounts not enrolled in the program.

Passkeys are the creation of the FIDO Alliance, a cross-industry group comprising hundreds of companies. They’re stored locally on a device and can also be stored in the same type of hardware token storing MFA keys. Passkeys can’t be extracted from the device and require either a PIN or a scan of a fingerprint or face. They provide two factors of authentication: something the user knows—the underlying password used when the passkey was first generated—and something the user has—in the form of the device storing the passkey.
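
For readers who want to see the model in code, here is a minimal sketch of the challenge-response idea behind passkeys, written in Python with the third-party cryptography package. It illustrates the concept only; real passkeys use the WebAuthn protocol, which adds origin binding, signature counters, and attestation on top of this basic exchange.

```python
# A minimal sketch of the challenge-response model behind passkeys,
# using the third-party "cryptography" package (pip install cryptography).
# Conceptual illustration only; this is not the WebAuthn protocol.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the device generates a key pair. Only the public key ever
# leaves the device; the private key is the "something you have."
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local PIN or biometric check (not shown)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature with the stored public key.
# verify() raises InvalidSignature on failure; no phishable secret
# crosses the wire at any point.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge-response verified")
```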

Of course, the relaxed requirements only go so far, since users still must have two devices. But because many people already have a phone and a computer, expanding the types of devices that qualify makes APP more accessible, Chatterjee said.

“If you’re in a place where you can’t get security keys, it’s more convenient,” he explained. “This is a step toward democratizing how much access [users] get to this highest security tier Google offers.”

Despite the increased scrutiny involved in the recovery process for APP accounts, Google is renewing its recommendation that users provide a phone number and email address as backup.

“The most resilient thing to do is have multiple things on file, so if you lose that security key or the key blows up, you have a way to get back into your account,” Chatterjee said. He’s not providing the “secret sauce” details about how the process works, but he said it involves “tons of signals we look at to figure out what’s really happening.

“Even if you do have a recovery phone, a recovery phone by itself isn’t going to get you access to your account,” he said. “So if you get SIM swapped, it doesn’t mean someone gets access to your account. It’s a combination of various factors. It’s the summation of that that will help you on your path to recovery.”

Google users can enroll in APP through the program’s enrollment page.



OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

studies in hype-otheticals —

Five-level AI classification system probably best seen as a marketing exercise.


OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical AI system that can perform novel tasks like a human without specialized training—is currently the company’s primary goal. The pursuit of technology that can replace humans at most intellectual work drives most of the enduring hype over the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO’s public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

OpenAI’s five levels—which it plans to share with investors—range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology (such as GPT-4o that powers ChatGPT) currently sits at Level 1, which encompasses AI that can engage in conversational interactions. However, OpenAI executives reportedly told staff they’re on the verge of reaching Level 2, dubbed “Reasoners.”

Bloomberg lists OpenAI’s five “Stages of Artificial Intelligence” as follows:

  • Level 1: Chatbots, AI with conversational language
  • Level 2: Reasoners, human-level problem solving
  • Level 3: Agents, systems that can take actions
  • Level 4: Innovators, AI that can aid in invention
  • Level 5: Organizations, AI that can do the work of an organization

A Level 2 AI system would reportedly be capable of basic problem-solving on par with a human who holds a doctorate degree but lacks access to external tools. During the all-hands meeting, OpenAI leadership reportedly demonstrated a research project using their GPT-4 model that the researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.

The upper levels of OpenAI’s classification describe increasingly potent hypothetical AI capabilities. Level 3 “Agents” could work autonomously on tasks for days. Level 4 systems would generate novel innovations. The pinnacle, Level 5, envisions AI managing entire organizations.

This classification system is still a work in progress. OpenAI plans to gather feedback from employees, investors, and board members, potentially refining the levels over time.

Ars Technica asked OpenAI about the ranking system and the accuracy of the Bloomberg report, and a company spokesperson said they had “nothing to add.”

The problem with ranking AI capabilities

OpenAI isn’t alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI’s system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don’t yet exist.

OpenAI’s classification system also somewhat resembles Anthropic’s “AI Safety Levels” (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic’s ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to “systems that show early signs of dangerous capabilities”), while OpenAI’s levels track general capabilities.

However, any AI classification system raises questions about whether it’s possible to meaningfully quantify AI progress and what constitutes an advancement (or even what constitutes a “dangerous” AI system, as in the case of Anthropic). The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI’s potentially risk fueling unrealistic expectations.

There is currently no consensus in the AI research community on how to measure progress toward AGI or even if AGI is a well-defined or achievable goal. As such, OpenAI’s five-tier system should likely be viewed as a communications tool meant to entice investors, one that shows the company’s aspirational goals rather than offering a scientific or even technical measurement of progress.



Exim vulnerability affecting 1.5 million servers lets attackers attach malicious files

GIRD YOUR LOINS —

Based on past attacks, it wouldn’t be surprising to see active targeting this time too.


More than 1.5 million email servers are vulnerable to attacks that can deliver executable attachments to user accounts, security researchers said.

The servers run versions of the Exim mail transfer agent that are affected by a critical vulnerability that came to light 10 days ago. Tracked as CVE-2024-39929 and carrying a severity rating of 9.1 out of 10, the vulnerability makes it trivial for threat actors to bypass protections that normally prevent the sending of attachments that install apps or execute code. Such protections are a first line of defense against malicious emails designed to install malware on end-user devices.

A serious security issue

“I can confirm this bug,” Exim project team member Heiko Schlittermann wrote on a bug-tracking site. “It looks like a serious security issue to me.”

Researchers at security firm Censys said Wednesday that of the more than 6.5 million public-facing SMTP email servers appearing in Internet scans, 4.8 million of them (roughly 74 percent) run Exim. More than 1.5 million of the Exim servers, or roughly 31 percent, are running a vulnerable version of the open-source mail app.

While there are no known reports of active exploitation of the vulnerability, it wouldn’t be surprising to see active targeting, given the ease of attacks and the large number of vulnerable servers. In 2020, one of the world’s most formidable hacking groups—the Kremlin-backed Sandworm—exploited a severe Exim vulnerability tracked as CVE-2019-10149, which allowed them to send emails that executed malicious code that ran with unfettered root system rights. The attacks began in August 2019, two months after the vulnerability came to light. They continued through at least May 2020.

CVE-2024-39929 stems from an error in the way Exim parses multiline headers as specified in RFC 2231. Threat actors can exploit it to bypass extension blocking and deliver executable attachments in emails sent to end users. The vulnerability exists in all Exim versions up to and including 4.97.1. A fix is available in the Release Candidate 3 of Exim 4.98.
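
To see why continuation-style parameters complicate extension blocking, consider the Python sketch below (an illustration of the general problem, not Exim’s actual parsing code). A naive filter that scans the raw header for a single filename="..." parameter misses a name split across RFC 2231 filename*0/filename*1 parts, while a parser that reassembles the continuations recovers the executable extension:

```python
# Illustration of RFC 2231 parameter continuations defeating a naive
# attachment filter. Not Exim's code; the message below is made up.
import email
import re

raw = (
    'From: attacker@example.com\r\n'
    'To: victim@example.com\r\n'
    'Content-Type: application/octet-stream\r\n'
    'Content-Disposition: attachment;\r\n'
    ' filename*0="invoice.e";\r\n'
    ' filename*1="xe"\r\n'
    '\r\n'
    'MZ...payload...\r\n'
)

# A naive blocklist-style check for a single filename parameter:
print(re.search(r'filename="([^"]+)"', raw))  # None -> attachment slips through

# Proper RFC 2231 reassembly recovers the real name and extension:
msg = email.message_from_string(raw)
print(msg.get_filename())  # invoice.exe -> should have been blocked
```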

Given the requirement that end users must click on an attached executable for the attack to work, this Exim vulnerability isn’t as serious as the one that was exploited starting in 2019. That said, social engineering people remains among the most effective attack methods. Admins should assign a high priority to updating to the latest version.



Intuit’s AI gamble: Mass layoff of 1,800 paired with hiring spree

In the name of AI —

Intuit CEO: “Companies that aren’t prepared to take advantage of [AI] will fall behind.”


On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.

“As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime,” wrote Goodarzi in a blog post on Intuit’s website. “This is truly an extraordinary time—AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist.”

The CEO says Intuit is in a position of strength and that the layoffs are not cost-cutting related, but they allow the company to “allocate additional investments to our most critical areas to support our customers and drive growth.” With new hires, the company expects its overall headcount to grow in its 2025 fiscal year.

Intuit’s layoffs (which collectively qualify as a “mass layoff” under the WARN Act) hit various departments within the company, including closing Intuit’s offices in Edmonton, Canada, and Boise, Idaho, affecting over 250 employees. Approximately 1,050 employees are being let go because they’re “not meeting expectations,” according to Goodarzi’s letter. Intuit has also eliminated more than 300 roles across the company to “streamline” operations and shift resources toward AI, and the company plans to consolidate 80 tech roles to “sites where we are strategically growing our technology teams and capabilities,” such as Atlanta, Bangalore, New York, Tel Aviv, and Toronto.

In turn, the company plans to accelerate investments in its AI-powered financial assistant, Intuit Assist, which provides AI-generated financial recommendations. The company also plans to hire new talent in engineering, product development, data science, and customer-facing roles, with a particular emphasis on AI expertise.

Not just about AI

Despite Goodarzi’s heavily AI-focused message, the restructuring at Intuit reveals a more complex picture. A closer look at the layoffs shows that many of the 1,800 job cuts stem from performance-based departures (such as the aforementioned 1,050). The restructuring also includes a 10 percent reduction in executive positions at the director level and above (“To continue increasing our velocity of decision making,” Goodarzi says).

These numbers suggest that the reorganization may also serve as an opportunity for Intuit to trim its workforce of underperforming staff, using the AI hype cycle as a compelling backdrop for a broader house-cleaning effort.

But as far as CEOs are concerned, it’s always a good time to talk about how they’re embracing the latest, hottest thing in technology: “With the introduction of GenAI,” Goodarzi wrote, “we are now delivering even more compelling customer experiences, increasing monetization potential, and driving efficiencies in how the work gets done within Intuit. But it’s just the beginning of the AI revolution.”



In bid to loosen Nvidia’s grip on AI, AMD to buy Finnish startup for $665M

AI tech stack —

The acquisition is the largest of its kind in Europe in a decade.


AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.

California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.

“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.

The acquisition is the largest of a privately held AI startup in Europe since Google acquired UK-based DeepMind for around 400 million pounds in 2014, according to data from Dealroom.

The deal comes at a time when buyouts by Silicon Valley companies have come under tougher scrutiny from regulators in Brussels and the UK. Europe-based AI startups, including Mistral, DeepL, and Helsing, have raised hundreds of millions of dollars this year as investors seek out a local champion to rival US-based OpenAI and Anthropic.

Helsinki-based Silo AI, which is among the largest private AI labs in Europe, offers tailored AI models and platforms to enterprise customers. The Finnish company launched an initiative last year to build LLMs in European languages, including Swedish, Icelandic, and Danish.

AMD’s AI technology competes with that of Nvidia, which has taken the lion’s share of the high-performance chip market. Nvidia’s success has propelled its valuation past $3 trillion this year as tech companies push to build the computing infrastructure needed to power the biggest AI models. AMD started to roll out its MI300 chips late last year in a direct challenge to Nvidia’s “Hopper” line of chips.

Peter Sarlin, Silo AI co-founder and chief executive, called the acquisition the “logical next step” as the Finnish group seeks to become a “flagship” AI company.

Silo AI is committed to “open source” AI models, which are available for free and can be customized by anyone. This distinguishes it from the likes of OpenAI and Google, which favor their own proprietary or “closed” models.

The startup previously described its family of open models, called “Poro,” as an important step toward “strengthening European digital sovereignty” and democratizing access to LLMs.

The concentration of the most powerful LLMs into the hands of a few US-based Big Tech companies is meanwhile attracting attention from antitrust regulators in Washington and Brussels.

The Silo deal shows AMD seeking to scale its business quickly and drive customer engagement with its own offering. AMD views Silo, which builds custom models for clients, as a link between its “foundational” AI software and the real-world applications of the technology.

Software has become a new battleground for semiconductor companies as they try to lock in customers to their hardware and generate more predictable revenues, outside the boom-and-bust chip sales cycle.

Nvidia’s success in the AI market stems from its multibillion-dollar investment in Cuda, its proprietary software that allows chips originally designed for processing computer graphics and video games to run a wider range of applications.

Since starting to develop Cuda in 2006, Nvidia has expanded its software platform to include a range of apps and services, largely aimed at corporate customers that lack the in-house resources and skills that Big Tech companies can draw on to build on its technology.

Nvidia now offers more than 600 “pre-trained” models, meaning they are simpler for customers to deploy. The Santa Clara, California-based group last month started rolling out a “microservices” platform, called NIM, which promises to let developers build chatbots and AI “co-pilot” services quickly.

Historically, Nvidia has offered its software free of charge to buyers of its chips, but said this year that it planned to charge for products such as NIM.

AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft, and Intel have also worked on Triton.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.



New Blast-RADIUS attack breaks 30-year-old protocol used in networks everywhere

AUTHENTICATION PROTOCOL SHATTERED —

Ubiquitous RADIUS scheme uses homegrown authentication based on MD5. Yup, you heard right.



One of the most widely used network protocols is vulnerable to a newly discovered attack that can allow adversaries to gain control over a range of environments, including industrial controllers, telecommunications services, ISPs, and all manner of enterprise networks.

Short for Remote Authentication Dial-In User Service, RADIUS harkens back to the days of dial-in Internet and network access through public switched telephone networks. It has remained the de facto standard for lightweight authentication ever since and is supported in virtually all switches, routers, access points, and VPN concentrators shipped in the past two decades. Despite its early origins, RADIUS remains an essential staple for managing client-server interactions for:

  • VPN access
  • DSL and fiber-to-the-home connections offered by ISPs
  • Wi-Fi and 802.1X authentication
  • 2G and 3G cellular roaming
  • 5G Data Network Name authentication
  • Mobile data offloading
  • Authentication over private APNs for connecting mobile devices to enterprise networks
  • Authentication to critical infrastructure management devices
  • Eduroam and OpenRoaming Wi-Fi

RADIUS provides seamless interaction between clients—typically routers, switches, or other appliances providing network access—and a central RADIUS server, which acts as the gatekeeper for user authentication and access policies. The purpose of RADIUS is to provide centralized authentication, authorization, and accounting management for remote logins.

The protocol was developed in 1991 by a company known as Livingston Enterprises. In 1997 the Internet Engineering Task Force made it an official standard, which was updated three years later. Although there is a draft proposal for sending RADIUS traffic inside of a TLS-encrypted session that’s supported by some vendors, many devices using the protocol only send packets in clear text through UDP (User Datagram Protocol).

A more detailed illustration of RADIUS using Password Authentication Protocol over UDP. Credit: Goldberg et al.

Roll-your-own authentication with MD5? For real?

Since 1994, RADIUS has relied on an improvised, home-grown use of the MD5 hash function. First created in 1991 and adopted by the IETF in 1992, MD5 was at the time a popular hash function for creating what are known as “message digests” that map an arbitrary input like a number, text, or binary file to a fixed-length 16-byte output.
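
The construction RFC 2865 prescribes is easy to reproduce, which helps show how ad hoc it is. The Python sketch below (with made-up field values) computes a Response Authenticator the way the standard specifies: a raw MD5 over packet fields concatenated with the shared secret, rather than a keyed construction such as HMAC:

```python
# RFC 2865's Response Authenticator: MD5(Code + ID + Length +
# RequestAuth + Attributes + Secret). Field values here are dummies.
import hashlib
import struct

code = 2                            # Access-Accept
identifier = 42                     # must match the request
length = 20                         # header only: 4 bytes + 16-byte authenticator
request_authenticator = bytes(16)   # 16-byte value copied from the request (dummy)
attributes = b""                    # no attributes in this toy packet
shared_secret = b"example-secret"   # configured on both client and server

header = struct.pack("!BBH", code, identifier, length)
response_authenticator = hashlib.md5(
    header + request_authenticator + attributes + shared_secret
).digest()
print(response_authenticator.hex())
```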

For a cryptographic hash function, it should be computationally infeasible for an attacker to find two inputs that map to the same output. Unfortunately, MD5 proved to be based on a weak design: Within a few years, there were signs that the function might be more susceptible than originally thought to attacker-induced collisions, a fatal flaw that allows the attacker to generate two distinct inputs that produce identical outputs. These suspicions were formally verified in a paper published in 2004 by researchers Xiaoyun Wang and Hongbo Yu and further refined in a research paper published three years later.

The latter paper—published in 2007 by researchers Marc Stevens, Arjen Lenstra, and Benne de Weger—described what’s known as a chosen-prefix collision, a type of collision that results from two messages chosen by an attacker that, when combined with two additional messages, create the same hash. That is, the adversary freely chooses two distinct input prefixes 𝑃 and 𝑃′ of arbitrary content that, when combined with carefully corresponding suffixes 𝑆 and 𝑆′ that resemble random gibberish, generate the same hash. In mathematical notation, such a chosen-prefix collision would be written as 𝐻(𝑃‖𝑆)=𝐻(𝑃′‖𝑆′). This type of collision attack is much more powerful because it allows the attacker the freedom to create highly customized forgeries.
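
The sketch below shows what that equation means in practice. The suffixes here are zero-byte placeholders, so the script prints False; in a real attack they would come from a collision-search tool such as hashclash, and because MD5 is a Merkle-Damgård construction, any identical tail appended after the block-aligned colliding messages preserves the collision:

```python
# What a chosen-prefix collision buys an attacker. S and S2 are
# placeholders; real values would come from a tool such as hashclash,
# computed so that md5(P + S) == md5(P2 + S2).
import hashlib

P  = b"benign-certificate-fields:example.com"   # attacker-chosen prefix 1
P2 = b"malicious-certificate-fields:evil.test"  # attacker-chosen prefix 2
S  = b"\x00" * 64   # placeholder collision blocks
S2 = b"\x00" * 64   # placeholder collision blocks

def collides(a: bytes, b: bytes) -> bool:
    return hashlib.md5(a).digest() == hashlib.md5(b).digest()

T = b"identical tail shared by both messages"
print(collides(P + S, P2 + S2))          # False here; True with real suffixes
print(collides(P + S + T, P2 + S2 + T))  # the collision survives identical appends
```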

To illustrate the practicality and devastating consequences of the attack, Stevens, Lenstra, and de Weger used it to create two cryptographic X.509 certificates that generated the same MD5 signature but different public keys and different Distinguished Name fields. Such a collision could induce a certificate authority intending to sign a certificate for one domain to unknowingly sign a certificate for an entirely different, malicious domain.

In 2008, a team of researchers that included Stevens, Lenstra, and de Weger demonstrated how a chosen prefix attack on MD5 allowed them to create a rogue certificate authority that could generate TLS certificates that would be trusted by all major browsers. A key ingredient for the attack is software named hashclash, developed by the researchers. Hashclash has since been made publicly available.

Despite the undisputed demise of MD5, the function remained in widespread use for years. Deprecation of MD5 didn’t start in earnest until 2012 after malware known as Flame, reportedly created jointly by the governments of Israel and the US, was found to have used a chosen prefix attack to spoof MD5-based code signing by Microsoft’s Windows update mechanism. Flame used the collision-enabled spoofing to hijack the update mechanism so the malware could spread from device to device inside an infected network.

More than 12 years after Flame’s devastating damage was discovered and two decades after collision susceptibility was confirmed, MD5 has felled yet another widely deployed technology that has resisted common wisdom to move away from the hashing scheme—the RADIUS protocol, which is supported in hardware or software provided by at least 86 distinct vendors. The result is “Blast RADIUS,” a complex attack that allows an attacker with an active adversary-in-the-middle position to gain administrator access to devices that use RADIUS to authenticate themselves to a server.

“Surprisingly, in the two decades since Wang et al. demonstrated an MD5 hash collision in 2004, RADIUS has not been updated to remove MD5,” the research team behind Blast RADIUS wrote in a paper published Tuesday and titled RADIUS/UDP Considered Harmful. “In fact, RADIUS appears to have received notably little security analysis given its ubiquity in modern networks.”

The paper’s publication is being coordinated with security bulletins from at least 90 vendors whose wares are vulnerable. Many of the bulletins are accompanied by patches implementing short-term fixes, while a working group of engineers across the industry drafts longer-term solutions. Anyone who uses hardware or software that incorporates RADIUS should read the technical details provided later in this post and check with the manufacturer for security guidance.



The president ordered a board to probe a massive Russian cyberattack. It never did.


This story was originally published by ProPublica.

Investigating how the world’s largest software provider handles the security of its own ubiquitous products.

After Russian intelligence launched one of the most devastating cyber espionage attacks in history against US government agencies, the Biden administration set up a new board and tasked it to figure out what happened—and tell the public.

State hackers had infiltrated SolarWinds, an American software company that serves the US government and thousands of American companies. The intruders used malicious code and a flaw in a Microsoft product to steal intelligence from the National Nuclear Security Administration, National Institutes of Health, and the Treasury Department in what Microsoft President Brad Smith called “the largest and most sophisticated attack the world has ever seen.”

The president issued an executive order establishing the Cyber Safety Review Board in May 2021 and ordered it to start work by reviewing the SolarWinds attack.

But for reasons that experts say remain unclear, that never happened.

Nor did the board probe SolarWinds for its second report.

For its third, the board investigated a separate 2023 attack, in which Chinese state hackers exploited an array of Microsoft security shortcomings to access the email inboxes of top federal officials.

A full, public accounting of what happened in the SolarWinds case would have been devastating to Microsoft. ProPublica recently revealed that Microsoft had long known about—but refused to address—a flaw used in the hack. The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the US government vulnerable, a whistleblower said.

The board was created to help address the serious threat posed to the US economy and national security by sophisticated hackers who consistently penetrate government and corporate systems, making off with reams of sensitive intelligence, corporate secrets, or personal data.

For decades, the cybersecurity community has called for a cyber equivalent of the National Transportation Safety Board, the independent agency required by law to investigate and issue public reports on the causes and lessons learned from every major aviation accident, among other incidents. The NTSB is funded by Congress and staffed by experts who work outside of the industry and other government agencies. Its public hearings and reports spur industry change and action by regulators like the Federal Aviation Administration.

So far, the Cyber Safety Review Board has charted a different path.

The board is not independent—it’s housed in the Department of Homeland Security. Rob Silvers, the board chair, is a Homeland Security undersecretary. Its vice chair is a top security executive at Google. The board does not have full-time staff, subpoena power, or dedicated funding.

Silvers told ProPublica that DHS decided the board didn’t need to do its own review of SolarWinds as directed by the White House because the attack had already been “closely studied” by the public and private sectors.

“We want to focus the board on reviews where there is a lot of insight left to be gleaned, a lot of lessons learned that can be drawn out through investigation,” he said.

As a result, there has been no public examination by the government of the unaddressed security issue at Microsoft that was exploited by the Russian hackers. None of the SolarWinds reports identified or interviewed the whistleblower who exposed problems inside Microsoft.

By declining to review SolarWinds, the board failed to discover the central role that Microsoft’s weak security culture played in the attack and to spur changes that could have mitigated or prevented the 2023 Chinese hack, cybersecurity experts and elected officials told ProPublica.

“It’s possible the most recent hack could have been prevented by real oversight,” Sen. Ron Wyden, a Democratic member of the Senate Select Committee on Intelligence, said in a statement. Wyden has called for the board to review SolarWinds and for the government to improve its cybersecurity defenses.

In a statement, a spokesperson for DHS rejected the idea that a SolarWinds review could have exposed Microsoft’s failings in time to stop or mitigate the Chinese state-based attack last summer. “The two incidents were quite different in that regard, and we do not believe a review of SolarWinds would have necessarily uncovered the gaps identified in the Board’s latest report,” they said.

The board’s other members declined to comment, referred inquiries to DHS or did not respond to ProPublica.

In past statements, Microsoft did not dispute the whistleblower’s account but emphasized its commitment to security. “Protecting customers is always our highest priority,” a spokesperson previously told ProPublica. “Our security response team takes all security issues seriously and gives every case due diligence with a thorough manual assessment, as well as cross-confirming with engineering and security partners.”

The board’s failure to probe SolarWinds also underscores a question critics including Wyden have raised about the board since its inception: whether a board with federal officials making up its majority can hold government agencies responsible for their role in failing to prevent cyberattacks.

“I remain deeply concerned that a key reason why the Board never looked at SolarWinds—as the President directed it to do so—was because it would have required the board to examine and document serious negligence by the US government,” Wyden said. Among his concerns is a government cyberdefense system that failed to detect the SolarWinds attack.

Silvers said while the board did not investigate SolarWinds, it has been given a pass by the independent Government Accountability Office, which said in an April study examining the implementation of the executive order that the board had fulfilled its mandate to conduct the review.

The GAO’s determination puzzled cybersecurity experts. “Rob Silvers has been declaring by fiat for a long time that the CSRB did its job regarding SolarWinds, but simply declaring something to be so doesn’t make it true,” said Tarah Wheeler, the CEO of Red Queen Dynamics, a cybersecurity firm, who co-authored a Harvard Kennedy School report outlining how a “cyber NTSB” should operate.

Silvers said the board’s first and second reports, while not probing SolarWinds, resulted in important government changes, such as new Federal Communications Commission rules related to cell phones.

“The tangible impacts of the board’s work to date speak for itself and in bearing out the wisdom of the choices of what the board has reviewed,” he said.



384,000 sites pull code from sketchy code library recently bought by Chinese firm

The supply-chain threat that won’t die —

Many website admins, it seems, have yet to get the memo to remove Polyfill[.]io links.



More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said.

For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren’t natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest.

The power of supply-chain attacks

In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.

The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. Meanwhile, content delivery networks such as Cloudflare began automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the Polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately.
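
Site operators can check their own pages for lingering references with nothing more than Python’s standard library. In this sketch, the example URL is a stand-in for pages you control:

```python
# Flag pages that still reference the suspended polyfill.io domain.
# Replace the example URL with your own pages before running.
import re
import urllib.request

PAGES_TO_CHECK = ["https://example.com/"]
PATTERN = re.compile(r"(cdn\.)?polyfill\.io", re.IGNORECASE)

for url in PAGES_TO_CHECK:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    if PATTERN.search(html):
        print(f"{url}: still references polyfill.io -- remove the link")
    else:
        print(f"{url}: clean")
```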

As of Tuesday, exactly one week after malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies, including Hulu, Mercedes-Benz, and Warner Bros., as well as with the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.

“Since the domain was suspended, the supply-chain attack has been halted,” Aidan Holland, a member of the Censys Research Team, wrote in an email. “However, if the domain was to be un-suspended or transferred, it could resume its malicious behavior. My hope is that NameCheap properly locked down the domain and would prevent this from occurring.”

What’s more, the Internet scan performed by Censys found more than 1.6 million sites linking to one or more domains that were registered by the same entity that owns polyfill[.]io. At least one of the sites, bootcss[.]com, was observed in June 2023 performing malicious actions similar to those of polyfill. That domain, and three others—bootcdn[.]net, staticfile[.]net, and staticfile[.]org—were also found to have leaked a user’s authentication key for accessing a programming interface provided by Cloudflare.

Censys researchers wrote:

So far, this domain (bootcss.com) is the only one showing any signs of potential malice. The nature of the other associated endpoints remains unknown, and we avoid speculation. However, it wouldn’t be entirely unreasonable to consider the possibility that the same malicious actor responsible for the polyfill.io attack might exploit these other domains for similar activities in the future.

Of the 384,773 sites still linking to polyfill[.]io, 237,700, or almost 62 percent, were hosted by Germany-based web host Hetzner.

Censys found that various mainstream sites—both in the public and private sectors—were among those linking to polyfill. They included:

  • Warner Bros. (www.warnerbros.com)
  • Hulu (www.hulu.com)
  • Mercedes-Benz (shop.mercedes-benz.com)
  • Pearson (digital-library-qa.pearson.com, digital-library-stg.pearson.com)
  • ns-static-assets.s3.amazonaws.com

The amazonaws.com address was the most common domain associated with sites still linking to the polyfill site, an indication of widespread usage among users of Amazon’s S3 static website hosting.

Censys also found 182 domains ending in .gov, meaning they are affiliated with a government entity. One such domain—feedthefuture[.]gov—is affiliated with the US federal government. Censys published a breakdown of the top 50 affected sites.

Attempts to reach Funnull representatives for comment weren’t successful.



“RegreSSHion” vulnerability in OpenSSH gives attackers root on Linux

RELAPSE —

Full system compromise possible by peppering servers with thousands of connection requests.


Researchers have warned of a critical vulnerability affecting the OpenSSH networking utility that can be exploited to give attackers complete control of Linux and Unix servers with no authentication required.

The vulnerability, tracked as CVE-2024-6387, allows unauthenticated remote code execution with root system rights on Linux systems that are based on glibc, an open source implementation of the C standard library. The vulnerability is the result of a code regression introduced in 2020 that reintroduced CVE-2006-5051, a vulnerability that was fixed in 2006. With thousands, if not millions, of vulnerable servers populating the Internet, this latest vulnerability could pose a significant risk.

Complete system takeover

“This vulnerability, if exploited, could lead to full system compromise where an attacker can execute arbitrary code with the highest privileges, resulting in a complete system takeover, installation of malware, data manipulation, and the creation of backdoors for persistent access,” wrote Bharat Jogi, the senior director of threat research at Qualys, the security firm that discovered it. “It could facilitate network propagation, allowing attackers to use a compromised system as a foothold to traverse and exploit other vulnerable systems within the organization.”

The risk is in part driven by the central role OpenSSH plays in virtually every internal network connected to the Internet. It provides a channel for administrators to connect to protected devices remotely or from one device to another inside the network. The ability for OpenSSH to support multiple strong encryption protocols, its integration into virtually all modern operating systems, and its location at the very perimeter of networks further drive its popularity.

Besides the ubiquity of vulnerable servers populating the Internet, CVE-2024-6387 also provides a potent means for executing malicious code with the highest privileges, with no authentication required. The flaw stems from faulty management of the signal handler, a component in glibc for responding to potentially serious events such as division-by-zero attempts. When a client device initiates a connection but doesn’t successfully authenticate itself within an allotted time (120 seconds by default), vulnerable OpenSSH systems call what’s known as a SIGALRM handler asynchronously. The flaw resides in sshd, the main OpenSSH engine. Qualys has named the vulnerability regreSSHion.
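
The grace-period pattern at the center of the bug can be sketched in a few lines of Python (Unix only, since it uses SIGALRM). This illustrates just the timer mechanism; the actual vulnerability lives in sshd’s C signal handler, which called functions that aren’t async-signal-safe, a hazard Python-level handlers don’t share:

```python
# Sketch of a login-grace-period timer in the style of sshd's
# LoginGraceTime. Illustrative only; not OpenSSH code. Unix-only.
import signal
import time

LOGIN_GRACE_TIME = 2  # seconds for this demo; sshd defaults to 120

def grace_period_expired(signum, frame):
    # sshd's C handler did logging and cleanup work here asynchronously;
    # that non-async-signal-safe work is where the exploitable race lived.
    raise TimeoutError("client failed to authenticate in time")

def authenticate_client():
    # Stand-in for the real handshake: simulate a client that stalls.
    time.sleep(LOGIN_GRACE_TIME + 3)

signal.signal(signal.SIGALRM, grace_period_expired)  # install the handler
signal.alarm(LOGIN_GRACE_TIME)                       # arm the countdown

try:
    authenticate_client()
    signal.alarm(0)  # authenticated in time: disarm the timer
except TimeoutError as exc:
    print(exc)       # grace period expired; sshd tears the connection down here
```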

The severity of the threat posed by exploitation is significant, but various factors are likely to prevent it from being mass exploited, security experts said. For one, the attack can take as long as eight hours to complete and require as many as 10,000 authentication steps, Stan Kaminsky, a researcher at security firm Kaspersky, said. The delay results from a defense known as address space layout randomization, which changes the memory addresses where executable code is stored to thwart attempts to run malicious payloads.

Other limitations apply. Attackers must also know the specific OS running on each targeted server. So far, no one has found a way to exploit 64-bit systems since the number of available memory addresses is exponentially higher than those available for 32-bit systems. Further mitigating the chances of success, defenses that limit the number of connection requests coming into a vulnerable system (the same rate limiting that blunts denial-of-service attacks) will prevent exploitation attempts from succeeding.

All of those limitations will likely prevent CVE-2024-6387 from being mass exploited, researchers said, but there’s still the risk of targeted attacks that pepper a specific network of interest with authentication attempts over a matter of days until allowing code execution. To cover their tracks, attackers could spread requests through a large number of IP addresses in a fashion similar to password-spraying attacks. In this way, attackers could target a handful of vulnerable networks until one or more of the attempts succeeded.

The vulnerability affects the following:

  • OpenSSH versions earlier than 4.4p1 are vulnerable to this signal handler race condition unless they are patched for CVE-2006-5051 and CVE-2008-4109.
  • Versions from 4.4p1 up to, but not including, 8.5p1 are not vulnerable due to a transformative patch for CVE-2006-5051, which made a previously unsafe function secure.
  • The vulnerability resurfaces in versions from 8.5p1 up to, but not including, 9.8p1 due to the accidental removal of a critical component in a function.

Anyone running a vulnerable version should update as soon as practicable.



Inside a violent gang’s ruthless crypto-stealing home invasion spree

brutal extortion —

More than a dozen men threatened, assaulted, tortured, or kidnapped 11 victims.


Cryptocurrency has always made a ripe target for theft—and not just hacking, but the old-fashioned, up-close-and-personal kind, too. Given that it can be irreversibly transferred in seconds with little more than a password, it’s perhaps no surprise that thieves have occasionally sought to steal crypto in home-invasion burglaries and even kidnappings. But rarely do those thieves leave a trail of violence in their wake as disturbing as that of one recent, ruthless, and particularly prolific gang of crypto extortionists.

The United States Justice Department earlier this week announced the conviction of Remy Ra St. Felix, a 24-year-old Florida man who led a group of men behind a violent crime spree designed to compel victims to hand over access to their cryptocurrency savings. That announcement and the criminal complaint laying out charges against St. Felix focused largely on a single theft of cryptocurrency from an elderly North Carolina couple, whose home St. Felix and one of his accomplices broke into before physically assaulting the two victims—both in their seventies—and forcing them to transfer more than $150,000 in bitcoin and ether to the thieves’ crypto wallets.

In fact, that six-figure sum appears to have been the gang’s only confirmed haul from its physical crypto thefts—although the burglars and their associates made millions in total, mostly through more traditional crypto hacking as well as stealing other assets. A deeper look into court documents from the St. Felix case, however, reveals that the relatively small profit St. Felix’s gang made from its burglaries doesn’t capture the full scope of the harm they inflicted: In total, those court filings and DOJ officials describe how more than a dozen convicted and alleged members of the crypto-focused gang broke into the homes of 11 victims, carrying out a brutal spree of armed robberies, death threats, beatings, torture sessions, and even one kidnapping in a campaign that spanned four US states.

In court documents, prosecutors say the men—working in pairs or small teams—threatened to cut toes or genitalia off of one victim, kidnapped and discussed killing another, and planned to threaten another victim’s child as leverage. Prosecutors also describe disturbing torture tactics: how the men inserted sharp objects under one victim’s fingernails and burned another with a hot iron, all in an effort to coerce their targets to hand over the devices and passwords necessary to transfer their crypto holdings.

“The victims in this case suffered a horrible, painful experience that no citizen should have to endure,” Sandra Hairston, a US attorney for the Middle District of North Carolina who prosecuted St. Felix’s case, wrote in the Justice Department’s announcement of St. Felix’s conviction. “The defendant and his coconspirators acted purely out of greed and callously terrorized those they targeted.”

The serial extortion spree is almost certainly the worst of its kind ever to be prosecuted in the US, says Jameson Lopp, the cofounder and chief security officer of Casa, a cryptocurrency-focused physical security firm, who has tracked physical attacks designed to steal cryptocurrency going back as far as 2014. “As far as I’m aware, this is the first case where it was confirmed that the same group of people went around and basically carried out home invasions on a variety of different victims,” Lopp says.

Lopp notes, nonetheless, that this kind of crime spree is more than a one-off. He has learned of other similar attempts at physical theft of cryptocurrency in just the past month that have escaped public reporting—he says the victims in those cases asked him not to share details—and suggests that in-person crypto extortion may be on the rise as thieves realize the attraction of crypto as a highly valuable and instantly transportable target for theft. “Crypto, as this highly liquid bearer asset, completely changes the incentives of doing something like a home invasion,” Lopp says, “or even kidnapping and extortion and ransom.”



Researchers craft smiling robot face from living human skin cells

A movable robotic face covered with living human skin cells.

In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.

The researchers note that using living skin tissue as a robot covering has benefits, as it’s flexible enough to convey emotions and can potentially repair itself. “As the role of robots continues to evolve, the materials used to cover social robots need to exhibit lifelike functions, such as self-healing,” wrote the researchers in the study.

Shoji Takeuchi, Michio Kawai, Minghao Nie, and Haruka Oda authored the study, titled “Perforation-type anchors inspired by skin ligament for robotic face covered with living skin,” which is due for July publication in Cell Reports Physical Science. We learned of the study from a report published earlier this week by New Scientist.

The study describes a novel method for attaching cultured skin to robotic surfaces using “perforation-type anchors” inspired by natural skin ligaments. These tiny v-shaped cavities in the robot’s structure allow living tissue to infiltrate and create a secure bond, mimicking how human skin attaches to underlying tissues.

To demonstrate the skin’s capabilities, the team engineered a palm-sized robotic face able to form a convincing smile. Actuators connected to the base allowed the face to move, with the living skin flexing. The researchers also covered a static 3D-printed head shape with the engineered skin.

“Demonstration of the perforation-type anchors to cover the facial device with skin equivalent.”

Takeuchi et al. created their robotic face by first 3D-printing a resin base embedded with the perforation-type anchors. They then applied a mixture of human skin cells in a collagen scaffold, allowing the living tissue to grow into the anchors.



OpenAI’s new “CriticGPT” model is trained to criticize GPT-4 outputs

automated critic —

Research model catches bugs in AI-generated code, improving human oversight of AI.

An illustration created by OpenAI.

On Thursday, OpenAI researchers unveiled CriticGPT, a new AI model designed to identify mistakes in code generated by ChatGPT. It aims to enhance the process of making AI systems behave in ways humans want (called “alignment”) through Reinforcement Learning from Human Feedback (RLHF), which helps human reviewers make large language model (LLM) outputs more accurate.

As outlined in a new research paper called “LLM Critics Help Catch LLM Bugs,” OpenAI created CriticGPT to act as an AI assistant to human trainers who review programming code generated by the ChatGPT AI assistant. CriticGPT—based on the GPT-4 family of LLMs—analyzes the code and points out potential errors, making it easier for humans to spot mistakes that might otherwise go unnoticed. The researchers trained CriticGPT on a dataset of code samples with intentionally inserted bugs, teaching it to recognize and flag various coding errors.

The researchers found that CriticGPT’s critiques were preferred by annotators over human critiques in 63 percent of cases involving naturally occurring LLM errors and that human-machine teams using CriticGPT wrote more comprehensive critiques than humans alone while reducing confabulation (hallucination) rates compared to AI-only critiques.

Developing an automated critic

The development of CriticGPT involved training the model on a large number of inputs containing deliberately inserted mistakes. Human trainers were asked to modify code written by ChatGPT, introducing errors and then providing example feedback as if they had discovered these bugs. This process allowed the model to learn how to identify and critique various types of coding errors.

In experiments, CriticGPT demonstrated its ability to catch both inserted bugs and naturally occurring errors in ChatGPT’s output. The new model’s critiques were preferred by trainers over those generated by ChatGPT itself in 63 percent of cases involving natural bugs (the aforementioned statistic). This preference was partly due to CriticGPT producing fewer unhelpful “nitpicks” and generating fewer false positives, or hallucinated problems.

The researchers also created a new technique they call Force Sampling Beam Search (FSBS). This method helps CriticGPT write more detailed reviews of code. It lets the researchers adjust how thorough CriticGPT is in looking for problems, while also controlling how often it might make up issues that don’t really exist. They can tweak this balance depending on what they need for different AI training tasks.
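
OpenAI hasn’t published every detail of FSBS, but the tradeoff it describes can be illustrated with a hypothetical scoring rule: rank candidate critiques by the model’s own likelihood plus a tunable bonus for each distinct issue raised. The code below is just a sketch of that knob, not OpenAI’s implementation:

```python
# Hypothetical illustration of a thoroughness/precision knob when
# selecting among sampled critiques. Not OpenAI's FSBS implementation.
from dataclasses import dataclass

@dataclass
class Critique:
    text: str
    log_prob: float  # model's likelihood of this critique
    num_issues: int  # distinct problems it flags

def select_critique(candidates: list[Critique], thoroughness: float) -> Critique:
    # Higher thoroughness rewards critiques that flag more issues, at the
    # risk of more nitpicks and hallucinated problems; lower values favor
    # conservative, high-confidence critiques.
    return max(candidates, key=lambda c: c.log_prob + thoroughness * c.num_issues)

candidates = [
    Critique("off-by-one in loop bound", log_prob=-2.0, num_issues=1),
    Critique("off-by-one; unchecked None; stale cache key", -5.5, 3),
]

print(select_critique(candidates, thoroughness=0.5).text)  # conservative pick
print(select_critique(candidates, thoroughness=2.0).text)  # thorough pick
```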

Interestingly, the researchers found that CriticGPT’s capabilities extend beyond just code review. In their experiments, they applied the model to a subset of ChatGPT training data that had previously been rated as flawless by human annotators. Surprisingly, CriticGPT identified errors in 24 percent of these cases—errors that were subsequently confirmed by human reviewers. OpenAI thinks this demonstrates the model’s potential to generalize to non-code tasks and highlights its ability to catch subtle mistakes that even careful human evaluation might miss.

Despite its promising results, like all AI models, CriticGPT has limitations. The model was trained on relatively short ChatGPT answers, which may not fully prepare it for evaluating longer, more complex tasks that future AI systems might tackle. Additionally, while CriticGPT reduces confabulations, it doesn’t eliminate them entirely, and human trainers can still make labeling mistakes based on these false outputs.

The research team acknowledges that CriticGPT is most effective at identifying errors that can be pinpointed in one specific location within the code. However, real-world mistakes in AI outputs can often be spread across multiple parts of an answer, presenting a challenge for future iterations of the model.

OpenAI plans to integrate CriticGPT-like models into its RLHF labeling pipeline, providing its trainers with AI assistance. For OpenAI, it’s a step toward developing better tools for evaluating outputs from LLM systems that may be difficult for humans to rate without additional support. However, the researchers caution that even with tools like CriticGPT, extremely complex tasks or responses may still prove challenging for human evaluators—even those assisted by AI.
