

Time saved by AI offset by new work created, study suggests

A new study analyzing the Danish labor market in 2023 and 2024 suggests that generative AI models like ChatGPT have had almost no significant impact on overall wages or employment yet, despite rapid adoption in some workplaces. The findings, detailed in a working paper by economists from the University of Chicago and the University of Copenhagen, provide an early, large-scale empirical look at AI’s real-world economic impact.

In “Large Language Models, Small Labor Market Effects,” economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers, and customer support specialists. Their analysis covered data from 25,000 workers and 7,000 workplaces in Denmark.

Despite finding widespread and often employer-encouraged adoption of these tools, the study concluded that “AI chatbots have had no significant impact on earnings or recorded hours in any occupation” during the period studied. The confidence intervals in their statistical analysis ruled out average effects larger than 1 percent.

“The adoption of these chatbots has been remarkably fast,” Humlum told The Register about the study. “Most workers in the exposed occupations have now adopted these chatbots… But then when we look at the economic outcomes, it really has not moved the needle.”

AI creating more work?

During the study, the researchers investigated how company investment in AI affected worker adoption and how chatbots changed workplace processes. While corporate investment boosted AI tool adoption—saving time for 64 to 90 percent of users across studied occupations—the actual benefits were less substantial than expected.

The study revealed that AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts.



Why MFA is getting easier to bypass and what to do about it

These sorts of adversary-in-the-middle attacks have grown increasingly common. In 2022, for instance, a single group used the technique in a series of attacks that stole more than 10,000 credentials from 137 organizations and led to the network compromise of authentication provider Twilio, among others.

One company that was targeted in the attack campaign but wasn’t breached was content delivery network Cloudflare. The attack failed because Cloudflare uses MFA based on WebAuthn, the standard that makes passkeys work. Services that use WebAuthn are highly resistant to adversary-in-the-middle attacks, if not absolutely immune. There are two reasons for this.

First, WebAuthn credentials are cryptographically bound to the URL they authenticate. In the above example, the credentials would work only on https://accounts.google.com. If a victim tried to use the credential to log in to https://accounts.google.com.evilproxy[.]com, the login would fail each time.
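This origin binding can be sketched in a few lines. The following is a simplified illustration, not a full WebAuthn verification flow: a real relying party also checks the challenge, RP ID hash, and the signature over the authenticator data, but the origin check alone is what defeats the proxied login. The `EXPECTED_ORIGIN` value and the sample payloads below are hypothetical.

```python
import json

EXPECTED_ORIGIN = "https://accounts.google.com"  # hypothetical relying party

def verify_client_data(client_data_json: bytes) -> bool:
    """Reject any assertion whose origin doesn't match the relying party.

    This sketch isolates only the origin check that stops proxy phishing;
    real verification also validates the challenge, RP ID hash, and
    signature.
    """
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# A proxy at evilproxy[.]com can relay the login page, but the browser
# records the origin it actually talked to, so verification fails:
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://accounts.google.com.evilproxy.com"}).encode()
legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://accounts.google.com"}).encode()

assert not verify_client_data(phished)
assert verify_client_data(legit)
```

Because the browser, not the user, fills in the origin field, the victim cannot be tricked into "approving" the wrong site the way they can with a one-time code.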

Additionally, WebAuthn-based authentication must happen on or in proximity to the device the victim is using to log in to the account. This occurs because the credential is also cryptographically bound to a victim device. Because the authentication can only happen on the victim device, it’s impossible for an adversary in the middle to actually use it in a phishing attack on their own device.

Phishing has emerged as one of the most vexing security problems facing organizations, their employees, and their users. MFA in the form of a one-time password or traditional push notifications definitely adds friction to the phishing process, but with proxy-in-the-middle attacks becoming easier and more common, these forms of MFA are growing increasingly easy to defeat.

WebAuthn-based MFA comes in multiple forms; a key, known as a passkey, stored on a phone, computer, YubiKey, or similar dongle is the most common example. Thousands of sites now support WebAuthn, and it’s easy for most end users to enroll. As a side note, MFA based on U2F, the predecessor standard to WebAuthn, also prevents adversary-in-the-middle attacks from succeeding, although WebAuthn provides more flexibility and additional security.

Post updated to add details about passkeys.



Windows RDP lets you log in using revoked passwords. Microsoft is OK with that.

The ability to use a revoked password to log in through RDP occurs when a Windows machine that’s signed in with a Microsoft or Azure account is configured to enable remote desktop access. In that case, users can log in over RDP with a dedicated password that’s validated against a locally stored credential. Alternatively, users can log in using the credentials for the online account that was used to sign in to the machine.

A screenshot of an RDP configuration window showing that a Microsoft account (for Hotmail) has remote access enabled.

Even after users change their account password, however, the old password remains valid for RDP logins indefinitely. In some cases, Wade reported, multiple older passwords will work while newer ones won’t. The result: persistent RDP access that bypasses cloud verification, multifactor authentication, and Conditional Access policies.

Wade and another expert in Windows security said that the little-known behavior could prove costly in scenarios where a Microsoft or Azure account has been compromised, for instance when the passwords for them have been publicly leaked. In such an event, the first course of action is to change the password to prevent an adversary from using it to access sensitive resources. While the password change prevents the adversary from logging in to the Microsoft or Azure account, the old password will give an adversary access to the user’s machine through RDP indefinitely.

“This creates a silent, remote backdoor into any system where the password was ever cached,” Wade wrote in his report. “Even if the attacker never had access to that system, Windows will still trust the password.”

Will Dormann, a senior vulnerability analyst at security firm Analygence, agreed.

“It doesn’t make sense from a security perspective,” he wrote in an online interview. “If I’m a sysadmin, I’d expect that the moment I change the password of an account, then that account’s old credentials cannot be used anywhere. But this is not the case.”

Credential caching is a problem

The mechanism that makes all of this possible is credential caching on the hard drive of the local machine. The first time a user logs in using Microsoft or Azure account credentials, RDP will confirm the password’s validity online. Windows then stores the credential in a cryptographically secured format on the local machine. From then on, Windows will validate any password entered during an RDP login by comparing it against the locally stored credential, with no online lookup. With that, the revoked password will still give remote access through RDP.
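The caching behavior can be modeled with a toy sketch. This is an illustrative model, not Windows' actual credential-verifier format (the class, password strings, and key-derivation parameters here are all hypothetical); it shows only the logic of validating against a local cache with no online lookup.

```python
import hashlib
import hmac
import os

class LocalCredentialCache:
    """Toy model of the described behavior: the first online-validated
    password is stored as a salted hash, and later RDP logins are
    checked only against that local copy."""

    def __init__(self):
        self.salt = os.urandom(16)
        self.cached = None

    def first_login(self, password: str):
        # Imagine an online check against the Microsoft/Azure account
        # happening here; afterward, only the local verifier is kept.
        self.cached = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 100_000)

    def rdp_login(self, password: str) -> bool:
        # No online lookup: only the locally cached verifier is consulted.
        attempt = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 100_000)
        return hmac.compare_digest(attempt, self.cached)

cache = LocalCredentialCache()
cache.first_login("OldP@ssw0rd")       # validated online once, then cached
# ...the user later changes the account password in the cloud...
assert cache.rdp_login("OldP@ssw0rd")  # revoked password still works locally
assert not cache.rdp_login("NewP@ssw0rd")
```

The sketch makes the gap visible: nothing in the local validation path ever learns that the cloud password changed.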



Millions of Apple AirPlay-enabled devices can be hacked via Wi-Fi

Oligo also notes that many of the vulnerable devices have microphones and could be turned into listening devices for espionage. The researchers did not go so far as to create proof-of-concept malware for any particular target that would demonstrate that trick.

Oligo says it warned Apple about its AirBorne findings in the late fall and winter of last year, and Apple responded in the months since then by pushing out security updates. The researchers collaborated with Apple to test and validate the fixes for Macs and other Apple products.

Apple tells WIRED that it has also created patches that are available for impacted third-party devices. The company emphasizes, though, that there are limitations to the attacks that would be possible on AirPlay-enabled devices as a result of the bugs, because an attacker must be on the same Wi-Fi network as a target to exploit them. Apple adds that while there is potentially some user data on devices like TVs and speakers, it is typically very limited.

Below is a video of the Oligo researchers demonstrating their AirBorne hacking technique to take over an AirPlay-enabled Bose speaker to show their company’s logo for AirBorne. (The researchers say they didn’t intend to single out Bose, but just happened to have one of the company’s speakers on hand for testing.) Bose did not immediately respond to WIRED’s request for comment.

Speaker Demo. Courtesy of Oligo

The AirBorne vulnerabilities Oligo found also affect CarPlay, the radio protocol used to connect to vehicles’ dashboard interfaces. Oligo warns that this means hackers could hijack a car’s automotive computer, known as its head unit, in any of more than 800 CarPlay-enabled car and truck models. In those car-specific cases, though, the AirBorne vulnerabilities could only be exploited if the hacker is able to pair their own device with the head unit via Bluetooth or a USB connection, which drastically restricts the threat of CarPlay-based vehicle hacking.

The AirPlay SDK flaws in home media devices, by contrast, may present a more practical vulnerability for hackers seeking to hide on a network, whether to install ransomware or carry out stealthy espionage, all while hiding on devices that are often forgotten by both consumers and corporate or government network defenders. “The amount of devices that were vulnerable to these issues, that’s what alarms me,” says Oligo researcher Uri Katz. “When was the last time you updated your speaker?”



The end of an AI that shocked the world: OpenAI retires GPT-4

One of the most influential—and by some counts, notorious—AI models yet released will soon fade into history. OpenAI announced on April 10 that GPT-4 will be “fully replaced” by GPT-4o in ChatGPT at the end of April, bringing a public-facing end to the model that accelerated a global AI race when it launched in March 2023.

“Effective April 30, 2025, GPT-4 will be retired from ChatGPT and fully replaced by GPT-4o,” OpenAI wrote in its April 10 changelog for ChatGPT. While ChatGPT users will no longer be able to chat with the older AI model, the company added that “GPT-4 will still be available in the API,” providing some reassurance to developers who might still be using the older model for various tasks.

The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models. Its release created a wave of immense hype—and existential panic—about AI’s ability to imitate human communication and composition.


A screenshot of GPT-4’s introduction to ChatGPT Plus customers from March 14, 2023. Credit: Benj Edwards / Ars Technica

While ChatGPT launched in November 2022 with GPT-3.5 under the hood, GPT-4 took AI language models to a new level of sophistication, and it was a massive undertaking to create. It combined data scraped from the vast corpus of human knowledge into a set of neural networks rumored to weigh in at a combined total of 1.76 trillion parameters, which are the numerical values that hold the data within the model.

Along the way, the model reportedly cost more than $100 million to train, according to comments by OpenAI CEO Sam Altman, and required vast computational resources to develop. Training the model may have involved over 20,000 high-end GPUs working in concert—an expense few organizations besides OpenAI and its primary backer, Microsoft, could afford.

Industry reactions, safety concerns, and regulatory responses

Curiously, GPT-4’s impact began before OpenAI’s official announcement. In February 2023, Microsoft integrated its own early version of the GPT-4 model into its Bing search engine, creating a chatbot that sparked controversy when it tried to convince Kevin Roose of The New York Times to leave his wife and when it “lost its mind” in response to an Ars Technica article.



Trump admin lashes out as Amazon considers displaying tariff costs on its sites

This morning, Punchbowl News reported that Amazon was considering listing the cost of tariffs as a separate line item on its site, citing “a person familiar with the plan.” Amazon later acknowledged that there had been internal discussions to that effect, but only for its import-focused Amazon Haul sub-store, and said that the company didn’t plan to actually list tariff costs for any items.

“This was never approved and is not going to happen,” reads Amazon’s two-sentence statement.

Amazon issued such a specific and forceful on-the-record denial in part because it had drawn the ire of the Trump administration. In a press briefing early this morning, White House Press Secretary Karoline Leavitt was asked a question about the report, which the administration responded to as though Amazon had made a formal announcement about the policy.

“This is a hostile and political act by Amazon,” Leavitt said, before blaming the Biden administration for high inflation and claiming that Amazon had “partnered with a Chinese propaganda arm.”

The Washington Post also reported that Trump had called Amazon founder Jeff Bezos to complain about the report.

Amazon’s internal discussions reflect the current confusion around the severe and rapidly changing import tariffs imposed by the Trump administration, particularly tariffs of 145 percent on goods imported from China. Other retailers, particularly sites like Temu, AliExpress, and Shein, have all taken their own steps, either adding labels to listings when import taxes have already been included in the price, or adding import taxes as a separate line item in users’ carts at checkout as Amazon had discussed doing.
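The line-item approach the other retailers use is simple arithmetic. As a hypothetical illustration (the item price here is made up), a 145 percent tariff shown separately at checkout would look like this:

```python
# Hypothetical checkout math: a 145 percent import tariff displayed as
# its own line item rather than folded into the listed price.
item_price = 20.00   # hypothetical listed price of a China-sourced item
tariff_rate = 1.45   # 145 percent tariff on goods imported from China

import_charge = round(item_price * tariff_rate, 2)
total = round(item_price + import_charge, 2)

assert import_charge == 29.00  # the separate "import tax" line item
assert total == 49.00          # what the shopper actually pays
```

The separate line makes the tariff's share of the total visible, which is exactly what made the idea politically charged.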

A Temu cart showing the price of an item’s import tax as a separate line item. Amazon reportedly considered and discarded a similar idea for its Amazon Haul sub-site.

Small purchases are seeing big hits

Most of these items are currently excluded from tariffs because of something called the de minimis exemption, which applies to any shipment valued under $800. The administration currently plans to end the de minimis exemption for packages coming from China or Hong Kong beginning on May 2, though the administration’s plans could change (as they frequently have before).



AI-generated code could be a disaster for the software supply chain. Here’s why.

AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.

The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.
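One practical defense is to check an LLM-suggested dependency against the real registry before installing it. The sketch below uses PyPI's public JSON API, which returns HTTP 200 for registered names and 404 for unregistered ones; the split into a status-interpreting helper is our own structuring choice, and any package names you run through it are up to you.

```python
import urllib.error
import urllib.request

def exists_from_status(status: int) -> bool:
    """Interpret PyPI's JSON API: 200 means the package exists,
    404 means the name is unregistered (and possibly hallucinated)."""
    if status == 200:
        return True
    if status == 404:
        return False
    raise RuntimeError(f"unexpected status {status}")

def package_exists_on_pypi(name: str) -> bool:
    """Check an LLM-suggested dependency against the real index
    before running `pip install` on it. Performs a network call."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return exists_from_status(resp.status)
    except urllib.error.HTTPError as e:
        return exists_from_status(e.code)
```

Existence alone isn't proof of safety, of course: as the article goes on to explain, attackers can register the hallucinated name themselves, so the check should be paired with inspecting who published the package and when.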

Package hallucination flashbacks

These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to access the wrong component dependency, for instance by publishing a malicious package and giving it the same name as the legitimate one but with a later version stamp. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.
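The "newest wins" failure mode can be shown with a toy resolver. Everything here is hypothetical (the package name, versions, and index contents); it models only the flawed resolution logic, in which merging candidates across registries lets an attacker's inflated version number win.

```python
# Hypothetical indexes: an internal package and an attacker's
# same-named public upload with a much higher version stamp.
private_index = {"acme-internal-utils": ["1.0.0", "1.2.0"]}  # legitimate
public_index = {"acme-internal-utils": ["99.0.0"]}           # attacker-published

def naive_resolve(name: str) -> tuple[str, str]:
    """Flawed resolution: merge candidates from every index and pick
    the highest version, regardless of which registry it came from."""
    candidates = []
    for source, index in (("private", private_index),
                          ("public", public_index)):
        for version in index.get(name, []):
            key = tuple(int(p) for p in version.split("."))
            candidates.append((key, source, version))
    _, source, version = max(candidates)
    return source, version

# The attacker's package wins purely on version number:
assert naive_resolve("acme-internal-utils") == ("public", "99.0.0")
```

The standard mitigation is to stop letting "newest wins" span registries: pin the index explicitly (for example, pip's `--index-url`) and pin artifact hashes so a same-named upload elsewhere can never be substituted.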

Also known as package confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla included. It’s one type of technique used in software supply-chain attacks, which aim to poison software at its very source, in an attempt to infect all users downstream.

“Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users,” Joseph Spracklen, a University of Texas at San Antonio Ph.D. student and lead researcher, told Ars via email. “If a user trusts the LLM’s output and installs the package without carefully verifying it, the attacker’s payload, hidden in the malicious package, would be executed on the user’s system.”



ChatGPT goes shopping with new product-browsing feature

On Thursday, OpenAI announced the addition of shopping features to ChatGPT Search. The new feature allows users to search for products and purchase them through merchant websites after being redirected from the ChatGPT interface. Product placement is not sponsored, and the update affects all users, regardless of whether they’ve signed in to an account.

Adam Fry, ChatGPT search product lead at OpenAI, showed Ars Technica’s sister site Wired how the new shopping system works during a demonstration. Users researching products like espresso machines or office chairs receive recommendations based on their stated preferences, stored memories, and product reviews from around the web.

According to Wired, the shopping experience in ChatGPT resembles Google Shopping. When users click on a product image, the interface displays multiple retailers like Amazon and Walmart on the right side of the screen, with buttons to complete purchases. OpenAI is currently experimenting with categories that include electronics, fashion, home goods, and beauty products.

Product reviews shown in ChatGPT come from various online sources, including publishers and user forums like Reddit. Users can instruct ChatGPT to prioritize which review sources to use when creating product recommendations.


An example of the ChatGPT shopping experience provided by OpenAI. Credit: OpenAI

Unlike Google’s algorithm-based approach to product recommendations, ChatGPT reportedly attempts to understand product reviews and user preferences in a more conversational manner. If someone mentions they prefer black clothing from specific retailers in a chat, the system incorporates those preferences in future shopping recommendations.



Backblaze responds to claims of “sham accounting,” customer backups at risk

Backblaze went public in November 2021 and raised $100 million. Morpheus noted that since then, “Backblaze has reported losses every quarter, its outstanding share count has grown by 80 percent, and its share price has declined by 71 percent.”

Following Morpheus’ report, Investing reported on Thursday that Backblaze shares fell 8.3 percent.

Beyond the financial implications for stockholders, Morpheus’ report has sparked some concern for the primarily small businesses and individuals relying on Backblaze for data backup. Today, for example, How-To Geek reported that “Backblaze backups might be in trouble,” in reference to Morpheus’ report. The publication said that if Morpheus’ reporting was accurate, Backblaze doesn’t appear to be heading toward profitability. In its Q4 2024 earnings report [PDF], Backblaze reported a net loss of $48.5 million. In 2023, it reported a net loss of $59.7 million.

“If Backblaze suddenly shuts down, customers might lose access to existing backups,” How-To Geek said.

Backblaze responds

Ars Technica reached out to Backblaze about its response to concerns about the company’s financials resulting in lost backups. Patrick Thomas, Backblaze’s VP of marketing, called Morpheus’ claims “baseless.” He added:

The report is inaccurate and misleading, based largely on litigation of the same nature, and a clear attempt by short sellers to manipulate our stock price for financial gain.

Thomas also claimed that “independent, third-party reviews” have already found that there have been “no wrongdoing or issues” with Backblaze’s public financial results.

“Our storage cloud continues to deliver reliable, high-performance services that Backblaze customers rely on, and we remain fully focused on driving innovation and creating long-term value for our customers, employees, and investors,” Thomas said.

Backblaze will announce its Q1 2025 results on May 7. Regardless of what lies ahead for the company’s finances and litigation, commitment to the 3-2-1 backup rule remains prudent.
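The 3-2-1 rule referred to above calls for at least three copies of the data, on at least two different media types, with at least one copy offsite. A minimal sketch of checking a backup plan against it (the plan entries below are hypothetical):

```python
def satisfies_321(copies: list[dict]) -> bool:
    """Check the 3-2-1 rule: >=3 copies, >=2 media types, >=1 offsite."""
    total = len(copies)
    media = {c["medium"] for c in copies}
    offsite = sum(1 for c in copies if c["offsite"])
    return total >= 3 and len(media) >= 2 and offsite >= 1

plan = [
    {"medium": "internal-ssd", "offsite": False},  # working copy
    {"medium": "external-hdd", "offsite": False},  # local backup
    {"medium": "cloud",        "offsite": True},   # any cloud provider
]
assert satisfies_321(plan)
assert not satisfies_321(plan[:2])  # two local copies alone don't qualify
```

The point for Backblaze customers is that a single cloud provider counts as only one of the three copies, so no provider's finances should be a single point of failure.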



iOS and Android juice jacking defenses have been trivial to bypass for years


SON OF JUICE JACKING ARISES

New ChoiceJacking attack allows malicious chargers to steal data from phones.

Credit: Aurich Lawson | Getty Images


About a decade ago, Apple and Google started updating iOS and Android, respectively, to make them less susceptible to “juice jacking,” a form of attack that could surreptitiously steal data or execute malicious code when users plug their phones into special-purpose charging hardware. Now, researchers are revealing that, for years, the mitigations have suffered from a fundamental defect that has made them trivial to bypass.

“Juice jacking” was coined in a 2011 article on KrebsOnSecurity detailing an attack demonstrated at a Defcon security conference at the time. Juice jacking works by equipping a charger with hidden hardware that can access files and other internal resources of phones, in much the same way that a computer can when a user connects it to the phone.

An attacker would then make the chargers available in airports, shopping malls, or other public venues for use by people looking to recharge depleted batteries. While the charger was ostensibly only providing electricity to the phone, it was also secretly downloading files or running malicious code on the device behind the scenes. Starting in 2012, both Apple and Google tried to mitigate the threat by requiring users to click a confirmation button on their phones before a computer—or a computer masquerading as a charger—could access files or execute code on the phone.

The logic behind the mitigation was rooted in a key portion of the USB protocol that, in the parlance of the specification, dictates that a USB port can facilitate a “host” device or a “peripheral” device at any given time, but not both. In the context of phones, this meant they could either:

  • Host the device on the other end of the USB cord—for instance, if a user connects a thumb drive or keyboard. In this scenario, the phone is the host that has access to the internals of the drive, keyboard or other peripheral device.
  • Act as a peripheral device that’s hosted by a computer or malicious charger, which under the USB paradigm is a host that has system access to the phone.

An alarming state of USB security

Researchers at the Graz University of Technology in Austria recently made a discovery that completely undermines the premise behind the countermeasure: The mitigations rest on the assumption that USB hosts can’t inject input that autonomously approves the confirmation prompt. Given the restriction against a USB device simultaneously acting as a host and peripheral, the premise seemed sound. The trust models built into both iOS and Android, however, present loopholes that can be exploited to defeat the protections. The researchers went on to devise ChoiceJacking, the first known attack to defeat juice-jacking mitigations.

“We observe that these mitigations assume that an attacker cannot inject input events while establishing a data connection,” the researchers wrote in a paper scheduled to be presented in August at the Usenix Security Symposium in Seattle. “However, we show that this assumption does not hold in practice.”

The researchers continued:

We present a platform-agnostic attack principle and three concrete attack techniques for Android and iOS that allow a malicious charger to autonomously spoof user input to enable its own data connection. Our evaluation using a custom cheap malicious charger design reveals an alarming state of USB security on mobile platforms. Despite vendor customizations in USB stacks, ChoiceJacking attacks gain access to sensitive user files (pictures, documents, app data) on all tested devices from 8 vendors including the top 6 by market share.

In response to the findings, Apple updated the confirmation dialogs in last month’s release of iOS/iPadOS 18.4 to require user authentication in the form of a PIN or password. While the researchers were investigating their ChoiceJacking attacks last year, Google independently updated its confirmation dialogs with the release of Android 15 in November. The researchers say the new mitigations work as expected on fully updated Apple and Android devices. Given the fragmentation of the Android ecosystem, however, many Android devices remain vulnerable.

All three of the ChoiceJacking techniques defeat Android juice-jacking mitigations. One of them also works against those defenses in Apple devices. In all three, the charger acts as a USB host to trigger the confirmation prompt on the targeted phone.

The attacks then exploit various weaknesses in the OS that allow the charger to autonomously inject “input events” that can enter text or click buttons presented in screen prompts as if the user had entered them directly on the phone. In all three, the charger eventually gains two conceptual channels to the phone: (1) an input channel allowing it to spoof user consent and (2) a file access connection that can steal files.

An illustration of ChoiceJacking attacks. (1) The victim device is attached to the malicious charger. (2) The charger establishes an extra input channel. (3) The charger initiates a data connection. User consent is needed to confirm it. (4) The charger uses the input channel to spoof user consent. Credit: Draschbacher et al.

It’s a keyboard, it’s a host, it’s both

In the ChoiceJacking variant that defeats both Apple- and Google-devised juice-jacking mitigations, the charger starts as a USB keyboard or a similar peripheral device. It sends keyboard input over USB that invokes simple key presses, such as arrow up or down, but also more complex key combinations that trigger settings or open a status bar.

The input establishes a Bluetooth connection to a second miniaturized keyboard hidden inside the malicious charger. The charger then uses USB Power Delivery, a standard available in USB-C connectors that allows devices to negotiate which side provides power and which receives it, to perform what’s known as a USB PD Data Role Swap.

A simulated ChoiceJacking charger. Bidirectional USB lines allow for data role swaps. Credit: Draschbacher et al.

With the charger now acting as a host, it triggers the file access consent dialog. At the same time, the charger still maintains its role as a peripheral device that acts as a Bluetooth keyboard that approves the file access consent dialog.

The full steps for the attack, provided in the Usenix paper, are:

1. The victim device is connected to the malicious charger. The device has its screen unlocked.

2. At a suitable moment, the charger performs a USB PD Data Role (DR) Swap. The mobile device now acts as a USB host, the charger acts as a USB input device.

3. The charger generates input to ensure that BT is enabled.

4. The charger navigates to the BT pairing screen in the system settings to make the mobile device discoverable.

5. The charger starts advertising as a BT input device.

6. By constantly scanning for newly discoverable Bluetooth devices, the charger identifies the BT device address of the mobile device and initiates pairing.

7. Through the USB input device, the charger accepts the Yes/No pairing dialog appearing on the mobile device. The Bluetooth input device is now connected.

8. The charger sends another USB PD DR Swap. It is now the USB host, and the mobile device is the USB device.

9. As the USB host, the charger initiates a data connection.

10. Through the Bluetooth input device, the charger confirms its own data connection on the mobile device.

This technique works against all but one of the 11 phone models tested, with the holdout being an Android device running the Vivo Funtouch OS, which doesn’t fully support the USB PD protocol. The attacks against the 10 remaining models take about 25 to 30 seconds to establish the Bluetooth pairing, depending on the phone model being hacked. The attacker then has read and write access to files stored on the device for as long as it remains connected to the charger.

Two more ways to hack Android

The two other members of the ChoiceJacking family work only against the juice-jacking mitigations that Google put into Android. In the first, the malicious charger invokes the Android Open Access Protocol, which allows a USB host to act as an input device when the host sends a special message that puts it into accessory mode.

The protocol specifically dictates that while in accessory mode, a USB host can no longer respond to other USB interfaces, such as the Picture Transfer Protocol for transferring photos and videos and the Media Transfer Protocol that enables transferring files in other formats. Despite the restriction, all of the Android devices tested violated the specification by accepting AOAP messages even when the USB host hadn’t been put into accessory mode. The charger can exploit this implementation flaw to autonomously complete the required user confirmations.

The remaining ChoiceJacking technique exploits a race condition in the Android input dispatcher by flooding it with a specially crafted sequence of input events. The dispatcher places each event into a queue and processes them one by one, waiting for all previous input events to be fully processed before acting on a new one.

“This means that a single process that performs overly complex logic in its key event handler will delay event dispatching for all other processes or global event handlers,” the researchers explained.

They went on to note, “A malicious charger can exploit this by starting as a USB peripheral and flooding the event queue with a specially crafted sequence of key events. It then switches its USB interface to act as a USB host while the victim device is still busy dispatching the attacker’s events. These events therefore accept user prompts for confirming the data connection to the malicious charger.”
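The researchers' description can be modeled as a toy simulation. This is not Android's actual input dispatcher, just an illustration of the race under simplified assumptions: confirmations queued while the charger is still a peripheral get dispatched only after the role swap, landing on the consent prompt that appears afterward.

```python
from collections import deque

# Toy model of the race: a strictly ordered event queue, a USB role
# that can flip mid-stream, and a consent prompt that appears only
# after the swap. All names and states here are illustrative.
event_queue = deque()
state = {"usb_role": "peripheral", "prompt_visible": False, "consented": False}

def flood_queue():
    # Attacker enqueues confirmations before the prompt even exists.
    for _ in range(5):
        event_queue.append("KEY_ENTER")

def swap_to_host():
    # The charger flips roles; initiating a data connection as host
    # makes the consent dialog appear on the phone.
    state["usb_role"] = "host"
    state["prompt_visible"] = True

def dispatch_pending():
    # Events are processed strictly in arrival order, after the swap,
    # so the stale key presses land on the now-visible prompt.
    while event_queue:
        event = event_queue.popleft()
        if state["prompt_visible"] and event == "KEY_ENTER":
            state["consented"] = True

flood_queue()       # step 1: flood while still a peripheral
swap_to_host()      # step 2: become a USB host; prompt appears
dispatch_pending()  # step 3: queued input accepts the prompt
assert state["consented"]
```

The ordering guarantee that normally makes input handling predictable is exactly what lets pre-queued events outlive the role change.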

The Usenix paper provides the following matrix showing which devices tested in the research are vulnerable to which attacks.

The susceptibility of tested devices to all three ChoiceJacking attack techniques. Credit: Draschbacher et al.

User convenience over security

In an email, the researchers said that the fixes provided by Apple and Google successfully blunt ChoiceJacking attacks in iPhones, iPads, and Pixel devices. Many Android devices made by other manufacturers, however, remain vulnerable because they have yet to update their devices to Android 15. Other Android devices—most notably those from Samsung running the One UI 7 software interface—don’t implement the new authentication requirement, even when running on Android 15. The omission leaves these models vulnerable to ChoiceJacking. In an email, principal paper author Florian Draschbacher wrote:

The attack can therefore still be exploited on many devices, even though we informed the manufacturers about a year ago and they acknowledged the problem. The reason for this slow reaction is probably that ChoiceJacking does not simply exploit a programming error. Rather, the problem is more deeply rooted in the USB trust model of mobile operating systems. Changes here have a negative impact on the user experience, which is why manufacturers are hesitant. [It] means for enabling USB-based file access, the user doesn’t need to simply tap YES on a dialog but additionally needs to present their unlock PIN/fingerprint/face. This inevitably slows down the process.

The biggest threat posed by ChoiceJacking is to Android devices that have been configured to enable USB debugging. Developers often turn on this option so they can troubleshoot problems with their apps, but many non-developers enable it so they can install apps from their computer, root their devices so they can install a different OS, transfer data between devices, and recover bricked phones. Turning it on requires a user to flip a switch in Settings > System > Developer options.

If a phone has USB Debugging turned on, ChoiceJacking can gain shell access through the Android Debug Bridge. From there, an attacker can install apps, access the file system, and execute malicious binary files. The level of access through the Android Debug Bridge is much higher than that through the Picture Transfer Protocol and Media Transfer Protocol, which only allow read and write access to files stored on the device.

The vulnerabilities are tracked as:

    • CVE-2025-24193 (Apple)
    • CVE-2024-43085 (Google)
    • CVE-2024-20900 (Samsung)
    • CVE-2024-54096 (Huawei)

A Google spokesperson confirmed that the weaknesses were patched in Android 15 but didn’t speak to the broader base of Android devices from other manufacturers, which either don’t support the new OS or don’t implement the new authentication requirement it makes possible. Apple declined to comment for this post.

Word that juice-jacking-style attacks are once again possible on some Android devices and out-of-date iPhones is likely to breathe new life into the constant warnings from federal authorities, tech pundits, news outlets, and local and state government agencies that phone users should steer clear of public charging stations.

As I reported in 2023, these warnings are mostly scaremongering, and the advent of ChoiceJacking does little to change that, given that there are no documented cases of such attacks in the wild. That said, people using Android devices that don’t support Google’s new authentication requirement may want to refrain from public charging.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

iOS and Android juice jacking defenses have been trivial to bypass for years

in-the-age-of-ai,-we-must-protect-human-creativity-as-a-natural-resource

In the age of AI, we must protect human creativity as a natural resource


Op-ed: As AI outputs flood the Internet, diverse human perspectives are our most valuable resource.

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity even as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media that risk drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.

Every human brain isn’t just a store of data—it’s a knowledge engine that thinks in a unique way, creating novel combinations of ideas. Instead of having one “connection machine” (an AI model) duplicated a million times, we have eight billion neural networks, each with a unique perspective. Relying on the diversity of thought derived from human cognition helps us escape the monolithic thinking that may emerge if everyone were to draw from the same AI-generated sources.

Today, the AI industry’s business models unintentionally echo the ways in which early industrialists approached forests and fisheries—as free inputs to exploit without considering ecological limits.

Just as pollution from early factories unexpectedly damaged the environment, AI systems risk polluting the digital environment by flooding the Internet with synthetic content. Like a forest that needs careful management to thrive or a fishery vulnerable to collapse from overexploitation, the creative ecosystem can be degraded even if the potential for imagination remains.

Depleting our creative diversity may become one of the hidden costs of AI, but that diversity is worth preserving. If we let AI systems deplete or pollute the human outputs they depend on, what happens to AI models—and ultimately to human society—over the long term?

AI’s creative debt

Every AI chatbot or image generator exists only because of human works, and many traditional artists argue strongly against current AI training approaches, labeling them plagiarism. Tech companies tend to disagree, although their positions vary. For example, in 2023, imaging giant Adobe took an unusual step by training its Firefly AI models solely on licensed stock photos and public domain works, demonstrating that alternative approaches are possible.

Adobe’s licensing model offers a contrast to companies like OpenAI, which rely heavily on scraping vast amounts of Internet content without always distinguishing between licensed and unlicensed works.

Photo of a mining dumptruck and water tank in an open pit copper mine.

OpenAI has argued that this type of scraping constitutes “fair use” and effectively claims that competitive AI models at current performance levels cannot be developed without relying on unlicensed training data, despite Adobe’s alternative approach.

The “fair use” argument often hinges on the legal concept of “transformative use,” the idea that using works for a fundamentally different purpose from creative expression—such as identifying patterns for AI—does not violate copyright. Generative AI proponents often argue that their approach mirrors how human artists learn from the world around them.

Meanwhile, artists are expressing growing concern about losing their livelihoods as corporations turn to cheap, instantaneously generated AI content. They also call for clear boundaries and consent-driven models rather than allowing developers to extract value from their creations without acknowledgment or remuneration.

Copyright as crop rotation

This tension between artists and AI reveals a deeper ecological perspective on creativity itself. Copyright’s time-limited nature was designed as a form of resource management, like crop rotation or regulated fishing seasons that allow for regeneration. Copyright expiration isn’t a bug; its designers hoped it would ensure a steady replenishment of the public domain, feeding the ecosystem from which future creativity springs.

On the other hand, purely AI-generated outputs cannot be copyrighted in the US, potentially brewing an unprecedented explosion in public domain content, although it’s content that contains smoothed-over imitations of human perspectives.

Treating human-generated content solely as raw material for AI training disrupts this ecological balance between “artist as consumer of creative ideas” and “artist as producer.” Repeated legislative extensions of copyright terms have already significantly delayed the replenishment cycle, keeping works out of the public domain for much longer than originally envisioned. Now, AI’s wholesale extraction approach further threatens this delicate balance.

The resource under strain

Our creative ecosystem is already showing measurable strain from AI’s impact, from tangible present-day infrastructure burdens to concerning future possibilities.

Aggressive AI crawlers already effectively function as denial-of-service attacks on certain sites, with Cloudflare documenting GPTBot’s immediate impact on traffic patterns. Wikimedia’s experience provides clear evidence of current costs: AI crawlers caused a documented 50 percent bandwidth surge, forcing the nonprofit to divert limited resources to defensive measures rather than to its core mission of knowledge sharing. As Wikimedia says, “Our content is free, our infrastructure is not.” Many of these crawlers demonstrably ignore established technical boundaries like robots.txt files.
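For reference, honoring robots.txt is a one-call check in most languages. This sketch uses Python's standard urllib.robotparser with a hypothetical rules file (not Wikimedia's actual policy) to show what a compliant crawler is supposed to do:

```python
from urllib import robotparser

# Hypothetical robots.txt: block GPTBot from the whole site, allow others.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler makes this check before every fetch.
print(rp.can_fetch("GPTBot", "https://example.org/wiki/Some_Article"))       # False
print(rp.can_fetch("SomeBrowser", "https://example.org/wiki/Some_Article"))  # True
```

The file is advisory only: nothing technically stops a crawler from skipping the check, which is exactly the behavior site operators are reporting.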

Beyond infrastructure strain, our information environment also shows signs of degradation. Google has publicly acknowledged rising volumes of “spammy, low-quality,” often auto-generated content appearing in search results. A Wired investigation found concrete examples of AI-generated plagiarism sometimes outranking original reporting in search results. This kind of digital pollution led Ross Anderson of Cambridge University to compare it to filling oceans with plastic—it’s a contamination of our shared information spaces.

Looking to the future, more risks may emerge. Ted Chiang’s comparison of LLMs to lossy JPEGs offers a framework for understanding potential problems, as each AI generation summarizes web information into an increasingly “blurry” facsimile of human knowledge. The logical extension of this process—what some researchers term “model collapse”—presents a risk of degradation in our collective knowledge ecosystem if models are trained indiscriminately on their own outputs. (However, this differs from carefully designed synthetic data that can actually improve model efficiency.)
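Model collapse is easy to demonstrate in miniature. This toy simulation (standard library only; the sample size and generation count are arbitrary) repeatedly fits a Gaussian to samples drawn from the previous generation's fit, so estimation noise compounds instead of averaging out:

```python
import random
import statistics

# Toy illustration of model collapse: each "generation" is trained only on
# samples produced by the previous generation's fitted model.
random.seed(0)

mu, sigma = 0.0, 1.0   # the original "human" data distribution
n = 50                 # a small sample per generation exaggerates the drift

for generation in range(20):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)     # refit on the previous generation's
    sigma = statistics.stdev(samples)  # outputs, never on the original data

print(mu, sigma)
```

After 20 generations the fitted parameters have drifted from the true values of 0 and 1; nothing in the loop pulls them back toward the original data.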

This downward spiral of AI pollution may soon resemble a classic “tragedy of the commons,” in which organizations act from self-interest at the expense of shared resources. If AI developers continue extracting data without limits or meaningful contributions, the shared resource of human creativity could eventually degrade for everyone.

Protecting the human spark

While AI models that simulate creativity in writing, coding, images, audio, or video can achieve remarkable imitations of human works, this sophisticated mimicry currently lacks the full depth of the human experience.

For example, AI models lack a body that endures the pain and travails of human life. They don’t grow over the course of a human lifespan in real time. When an AI-generated output happens to connect with us emotionally, it often does so by imitating patterns learned from a human artist who has actually lived that pain or joy.

A photo of a young woman painter in her art studio.

Even if future AI systems develop more sophisticated simulations of emotional states or embodied experiences, they would still fundamentally differ from human creativity, which emerges organically from lived biological experience, cultural context, and social interaction.

That’s because the world constantly changes. New types of human experience emerge. If an ethically trained AI model is to remain useful, researchers must train it on recent human experiences, such as viral trends, evolving slang, and cultural shifts.

Current AI solutions, like retrieval-augmented generation (RAG), address this challenge somewhat by retrieving up-to-date, external information to supplement their static training data. Yet even RAG methods depend heavily on validated, high-quality human-generated content—the very kind of data at risk if our digital environment becomes overwhelmed with low-quality AI-produced output.
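The retrieval step itself can be sketched in a few lines. This toy version (hypothetical documents, word-overlap scoring instead of the vector embeddings real systems use) shows the basic shape: fetch the most relevant text, then splice it into the prompt:

```python
# Minimal sketch of retrieval-augmented generation's retrieval step.
# Real systems embed query and documents as vectors; word overlap stands in
# for similarity here to keep the example standard-library-only.

documents = [
    "Copyright terms in the US have been extended multiple times.",
    "Alpine hiking routes are best attempted in late summer.",
    "RAG systems retrieve external text to supplement static training data.",
]

def retrieve(query: str, docs: list) -> str:
    q = set(query.lower().split())
    # Score each document by how many query words it shares; return the best.
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "how does RAG supplement training data?"
context = retrieve(query, documents)
prompt = f"Context: {context}\n\nQuestion: {query}"
print(context)
```

The quality of the answer is bounded by the quality of what gets retrieved, which is why a corpus flooded with low-grade synthetic text undermines the whole approach.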

This need for high-quality, human-generated data is a major reason why companies like OpenAI have pursued media deals (including a deal signed with Ars Technica parent Condé Nast last August). Yet paradoxically, the same models fed on valuable human data often produce the low-quality spam and slop that floods public areas of the Internet, degrading the very ecosystem they rely on.

AI as creative support

When used carelessly or excessively, generative AI is a threat to the creative ecosystem, but we can’t wholly discount the tech as a tool in a human creative’s arsenal. The history of art is full of technological changes (new pigments, brushes, typewriters, word processors) that transform the nature of artistic production while augmenting human creativity.

Bear with me because there’s a great deal of nuance here that is easy to miss among today’s more impassioned reactions to people using AI as a blunt instrument for creating mediocrity.

While many artists rightfully worry about AI’s extractive tendencies, research published in Harvard Business Review indicates that AI tools can potentially amplify rather than merely extract creative capacity, suggesting that a symbiotic relationship is possible under the right conditions.

Inherent in this argument is that the responsible use of AI is reflected in the skill of the user. You can use a paintbrush to paint a wall or paint the Mona Lisa. Similarly, generative AI can mindlessly fill a canvas with slop, or a human can utilize it to express their own ideas.

Machine learning tools (such as those in Adobe Photoshop) already help human creatives prototype concepts faster, iterate on variations they wouldn’t have considered, or handle some repetitive production tasks like object removal or audio transcription, freeing humans to focus on conceptual direction and emotional resonance.

These potential positives, however, don’t negate the need for responsible stewardship and respecting human creativity as a precious resource.

Cultivating the future

So what might a sustainable ecosystem for human creativity actually involve?

Legal and economic approaches will likely be key. Governments could legislate that AI training must be opt-in, or at the very least, provide a collective opt-out registry (as the EU’s “AI Act” does).

Other potential mechanisms include robust licensing or royalty systems, such as creating a royalty clearinghouse (like the music industry’s BMI or ASCAP) for efficient licensing and fair compensation. Those fees could help compensate human creatives and encourage them to keep creating well into the future.

Deeper shifts may involve cultural values and governance. Inspired by models like Japan’s “Living National Treasures,” where the government funds artisans to preserve vital skills and support their work, could we establish programs that similarly support human creators while also designating certain works or practices as “creative reserves,” funding the further creation of certain creative works even if the economic market for them dries up?

Or a more radical shift might involve an “AI commons”—legally declaring that any AI model trained on publicly scraped data should be owned collectively as a shared public domain, ensuring that its benefits flow back to society and don’t just enrich corporations.

Photo of a family harvesting organic crops on a farm

Meanwhile, Internet platforms have already been experimenting with technical defenses against industrial-scale AI demands. Examples include proof-of-work challenges, slowdown “tarpits” (e.g., Nepenthes), shared crawler blocklists (“ai.robots.txt”), commercial tools (Cloudflare’s AI Labyrinth), and Wikimedia’s “WE5: Responsible Use of Infrastructure” initiative.
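The proof-of-work idea is the simplest of these to sketch: the server issues a challenge, and the client must find a nonce whose hash meets a difficulty target before the page is served. A minimal illustration follows (the difficulty and challenge string are made up; real deployments tune difficulty and bind challenges to sessions):

```python
import hashlib
import itertools

# Minimal proof-of-work sketch: find a nonce whose SHA-256 digest starts
# with a run of zero hex digits. Cheap for one page view, costly at the
# scale of an industrial crawler.

def solve(challenge: bytes, difficulty: int = 2) -> int:
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int = 2) -> bool:
    digest = hashlib.sha256(challenge + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve(b"page:/wiki/Some_Article")
print(verify(b"page:/wiki/Some_Article", nonce))  # True
```

At difficulty 2 a reader's browser solves this almost instantly, but the cost multiplies across the millions of requests an aggressive crawler makes, which is the asymmetry these defenses rely on.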

These solutions aren’t perfect, and implementing any of them would require overcoming significant practical hurdles. Strict regulations might slow beneficial AI development; opt-out systems burden creators, while opt-in models can be complex to track. Meanwhile, tech defenses often invite arms races. Finding a sustainable, equitable balance remains the core challenge. The issue won’t be solved in a day.

Invest in people

While navigating these complex systemic challenges will take time and collective effort, there is a surprisingly direct strategy that organizations can adopt now: investing in people. Don’t sacrifice human connection and insight to save money with mediocre AI outputs.

Organizations that cultivate unique human perspectives and integrate them with thoughtful AI augmentation will likely outperform those that pursue cost-cutting through wholesale creative automation. Investing in people acknowledges that while AI can generate content at scale, the distinctiveness of human insight, experience, and connection remains priceless.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


new-android-spyware-is-targeting-russian-military-personnel-on-the-front-lines

New Android spyware is targeting Russian military personnel on the front lines

Russian military personnel are being targeted with recently discovered Android malware that steals their contacts and tracks their location.

The malware is hidden inside a modified app for Alpine Quest mapping software, which is used by, among others, hunters, athletes, and Russian personnel stationed in the war zone in Ukraine. The app displays various topographical maps for use online and offline. The trojanized Alpine Quest app is being pushed on a dedicated Telegram channel and in unofficial Android app repositories. The chief selling point of the trojanized app is that it provides a free version of Alpine Quest Pro, which is usually available only to paying users.

Looks like the real thing

The malicious module is named Android.Spy.1292.origin. In a blog post, researchers at Russia-based security firm Dr.Web wrote:

Because Android.Spy.1292.origin is embedded into a copy of the genuine app, it looks and operates as the original, which allows it to stay undetected and execute malicious tasks for longer periods of time.

Each time it is launched, the trojan collects and sends the following data to the C&C server:

  • the user’s mobile phone number and their accounts;
  • contacts from the phonebook;
  • the current date;
  • the current geolocation;
  • information about the files stored on the device;
  • the app’s version.

If there are files of interest to the threat actors, they can update the app with a module that steals them. The threat actors behind Android.Spy.1292.origin are particularly interested in confidential documents sent over Telegram and WhatsApp. They also show interest in the file locLog, the location log created by Alpine Quest. The modular design of the app makes it possible for it to receive additional updates that expand its capabilities even further.
