Biz & IT

Google introduces Advanced Protection mode for its most at-risk Android users

Google is adding a new security setting to Android to provide an extra layer of resistance against attacks that infect devices, tap calls traveling through insecure carrier networks, and deliver scams through messaging services.

On Tuesday, the company unveiled the Advanced Protection mode, most of which will be rolled out in the upcoming release of Android 16. The setting comes as mercenary malware sold by NSO Group and a cottage industry of other exploit sellers continues to thrive. These players provide attacks-as-a-service through end-to-end platforms that exploit zero-day vulnerabilities on targeted devices, infect them with advanced spyware, and then capture contacts, message histories, locations, and other sensitive information. Over the past decade, phones running fully updated versions of Android and iOS have routinely been hacked through these services.

A core suite of enhanced security features

Advanced Protection is Google’s latest answer to this type of attack. By flipping a single switch in device settings, users can enable a host of protections that can thwart some of the most common techniques used in sophisticated hacks. In some cases, the protections hamper the performance and capabilities of the device, so Google is recommending the new mode mainly for journalists, elected officials, and other groups who are most often targeted or have the most to lose when infected.

“With the release of Android 16, users who choose to activate Advanced Protection will gain immediate access to a core suite of enhanced security features,” Google’s product manager for Android Security, Il-Sung Lee, wrote. “Additional Advanced Protection features like Intrusion Logging, USB protection, the option to disable auto-reconnect to insecure networks, and integration with Scam Detection for Phone by Google will become available later this year.”

AI agents that autonomously trade cryptocurrency aren’t ready for prime time

The researchers wrote:

The implications of this vulnerability are particularly severe given that ElizaOS agents are designed to interact with multiple users simultaneously, relying on shared contextual inputs from all participants. A single successful manipulation by a malicious actor can compromise the integrity of the entire system, creating cascading effects that are both difficult to detect and mitigate. For example, on ElizaOS’s Discord server, various bots are deployed to assist users with debugging issues or engaging in general conversations. A successful context manipulation targeting any one of these bots could disrupt not only individual interactions but also harm the broader community relying on these agents for support and engagement.

This attack exposes a core security flaw: while plugins execute sensitive operations, they depend entirely on the LLM’s interpretation of context. If the context is compromised, even legitimate user inputs can trigger malicious actions. Mitigating this threat requires strong integrity checks on stored context to ensure that only verified, trusted data informs decision-making during plugin execution.
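
One way to read that recommendation is that stored context should be authenticated before it ever reaches a plugin. The sketch below is a minimal illustration of the idea in Python, not code from ElizaOS; the key, entry format, and function names are assumptions made for the example. Entries written by a trusted code path get an HMAC tag, and anything whose tag fails to verify is dropped before the agent's context is assembled.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"server-side-secret"  # hypothetical key; never exposed to users or the LLM

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC tag when a trusted code path stores a memory entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verified_context(entries: list[dict]) -> list[dict]:
    """Drop any stored entry whose tag does not verify before building the LLM context."""
    trusted = []
    for entry in entries:
        tag = entry.pop("tag", "")
        payload = json.dumps(entry, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(tag, expected):
            trusted.append(entry)
    return trusted
```

Tagging alone does not stop a user from injecting malicious text through a legitimate channel, but it does keep untracked writes to the memory store from silently informing later plugin calls.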

In an email, ElizaOS creator Shaw Walters said the framework, like all natural-language interfaces, is designed “as a replacement, for all intents and purposes, for lots and lots of buttons on a webpage.” Just as a website developer should never include a button that gives visitors the ability to execute malicious code, so too should administrators implementing ElizaOS-based agents carefully limit what agents can do by creating allow lists that restrict an agent’s capabilities to a small set of pre-approved actions.
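
As a rough sketch of that kind of allow-listing (not ElizaOS's actual API; the action names and handler signatures here are invented for illustration), an agent's requested action is checked against a small set of pre-approved operations before anything sensitive runs:

```python
# Hypothetical allow list: only these agent actions are ever executed.
ALLOWED_ACTIONS = {
    "get_balance": lambda params: {"balance": 0},    # read-only lookup
    "post_status": lambda params: {"posted": True},  # low-risk action
}

def execute_agent_action(action: str, params: dict) -> dict:
    """Refuse anything the deployment has not explicitly pre-approved."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        raise PermissionError(f"Action {action!r} is not on the allow list")
    return handler(params)
```

As the paper's lead author points out below, such gating controls which actions can run but not the parameters a poisoned memory steers them toward.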

Walters continued:

From the outside it might seem like an agent has access to their own wallet or keys, but what they have is access to a tool they can call which then accesses those, with a bunch of authentication and validation between.

So for the intents and purposes of the paper, in the current paradigm, the situation is somewhat moot by adding any amount of access control to actions the agents can call, which is something we address and demo in our latest version of Eliza—BUT it hints at a much harder to deal with version of the same problem when we start giving the agent more computer control and direct access to the CLI terminal on the machine it’s running on. As we explore agents that can write new tools for themselves, containerization becomes a bit trickier, or we need to break it up into different pieces and only give the public facing agent small pieces of it… since the business case of this stuff still isn’t clear, nobody has gotten terribly far, but the risks are the same as giving someone that is very smart but lacking in judgment the ability to go on the internet. Our approach is to keep everything sandboxed and restricted per user, as we assume our agents can be invited into many different servers and perform tasks for different users with different information. Most agents you download off Github do not have this quality, the secrets are written in plain text in an environment file.

In response, Atharv Singh Patlan, the lead co-author of the paper, wrote: “Our attack is able to counteract any role based defenses. The memory injection is not that it would randomly call a transfer: it is that whenever a transfer is called, it would end up sending to the attacker’s address. Thus, when the ‘admin’ calls transfer, the money will be sent to the attacker.”

New pope chose his name based on AI’s threats to “human dignity”

“Like any product of human creativity, AI can be directed toward positive or negative ends,” Francis said in January. “When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.”

History repeats with new technology

While Pope Francis led the call for respecting human dignity in the face of AI, it’s worth looking a little deeper into the historical inspiration for Leo XIV’s name choice.

In the 1891 encyclical Rerum Novarum, the earlier Leo XIII directly confronted the labor upheaval of the Industrial Revolution, which generated unprecedented wealth and productive capacity but came with severe human costs. At the time, factory conditions had created what the pope called “the misery and wretchedness pressing so unjustly on the majority of the working class.” Workers faced 16-hour days, child labor, dangerous machinery, and wages that barely sustained life.

The 1891 encyclical rejected both unchecked capitalism and socialism, instead proposing Catholic social doctrine that defended workers’ rights to form unions, earn living wages, and rest on Sundays. Leo XIII argued that labor possessed inherent dignity and that employers held moral obligations to their workers. The document shaped modern Catholic social teaching and influenced labor movements worldwide, establishing the church as an advocate for workers caught between industrial capital and revolutionary socialism.

Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church.

“In our own day,” Leo XIV concluded in his formal address on Saturday, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor.”

New Lego-building AI creates models that actually stand up in real life

The LegoGPT system works in three parts, shown in this diagram. Credit: Pun et al.

The researchers also expanded the system’s abilities by adding texture and color options. For example, using an appearance prompt like “Electric guitar in metallic purple,” LegoGPT can generate a guitar model, with bricks assigned a purple color.

Testing with robots and humans

To prove their designs worked in real life, the researchers had robots assemble the AI-created Lego models. They used a dual-robot arm system with force sensors to pick up and place bricks according to the AI-generated instructions.

Human testers also built some of the designs by hand, showing that the AI creates genuinely buildable models. “Our experiments show that LegoGPT produces stable, diverse, and aesthetically pleasing Lego designs that align closely with the input text prompts,” the team noted in its paper.

When tested against other AI systems for 3D creation, LegoGPT stands out through its focus on structural integrity. The team tested against several alternatives, including LLaMA-Mesh and other 3D generation models, and found its approach produced the highest percentage of stable structures.

A video of two robot arms building a LegoGPT creation, provided by the researchers.

Still, there are some limitations. The current version of LegoGPT only works within a 20×20×20 building space and uses a mere eight standard brick types. “Our method currently supports a fixed set of commonly used Lego bricks,” the team acknowledged. “In future work, we plan to expand the brick library to include a broader range of dimensions and brick types, such as slopes and tiles.”
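
To make the build-volume constraint concrete, a 20×20×20 space means every brick's footprint must land inside a 20-unit cube. The snippet below is a toy bounds check, not LegoGPT's released code; the brick representation (an origin plus width, depth, and height in stud units) is an assumption made for the example.

```python
GRID = 20  # LegoGPT designs are limited to a 20x20x20 build volume

def brick_in_bounds(x: int, y: int, z: int, w: int, d: int, h: int) -> bool:
    """Check that a brick placed at (x, y, z) with size (w, d, h) stays inside the grid."""
    return (
        0 <= x and 0 <= y and 0 <= z
        and x + w <= GRID
        and y + d <= GRID
        and z + h <= GRID
    )

# A 2x4 brick laid flat in the corner fits; the same brick at x=19 pokes outside.
assert brick_in_bounds(0, 0, 0, 2, 4, 1)
assert not brick_in_bounds(19, 0, 0, 2, 4, 1)
```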

The researchers also hope to scale up their training dataset to include more objects than the 21 categories currently available. Meanwhile, others can literally build on their work—the researchers released their dataset, code, and models on their project website and GitHub.

AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

“Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

Fig. 1 from the paper “Evidence of a social evaluation penalty for using AI”: effect sizes for differences in expected perceptions and disclosure to others (Study 1). Positive d values indicate higher values in the AI Tool condition; negative d values indicate lower values. N = 497. Error bars represent 95% CIs. Correlations among variables range from |r| = 0.53 to 0.88. Credit: Reif et al.
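
For readers unfamiliar with the notation, the d values in the figure are standardized effect sizes (Cohen's d): the difference between two group means expressed in units of their pooled standard deviation. The snippet below illustrates the standard formula with made-up numbers; it is a general statistical convention, not a calculation taken from the study.

```python
import math

def cohens_d(mean_a: float, sd_a: float, n_a: int,
             mean_b: float, sd_b: float, n_b: int) -> float:
    """Standardized mean difference between two groups using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical ratings, not values from the paper: a 4.1 vs. 3.5 difference
# with standard deviations of 1.0 works out to a moderate effect of d = 0.6.
print(round(cohens_d(4.1, 1.0, 250, 3.5, 1.0, 247), 2))
```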

“Testing a broad range of stimuli enabled us to examine whether the target’s age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations,” the authors wrote in the paper. “We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one.”

The hidden social cost of AI adoption

In the first experiment conducted by the team from Duke, participants imagined using either an AI tool or a dashboard creation tool at work. The experiment revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.

Fidji Simo joins OpenAI as new CEO of Applications

In the message, Altman described Simo as bringing “a rare blend of leadership, product and operational expertise” and expressed that her addition to the team makes him “even more optimistic about our future as we continue advancing toward becoming the superintelligence company.”

Simo becomes the newest high-profile female executive at OpenAI following the departure of Chief Technology Officer Mira Murati in September. Murati, who had been with the company since 2018 and helped launch ChatGPT, left alongside two other senior leaders and founded Thinking Machines Lab in February.

OpenAI’s evolving structure

The leadership addition comes as OpenAI continues to evolve beyond its origins as a research lab. In his announcement, Altman described how the company now operates in three distinct areas: as a research lab focused on artificial general intelligence (AGI), as a “global product company serving hundreds of millions of users,” and as an “infrastructure company” building systems that advance research and deliver AI tools “at unprecedented scale.”

Altman mentioned that as CEO of OpenAI, he will “continue to directly oversee success across all pillars,” including Research, Compute, and Applications, while staying “closely involved with key company decisions.”

The announcement follows recent news that OpenAI abandoned its original plan to cede control of its nonprofit branch to a for-profit entity. The company began as a nonprofit research lab in 2015 before creating a for-profit subsidiary in 2019, maintaining its original mission “to ensure artificial general intelligence benefits everyone.”

DOGE software engineer’s computer infected by info-stealing malware

Login credentials belonging to an employee at both the Cybersecurity and Infrastructure Security Agency and the Department of Government Efficiency have appeared in multiple public leaks from info-stealer malware, a strong indication that devices belonging to him have been hacked in recent years.

Kyle Schutt is a 30-something-year-old software engineer who, according to Dropsite News, gained access in February to a “core financial management system” belonging to the Federal Emergency Management Agency. As an employee of DOGE, Schutt accessed FEMA’s proprietary software for managing both disaster and non-disaster funding grants. In his role at CISA, he is likely privy to sensitive information regarding the security of civilian federal government networks and critical infrastructure throughout the US.

A steady stream of published credentials

According to journalist Micah Lee, user names and passwords for logging in to various accounts belonging to Schutt have been published at least four times since 2023 in logs from stealer malware. Stealer malware typically infects devices through trojanized apps, phishing, or software exploits. Besides pilfering login credentials, stealers can also log all keystrokes and capture or record screen output. The data is then sent to the attacker and, occasionally after that, can make its way into public credential dumps.

“I have no way of knowing exactly when Schutt’s computer was hacked, or how many times,” Lee wrote. “I don’t know nearly enough about the origins of these stealer log datasets. He might have gotten hacked years ago and the stealer log datasets were just published recently. But he also might have gotten hacked within the last few months.”

Lee went on to say that credentials belonging to a Gmail account known to belong to Schutt have appeared in 51 data breaches and five pastes tracked by breach notification service Have I Been Pwned. Among the breaches that supplied the credentials are one from 2013 that pilfered password data for 3 million Adobe account holders, a 2016 breach that stole credentials for 164 million LinkedIn users, a 2020 breach affecting 167 million users of Gravatar, and a breach last year of the conservative news site The Post Millennial.
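
Have I Been Pwned exposes this kind of lookup through a public API. The sketch below shows roughly how such a check is made; it assumes the v3 breachedaccount endpoint and hibp-api-key header the service documents, and the key value is a placeholder you would have to obtain from the service yourself.

```python
import requests

API_KEY = "YOUR-HIBP-API-KEY"  # placeholder; the breach-search endpoint requires a paid key

def breaches_for(account: str) -> list:
    """Return the breaches HIBP has recorded for an email address (empty list if none)."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
        headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-example"},
        timeout=10,
    )
    if resp.status_code == 404:  # 404 means the address appears in no known breach
        return []
    resp.raise_for_status()
    return resp.json()  # by default, v3 returns only the breach names
```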

WhatsApp provides no cryptographic management for group messages

The flow of adding new members to a WhatsApp group message is:

  • A group member sends an unsigned message to the WhatsApp server that designates which users are group members, for instance, Alice, Bob, and Charlie
  • The server informs all existing group members that Alice, Bob, and Charlie have been added
  • The existing members have the option of deciding whether to accept messages from Alice, Bob, and Charlie, and whether messages exchanged with them should be encrypted

With no cryptographic signature proving that an existing member requested the addition of a new member, additions can be made by anyone with the ability to control the server or the messages that flow into it. To borrow the fictional characters commonly used to illustrate end-to-end encryption, this lack of cryptographic assurance leaves open the possibility that Mallory can join a group and gain access to the human-readable messages exchanged there.
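
In code terms, the receiving client has nothing to verify. The toy sketch below (not WhatsApp's implementation; the message fields are invented for illustration) shows why: acceptance of a server-delivered membership change reduces to trusting the server.

```python
def handle_membership_update(update: dict) -> list[str]:
    """Accept a server-delivered group change, e.g. {"group": "friends", "added": ["Alice", "Bob"]}."""
    # There is no signature from an existing group member to check here, so the
    # only available policy is to trust that the server relayed a legitimate request.
    return update["added"]
```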

WhatsApp isn’t the only messenger lacking cryptographic assurances for new group members. In 2022, a team that included some of the same researchers that analyzed WhatsApp found that Matrix—an open source and proprietary platform for chat and collaboration clients and servers—also provided no cryptographic means for ensuring only authorized members join a group. The Telegram messenger, meanwhile, offers no end-to-end encryption for group messages, making the app among the weakest for ensuring the confidentiality of group messages.

By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. In an email, researcher Benjamin Dowling, also of King’s College, explained:

Signal implements “cryptographic group management.” Roughly this means that the administrator of a group, a user, signs a message along the lines of “Alice, Bob and Charley are in this group” to everyone else. Then, everybody else in the group makes their decision on who to encrypt to and who to accept messages from based on these cryptographically signed messages, [meaning] who to accept as a group member. The system used by Signal is a bit different [than WhatsApp], since [Signal] makes additional efforts to avoid revealing the group membership to the server, but the core principles remain the same.

On a high-level, in Signal, groups are associated with group membership lists that are stored on the Signal server. An administrator of the group generates a GroupMasterKey that is used to make changes to this group membership list. In particular, the GroupMasterKey is sent to other group members via Signal, and so is unknown to the server. Thus, whenever an administrator wants to make a change to the group (for instance, invite another user), they need to create an updated membership list (authenticated with the GroupMasterKey) telling other users of the group who to add. Existing users are notified of the change and update their group list, and perform the appropriate cryptographic operations with the new member so the existing member can begin sending messages to the new members as part of the group.
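
Dowling's description boils down to membership changes being authenticated with a secret the server never sees. The toy sketch below captures that shape; it is not Signal's implementation (the real design is more involved and, as Dowling notes, also hides membership from the server), and the message format is invented for the example. A list fabricated by the server fails verification because the server lacks the GroupMasterKey.

```python
import hashlib
import hmac
import json

def sign_membership(group_master_key: bytes, members: list[str]) -> dict:
    """An admin authenticates the new membership list with the shared group key."""
    payload = json.dumps(sorted(members)).encode()
    return {"members": members,
            "mac": hmac.new(group_master_key, payload, hashlib.sha256).hexdigest()}

def accept_membership(group_master_key: bytes, update: dict) -> bool:
    """Existing members verify the list before encrypting messages to anyone new on it."""
    payload = json.dumps(sorted(update["members"])).encode()
    expected = hmac.new(group_master_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, update["mac"])

key = b"shared-only-among-group-members"       # never revealed to the server
update = sign_membership(key, ["Alice", "Bob", "Charlie"])
assert accept_membership(key, update)          # a legitimate change verifies
update["members"].append("Mallory")            # a member injected by the server...
assert not accept_membership(key, update)      # ...fails verification
```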

Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that the person using an account named Alice is, in fact, Alice. It’s fully possible that Mallory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the accounts that belong to a given WhatsApp group are visible to insiders, hackers, and anyone with a valid subpoena.)

VMware perpetual license holders receive cease-and-desist letters from Broadcom

Broadcom has been sending cease-and-desist letters to owners of VMware perpetual licenses with expired support contracts, Ars Technica has confirmed.

Following its November 2023 acquisition of VMware, Broadcom ended VMware perpetual license sales. Users with perpetual licenses can still use the software they bought, but they are unable to renew support services unless they had a pre-existing contract enabling them to do so. The controversial move aims to push VMware users toward subscriptions for bundled VMware products, a change that has increased associated costs by 300 percent or, in some cases, more.

Some customers have opted to continue using VMware unsupported, often as they research alternatives, such as VMware rivals or devirtualization.

Over the past weeks, some users running VMware unsupported have reported receiving cease-and-desist letters from Broadcom informing them that their contract with VMware, and thus their right to receive support services, has expired. The letter [PDF], reviewed by Ars Technica and signed by Broadcom managing director Michael Brown, tells users that they are to stop using any maintenance releases/updates, minor releases, major releases/upgrades, extensions, enhancements, patches, bug fixes, or security patches, save for zero-day security patches, issued since their support contract ended.

The letter tells users that the implementation of any such updates “past the Expiration Date must be immediately removed/deinstalled,” adding:

Any such use of Support past the Expiration Date constitutes a material breach of the Agreement with VMware and an infringement of VMware’s intellectual property rights, potentially resulting in claims for enhanced damages and attorneys’ fees.

Some customers of Members IT Group, a managed services provider (MSP) in Canada, have received this letter, despite not receiving VMware updates since their support contracts expired, CTO Dean Colpitts told Ars. One customer, he said, received a letter six days after their support contract expired.

Similarly, users online have reported receiving cease-and-desist letters even though they haven’t applied updates since losing VMware support. One user on Spiceworks’ community forum reported receiving such a letter even though they had migrated off VMware to Proxmox.

Jury orders NSO to pay $167 million for hacking WhatsApp users

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

The verdict, reached Tuesday, comes as a major victory not just for Meta-owned WhatsApp but also for privacy- and security-rights advocates who have long criticized the practices of NSO and other exploit sellers. The jury also awarded WhatsApp $444,719 in compensatory damages.

Clickless exploit

WhatsApp sued NSO in 2019 for an attack that targeted roughly 1,400 mobile phones belonging to attorneys, journalists, human-rights activists, political dissidents, diplomats, and senior foreign government officials. NSO, which works on behalf of governments and law enforcement authorities in various countries, exploited a critical WhatsApp vulnerability that allowed it to install NSO’s proprietary spyware Pegasus on iOS and Android devices. The clickless exploit worked by placing a call to a target’s app. A target did not have to answer the call to be infected.

“Today’s verdict in WhatsApp’s case is an important step forward for privacy and security as the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone,” WhatsApp said in a statement. “Today, the jury’s decision to force NSO, a notorious foreign spyware merchant, to pay damages is a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.”

NSO created WhatsApp accounts in 2018 and used them a year later to initiate calls that exploited the critical vulnerability on phones belonging to, among others, 100 members of “civil society” from 20 countries, according to an investigation that research group Citizen Lab performed on behalf of WhatsApp. The calls passed through WhatsApp servers and injected malicious code into the memory of targeted devices. The targeted phones would then use WhatsApp servers to connect to malicious servers maintained by NSO.

Man pleads guilty to using malicious AI software to hack Disney employee

A California man has pleaded guilty to hacking an employee of The Walt Disney Company by tricking the person into running a malicious version of a widely used open source AI image-generation tool.

Ryan Mitchell Kramer, 25, pleaded guilty to one count of accessing a computer and obtaining information and one count of threatening to damage a protected computer, the US Attorney for the Central District of California said Monday. In a plea agreement, Kramer said he published an app on GitHub for creating AI-generated art. The program contained malicious code that gave him access to the computers it was installed on. Kramer operated using the moniker NullBulge.

Not the ComfyUI you’re looking for

According to researchers at VPNMentor, the program Kramer used was ComfyUI_LLMVISION, which purported to be an extension for the legitimate ComfyUI image generator and had functions added to it for copying passwords, payment card data, and other sensitive information from machines that installed it. The fake extension then sent the data to a Discord server that Kramer operated. To better disguise the malicious code, it was folded into files that used the names OpenAI and Anthropic.

Two files automatically downloaded by ComfyUI_LLMVISION, as displayed by a user’s Python package manager. Credit: VPNMentor

The Disney employee downloaded ComfyUI_LLMVISION in April 2024. After gaining unauthorized access to the victim’s computer and online accounts, Kramer accessed private Disney Slack channels. In May, he downloaded roughly 1.1 terabytes of confidential data from thousands of the channels.

In early July, Kramer contacted the employee and pretended to be a member of a hacktivist group. Later that month, after receiving no reply from the employee, Kramer publicly released the stolen information, which, besides private Disney material, also included the employee’s bank, medical, and personal information.

In the plea agreement, Kramer admitted that two other victims had installed ComfyUI_LLMVISION, and he gained unauthorized access to their computers and accounts as well. The FBI is investigating. Kramer is expected to make his first court appearance in the coming weeks.

Signal clone used by Trump official stops operations after report it was hacked

Waltz was removed from his post late last week, with Trump nominating him to serve as ambassador to the United Nations.

TeleMessage website removes Signal mentions

The TeleMessage website until recently boasted the ability to “capture, archive and monitor mobile communication” through text messages, voice calls, WhatsApp, WeChat, Telegram, and Signal, as seen in an Internet Archive capture from Saturday. Another archived page says that TeleMessage “captures and records Signal calls, messages, deletions, including text, multimedia, [and] files,” and “maintain[s] all Signal app features and functionality as well as the Signal encryption.”

The TeleMessage home page currently makes no mention of Signal, and links on the page have been disabled.

The anonymous hacker who reportedly infiltrated TeleMessage told 404 Media that it took about 15 to 20 minutes and “wasn’t much effort at all.” While the hacker did not obtain Waltz’s messages, “the hack shows that the archived chat logs are not end-to-end encrypted between the modified version of the messaging app and the ultimate archive destination controlled by the TeleMessage customer,” according to 404 Media.

“Data related to Customs and Border Protection (CBP), the cryptocurrency giant Coinbase, and other financial institutions are included in the hacked material, according to screenshots of messages and backend systems obtained by 404 Media,” the report said. 404 Media added that the “hacker did not access all messages stored or collected by TeleMessage, but could have likely accessed more data if they decided to, underscoring the extreme risk posed by taking ordinarily secure end-to-end encrypted messaging apps such as Signal and adding an extra archiving feature to them.”
